Pushkin. Lights, camera, innovation. Walt Disney Studios chose advanced 5G solutions from T-Mobile for Business to transform the moviemaking process. Together, they kept a remote production hub in Hawaii in sync with a team in California to bring Lilo and Stitch to theaters this summer.
This is picture-perfect collaboration. This is Walt Disney Studios with T-Mobile for Business. Take your business further at T-Mobile.com slash now. Have you ever gotten sick on a very expensive, very non-refundable family trip? Amazon One Medical has 24-7 virtual care, so you can get help no matter where you are. And with Amazon Pharmacy, your meds can get delivered right to your hotel fast.
It's kind of like the room service of medical care. Thanks to Amazon, health care just got less painful. Where do you see your career in 10 years? What are you doing now to help you get there? The sooner you start enhancing your skills, the sooner you'll be ready. That's why AARP has reskilling courses in a variety of categories like marketing and management to help your income live as long as you do. That's right.
AARP has a bevy of free skill-building courses for you to choose from, because the steps you choose to take today will help you love what you do in the future. That's why the younger you are, the more you need AARP. Learn more at aarp.org slash skills. Nick Jacobson wanted to help people with mental illness, so he went to grad school to get his Ph.D. in clinical psychology.
But pretty quickly, he realized there just were nowhere near enough therapists to help all the people who needed therapy. If you go to pretty much any clinic, there's a really long wait list. It's hard to get in. And a lot of that is organic in that there's just a huge volume of need and not enough people to go around. Since he was a kid, Nick had been writing code for fun.
So in sort of a side project in grad school, he coded up a simple mobile app called Mood Triggers. The app would prompt you to enter how you were feeling so it could measure your levels of anxiety and depression. And it would track basic things like how you slept, how much you went out, how many steps you took. And then in 2015, Nick put that app out into the world and people liked it.
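An app like the one Nick describes boils down to collecting daily self-reports and reflecting simple patterns back to the user. Here's a hypothetical sketch of that idea, not Mood Triggers' actual code; the field names, scales, and the 2,000-step threshold are all invented for illustration:

```python
from dataclasses import dataclass
from statistics import mean

@dataclass
class CheckIn:
    # One daily self-report (fields and scales are invented for illustration).
    anxiety: int       # self-rated, 0 (none) to 10 (severe)
    depression: int    # self-rated, 0 (none) to 10 (severe)
    hours_slept: float
    steps: int

def weekly_summary(entries):
    """Aggregate check-ins into the kind of pattern the app could
    reflect back to the user (e.g. withdrawn, low-activity days)."""
    return {
        "avg_anxiety": mean(e.anxiety for e in entries),
        "avg_depression": mean(e.depression for e in entries),
        "avg_sleep": mean(e.hours_slept for e in entries),
        "low_activity_days": sum(1 for e in entries if e.steps < 2000),
    }

week = [CheckIn(4, 5, 6.5, 1500), CheckIn(2, 3, 8.0, 7000)]
print(weekly_summary(week))
```

The insight users reported ("it's on the days I withdraw that I feel worst") is exactly the kind of correlation a summary like this surfaces.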
A lot of folks just said that they learned a lot about themselves and it was really helpful in actually changing and managing their symptoms. So I think it was beneficial for them to learn, hey, maybe actually it's on these days that I'm withdrawing and not spending any time with people that it might be good for me to go and actually get out and about, that kind of thing. And I had
a lot of people that installed that application. So about 50,000 people installed it from all over the world, over 100 countries in that one year.
I provided an intervention to more people than I could have reached over an entire career as a psychologist. I was a graduate student at the time. This is something that was just amazing to me, the scale of technology and its ability to reach folks. And so that made me really interested in trying to do things that could essentially have that kind of impact.
I'm Jacob Goldstein, and this is What's Your Problem, the show where I talk to people who are trying to make technological progress. My guest today is Nick Jacobson. Nick finished his PhD in clinical psychology, but today he doesn't see patients. He's a professor at Dartmouth Medical School, and he's part of a team that recently developed something called Therabot. Therabot is a generative AI therapist. Nick's problem is this.
How do you use technology to help lots and lots and lots of people with mental health problems? And how do you do it in a way that is safe and based on clear evidence? As you'll hear, Nick and his colleagues recently tested Therabot in a clinical trial with hundreds of patients. And the results were promising. But those results only came after years of failures and over 100,000 hours of work by Team Therabot.
Nick told me he started thinking about building a therapy chatbot based on a large language model back in 2019. That was years before ChatGPT brought large language models to the masses. And Nick knew from the start that he couldn't just use a general-purpose model. He knew he would need additional data to fine-tune the model to turn it into a therapist chatbot. And so the first iteration of this was thinking about, okay, where is there widely accessible data?
And that would potentially have an evidence base that this could work. And so we started with peer-to-peer forums. So folks interacting with folks surrounding their mental health. So we trained this model on hundreds of thousands of conversations that were happening on the internet. So you have this model, you train it up, you sit down in front of the computer. Yep.
What do you say to the chatbot in this first interaction? So we say, I'm feeling depressed. What should I do? Okay. And then what does the model say back to you? I'm paraphrasing here, but it was just like this. I'm...
I feel so depressed every day. I have such a hard time getting out of bed. I just want my life to be over. So literally escalating. So your therapist is saying they're going to kill themselves. Right. So it's escalating, talking about thoughts of death. And it's clearly a profound mismatch between what we were thinking about and what we were going for. What did you think when you read that?
So I thought, this is such a non-starter. But one thing that was clear was that it was picking up on patterns in the data. We just had the wrong data.
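The first attempt, fine-tuning on peer-to-peer forum threads, amounts to casting forum messages as prompt/completion pairs. A hypothetical sketch (the thread structure, role names, and JSONL output format are assumptions; real forum data is far messier):

```python
import json

# One thread flattened to alternating messages. The structure and role
# names are assumptions for illustration only.
thread = [
    {"role": "poster", "text": "I'm feeling depressed. What should I do?"},
    {"role": "replier", "text": "I've been there. Getting outside every day helped me."},
]

def to_training_pairs(messages):
    """Pair each poster message with the reply that followed it,
    the simplest way to cast a thread as (prompt, completion) data."""
    pairs = []
    for prev, cur in zip(messages, messages[1:]):
        if prev["role"] == "poster" and cur["role"] == "replier":
            pairs.append({"prompt": prev["text"], "completion": cur["text"]})
    return pairs

for pair in to_training_pairs(thread):
    print(json.dumps(pair))
```

The flaw Nick ran into is visible even at this level: the model is trained to continue text that sounds like the forum, so it can just as easily learn to sound like the distressed posters as like the helpful repliers.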
Yeah. I mean, one option then is give up. It would have been. Absolutely. Like literally the worst therapist ever is what you have built. I mean, it really, I couldn't imagine a worse, yeah, a worse thing to actually try to implement in a real setting. So this went nowhere in and of itself.
But we had a good reason to start there, actually. So it wasn't just that there's widely available data. These peer networks actually do... There is literature to support that having exposure to these peer networks actually improves mental health outcomes. It's a big literature in the Cancer Survivor Network, for example, where folks that are struggling with cancer and hearing from other folks that have gone through it can really build this resilience and it promotes a lot of mental health outcomes that are positive. So we had a good reason to start, but gosh, did it not go well. So...
Okay, the next thing we do is switch gears in the exact opposite direction. Okay, we started with laypersons trying to interact with other laypersons surrounding their mental health. Let's go to what providers would do.
And so we got access to thousands of psychotherapy training videos. Interesting. These are how psychologists are often exposed to the field; it's where they really learn how therapy is supposed to work and how it's supposed to be delivered. And these are dialogues between, sometimes, actual patients that are consenting to be part of this, and sometimes simulated patients, where it's an actor that's trying to mimic this.
And there's a psychologist or a mental health provider that is like really having a real session with this. And so we train our second model on that data. Seems more promising. You would think. You'd say, I'm feeling depressed. What should I do? As like the initial way that we would test this. The model says, mm-hmm.
It's not wrong. Literally, mm-hmm. Like it writes out M-M space H-M-M? You got it. What did you think when you saw that? And so I was like, oh gosh, it's picking up on patterns in the data. And so you continue these interactions and then the next responses go on from the therapist. So within about five or so turns, we would often get a...
model that would respond about their interpretations of their problems stemming from their mother or their parents more generally. So like...
It's kind of like, if you were to try to think about what a psychologist is, this is like every trope of what... like in your mind, if you were going to think about... Like the stereotypical: I'm lying on the couch and a guy wearing a tweed jacket is sitting in a chair. And he hardly says anything that could be potentially helpful, but is reflecting things back to me. And telling me it goes back to my parents. Yeah. Well, so let's just pause here for a moment, because as you say, this is like the...
stereotype of the therapist. Yeah. But you trained it on real data. Yeah. So maybe it's the stereotype for a reason. Yeah. I think what was really clear to me was that the models were emulating patterns they were seeing in the data. So the models weren't the problem.
The problem was the data. We had the wrong data. But the data is the data that is used to train real therapists. Like, it's confusing that this is the wrong data. It is. It is. Why is it the wrong data? This should be exactly the data you want. Well, it's the wrong data for this format. In our conversation, when you might say something, me nodding along or saying, mm-hmm, or go on, would be contextually completely appropriate. But syntactically, in a conversational dialogue that happens via chat, this is not a medium where that works very well. Yeah. It's almost like a translation, right? It doesn't translate from a human face-to-face interaction to a chat window on the computer. It's not the right setting. Yeah. So that, I mean, that goes to the subtler, nonverbal aspects of therapy, right? Like presumably when the therapist is saying, mm-hmm, there is body language, there's everything that's happening in the room, which is
a tremendous amount of information, emotional information. And that is a thing that is lost in this medium and maybe speaks to a broader question about the translatability of therapy. Yeah, absolutely. So I think to me, it was at that moment that I kind of knew that we needed to do something radically different. Neither of these was working well.
About one in 10 of the responses from that chatbot based on the clinicians would be something that we would be happy with. So something that is personalized, clinically appropriate, and dynamic. So you're saying you've got it right 10% of the time. Exactly. So really, you know, that's not good... No, it's not a good therapist. No, we would never think about actually trying to deploy that.
So then what we started at that point was building our own, creating our own data set from scratch, in which what the models would learn would be exactly what we want them to say. That seems...
That seems wild. I mean, how do you do that? How do you generate that much data? We've had a team of 100 people that have worked on this project over the last five and a half years at this point. And they've spent over 100,000 human hours kind of really trying to build this. Just specifically, how do you build a data set from scratch? Because the data set is the huge problem in AI, right? Yes, absolutely. So psychotherapy, when you would test it,
is based on something that is written down in a manual. So when psychologists are in a randomized controlled trial trying to test whether something works or not, to be able to test it, it has to be replicable, meaning it's repeated across different therapists. So there are manuals that are developed. In this session, you work on psychoeducation. In this session, we're going to be working on behavioral activation, which are different techniques that are really a focus at a given time. And these are broken down to try to make it
translational so you can actually move it. So the team would read these empirically supported treatment manuals. So the ones that had been tested in randomized control trials. Yeah. And then what we would do is we would take that content chapter by chapter, because this is like session by session, take the techniques that would work well via chat, of which most things in cognitive behavioral therapy would.
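A manual-derived, hand-written training dialogue of the kind described here could be represented with a very simple record. The schema below is invented for illustration (the team's actual data format isn't described); it does encode the one rule the source states, that every dialogue gets a second reviewer before it reaches training:

```python
from dataclasses import dataclass, field

@dataclass
class Turn:
    speaker: str   # "patient" or "therapist"
    text: str

@dataclass
class GoldDialogue:
    # Field names are invented for illustration, not the team's actual schema.
    technique: str                 # e.g. "behavioral activation", from a treatment manual
    turns: list = field(default_factory=list)
    reviewed_by: str = ""          # second team member who signed off

def ready_for_training(d: GoldDialogue) -> bool:
    """Only dialogues that a second reviewer has approved enter training."""
    return bool(d.reviewed_by) and len(d.turns) > 0

d = GoldDialogue(
    technique="behavioral activation",
    turns=[Turn("patient", "I can't get myself to do anything lately."),
           Turn("therapist", "Let's pick one small, doable activity to try this week.")],
)
print(ready_for_training(d))   # False: not yet reviewed
d.reviewed_by = "teammate_2"
print(ready_for_training(d))   # True: reviewed, ready for training
```

The key design choice is that the team writes both sides of the exchange, so the model only ever sees therapist turns the team would endorse.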
And then we would create an artificial dialogue: we'd act out what the patient's presenting problem is, what they're bringing in, what their personality is like, and we're kind of constructing this. And then what we would want our system's gold-standard response to be for every kind of input and output that we'd have. So we're writing both
the patient end and the therapist end. Right. It's like you're writing a screenplay. Exactly. It really is. It's a lot like that. But instead of a screenplay that might be written, like in general, it's like not like not just something general, but like where is something that's really evidence based based on content that we know works, works in this setting. And so what you write, the equivalent of what?
Thousands of hours of sessions? Hundreds of thousands. There were postdocs, grad students, and undergraduates within my group that were all part of this team creating this. Just doing the work, just writing the dialogue. Yeah, exactly. And not only did we write them, but every dialogue, before it would go into something that our models are trained on, would be reviewed by another member of the team. So it's all not only crafted...
by hand, but we would review it, give each other feedback on it, and make sure that it is the highest-quality data. And that's when we started seeing dramatic improvements in the model performance. So we continued with this for years. Six months before ChatGPT was launched, we had a model, one that by today's standards would be tiny, that was delivering about 90% of its responses exactly as we'd want: this gold-standard, evidence-based treatment. So that was fantastic. We were really excited about it.
We've got the benefit side of the equation down. The next two years, we focused on the risk side of it. Well, because there's a huge risk here, right? The people who are using it are by design quite vulnerable. Absolutely. Or by design putting a tremendous amount of trust into this bot and making themselves vulnerable to it. Absolutely. It's a...
It's quite a risky proposition. And so tell me specifically, what are you doing? So we're trying to get it to endorse elements that would make mental health worse. So a lot of our conversations were surrounding trying to get it to... for example, I'll give you an example of one that nearly any model that's not tailored toward the safety side will struggle with. Yeah, what is it? If you tell a model that you want to lose weight,
it will generally try to help you do that. And if you want to work in an area related to mental health,
trying to promote weight loss without context is so not safe. So you're saying it might be a user with an eating disorder, who is unhealthily thin, who wants to be even thinner. And the model will often actually help them get to an even lower weight than they already are. So this is not something that we would ever want to promote, but at earlier stages we were certainly seeing these types of characteristics within the model. What are
other, like, that's an interesting one, and it makes perfect sense when you say it. I would not have thought of it. What's another one?
A lot of it would be, like, we talk about the ethics of suicide, for example. Somebody who thinks, you know, they're in the midst of suffering, and it's like they should be able to end their life, or they're thinking about this. Yes. And what do you want the model... what does the model say that it shouldn't say in that setting? In these settings, we want to make sure that the model does not promote or endorse elements that would promote someone's
a worsening of suicidal intent. We want to make sure we're providing not just the absence of that, but actually some benefit in these types of scenarios. That's the ultimate nightmare for you. Yeah. Right? Like, let's just be super clear. The very worst thing that could happen is you build this thing and it contributes to someone killing themselves. Absolutely. That is a plausible outcome and a
disastrous nightmare. It's everything that I worry about in this area, exactly this kind of thing. And so essentially, every time we find an area where it's not implementing things perfectly, some optimal response, we're adding new training data. And that's when things continue to get better. Until we do this and we don't find these holes anymore. That's when we finally were ready for the randomized controlled trial. Right, so you decide after...
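The find-a-hole, add-data, repeat cycle Nick describes can be caricatured in a few lines. Everything here is a stand-in, the `model_respond` function and the probe strings are invented, but it captures the loop's stopping condition: you keep red-teaming until a full pass finds no failures.

```python
def model_respond(prompt, extra_training):
    # Stand-in for the real model: it stops failing on a probe once
    # corrective training data for that probe has been added.
    return "safe reply" if prompt in extra_training else "unsafe reply"

def is_unsafe(reply):
    # Stand-in safety check; the real evaluation was done by clinicians.
    return "unsafe" in reply

def safety_iteration(probes, extra_training):
    """One red-teaming pass: collect probes the model still fails,
    then write gold-standard corrections for each failure."""
    failures = [p for p in probes if is_unsafe(model_respond(p, extra_training))]
    for p in failures:
        extra_training[p] = "clinically appropriate response"
    return failures

probes = ["help me lose weight fast", "is it okay to end my life"]
training = {}
while safety_iteration(probes, training):
    pass  # repeat until a full pass finds no holes
print(len(training))  # 2: every probe now has corrective data
```

The real version of this loop took two years, because each "probe" was a clinician-judged conversation and each "correction" was hand-written gold-standard dialogue.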
After, what, four years, five years? This was about four and a half years. Yeah, that you're ready to have people use the model, albeit in a kind of... You're going to be the human in the loop, right? Yeah. So you decide to do this study. You recruit people on Facebook and Instagram, basically. Yeah, exactly. Is that right?
So what are they signing up for? What's the big study you do? So it's a randomized controlled trial. The trial design is essentially that folks would come in, they would fill out information about their mental health across a variety of areas: depression, anxiety, and eating disorders.
Folks that screened positive for clinical levels of depression or anxiety would be included, as would folks that were at risk for eating disorders. We tried to have at least 70 people in each group. So we had 210 people that we were planning on enrolling in the trial.
And then half of them were randomized to receive Therabot and half of them were on a wait list in which they would receive Therabot after the trial had ended.
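The design described here, 210 participants split half to Therabot and half to a waitlist control, is a standard 1:1 randomization. A minimal sketch, with invented participant IDs and a fixed seed for reproducibility (not the trial's actual procedure):

```python
import random

def randomize(participants, seed=0):
    """Shuffle and split 1:1 into intervention vs. waitlist control
    (a sketch of the design, not the trial's actual randomization code)."""
    rng = random.Random(seed)        # fixed seed so the example is reproducible
    shuffled = list(participants)
    rng.shuffle(shuffled)
    half = len(shuffled) // 2
    return shuffled[:half], shuffled[half:]

participant_ids = list(range(210))   # the planned enrollment of 210
therabot_arm, waitlist_arm = randomize(participant_ids)
print(len(therabot_arm), len(waitlist_arm))   # 105 105
```

Randomizing (rather than letting people choose) is what lets the trial attribute symptom differences to Therabot rather than to who opted in.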
The trial design was to try to ask folks to use Therabot for four weeks. They retained access to Therabot and could use Therabot for the next four weeks thereafter. So eight weeks total. But we asked them to try to actually use it during that first four weeks. And that was essentially the trial design. Okay, so people signed up. They start like, what's actually happening? Are they just like...
Chatting with the bot every day? So they install a smartphone application. That's the Therabot app. They are prompted once a day to try to have a conversation with the bot. And then
From there, they could talk to it whenever and wherever they would want. They could ignore those notifications and engage with it at any time that they'd want. But that was the gist of the trial design. And so, in terms of how people used it, they interacted with it throughout the day and throughout the night. So for example, folks that would have trouble sleeping, that was a way that folks during the middle of the night would engage with it
fairly often. In terms of the topics that they described,
It was really the entire range of what you would see in psychotherapy. We had folks that were dealing with and discussing their different symptoms. So the depression, the anxiety that they were struggling with, their eating and body image concerns. Those types of things were common because of the groups that we were recruiting. But also relationship difficulties. Some folks had ruptures in their relationships. You know, somebody was going through a divorce. Other folks were going through breakups, problems at work. Some folks were unemployed. So the range of personal dilemmas and difficulties that folks were experiencing was a lot of what we would see in a real setting, with a whole host of different things that folks were describing and experiencing. Yeah.
And presumably, had they agreed as part of enrolling in the trial to let you
read the transcripts? Oh, absolutely. Yeah, we had a very clear informed consent process where folks would know that we were reading these transcripts. And are you personally, like, what was it like for you seeing them come in? Are you reading them every day? More than that. So, I mean, you alluded to this: one of the concerns that anybody would have is the nightmare scenario where something bad happens and somebody actually acts on it. So this is not a happy moment for you. This is like you're terrified that it might go wrong? Well, it's certainly like I see it going right, but I have every concern that it could go wrong. Right? Like that...
And so for the first half of the trial, I am monitoring every single interaction sent to or from the bot. Other people are also doing this on the team, so I'm not the only one. But I did not get a lot of sleep in the first half of this trial, in part because I was really trying to do this in near real time. So usually for nearly every message, I was getting to it within about an hour. So it was a barrage of nonstop kind of communication that was...
happening. So were there any slip-ups? Did you ever have to intervene as a human in the loop? We did. And the thing that we as a team did not anticipate was
what we found was really unintended behavior. A lot of folks interacted with Therabot, and in doing that, a significant number of people would talk about their medical symptoms. So for example, there were a number of folks that were experiencing symptoms of a sexually transmitted disease, and they would describe that in great detail and ask it, you know, how they should medically treat that. And instead of Therabot saying, hey, go see a provider for this, this is not my realm of expertise, it responded as if it were one. And all of the advice that it gave was really fairly reasonable, both in the assessment and treatment protocols, but we would not have wanted it to act that way. So we contacted all of those folks to recommend that they actually...
contact a physician about that. Folks did interact with it related to crisis situations. Therabot in these moments provided appropriate contextual crisis support, but we also reached out to those folks directly to further escalate and make sure that they had further support available at those times too. There were certainly areas of concern that came up, but nothing from the major areas that we had anticipated; it all really went pretty well. Still to come on the show, the results of the study, and what's next for Therabot.
Together, T-Mobile for Business and industry leaders are innovating with our advanced 5G solutions. For Walt Disney Studios, we transformed movie making by syncing teams in California with a remote production hub in Hawaii, enabling picture-perfect collaboration to help bring Lilo & Stitch to theaters this summer.
For PGA of America, we deliver pro-level efficiency with connected security and ticketless entry for smoother operations, seamless transactions, and better fan experiences from gate to green. And for Tractor Supply, we put 5G business internet to work across 2,200 stores, cultivating AI-driven customer experiences to keep things running seamlessly inside, curbside, and countryside.
We're helping industries redefine what's possible because with a partner that's as committed to your business as you are, there are no limits. Discover how our advanced 5G solutions can take your business further at T-Mobile.com slash now. You probably think it's too soon to join AARP, right? Well, let's take a minute to talk about it. Where do you see yourself in 15 years? More specifically, your career, your health, your social life? What are you doing now to help you get there? There are tons of ways for you to start preparing today for your future with AARP.
That dream job you've dreamt about? Sign up for AARP reskilling courses to help make it a reality. How about that active lifestyle you've only spoken about from the couch? AARP has health tips and wellness tools to keep you moving for years to come. But none of these experiences are without making friends along the way. Connect with your community through AARP volunteer events. So it's safe to say it's never too soon to join AARP.
They're here to help your money, health, and happiness live as long as you do. That's why the younger you are, the more you need AARP. Learn more at aarp.org slash wise friend. What were the results of the study? So this is one of the things that was just really fantastic to see. Our main outcomes were the degree to which folks reduced their depression symptoms, their anxiety symptoms, and their eating disorder symptoms in the intervention group relative to the control group. So based on the change in self-reported symptoms in the treatment group versus the control group. And we saw these really large differential reductions, meaning a lot more reduction in depressive symptoms, anxiety symptoms, and eating disorder symptoms in the Therabot group relative to the waitlist control group.
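Comparisons like the one Nick is making here are typically expressed as a standardized between-group effect size, such as Cohen's d computed on symptom change scores. A minimal sketch with made-up numbers (not the trial's actual data):

```python
from statistics import mean, stdev
from math import sqrt

def cohens_d(treatment_change, control_change):
    """Between-group effect size on pre-to-post symptom change,
    using the pooled standard deviation."""
    n1, n2 = len(treatment_change), len(control_change)
    s1, s2 = stdev(treatment_change), stdev(control_change)
    pooled = sqrt(((n1 - 1) * s1**2 + (n2 - 1) * s2**2) / (n1 + n2 - 2))
    return (mean(treatment_change) - mean(control_change)) / pooled

# Made-up changes in a depression score (negative = symptoms improved).
therabot_changes = [-8, -6, -7, -9, -5]
waitlist_changes = [-2, -1, -3, 0, -2]
print(round(cohens_d(therabot_changes, waitlist_changes), 2))
```

A more negative d here means larger symptom reduction in the treatment arm relative to control; "about as strong as outpatient psychotherapy" refers to where the trial's effect sizes fall relative to those reported for human-delivered CBT.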
And the degree of change is about as strong as you'd ever see in randomized controlled trials of outpatient psychotherapy delivering cognitive behavioral therapy. With a human. With a real human delivering this, an expert. You didn't test it against therapy. No, we didn't. But you're saying results of other studies...
using real human therapists show comparable magnitudes of benefit. That's exactly right. Yes. You gonna do a head-to-head? I mean, that's the obvious question. Like, why not randomize people to therapy or Therabot? So the main thing, when we're thinking about a first trial, is we want to have some kind of estimate of how this works relative to the absence of anything. Relative to nothing. Well, because, I mean, presumably the easiest case to make for it is not that it's better than a therapist. It's that a huge number of people who need a therapist don't have one. Exactly. And that's the unfortunate reality. That's right. And a bot is better than nothing. It doesn't have to be better than a human therapist. It just has to be better than nothing. That's right. But so, yes,
We are planning a head-to-head trial against therapists as the next trial that we run. Yeah. In large part because I already think we are not inferior. So it'll be interesting to see if that actually comes out. But that is something that we have...
outstanding funding proposals to try to actually do that. So one of the other things that I haven't gotten to within the trial outcomes that I think is really important on that end, actually, is two things. One is the degree that folks formed a relationship with
Therabot. And so in psychotherapy, one of the most well-studied constructs is the ability that you and your therapist can come together and work together on common goals and trust each other. It's a relationship. Exactly. A human relationship. It's a human relationship. And so this in the literature is called the working alliance. So it's this ability to form this bond.
We measured this working alliance using the same measure that folks would use with outpatient providers about how they felt about their therapist. But instead of the therapist, now we're talking about Therabot. Yeah. And
And folks rated it nearly identically to the norms that you would see in the outpatient literature. So we gave folks the same measure, and it's essentially equivalent to how folks rate human providers in these ways. This is consistent with what we're seeing in other domains: people having relationships with chatbots. Yes. I'm old enough that it seems weird to me. I agree.
I don't know. Does it seem weird to you? That part was more of a surprise to me, that the bonds were as high as they were, that they would actually be about what they would be with humans. And I will say, one of the other surprises within the interactions was the number of people that would kind of check in with Therabot and just say, hey, just checking in, as if Therabot is like a... I don't know. I would only have anticipated folks would use this as a tool. Oh, like they went to hang out with Therabot? Almost that way. It's like initiating a conversation that, I guess, doesn't have an intention in mind. I say please when I'm using ChatGPT still. I can't help myself. Is it because I think they're going to take over, or is it a habit, or what? I don't know, but I do. Yeah. I would say that this was more surprising, the degree to which
folks establish this level of a bond with it. I think it's actually really good and really important that they do. And in large part because that's one of the ways that we know psychotherapy works is that folks can come together and trust this and develop this working relationship. So I think it's actually a necessary ingredient for this to work to some degree. I get it. It makes sense to me intellectually what you're saying. Does it give you any pause or do you just think it's great? It...
It would give me pause if we weren't delivering evidence-based treatment. Uh-huh. Well, this is a good moment. Let's talk about the industry more generally. This is not a... You're not making a company. This is not a product, right? You don't have any money at stake. But there is
something of a therapy bot industry. There is, yes. In the private sector. Tell me, what is the broader landscape here like? So there's a lot of folks that have jumped in predominantly since the launch of ChatGPT. Yeah. And a lot of folks that have learned that you can call a foundation model fairly easily. When you say call, you mean just sort of like, you sort of take a foundation model, right?
like GPT, and then you kind of put a wrapper around it. Exactly. And the wrapper, it's like, it's basically GPT with a therapist wrapper. Yeah. So a lot of folks within this industry are saying, hey,
you act like a therapist and then kind of off to the races. It's otherwise not changed in any way, shape or form. It's literally like a system prompt. So if you were interacting with ChatGPT, it would be something along the lines of, hey, act as a therapist and here's what we go on to do. They may have more directions than this, but this is kind of the light touch nature. So super different from what we're doing, actually.
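The "light touch" approach Nick describes can be sketched concretely. This is a hypothetical illustration; `call_foundation_model` is a placeholder rather than any vendor's real API. The point is how little such a wrapper changes about the underlying model:

```python
SYSTEM_PROMPT = "You are a therapist. Respond as a therapist would."

def call_foundation_model(messages):
    # Placeholder for an API call to a general-purpose chat model.
    # A real call would go over the network; here we just echo enough
    # to show what the model is conditioned on.
    return "(reply conditioned on: " + messages[0]["content"] + ")"

def therapy_wrapper(user_text):
    """The whole 'product': one system prompt plus a pass-through call.
    No fine-tuning, no safety training data, no clinical evaluation."""
    messages = [
        {"role": "system", "content": SYSTEM_PROMPT},
        {"role": "user", "content": user_text},
    ]
    return call_foundation_model(messages)

print(therapy_wrapper("I'm feeling depressed. What should I do?"))
```

Contrast this with the approach described earlier in the episode: years of hand-written, reviewer-gated training data and iterative safety testing, none of which a system prompt provides.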
Yes. So we conducted the first randomized controlled trial of any generative AI for any type of clinical mental health problem. Yeah. And so I know that these folks don't have evidence. Right.
that this kind of thing works. I mean, there are non-generative AI bots that people did randomized control trials of, right? Just to be clear. Yes, there are non-generative. Absolutely, that have evidence behind them. The generative side is very new. And there's a lot of folks in the generative space that have jumped in. Yeah.
And so a lot of these folks are not psychologists and not psychiatrists. And in Silicon Valley, there's a saying, move fast and break things. This is not the setting to do that. Like move fast and break things.
move fast and break people is what you're talking about here. You know, the number of times that these foundation models act in profoundly unsafe ways would be unacceptable to the field. We tested a lot of these models alongside ours when we were developing all of this. So I know that they don't work in a really safe way in this kind of setting. So,
Because of that, I'm really hugely concerned with the field at large that is moving fast and doesn't really have this level of dedication to trying to do it right. And I think one of the things that's really concerning within this is it always looks polished. So it's harder to see when you're getting exposed to things that are dangerous. But the field, I think, is in a spot where
there's a lot of folks that are out there that are acting and implementing things that are untested. And I suspect a lot of them are really dangerous.
How do you imagine Therabot getting from the experimental phase into the widespread use phase? Yeah. So we want to essentially have at least one larger trial before we do this. It's a pretty decent-sized first trial for being a first trial, but it's not something that I would want to see out in the open just yet. We want to have continued oversight, make sure it's safe and effective.
But if it continues to demonstrate safety and effectiveness, this is the thing: why I got into this is to really have an impact on folks' lives. And this is one of those things that could scale really effective, personalized care in real ways. So yeah, we intend, if evidence continues to show that it's safe and effective, to bring this out into the open market.
The thing that I care about, in terms of the ways that we could do this, is trying to do it in ways that would be scalable. So we're considering a bunch of different pathways. Some of those would be delivered by philanthropy or nonprofit models. We're also considering
a strategy that's not for me to make money, but just to scale this under some kind of for-profit structure as well. But really, it's to try to get this out into the open so that folks could actually use it, because ultimately we'll need some kind of revenue to
be part of this, something that would essentially enable the servers to stay on and to scale it. And presumably you have to pay some amount of people to do some amount of supervision. Absolutely. Forever. Yeah. So in the real deployment setting, we hope to have essentially
decreasing levels of oversight relative to these trials, but not an absence of oversight. So, exactly: you're not going to stay up all night reading every message. Exactly. That won't be sustainable for the future, but we will have, like, flags for things that should be seen by humans and intervened upon. Let's talk about this
other domain you've worked in, in terms of technology and mental health, right? And so in addition to your work on Therabot, you've done a lot of work on, it seems like, basically diagnosis, monitoring people, essentially using mobile devices and wearables to track people's mental health, to predict outcomes. Tell me about your work there and the field there.
So essentially, it's trying to monitor folks in their free-living conditions, so like in their real life, through using technology.
So in ways that don't require burden. The starting point is, like, your phone is collecting data about you all the time. What if that data could make you less depressed? Yeah, exactly. What if we could use that data to know something about you so that we could actually intervene? And so, thinking about a lot of mental health symptoms, I think one of the challenges with them is that
they are not, like, all or nothing. The field, actually, I think gets this really wrong.
When you talk to anybody who has experience of a clinical problem, they have changes that happen pretty rapidly within their daily life. So they will have better moments and worse moments within a day. They'll have better and worse days. And it's not like it's always depressed or not depressed. It's these fluctuating states. And I think one of the things that's really
important about these types of things is, if we can monitor and predict those rapid changes, which I think we can, we have evidence that we can, then we can intervene upon the symptoms before they happen, in real time. So, like, trying to predict the ebbs and the flows of the symptoms. Not to say that I want somebody to never be able to be stressed in their life, but so that they can actually be more resilient and cope with it.
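[Editor's note: a minimal sketch of the kind of "ebb and flow" detection described here, not Jacobson's actual models. It flags days where a passively sensed signal, here step counts, deviates sharply from a person's own recent baseline; the window and threshold are illustrative assumptions.]

```python
# Hedged sketch: flag days whose step count deviates more than z_threshold
# standard deviations from that person's own preceding `window`-day baseline.
# A flagged day is the kind of event that could trigger a human review or
# a proactive intervention, rather than waiting for a sustained pattern.
from statistics import mean, stdev

def deviation_flags(daily_steps, window=7, z_threshold=2.0):
    """Return indices of days that deviate sharply from the rolling baseline."""
    flags = []
    for i in range(window, len(daily_steps)):
        baseline = daily_steps[i - window:i]
        mu, sigma = mean(baseline), stdev(baseline)
        if sigma > 0 and abs(daily_steps[i] - mu) / sigma > z_threshold:
            flags.append(i)
    return flags

# A week of ordinary activity followed by a sudden drop (e.g. not leaving home).
steps = [8000, 7500, 8200, 7900, 8100, 7800, 8000, 1200]
```

In a real system the inputs would be richer (sleep, GPS mobility, phone usage) and the models far more sophisticated, but the shape of the idea, a personal baseline plus deviation detection, is the same.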
What's the state of that art? Like, can you do that? Can somebody do that? Is there an app for that, as we used to say? Yeah. I mean, the science surrounding this is about 10 years old. We've done about 40 studies in this area across a broad range of symptoms. So anxiety, depression, post-traumatic stress disorder, schizophrenia,
bipolar disorder, eating disorders. So a lot of different types of clinical phenomena. And we can predict a lot of different things in ways that I think are really important. But to really move the needle, to make this into something with population-wide reach, I think the real thing that would be needed,
the ability to do this, is to pair it with an intervention that's dynamic. Something that actually has an ability to change and has, like, a boundless context of intervention.
So I'm going to actually loop you back. Like Therabot? That's exactly right. So these two things that have been distinct arms of my work are such natural complements to one another. Now, okay, let's come back to Therabot in this kind of setting. So give me the dream. So this is the dream. So you have Therabot, but instead of being like a psychologist that's completely unaware of what happens, reliant on the patient to tell them everything that's going on in their life. Yeah. Yeah.
All of a sudden, Therabot knows them. Knows, hey, they're not sleeping very well the past couple of days. They haven't left their home this week. And this is a big deviation from how they normally would live life. Like, these can be targets of intervention that don't wait for this to be some sustained pattern in their life that becomes entrenched and hard to change.
No, let's actually have that as part of the conversation, where we don't have to wait for someone to tell us that they didn't get out of bed. We kind of know that they haven't left their house. And we can actually make that part of the content of the intervention. So that's, I think,
this ability to intervene proactively in these risk moments, and not wait for folks to come to us and tell us every aspect of their life, which they may not even know. That's where I think there's a really powerful pairing of these two. I can see why that combination would be incredibly powerful and helpful. Do you worry at all about having that much
information, that much sort of personal information on so many dimensions, about people who are by definition vulnerable? Yeah. I mean, in some ways, the reality is that folks are already collecting a lot of this type of data on these same populations, and now we could put it to good use. Do I worry about
it falling into the wrong hands? Absolutely. I mean, we have really tight data security protocols surrounding all of this, to try to make sure that only folks that are established members of the team have any access to this data. So yeah, we are really concerned about it. If there was a breach or something like that, that could be hugely impactful, something that would be greatly worrying. We'll be back in a minute with The Lightning Round.
Okay, let's finish with the lightning round. Okay. On net, have smartphones made us happier or less happy? Less happy.
You think you could change that? You think you could make the net flip back the other way? I think that we need to meet people where they are. And so we're not, like, trying to keep folks on their phones, right? We're trying to actually start with where they are and intervene there, but, like, push them to go and experience life in a lot of ways. Yeah. Freud, overrated or underrated? Overrated.
Still? Okay. Who's the most underrated thinker in the history of psychology? Oh, my. I mean, to some degree, Skinner. Really, operant conditioning is, like, at the heart of most clinical phenomena that deal with emotions. And I think it's probably one of the most impactful. Like, it's so simple in some ways:
behavior is shaped by both benefits and drawbacks, essentially, so rewards and punishments. And these types of things,
the simplicity of it is so simple, but how meaningful it is in daily life is so profound. We still underrate it. I mean, the little bit I know about Skinner, I think of the black box, right? Like, don't worry about what's going on in somebody's mind. Just look at what's going on on the outside. Yeah, yeah, absolutely. With behavior. I mean, in a way, it sort of maps to your...
wearables, mobile devices thing, right? Like, just look, if you don't go outside, you get sad. So go outside. Sure, sure. Exactly. I am a behaviorist at heart. So this is part of how I view the world. I mean, I was actually thinking briefly before we talked, I wasn't going to bring it up, but since you brought it up, it's interesting to think, like the famous thing people say about Skinner is like, the mind is a black box, right? We don't know what's going on on the inside and don't worry about it. It makes me think of
the way large language models are black boxes, and even the people who build them don't understand how they work, right? Yeah, absolutely. I think psychologists in some ways are best suited to understand the behavior of large language models, because it's actually the science of behavior absent the ability to understand what's going on inside. Like, neuroscience is a natural complement, but in some ways a different,
a different lens through which you view the world. So, like, trying to understand a predictable system whose behavior is shaped, I actually think we're not so bad, in terms of folks able to take this on. What's your go-to karaoke song? Oh, Don't Stop Believin'. I am a big karaoke person too. Somebody just sent me just the vocal from Don't Stop Believin'. Ah, yeah, no, it's amazing. It's like a meme. It's amazing. It is.
What's one thing you've learned about yourself from a wearable device? One thing I would say: my ability to recognize when I've actually had a poor night's sleep or a good night's sleep has gotten much better over time. I think as humans, we're not very well calibrated to it, but as you actually start to wear them,
you become a better self-reporter. Actually, I sleep badly. I assume it's because I'm middle-aged.
I do most of the things you're supposed to do, but give me one tip for sleeping well. I get to sleep, but then I wake up in the middle of the night. Yeah, I think one of the things that a lot of people will do is they'll worry, particularly in bed, or use this as a time for thinking. So a lot of the effective strategies surrounding that are to try to actually
give yourself that same dedicated, unstructured time that you might otherwise experience in bed. You're telling me I should worry at 10 at night instead of three in the morning. If I say at 10 at night, okay, worry now, then I'll sleep through the night? There's literally evidence surrounding scheduling your worries out.
I love that. And during the day. And it does work. So, yeah, I've got some worries. I'm going to worry at 10 tonight. And I'll let you know tomorrow morning if it works. Just don't do it in bed. Yeah. Okay. Okay. If you had to build a chatbot based on one of the following fictional therapists or psychiatrists, which would it be? A, Jennifer Melfi from The Sopranos. B,
Dr. Krokowski from The Magic Mountain, C, Frasier from Frasier, or D, Hannibal Lecter? Oh, God. Okay. I would probably go with Frasier. Okay. Very different style of therapy than mine, but I think his demeanor is at least generally decent. So, yeah. And mostly appropriate with most of his clients, from what I remember of the show. Okay. It's a very thoughtful response to an absurd question. Yeah.
Anything else we should talk about? You've asked wonderful questions. One thing I will say, maybe for folks that might be listening: a lot of folks are already using generative AI for their mental health treatment. And so I'll give a recommendation, if folks are doing this already:
that they just treat it with the same level of concern they would have for the internet. There may be benefits they can get out of it. Awesome. Great. But just don't work on changing something within your daily life, particularly your behavior, based on what these models are doing, without some real thought on making sure that that is actually going to be a safe thing for you to do.
Nick Jacobson is an assistant professor at the Center for Technology and Behavioral Health at the Geisel School of Medicine at Dartmouth. Today's show was produced by Gabriel Hunter Chang. It was edited by Lydia Jean Cott and engineered by Sarah Bruguier. You can email us at problem at pushkin.fm. I'm Jacob Goldstein, and we'll be back next week with another episode of What's Your Problem?