Hey everyone, it's Daniel Barcay. Welcome to Your Undivided Attention. A little while ago, I was asked to host a panel for a very different audience than we usually speak to. It was part of a conference called Mating in the Metacrisis, organized by our dear friend Esther Perel, who's of course a famous psychotherapist, a New York Times bestselling author, and an expert on modern relationships in the age of AI.
We're in this room full of clinical psychologists who are there to find out how to help people whose relationship to AI, and often their new quote-unquote relationships with AI, are about to get deep, vulnerable, and complicated. In our discussion, MIT sociologist Dr. Sherry Turkle, maybe the world expert on technology, empathy, and ethics, argued that we each have this inner life that makes us uniquely human and that can never truly be nurtured by an AI.
And Hinge CEO Justin McLeod talked about how apps have changed the nature of how we form relationships, how his dating app is trying to get people off the app and into the real world on real dates, and how to use AI to do just that. So AI systems are quickly becoming more persuasive and more emotional, and they're competing for our intimacy.
As we relate more and more with our AI companions, how do we stay anchored in what makes us human? And how do we design our AI products to help us in our struggle to connect with each other? Not perfectly, but honestly. I hope you enjoy the episode. So as an engineer by training, I was thinking, like, what do I have to say to a room full of therapists?
As Esther said, this technology is rewiring all the ways that our social world works: how we meet people, how we have hard conversations, how we break up and grieve. As Marshall McLuhan said, "The medium is the message." What he meant is, the media through which we communicate determines the kind of messages that make it through. And the kind of messages that make it through determine the quality of the communication that it's possible to even have.
At our nonprofit, the Center for Humane Technology, we discuss the ways that technology affects not only relationships, but our institutions and our society as a whole. And we think about the incentives, that is, the financial pressures, the cultural dogmas and taboos, and the regulatory environment, and how those incentives end up determining the tech ecosystem that we get to live with. And into that environment step you all -- therapists, coaches --
dedicated to the subtlety of our internal lives, the delicacy of our bids for affection, the mess of miscommunication and all these unmet needs. Your job gets way more complex, because when are we failing to bridge each other's deep internal worlds? And when are we tangled in our technology, unable to even reach each other? Or is there even a real difference between those two anymore?
You know, Esther and I were talking about this and I showed her this comedy sketch, and she practically insisted that I put it in the talk. So, because a picture's worth a thousand words, let's take a look if we can cue the video. I've been trying to reach out to you all day. Are we on for tonight? Jeez. What? You can't catch me. You can't catch me. I'm Lance Moore. Touchdown, bitch. What? Pause. Oh, shoot. Keegan's been texting me. Sorry, dude. Missed your texts.
I assumed we'd meet at the bar. Whatever. I don't care. Sorry, dude. Missed your texts. I assumed we'd meet at the bar. Whatever. I don't care. Whatever. I don't care. What the fuck is his problem? Do you even want to hang out? Do you even want to hang out? Oh, let's consider it. Like I said, whatever. Like I said, whatever?
Fuck this guy! Jesus, you are fucking priceless. Aww. You're the one who's fucking priceless? This motherfucker right here. Oh, he wants to... Okay. You want to go right now? Guess I could do that. Okay. Okay, let's go! He said okay. Okay, let's go!
Alright, you know what? You know what? You wanna really do this now? Keegan, you nut. You're not putting me out. Fuck yeah, let's do it! Oh, you fucking asshole! First round's mine. Oh no! It's gonna be a fucking street fight! This son of a... No! Buddy! Like I said, first round's mine. A beer and a gimlet. For my partner, right? What's that?
I got you a baseball bat with nails in it. From my post-apocalyptic Jackie Robinson costume, how did you know? Okay, so the sketch we just watched came out in 2014. Yeah, and that was almost a decade after we all switched to text messaging as a primary way of communicating with each other.
You know, we were living with this problem for so long, and we didn't have the language to even discuss it. And what's sad to me is it's 2025 now, and that sketch is as funny and as relevant as it has ever been. We're still living with this problem. And of course, text messaging isn't even in the top 10 of the things that we did to ourselves this last decade. You know, as a technologist, I'm disappointed
because it really didn't have to be this way. But the way that we rolled out social media and the incentives of the attention economy produced this race to the bottom of the brainstem, where feed-based algorithms ended up amplifying the most addictive, outrage-filled, polemical, and narcissistic content to the top of our awareness and muted more subtle and complex perspectives,
where social media rewarded performativity and social signaling, and we all started speaking to our audiences instead of relating to each other, where micro-targeted personalization cast us all invisibly into different corners of the internet, unaware of what each other were even seeing, and it became hard to find common ground. And of course, this all shows up for you in the clinic, not only in your patients' relationships, but in the therapeutic one as well.
And all the while, we've been using this really stale vocabulary to discuss what was even happening. In the aughts, we were still talking about what cable television and soundbites did to erode our public discourse, but we should have been talking about filter bubbles. In the 20-teens, we were still discussing filter bubbles when we should have been discussing the attention economy. And right now, we're finally, finally able to discuss what the attention economy has done to all of us.
But what we should be doing is building the capacity and the vocabulary to talk about the next technological wave that's about to hit us, and that's AI. Now, in some sense, the AI conversation is everywhere, but it's largely empty calories, some mix of utopian dreams and dire prognostications. But what's being left out is a more subtle conversation.
You know, if the technology of the 20-teens was about capturing our attention, AI meets us at a much deeper level. It meets us emotionally and relationally. It's no longer just about broadcasting our thoughts, but about helping us shape those thoughts. We're rapidly entering a world where we're not communicating with each other through our machines,
we're relating to our machines that then communicate with each other, where AI plays the role of therapist, friend, confidant. Now, in that world, the incentives shift from the technology competing for our attention to competing for our affection, our intimacy. And we could build a future with this technology where it helps us build understanding and deepen our relationships with each other.
But that same technology can be used to replace our relationships, to degrade our ability to see across difference, or even just to confuse us about who we're actually talking to. My friends and coworkers now use AI to massage communication before it gets sent to colleagues. And all of this leaves us with a pretty profound ambiguity. Like, how much am I talking with a person versus a machine? What was actually intended by the person who sent this?
And how much might AI be covering up the real intentions of someone and replacing it with something more palatable?
Now, don't get me wrong, I'm not against AI. I'm quite frankly in awe of it, and I use it every day. But this is going to change our social world so much, it's going to rewire our social dynamics in ways that we're not prepared for, ways we're not even prepared to talk about. And the difference between a beautiful pro-social future with AI and a dystopian one is paper thin. And the key question is, can we build a choiceful relationship with AI?
Well, we like to say that awareness creates choice. And so in this session, we're going to try to push for more awareness about how technology has changed our relational fabric and how AI is increasingly playing a part in that relational fabric.
So I'd like to invite two people to the stage to join me in conversation. Dr. Sherry Turkle is an expert in how our inner world ends up colliding with our technological one. She's a professor at MIT, the founding director of the MIT Initiative on Technology and Self, and a licensed clinical psychologist. Welcome, Dr. Sherry Turkle. Thank you.
Justin McLeod is really on the front lines of how technology shapes relationships and the design choices that matter in building authentic human connection. He's the founder and CEO at Hinge, one of the world's most popular dating apps, helping millions of people find love and trying to build a more pro-social vision for technology. Welcome, Justin McLeod. Thank you. Thank you.
Okay, this is designed to be like a really informal conversation between all of us. I'm hoping we can roughly split it into two parts. One is talk a little bit about what technology already did to us and then transition into what technology is about to do to us with AI. And please feel free, like we should be talking with each other, so feel free to interrupt each other and everything like that. So Sherry, I want to start with you. When I think about your work,
Your work started so early. Like, I think about all the different ways that tech sort of ends up flattening human connection and the human experience. You know, in 2010, you wrote this quote, which I love, which is, we lose connection with each other by attempting to clean up messy relationships. Yes. Can you tell us what you meant and how you started noticing this? Yeah, well, I began studying technology and people really when I was a baby and took my first job at MIT in the late 70s.
And at that time, I noticed that people's instinct was to take a complicated situation and simplify it. The engineering instinct, when you're building an app, when you're building a product, is to take a complicated emotional context and to simplify it.
And that impulse to flatten and simplify and make things actionable was, at the time, something that engineers did and that I could study in a fairly circumscribed way. But as the world of engineering really became the world of everybody,
that has become really our cultural norm. So for example, in interviewing very shortly after that quotation, a young man said to me, "Conversation? I'll tell you what's wrong with conversation. It takes place in real time,
and you can't control what you're going to say." As opposed to texting, where he felt like a kind of master of the universe. And it's such a small thing, but as texting replaced talking, and as living more and more in online spaces became more appealing to people, the common thread through all of this
is that we make ourselves less vulnerable, less vulnerable to each other, less vulnerable to ourselves. And we'd rather do that now to the point that we don't just talk
to each other through machines, but we'd really rather just talk to the machines, which is the point that you made in the beginning. You know, first we talk to each other, then we talk to each other through machines, and now you can skip all these middlemen and you can just talk to basically yourself. You can talk to the machine. Yeah. And Justin, I want to come to you here because, you know, in some sense,
you're on the other side of this in that you're helping people cut through the noise of the real world in order to build more real, vulnerable, authentic human connections, right? I mean, you build this app in order to help people actually engage with each other and get over this hump. Can you talk about what it looks like from the front lines? Like, how do you design for more pro-social technology?
Yeah, well, that was what I was doing when I started Hinge originally in 2011, but then I really pivoted the mission in 2015, 2016, when it was clear -- and it's very much what you were just saying -- that the model for maximum engagement was to flatten people to a single photo, to flatten an engagement with someone to a left-or-right kind of swipe.
And I recognized that that can lead to a lot of fun and engagement, but if you really want to find your person, if you want to form a deep connection with someone, it actually requires a fair amount more vulnerability than that. You have to share more about yourself, you have to draw more out of people, you have to put yourself out there when you like someone. I think it just wasn't serving people who were really looking for their person and really looking for deep connection. And so what I was trying to do was be in the world and meet people where they were,
and at the same time make it really effective. I kind of equate this to junk food, right? It's really easy to go right to the bottom of our brain stems on junk food as well and feed people just salt, fat, and sugar, and they're going to go after that, and you can maybe make a lot of money doing that, but eventually people get burned out, they feel unhealthy, they feel like they can't continue to function.
And we have to start creating experiences that are both palatable and healthy, so that people can get their needs met. And that's how I think about responsible tech design in this world. And so what are a few ways that you have made active choices to do that? For an audience that may not be familiar with these kinds of design choices, like...
Yeah, I mean, having people actually look at an entire profile before making a decision, having profiles consist of many photos and also prompts, and making sure those prompts are designed and asked in a way that actually draws information out of you. We even learned, within the world of prompts, that you could ask someone, what's your go-to karaoke song? And that doesn't require a lot of vulnerability. It also doesn't lead to a lot of dates. Like, no one cares what your fucking, you know, favorite karaoke song is.
And then you can ask people super vulnerable questions that no one's really willing to answer. And you have to find that sweet spot of, what are people willing to put out there about themselves? And also, what is really useful information for someone to actually make an assessment and decide whether they want to go on a date with you? So it's all these kinds of little micro-decisions. Even liking someone: we don't let you just say yes, I like you. You have to actually choose something about them, engage, and it forces you to put yourself out there.
Okay, so zooming out a bit, can we name some of the ways that our human relationships have changed over the last 10 years through the use of these kinds of technologies? People don't want to talk to each other. I mean, what I'm studying now is people who essentially talk to their chatbots,
using the world of generative AI for what I call artificial intimacy, that is really trying to substitute intimacy with an artificial intelligence for intimacy with a person. Artificial intimacy also includes so many of the things we do on Facebook, so many of the things we do on social media, but I'm really focusing in on kind of an endpoint that's very dark
where really you say, "If I'm looking for less vulnerability, I'm going to go to something that has no business criticizing me because it's not a person." And of course, these products are designed to keep you engaged, to keep you with them, and therefore to be always on your side. So if you sign up for something like Replika,
you're being told yes, yes, and yes, and yes. If you ask GPT, I'm giving a panel today, I'm a little nervous, it says, you go girl, you go Sherry, I've got your back, are you hydrated? I mean, you've all had this experience.
And I think that the way we're being changed is, number one, to start thinking that human relationships need to measure up to what machines can offer. Because more and more in my interviews, what I find is that people begin to measure their human relationships against a standard of what the machine can deliver. And I think that's really the, you know, that's really my kind of...
fear, and also what I think it's not too late to kind of organize against, because we have a lot more to offer than what a dialogue with a machine can offer. And you wrote, I think the quote was, "Products are compelling and profitable when the technological affordances meet a human vulnerability."
No, that's exactly right. Products are successful when the technological affordance, that means something that technology can do, meets a human vulnerability. And the reason I'm really glad you brought up that quote is I was at a meeting and I met the CEO of Replika, a lovely woman, a very sophisticated woman, who,
you know, really has the largest company making chatbots that say, "I love you, let's have sex, let's be best friends forever, here I am for you." And she said that she gave that quote out as T-shirts at her company.
technological affordance meets human vulnerability. And why did she do that? She did -- and, you know, it said Sherry Turkle. She wasn't trying to take credit for my cleverness. She did it because, she says, "That's my business." "That's my business"
is to take a human vulnerability, which is to want a lover who's always there for you, 24/7, day and night, and to meet it with a technological affordance, their ability to be there when I'm lonely at 3:00 in the morning. Yeah.
I think that brings up a really important point, because it's not that the creators of these technologies are nefarious, evil people. They're on a mission to do something great. There are people who are lonely out there who have no one to talk to, and they really struggle to find a relationship. Why wouldn't we build an AI companion for this person? And sometimes that can be a bit of a hard argument to go against, but I think
there really is something lost when we have this kind of reductionist, mechanistic view of human relationships. That's a very self-oriented view of a relationship. A relationship is there to serve me. It is there to be there for me. It is there to say what I need it to say to me. That is a...
very reductionist view, I would argue, of a human relationship. And a human relationship is also so much about what you do for the other person. It's the risk involved and the vulnerability and the nuance involved in the possibility of getting rejected, the possibility of doing something that takes a risk.
And there's something that's unfortunately lost there. And this is where we have to develop a real sense of values and wisdom, because if we just go wherever the market's going to take us as builders of technology, it will take us into all kinds of dark and crazy places, as we've seen over the last 20 years.
We are navigating a tremendous amount of uncertainty. You guys are navigating it as clinicians. We're navigating it as builders of technology. And it's absolutely essential that we develop real wisdom to be able to look at this stuff prospectively and understand how to guide our choices. Because if we wait, you know, Jonathan Haidt's book is out now, The Anxious Generation, which has now been on the bestseller list for a long time, to tell you something that
I just think should have been obvious to anyone who just has a basic intuition and watches children use these devices, or watches ourselves use these devices. Why did we have to have lots of clinical studies and a long book written to tell me that if I stare at a screen my entire day and stop interacting with my friends, that's going to cause mental health issues? I just want to hit one more point while we're here about affordances, which is, Justin,
the dating apps provided this affordance. I think part of why they were so transformative to the world is you had a lot of people, I'd say myself included, who weren't comfortable approaching people for fear of imposing. And you suddenly created this affordance where you knew at some level that somebody was open to that. And so we created this affordance of the match, the concept of the match, right?
We rolled it out across society, and I have to admit I'm sort of ambivalent because on one hand it allowed a whole new class of people to feel comfortable approaching each other. On the other hand, it kind of degraded the real world. Like it turned approaching someone in the real world into like more of an aggressive act. And so creating the affordance in the technology layer also kind of removed the affordance from bars and restaurants and the rest of the world and kind of detrained us on how to deal with interest. What do you think about that? Do you agree with that?
I think there's definitely nuance there, and to some degree what you're saying I think is true. I think we have to look at, on balance, is this giving more benefit? For most people, they really struggled to find someone in the real world. It was just hard to meet people, and
that's why I created the app in the first place. Do people feel maybe less comfortable trying to go out to meet someone in the real world? Yes, but we're only the first step in a relationship. A relationship ideally lasts
months, years, decades; we are that very first interaction. And so I just think it's so much less about how you meet somebody. It's everything that comes after that. And I just want to be clear, I'm not trying to demonize this, but to show some of the complexity of what happens as you move some of these interactions online. Well, you know, it's interesting you bring up these issues of spaces, because when I ask professionals,
and also technologists, why they're so excited about generative AI possibilities, they say, there's an epidemic of loneliness, and generative AI will solve this. But when you look at this epidemic of loneliness, and you talk to people who say they're lonely and feel that only talking to ChatGPT can help, what you find is that
they don't have in their communities the garden clubs, the cafes, the choral society, the teen club. All of those things are being... It's like what Bob Putnam in Bowling Alone wrote about, the stripping away in American life. Which came out in 2000. Yes. So, I mean, well before social networks and smartphones and everything else. So I think that the question is that we are too
quick to say, oh, well, the problem is loneliness, let's fill in with a lot of talking to machines, when I really think that we could have excellent dating apps and also really reinvest ourselves in the face-to-face places where people can meet. I think that we've created...
Thank you. Thank you. This point is really worth supporting. The senior center closed down, the teen center closed down, all of these resources that used to be there closed down. So I think those of us who really see that -- life doesn't have to mean turning off every app, but it also can't mean not caring about the world we live in.
And I 100% agree, we need to be spending much more time, dating or otherwise, just meeting people in real life, being engaged in these other spaces. But it's this kind of interesting balance, because it's not that someone from up high started shutting down all these spaces. People withdrew from these spaces. Bowling Alone was actually about television: people watched too much television, and then they weren't going out anymore. Well, that was when
we didn't even know what was coming, right? We have way more engaging platforms that are now with us all the time, so we're continuing to withdraw. So it has to be this balance. We have to re-inspire people. People who are realizing that they're getting burned out doom-scrolling all day will soon feel burned out chatting with an infinitely praising chatbot all day, realizing, like, this is kind of empty, I feel like something's missing from my life. And so building that kind of social wellness
space, whether it's apps or third spaces or whatever, we have to start inspiring people to put down their phones. We can't just tell them, like, stop using your phone. Okay, so this is probably as good a time as any to switch over into talking about AI in earnest.
There was a New York Times story recently where a woman ended up essentially jailbreaking ChatGPT into building a boyfriend, and now says that she's in love with this digital boyfriend. Or, more tragically, there's the case of Sewell Setzer, who unfortunately killed himself after what was arguably emotional abuse from a Character.AI chatbot. I use these examples only to say that we're certainly in the age of
human-machine relationships now, like it or not. And I want to ask, I guess, kind of a broad question to begin. What is happening right now with AI companionship, and what is it doing to us? Well, this is my day job, to study what's happening with AI companionship. So let me just say a few words about that. These are not isolated cases, because people feel alone and want somebody to talk to. And the thing is, there's a big sign
when you form your -- make your companion. You know, "I want to talk to" -- now I'm going to show you who I am -- "I want to talk to Mr. Darcy, but I want him to be a sort of contemporary, sort of 70-year-old New Yorker. Can we do that?" "Absolutely," it will say. And there, you know, up springs sort of a hip New Yorker who sort of sounds like Mr. Darcy.
And as I create this guy, in quotes, there's a big flashing sign that says, Mr. Darcy is not real.
Mr. Darcy is not real. This has no effect. This has no... Just to interrupt you, it was worse than that, because it was a little sign saying nothing is real, and as soon as you started a conversation, it went away. Right, right. And that was a big deal. They've since changed that. Right. I was trying to give them a little bit of the benefit of the doubt. The point is that ever since the first ELIZA program, you know, where you said,
I'm feeling angry, and it said, I hear you saying that you're feeling angry. And you said, you know what, my mother's really bothering me, and it said back to you, oh, I feel that there's some anger towards your mother. It was like a parlor game. I mean, ever since that, the inventor of that program, Joseph Weizenbaum, was amazed, because he had invented a parlor game, and his students and his assistant wanted to be alone with it and talk with it.
So the desire to anthropomorphize and to make these artificial creatures into more than they are is deeply rooted in us. And having something flash, something flash and go away, this is not going to stop our desire and
our way of connecting to them. So we have to kind of get smart about this. I have three principles that I came with. Sure, yeah. Three principles about how to approach this. Let me set this up just for a second. Yeah, set this up. It's not just about AI or not AI. It's a space of design. Yes. Right? And this is, I think, what unites all three of us on this stage. The future we get
is based on how we design this technology. And if we design it incorrectly, we end up with this very dystopian place, right? And if we design it well, we get a beautiful pro-social future. And I think what you're trying to lay out is the principles that get us there. Because I really am in so many meetings, and I teach at MIT, so I'm at meetings on making AI for human thriving. Well, it's an app.
It's always, you know, everybody's trying to make the app that will create human thriving. That's the kind of end game. So I decided that I would propose three principles
that value the notion that what we're trying to do is respect human interiority, respect the fact that we should grow ourselves kind of from within. So my first is existential. I say children should not be the consumers of this relational AI. Children do not -- You can clap. That's very good. That's a very good point.
Children do not come into the world with empathy, the ability to relate, or an organized internal world. They're developing those things, and as they work on this, children learn from what they see, from what they can relate to. In dialogue with AI, they learn what the AI can offer.
And the AI can't offer the basic things we learn from friendship, that love and hate and envy and generosity are all mixed together. And to successfully navigate life, you have to swim in those waters. AI does not swim in those waters. So this kind of not for the babies is really, I consider it existential.
My second is: apply a litmus test to AI applications. And I've already mentioned this: does the AI enhance the inner life, or does it inhibit inner growth? So if you consider all these chatbots, so much of whether love leads to a sustaining relationship
depends on what happens to you as you love. Do chatbots prepare us for the give and take of real relationships, or do they teach us to expect a friend with no desires of their own? Do you grow as a partner, able to listen to another person and honestly share yourself?
The point in loving, one might say, is this internal work. And there is no internal work if you're alone in a relationship with a chatbot. Now, a user might feel good, but the relationship is ultimately an escape from the vulnerability in human connection. And what kind of intimacy can you have without vulnerability?
And then, just finally, the third principle, one line: don't make products that pretend to be a person. You know, as soon as it says, I love you, I'm here for you, I, I, I -- you've given away the game. You can make plenty of wonderful products without having them say, oh, and I love you, you don't need anybody else, I'm here for you. Those are my three principles.
No, those are great. I'm sorry, I wasn't trying to stop you. Just implement them.
It's not so easy, not so easy. Go forth from this place, but not so easy, not so fast. And I would argue, you know, I think we spend a lot of time talking about the downsides of AI. There's also lots of tremendous upside and opportunity, and there's a lot that's going to be coming. But, I mean, I don't want AIs pretending to be humans. It'll put me out of business. So I need people wanting to meet up with each other in the real world and having relationships.
But there are real, interesting opportunities for us, I think, to increase intimacy. Just to give you one example, at Hinge we're thinking about how AI can help move us closer and closer to a vision that's much more like a personal matchmaker.
Our competitive advantage is that you spend less time on the app and more time out on great dates, and we're efficient. But I think we could improve that by an order of magnitude. We would love for you to just spend a little bit of time giving us a little bit more nuance and understanding of who you are, so we can set you up on really great dates with very high confidence, and you can get off our app even faster and spend much less time engaging with it. Okay, so there are two sorts of things that you're doing. One of them is like AI writing coaches, right?
and the other is AI matchmaking. For the AI coaching, I sort of worry that it flattens things and gets in the way of understanding the difference between people. Ultimately, part of the dating game is choosing the right people, right? Choosing the people you want to be with. And I'm sort of worried we're going to enter a world, as I said in the intro, where AI begins to flatten the difference between all of us, begins to massage all of our writing to the point where it starts to feel largely the same. How do you prevent that?
So much to say about that. It's not good business for us if you feel misrepresented on an app, and you show up on a date, and you're like, this isn't the person that I thought I was having witty banter with, who wrote this amazing profile. So that wouldn't be a winner for us. What I see much more is
that the difference in whether you do well on Hinge or not can be not whether you're a great person and a good catch, but, are you good at online dating? I know a lot of people who are phenomenal human beings, and then they're like, hey, will you take a look at my Hinge profile? And it's not a good representation of who they are.
And so what we want to do is help people who are really struggling. It's not that they don't have a lot to say, they just don't feel comfortable saying it yet. And we have a real boundary around exactly what you're saying: we don't put words in people's mouths. One thing we just released was prompt feedback. So when you're writing an answer to one of those prompts, maybe you put two words there. What we'll say to you is, hey, that's probably unlikely to
lead to a date, can you say more about that? That's really all we say. It's like a good therapist: can you say more about that? We're not trying to tell you what the answer should be or anything like that, we're just trying to nudge you along to be a bit more specific, a bit more verbose than most people are comfortable being, so that they show more of themselves. We're just trying to give them that permission to do that.
And the idea that we can deliver really effective help and coaching to someone at the right moment, at the right time, with the right piece of advice is just a really effective version of coaching. And I consider it no different than people reading
books on dating and on relationships. It's just that. It's just, how do we help give resources to people who maybe don't have as many resources to find their person? And I know we're almost out of time, but maybe to pull that into one last question for Sherry, which is, you often say that these AI chatbots can't help us engage with our inner world.
But I think I agree with what you just said, Justin, in that a lot of the leadership development frameworks I end up using are rather mechanistic. And having somebody pull, you know, polarities, immunity to change, some of the frameworks that you all use in your office to help people get perspective, I imagine could be helped
by a chatbot, even if it's just a chatbot. I make a distinction between the kind of coaching, the kind of dialogue that you're talking about, that kind of permission from an expert program to express yourself, and forming a relationship with a chatbot as though it were another person,
in which you are in that dance of recognition, of mutual recognition, that for me is the basis not just of intimacy but of therapeutic intimacy, and of a kind of transference and relational help that will really lead to inner change. I mean, that's why I focus on inner structure and the inner life
that I am trained to believe, and have had the experience of believing, is so powerful. So that's why I think it's important not to say, "Oh my God, generative AI is bad." This kind of application can be integrated into my way of thinking. You know, I came here today wanting to make sure that I said the phrase "the inner life" many times,
because I think essentially that's our competitive advantage: we experience it, we believe in it, and we believe in what happens when inner lives meet. And we know that resiliency comes not from an app, but from building our inner life, our inner structure. And I think that the human ability, the human capacity to
have and nurture this inner life is really what technology, the fanciest chatbot, the most extraordinary, most Turing-test-beating thing, is never going to have. It's not a person. It is not your person. And I think that holding... And so, so many people are going to come tell you that we have an app to do what you do. And you have to keep thinking,
I have an inner life, my patients have an inner life. That is what is not going to be honored in this new stuff, no matter how glamorous and glittery and cheap and good for getting people to do a kind of cognitive behavioral thing. God bless it. And I just think...
Holding that in mind kind of keeps me going, because I'm surrounded by people who don't care about it. And that is true, and get prepared, because the next three years are going to be wild. You think people are falling in love with ChatGPT, which is like a text-based back and forth, or like a kind of mildly good voice. There's voice and video stuff that is going to be coming in the next 12 to 18 months that will blow your mind. And so it is going to be hard for people to
keep that in mind. Like, you had a hard time even when you were chatting with just a text bot that's saying, this is not a person, and you're like, but it feels like a person, like this guy. Wait until -- wait for what's coming. And there are people that believe that these things are even going to have an inner life. Some people believe that this technology will become conscious. I don't believe that, but some people do, and it's going to be really dicey. So if I just summarize a few things that got said:
The first one is what I led with at the top, which is awareness. A lot of these things are what are called cognitively transparent: if you know what's being done to you, it doesn't have the same effect. And so if you know that your AI chatbot has no inner life, to a certain extent it has much less effect when it says, oh, that's an amazing question.
And so the awareness is key. Two, we have to stop the races to the bottom. And that's done through regulation at the local, state, and federal levels. It's done also with that cultural awareness of it being unacceptable, which changes the game for the apps. And lastly, it's about product designers, Justin, such as yourself, really understanding and internalizing the designs that build a more humane future versus the designs that addict us, distract us, and take us away from our humanity.
And so that's why I've been so glad to have you two here. And I really appreciate your work. Thank you.
Your Undivided Attention is produced by the Center for Humane Technology, a nonprofit working to catalyze a humane future. Our senior producer is Julia Scott. Josh Lash is our researcher and producer. And our executive producer is Sasha Fegan. Mixing on this episode by Jeff Sudakin. Original music by Ryan and Hays Holladay. And a special thanks to the whole Center for Humane Technology team for making this podcast possible. You can find show notes, transcripts, and much more at humanetech.com.
And if you liked the podcast, we'd be grateful if you could rate it on Apple Podcasts, because it helps other people find the show. And if you made it all the way here, thank you for giving us your undivided attention.