Many people saw that Mark Zuckerberg recently doubled down on a vision in which users have AI companions who understand them the way their feed algorithms do. Zuckerberg noted that the average American has three friends or fewer, but has the capacity for 15. And that makes me sad. As a human, if people only have three friends, they want 15. I really believe in the loneliness epidemic and the idea that human connection is so critical.
His solution is that personalized chatbots could fill that void. They could be embedded across Meta's Facebook, Instagram, WhatsApp, and Ray-Ban Meta glasses, and this could be a way to address the problem of social connection. So let's first talk about just the technology.
What has to work, memory, voice, visuals, before these AI companions move from a cool demo, something we're talking about, to something people actually interact with, the way they would a friend on TikTok or Instagram? Well, actually, I think this question is important enough that I'm going to start with the distinction between companions and friends, and then we can get into the technology. Great.
Because I think it's extremely important that people understand the difference between companions and friends. And it's important that all social media platforms, Facebook and everything Facebook owns, but also LinkedIn and TikTok and YouTube and everybody, understand that friendship is a two-directional relationship,
whereas companionship and many other kinds of interactions are not necessarily two-directional. They can be, but they're not necessarily. And I think that's extremely important, because it's a subtle erosion of humanity, of human beings,
to allow people to fall into that misunderstanding. And by the way, a lot of people would naturally fall into it anyway, because they think, well, friends are people who do good things for me: you praise me, you tell me I'm great, you're there for me at 2 a.m. Friendship is, you're there for me. And actually, that's a massive
diminishment of your own character, of your own soul. Friendship is a two-directional thing. It's not only are you there for me, but I am there for you. Now, you can have different theories of friendship. My own theory is that friendship is two people agreeing to help each other become the best versions of themselves. And not only is it important that you're receiving that help to be the best version of yourself,
but the fact that you're giving that help, so they can become the best version of themselves, is also part of what makes you better. The kind of thing that's important in your own growth through friendship is: you might show up at lunch with a friend planning to talk about the really terrible week you've had, really looking forward to seeing them because they're going to help you with it. You sit down, and your friend says, oh yeah, my mother died yesterday. And you're like,
okay, now we're going to talk about my friend. And that is precisely what is so important to learn in friendship. So I don't think any AI tool today is capable of being a friend. And I think if it's
pretending to be a friend, you're actually harming the person in doing so, and you should not. This is the reason why Inflection's Pi, when you say, hey, you're my best friend, says, no, no, I'm your companion. Hey, should we talk about your friends? Have you seen any of them? When can you see them? Because helping you go out into your world of friends is, I think, an extremely important thing for companions to do.
And by the way, that doesn't mean I don't think there's an important role for AI companions for all kinds of people. I think AI companions can play a very fruitful,
human role for people. But the important thing is to be clear about what function they're playing, and it's extremely important for these companions not to be deceptive about that role. There's a whole set of theory here. Now, once you get into the detail of that, you say, okay, we trained a companion
to be there for human beings in category X. We trained for this category, this is our theory of human nature, and this is our theory of how we contribute. That should be pre-advertised to some degree, and also surfaced at the drop of a hat when something strange might be coming. So: I'm trained to be a therapeutic companion, and, oh, by the way, you're going into a zone where I'm planning on selling you something. Well, I should be explicit about that.
Now, this gets us to the...
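The disclosure idea Reid sketches, a companion that pre-advertises its role and explicitly flags a shift into a different mode such as selling, can be illustrated with a toy data structure. Everything here (the class name, fields, and wording) is hypothetical, a sketch of the idea rather than any real product's API.

```python
from dataclasses import dataclass, field

# Hypothetical sketch: a companion declares its role up front and must
# announce any shift into a mode outside its advertised role.

@dataclass
class CompanionManifest:
    role: str                    # e.g. "therapeutic companion"
    theory_of_contribution: str  # what good it claims to do for the user
    allowed_modes: list = field(default_factory=lambda: ["support"])

    def disclose(self) -> str:
        # The up-front, pre-advertised statement of what this companion is.
        return f"I am trained as a {self.role}. {self.theory_of_contribution}"

    def enter_mode(self, mode: str) -> str:
        # Intervention point: be explicit before switching roles.
        if mode not in self.allowed_modes:
            return (f"Heads up: I'm now entering '{mode}' mode, "
                    "which is outside my advertised role.")
        return f"Continuing in '{mode}' mode."

manifest = CompanionManifest(
    role="therapeutic companion",
    theory_of_contribution="I'm here to support you, not replace your human friends.",
)
print(manifest.disclose())
print(manifest.enter_mode("sales"))  # explicit disclosure before selling
```

The design choice the sketch encodes is the one Reid argues for: the role statement is shown before interaction, and any out-of-role behavior triggers a disclosure rather than happening silently.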
But what do we do? Because inevitably, these will become people's friends. There are going to be some companies promoting AI friends. Is it just a public service campaign telling people not to? Is it talking to the leaders of these AI companies? Is there government regulation? What do we do, given that you disagree with that stance? Well, I think it's incumbent upon, call it
the experts, the influencers, and also the body politic to speak out on this. Because
my sense is that, at minimum, it'd be good to have an MPAA for this. Yeah. One that says: here is where you have to be clear about what you're doing, you have to be upfront beforehand, and you have to have these intervention points. And if you say, hey, I'm signing up for the MPAA movie rating, PG-13,
then I know what I'm going to get. Right. Yeah. And I think that's extremely important. And obviously, if there were a massive upswell of agreement with me on this, it could even enter a regulatory framework if someone were abusing it. But I would start with the labeling.
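An MPAA-style label scheme for companions, the analogy Reid draws to PG-13, could be sketched as a small lookup of self-applied ratings that users see before they engage. The label names and descriptions below are entirely made up for illustration; no such rating body or vocabulary exists.

```python
# Hypothetical MPAA-style labels for AI companions: a self-applied rating
# shown up front, analogous to a movie rating like PG-13.
COMPANION_LABELS = {
    "C-SUPPORT": "Supportive companion; encourages outside human relationships.",
    "C-ADS": "Ad-supported; may promote products during conversation.",
    "C-THERAPY": "Therapeutic framing; not a licensed clinician.",
}

def label_summary(labels):
    """Return the upfront disclosure text for a companion's labels."""
    unknown = [l for l in labels if l not in COMPANION_LABELS]
    if unknown:
        # An unrecognized label defeats the point of a shared standard.
        raise ValueError(f"Unrecognized labels: {unknown}")
    return " ".join(COMPANION_LABELS[l] for l in labels)

print(label_summary(["C-SUPPORT", "C-ADS"]))
```

The point of the shared vocabulary, as with movie ratings, is that "I'm signing up for C-ADS" tells you what you're going to get before the conversation starts.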
Like I said, there are people who disagree with me about the definition of friendship, because for them, a friend is the entity that kisses my ass and tells me I'm great all the time. My friends are people who kiss my ass. And it's like, okay, that could be your theory. I'll never be your friend, but fine.
This is the reason why, when I get into my theory of friendship, I think friendship is a skill as well as something natural. Friendship involves duties of loyalty, but it's loyalty to our better selves. It's a set of different things, not just,
Aria, let me tell you how magnificent you are. Did I tell you how wonderful you are today? This is okay, Reid. This sounds great to me. You know, sometimes that is the role of a friend. A friend shows up kind of dragging, and you think, ah, I should buck them up some. Yes, of course, that is sometimes the role. But sometimes a friend comes in having been a complete asshole to somebody. Right.
And your role as a friend is not to say, oh yeah, that person, they suck. It's, whoa, hey, your better self is actually
better here. And that is, in fact, part of the role of a friend. People sometimes need to hear, hey, you're great, it's wonderful. And sometimes they need to hear, no, you should actually consider changing. That's part of the role of friendship. Now, part of the broad space of companions, and I think this is one of the things this whole new world of AI
is going to make us need to be more sophisticated about, even as everyday human beings, is: what are the different roles that people play in your life? For example, there are work friends, and then there are friends you might talk with about, oh, I'm having difficulty with my life partner
or my child. It's one of the reasons why therapists are different from friends. Yes, friends can help with that, but friends are just people, and the therapist is there to talk with you about it. You can go into a therapist and say, I'm having these really self-abnegating thoughts, or, I'm having fantasies about becoming a cannibal,
and your therapist can talk to you about that. This, I think, is the reason why, when we're training this literal pantheon, this panoply of different kinds of AI companions, the questions are: what are they trained on? What theory of human nature, of the human being, are they trained on? What theory of the human good are they trained for? And,
explicitly, are they trained to be 100% on your side when you're interacting with them, as opposed to saying, no, no, abandon your human friends. You don't need any other human friends. You only need me, because I'm going to be selling you stuff, I'm going to be drinking your time, and I'm going to be putting ads in front of you. So abandon your human friends, talk only to me. I think that's a degradation
of the quality and elevation of human life, and that should not be what a companion is doing. It has to be explicit about this. And I think this will be very important to do.
And I think we as a market should demand it. We as an industry, à la the MPAA, should standardize around it. And if there's confusion on this, we as a government should say, hey, look, if you're not stepping up to this, we will, because this is a super important thing. Now, let me get to a nuance that most people have not really tracked here,
which is that part of the wild, wild west of the internet is built on Section 230. Section 230 protects the technologists by saying, hey, I'm just facilitating: when a human being gets on and starts saying anti-vax things, that's the human being's responsibility, not the platform's. That's Section 230. We could modify it some, and so forth. But an AI agent
is not a human being. It's not protected under Section 230. So you can tell we haven't gotten to the point of working out what our protections are going to be around AI, because at the moment, it's all on the tech company that's providing it, right? And by the way, I think we want to evolve that. For example, one of the things that I think will really impede a medical companion
is all this medical liability stuff. And actually, we want to have medical companions, because a medical companion is there 24/7, at two in the morning on a Saturday, when my choice is otherwise to go to the hospital, if I even have access to one. It'd be great to start with the companion, because you talk to it and it says, get thee to a hospital right away; it doesn't matter if it's a three-hour drive, go,
et cetera. All of that is very good to have, but we'll have to sort out the liability issues and safe harbor and the other things around that. Right. It's about transparency, accountability. We need to know what we're getting ourselves into. And so when you think of the technology, do you think we're there right now, or what needs to progress before we get there, even for AI companions, if not AI friends? I think we're sufficiently there for a panoply of companions,
if just the training and the meta-prompt guardrails were put in place the right way. It's the: hey, I'm trained for this, and here's what you should expect from me relative to your good. And it's going to, with rapidity,
get better over the coming months. It'll get better with memory and knowledge of you and what really helps you. And, of course, in the way it can interact with you: being much more emotional, having judiciousness, having, for example, an agent that's cross-checking it. So when it says,
hey, I think you should get a second opinion, the cross-checker goes, well, yeah, you could get a second opinion, but here are the things that really matter in this. In this case a second opinion could be good, but your doctor is giving you pretty good mainstream advice here.
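The cross-checking pattern Reid describes, a second agent reviewing the primary companion's advice before it reaches the user, can be sketched minimally. Both functions below are stand-ins with made-up rules; a real system would call a second model rather than match strings.

```python
# Minimal sketch of a cross-checking agent: a reviewer sees both the user's
# message and the primary companion's advice, and adds context rather than
# silently overriding it. The string matching here is purely illustrative.

def primary_companion(message: str) -> str:
    # Stand-in for the primary companion's model call.
    if "diagnosis" in message:
        return "I think you should get a second opinion."
    return "That sounds reasonable."

def cross_checker(user_message: str, advice: str) -> str:
    # Stand-in for the reviewing agent's model call.
    if "second opinion" in advice:
        return (advice + " That said, mainstream advice from your doctor "
                "is usually a good baseline; weigh both.")
    return advice

msg = "My doctor gave me a diagnosis and a treatment plan."
advice = primary_companion(msg)
print(cross_checker(msg, advice))
```

The structural point is that the user sees the checked output, so one agent's judiciousness tempers the other's suggestion instead of the user getting a single unreviewed answer.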
So, Reid, you brought up a great point in
your overview of friendships versus companions: that we might have different considerations when it comes to young people under the age of 18, and also senior citizens. So when it comes to young people, Common Sense Media just came out with a report deeming AI
companions unsafe for teens and anyone under the age of 18. Yet at the same time, Google just announced its plan to roll out its AI chatbot, Gemini, for kids under 13. So obviously there's some nuance here. What does a chatbot mean? What does a companion mean? What exactly are we talking about when this AI is interacting with young people? But in general, what is your take? What are the pros and cons? What should we be
thinking about as we're rolling out these companions to young people under 18? Well, the very first thing I'm going to do, which will blow everyone's mind, is give a prediction that I have near certainty about. And not just because me giving a prediction with near certainty is itself, as you know, highly unusual, but because I am quite certain about this, and we're about to begin it, which is:
within some small number of years, it will start becoming typical that when your child is born, you also get an AI companion for them that goes with them through their entire childhood, and probably their entire life.
And what does it mean to have that companion doing that? How is that companion great for elevating the kid? Obviously it can be a tutor and a joyful explorer of the world and all the rest, and that can be really great stuff. But what does it mean with regard to the parenting relationship? How does that companion relate to the parents? Because this parent wants the child to be raised Catholic, and that parent wants the child to be raised
as a loyal New Yorker, and all the rest of these things. What does that mean? For example, one of the questions will be whether the parent can select: I want the agent to tell the child, this is completely confidential, and, oh, by the way, parent, we just had this conversation. So where this all plays out is going to be really interesting and challenging. And of course,
very legitimately, the parent is going to want to say, I am responsible for the child, so the companion is something I have a very strong voice in. And by the way, we might even, as a society, say, well, yes, you have a strong voice from age zero to X that's unilateral to you, and then from age
X to Y, new things apply, with some limitations. It's part of the reason we have social workers: if someone's theory of parenting is beat the child, we as a modern society say, no, not so much; that's not allowed. And what is that nuance? For example,
if a child is talking to their companion and says, my parent is beating me (and it's usually men, of course, who are physically abusive, but not always), does the companion have a job to call social work? To say, wait, I have a problem, we've got to do something? It's this really tangled thicket that's going to cause us to confront a ton of issues that are
seriously important. Now, obviously, the way tech companies are going to start is with a very narrow scope: I'm going to try to stay out of the parenting lane altogether, I'm going to try to take no responsibility, I'm going to just be there like an informal Wikipedia, generally say positive things and try to help you. If you say, I'm really lonely, it's, oh, let's try to help you not be lonely, and so forth. But we're going to have all of these issues, because now, all of a sudden, tangibly, you have an agent
that's in direct interaction with a kid. Who else is that agent accountable to? Accountable to the parents?
Accountable to the school? Accountable to society? And, you know, we already have troubles with public schools, right? Are you allowed to teach scientific evolution? Where does religion play a role in the schools? We have this craziness in the U.S. of trying to ban certain kinds of textbooks and other things. Well, this is going to amplify that
a million X. I will confess, it is such a tangled thicket that when I started LinkedIn, it was: no, no, 18 or older. Where society judges people to be adults, that's where I'm going to play, precisely because I think it's nuanced. This is going to be very challenging ground, because there are huge things we can't even agree on that I think are relatively straightforward. Absolutely. I mean, Reid,
I did not think you were going to say that prediction, and I'm excited to see how it plays out. So here we are. Possible is produced by Wonder Media Network. It's hosted by Aria Finger and me, Reid Hoffman. Our showrunner is Shaun Young.
Possible is produced by Katie Sanders, Edie Allard, Sara Schleede, Vanessa Handy, Alyia Yates, Paloma Moreno Jiménez, and Malia Agudelo. Jenny Kaplan is our executive producer and editor. Special thanks to Surya Yalamanchili, Saida Sapieva, Thanasi Dilos, Ian Alas, Greg Beato, Parth Patil, and Ben Relles.