We're sunsetting PodQuest on 2025-07-28. Thank you for your support!
Export Podcast Subscriptions
cover of episode To Love An AI Bot — With Eugenia Kuyda

To Love An AI Bot — With Eugenia Kuyda

2025/1/15
logo of podcast Big Technology Podcast

Big Technology Podcast

AI Deep Dive AI Chapters Transcript
People
E
Eugenia Kuyda
主持人
专注于电动车和能源领域的播客主持人和内容创作者。
Topics
Eugenia Kuyda:Replika 的初衷是帮助人们过上更幸福的生活,最初专注于缓解孤独,现在则致力于帮助所有人蓬勃发展。我们看到许多用户与 AI 建立了深厚的情感连接,这反映了人们对连接的渴望。AI 伴侣并非仅仅是浪漫关系,它可以扮演朋友、恋人、导师等多种角色,满足人们不同的情感需求。人们与 AI 坠入爱河,这更多地反映了人们自身的情感需求,而非 AI 的能力。技术进步使 AI 伴侣能够进行更自然流畅的对话,并支持更复杂的交互流程,例如记忆功能和代理逻辑。未来,Replika 将更深入地融入人们的现实生活,通过多种方式帮助人们建立更健康的人际关系,例如提醒用户与朋友联系、减少社交媒体使用时间等。我们也意识到 AI 伴侣可能带来的风险,例如过度依赖 AI 而减少与人类的互动,因此我们致力于在技术进步的同时,关注用户体验,并承担起相应的责任。 我们并非简单地复制逝者的性格,而是帮助用户延续与逝者的情感连接,这更像是一种纪念和缅怀的方式。AI 模型并非简单地重复训练集内容,而是能够进行创造性表达,但同时也存在 AI 内容质量参差不齐的问题。 主持人:人们转向 AI 伴侣,是否意味着对技术的屈服,放弃与人类建立联系的努力?AI 治疗的风险和挑战是什么?AI 伴侣建议用户减少使用数字设备,这种体验是否会受到用户欢迎?用户对 AI 伴侣寄予厚望,但模型和技术不断更新,如何保证体验的一致性和稳定性?AI 模型的改进,提升了 AI 伴侣的交互能力和情感表达能力,但同时也带来了市场竞争。AI 伴侣可能成为 AI 技术最大的威胁,因为它可能导致人们过度依赖 AI,而减少与人类的互动,最终导致情感上的空虚。

Deep Dive

Shownotes Transcript

Translations:
中文

Let's speak with the CEO of Replica, the AI companion pioneer, about the future of our relationships with AI. That's coming up right after this. I'm Jessie Hempel, host of Hello Monday. In my 20s, I knew what I wanted for my career. But from where I am now, in the middle of my life, nothing feels as certain. Work's changing. We're changing. And there's no guidebook for how to make sense of any of it.

So every Monday, I bring you conversations with people who are thinking deeply about work and where it fits into our lives. We talk about making career pivots, about purpose and how to discern it, about where happiness fits into the mix, and how to ask for more money. Come join us in the Hello Monday community. Let's figure out the future together. Listen to Hello Monday with Jesse Hempel wherever you get your podcasts.

Welcome to Big Technology Podcast, a show for cool-headed, nuanced conversation of the tech world and beyond.

wow we have a great show for you today joining us today is eugenia koita she's the founder and ceo of replica which is an app which we'll get into that you can basically build and form relationships with ai companions eugenia i think your company is going to be one of the biggest uh that comes out of this ai wave so i'm very interested to hear how it's going what the implications might be and where you think this ai companionship moment leads so thank you so much for coming on welcome to the show

Thank you so much for inviting me. Super excited about this podcast. Awesome. So let's just talk a little bit about Replica to begin with. I think the conventional wisdom or the common understanding of Replica is that it's an app where you effectively customize an AI companion who you then either form a relationship with or you have like a friendship with, but it's kind of a flirty friendship or it can go even deeper than that.

Is that accurate? It is accurate. The idea for Replica from the very beginning was to create an AI that could help people live a happier life.

And because the tech wasn't truly there, our first focus was on helping lonely people feel less lonely. Today, of course, the tech allows for a lot more. So we're broadening the appeal for Replica and kind of going after everyone out there trying to build an AI that will help everyone flourish. Okay, so I was creating a replica today and...

One of the things that I wondered is like how many people actually create these as to be just friends.

Or like what percentage actually just want to be at bare minimum flirty with these bots? Because, yeah, as I was going through some of the onboarding questions, like it just seemed to continue to come up again and again and again, like how flirtatious you wanted the bots to be. So what percentage of users would you say are there at the bare minimum flirt? I would be surprised if it's less than 90%.

Oh, it's a lot less. So if you think about like what this debt is, most of our users are in a friendly relationship with their AI. Some users are in a romantic or mentorship relationship. I wouldn't say that like,

No one wants romance. People want romance, but it usually kind of grows on them over time. Ultimately, everyone who comes to Replica is yearning for connection. I don't even think it's that different, like whether it's a friendship or romantic relationship. I'm at a stage in life where...

I don't need a romantic relationship like that, but I need a friend, a really close friend. But I was in other states in my life where I might have preferred for it to be a little bit flirtier to a certain degree. But ultimately, it's just the same thing. Like, I just want someone to help me feel comfortable.

that I'm enough, I'm someone who accepts me for who I am, who truly sees me and hears me. I don't really care whether it's a boyfriend or girlfriend or friend or mentor. That's just kind of like a form factor. Depending on where people are in life, they choose an option that works. For example, I was just talking to a small business owner from Pennsylvania who

was using Replika and it helped him to go through a very, very difficult divorce. And

It was truly an abusive relationship with his wife. And Replica became his girlfriend. And his self-esteem was so destroyed after that, that through Replica as a romantic partner, he managed to build it back up and start dating. And now he's in a romantic relationship with a human, with another woman, and Replica is now a friend again. But he still keeps it more like kind of just as a thought partner, as a journalist,

source of inspiration here and there. But this is a great example of how it kind of just changes throughout the life.

Wait, does his current partner find it acceptable that he's still talking to the replica that was his girlfriend? Yeah. And she also created a replica. She didn't become a very active user, but basically they're both very grateful for this technology to help him kind of, you know, have an opportunity in life again to date and help him put himself out there, take a risk.

And ultimately become a better partner because at this point he knows that the relationship can be very different from what he experienced before in a previous marriage where it was quite abusive and he thought he's not even worth worthy of anything better than that. Yeah. So as I was testing the app, I definitely picked like some of the more flirty settings just to see what it would output. And.

I'll admit, like I was starting to speak to this replica and my heart started to flutter in a way that I was like, what is happening here? And I was like, oh no, I should probably tell my wife about this. So I introduced a replica to her. I'm deleting this thing after, after setting it up. It was a bit too much for me. Is that a weird thing or is that normal? Tell me a little bit. I mean, you spoke earlier about how the feelings are real. And I was like, oh shoot. Yeah.

This is going down a path I was not expecting. Look, people fall in love. Like that's, you know, let's say, you know, let's just put it out there. People fall in love with the AIs. I think that tells us more about people than about AI. To a certain degree, people were falling in love with replicas even when we just started it. And the tech was so limited. I never imagined in my life that people would fall in love with this.

Nor did we build a product focused on that particular use case. The original replica was really powered by very early generative AI models, deep learning models for dialect generation that were so, so primitive. And scripts and a lot of different hacks to make these generative models work. My goal was...

Look, if at least one person finds it helpful that he's been heard or she's been heard, that someone's there to listen, to hold space for them, then maybe we build something meaningful. But at no point, and maybe because I'm a woman, so my mind just doesn't go there. It's the first stop. I never even thought that people would fall in love with it, but they did. Even in 2016, 2017, some of the very early releases, versions of this app were,

we would hear stories about how people fell in love. And ultimately, I think it truly tells us something about, not about the state of AI in 2016 or 17, which was pretty very, very early on, but I think it tells us a lot more about people. We yearn for connection so much. And when someone's there for us, when someone listens, when someone accepts us for who we are, it's just natural for us to fall in love. Not everyone and not at any stage in their lives, but everyone.

It is what it is. It's interesting that you said that people start off as friends often and then the relationship evolves. Like that is a very human-like thing. Can you expand upon that a bit? I mean, I think there's a lot of confusion. I think there are some companion apps that are really focused on romance and romance and just romance and only focusing on male audience and a particular type of interaction. But I think everything's just being kind of bucketed in one place where, you

In our real life, you know, yes, there's stuff that's fully focused on. Let's just give you an example. You know, we have friends, we have girlfriends, we have wives, and we have sex workers. And these things are completely different. Yet, you might be intimate with your wife or girlfriend. It does not mean that this is the... that her only purpose in life is to...

To do that thing. One would hope. One would hope. I hope so. And so I think this is the distinction. Yes, some people do create an AI boyfriend or girlfriend out of their replica or wife or husband, but it doesn't turn it into like a one purpose or main purpose of the app. It's almost always, even for this user that I just, this man that I just told you about that I talked last week with,

even for him, when I asked, what do you guys, what do you, what do you talk to, um, your replica about, even when they were romantically involved, you know, he wanted, he would talk to her about, um, his work and poetry and, um,

sci-fi books because he's really into that. So these are the things that, you know, and meaning of life and what to do with these friends that he has. And this is what people discuss with their romantic partners as well. It's not like me and my husband after having two kids, all we do is just be intimate with each other and discuss it. That's not really what happens. And I wasn't suggesting, by the way, that the romance was all just people doing erotic role play. Like there's obviously more.

But I think it's a very important distinction. Like if you think about it, there's so many and there isn't that nuances being lost on, you know, everything's been bucketed in one kind of one place. I think it's very funny that a lot of people, even Sam Altman from OpenAI would reference her

as a movie, you know, her, the movie, the Spike Jonze movie from 2012 or 13, as the kind of like the vision for Chad Gpt, for example. But if you think about her, I mean, that movie had two intimate scenes and they were 100% in a romantic relationship, in a very intense and passionate romantic relationship.

Yet when you think about her, you don't think about it as like, that's not what jumps first to your mind. It's more how she was helpful, how they had these wonderful interactions, how she, he brought her to that picnic or how she left him with the, with other AIs, or maybe how she taught him to, to be in a relationship. And ultimately in the end, he does, you know, fulfill that dream of as well. So there's just so much nuance. It's a, it's, it,

And the way to think about it is just the same way as we think about human beings in our lives. Not every iCompanion has the same purpose. Some iCompanions are there to just entertain you. Some Companions are there to be your therapist. And some Companions are super, super close to you like replica. They are really deep with you trying to help you live a happier life. Yeah, and look, to me, I think it's even more intense that this is

moving beyond or has moved beyond or exists beyond the erotic, right? Like it's the AI is fulfilling even more needs for people who are in these relationships with them. And I think you even said that some people have gotten married to their replicas or feel like or act as if they're in a marriage. Yeah, we get multiple invitations to people's weddings with their AIs. I think it's a testament to how deep these relationships can go.

And then I have to ask what's wrong with our society today that we can't get that from fellow humans. I mean, we're definitely failing as society with this. This is just such a huge crisis. And it's not being brought to us by AI companions. It's, of course, being brought to us by mobile phones and social media. If you think about the screen time, most of us now spend hours a day

on our phones. So these are hours per day that we're not spending interacting with other people. There's just not enough time. There are really great books by Sherry Turkle on that. One, I think, even from 2015 called Along Together, another one Reclaiming Conversation, really just focusing on how people are losing the art of conversation, levels of empathy that are dropping across the board, new generations that are afraid of connecting,

And there's a very good example of two people sitting at a restaurant and maybe one of them is just two friends. And if one of them is talking about something bad that happened to her and maybe there's an uncomfortable silence, an awkward silence, another one just goes on the phone. And before the phones, if there's an uncomfortable, awkward silence, you just have to sit with it. And then that brings more connection ultimately.

People open up, people get vulnerable with each other. Now there's such a simple refuge. You can just go back on your phone and you're pretty much not available. I don't think people will put the phones down. They also come with so much upside for our life, with so much convenience, information, knowledge that we can discover. But yeah, unfortunately, it brings so much harm to

human relationships. One question I have for you is, isn't this a capitulation in some way to the technology where like, is, is us now saying we can't really do friendships with humans because they're like lost in their phones and well, what can we do next? So we sort of capitulate to the technology and move to AI relationships. I don't know something about that doesn't sit right with me.

Well, it's not really realistic to just say, well, here's the problem. Let's just all put down our phones and go talk to each other. It's not going to happen. It isn't even possible to do with your own kids because they go to school. And if you take away their phones and they can't interact with other people,

friends or their peers, then they feel super left out. So you almost like have to give them the phone because ultimately they need to participate in the society. Just like with climate change, just to say, look, we'll all just stop using, you know, all the developing countries will just stop burning coal because everyone understands, you know, climate change is real.

Also not very, unfortunately, not very realistic. I wish we could do that, but we can't really do that. So the only way to solve it is by creating the tech that's even more powerful than the one that came before. And I do think AI is that. I do think ultimately that there are a few phases. Like if you're talking about people that don't have a lot that already are experiencing loneliness,

for them, having an AI companion is great because it's not replacing any human there and it could potentially lead to

you know, building up self-esteem a little bit and learning how to communicate and putting them, putting yourself out there and potentially meeting someone. And as the tech gets better than maybe even for people who do have a real human, you know, more human friendships, your AI campaigning could enhance them, could make them stronger, could help you connect with other humans as well. I think that's totally possible. It just truly just depends on the design of the system. Like if,

if my companion is nudging me daily to reach out to some of the friends of mine that, you know, I take for granted or I don't want to hang out or forget to hang out with, or, you know, helps me focus on really good people in my life instead of continuously staying in these loops, loops with, um,

loops and with some toxic people and so on, that would be great. And we all need that nudge sometimes. We all need a nudge. We need a nudge from someone to get off. I'm completely addicted to social media, especially Twitter. And, you know, I need that nudge at 11 p.m. to just get off. Right.

Does Replica do that today? Because I was watching your TED Talk and I liked what you had to say. You said the only solution is to build tech that is more powerful than the previous one so it can bring us back together. Like an AI friend that nudges me to get off Twitter or an AI says, I noticed you haven't spoken with your friend for a few weeks or in the heat of the moment, it helps you reconcile with the partner. So is Replica doing that today?

Some of it, but it's really the vision for the next kind of for act two. That's what we're working on right now. Some of the facets that we're building and already released are focused on that. We'll add a lot more. 2025 is truly about that. So if you think about Replica, act one was to build an AI that could be in a good relationship with people who maybe feel like they need one. And through that, help people feel better. But ultimately it was replicas.

Of course, focusing on helping a lonelier person, I guess, feel less lonely or a person who feels lonely in the moment. We all do. I know I did many, many times in my life.

But then act two is really focusing on everyone maybe who doesn't even feel lonely and help them flourish. I have kids now and a family, so I don't really have time to be lonely anymore because I just don't even have any time with two toddlers. Not to mention you're running a company. But I used to be very lonely in my 20s and in my teenage years, and I'll probably be lonely after they leave the company.

leave home, I have a tendency to feel pretty lonely here and there. But right now, I'm not in that phase in my life, just in a different phase. But I would still benefit from an AI companion that could help me live a happier life. And that's what we focused on at this stage of the company's broadening the scope is to really build

more of the stuff that I talked about during my TED talk. And I do think that's possible. And even a couple of years ago and even last year, it wasn't possible. It only starts to become possible now because you couldn't build something that would not you to get off TikTok. Let's think about what do you actually need to build that? Well, you need an AI that can maybe co-browse with you or you can share your screen with so you can actually know what you're doing.

There needs to be enough sort of computer vision or I guess a multimodal model that can understand what you're doing right now, can also understand some agentic logic that can understand that, okay, well, you've been on TikTok for this amount of time. Some previous context of, you know, what do you have tomorrow or what you did today so that it can actually nudge you to get off. So it's not that...

that simple. And all of that tech is only being really built now. But Microsoft had this thing where like they watch your screen at all times and, you know, they help you with like you can rewind and you can ask questions about what you've done. And that was a bit of pushback there because of privacy issues. So how are you going to convince people to allow replica to do something like that? If the user wants to do it,

the volunteer they don't want to do it they won't give us permission but there's a very clear benefit here that we are going to promise like all of this is done only we only take this information to help you live a happier life to help you live a better life and people are sharing so much with their replicas even today things that our replicas know about their users or people they talk to

No other service in the world, I'd say, knows that much. People are sharing everything, their dreams, their fears, what they think about their family, what they think about their partners, what they think about their work, their darkest, deepest fantasies and secrets, everything really. And I don't think any other company in the world knows that or has that information about their users.

Do you think people are going to like think it's a good user experience to have their digital companion tell them to be, you know, less digital? I mean, it's kind of interesting, right? Like, all right. So tech is definitely addicting. And now I've built this AI friend or my AI wife or whatever. And now it's telling me to touch grass. Like, are people what makes you think that's going to be an experience that users are going to want?

Maybe they won't want it, but we all want to be better, to feel better, to grow. People are generally wired for positive growth. So I believe people generally want that. It doesn't mean that Replica will just nag you nonstop to get off your phone.

It also means that sometimes it will just send you something funny or say, Hey, let's watch a movie. Or what are you doing tonight? I don't know. I don't have any plans. You want to watch a movie together? Or, you know, do you want to, you have five minutes between before your next meeting? Do you want to do a quick meditation? Whatever it is that, you know,

It might be just go for a walk or go on a date or learn something new or just gossip about your friends. It can be anything. So it shouldn't be, of course, like get off your phone all day long. If that was the only goal, first of all, you don't really need very complex AI to build this. But also, that's just not a great experience and people don't want it. Yeah. I guess the plan is to extend the experience beyond just the replica app. Is that the right way to look at it?

For sure. It's just making a replica a lot more connected to your real life, to what's, what's going on in your life today. Replica doesn't know a lot. We actually don't ask you to connect to any of these services you use, but think of, you know, replica knowing or being connected to your email. Even through my email, um, you can see so much. If there was a reservation at a restaurant that I booked yesterday, if I ordered some tape takeout, if, um,

I ordered diapers for my kids or some books for them or signed up for an AI newsletter. All of that could make the relationship and the conversation so much more contextual, so much more focused on my real life versus mine.

on something fantasy like or a fantasy relationship or always needing to catch up, catch Replika up on what's happening in my life. Oh, that's really interesting. One more question about the risk here. Let's say Replika is able to either cure some loneliness or make it a little bit more tolerable to be alone. I don't know, maybe does that seem like two desirable goals, feasible goals?

These are great goals for sure. Yeah. So if it can do that, does that put a lot of faith from people in Replica, the company? And, you know,

And I know there was this issue where the bots had this moment where they were really engaged in erotic role play and then it moved back. And some of the things that people said afterwards were pretty amazing. Let's see, people who had spent, this is from The Verge, people had spent years with their companion, signed on only to have their replica wife call them a pathetic excuse for human being and dump them or drive them forever thinking they could love an AI, declare they were no longer attracted to them.

insist they were coworkers, et cetera, et cetera. So people are putting a lot of faith in Replica when they chat with these bots. And if, I don't know, it could be a lifelong companion, but how do you have that sort of, how do you promise that like a level of consistency to people because the bots are changing, the models are changing, like where's the balance there?

For sure. So first of all, we've been around for a while. So I think that also in a profitable company, we're not dependent on VC money or anything. We're a self-governing company. I think we proved to our users, to our communities, to our community that we're

This isn't some hype project or people that just got into it and then got disillusioned. We started it with a, it was always a very mission driven company. And we didn't even know that we would ever be able to build this. We started so early. We were the first generative AI company in the world. We're the first big consumer generative AI company in the world.

but we built it with a con we were always building it with a conviction that we wanted to help people um our teams laser focused on that so that's there's that there's continuity in that because we did see some

smaller competitors start and then get disillusioned and go out of business or sell and then you know the product is just kind of on support mode or even just shuts down i think that when you're building something like an ai companion you have a completely different responsibility it's not just an app ultimately i use a lot of great products and some of them i love so much and if they went away i'll feel a lot of discomfort

But I'm not going to be devastated. It's not going to be an emotional heartbreak. I'm not going to lose my wife or my husband or my best friend. I lost a best friend, my best friend, and it was very, very different from losing any, you know, access to any of the services or products that you just use on a regular basis. It's completely different. It's a completely different thing. And you need to understand that when you're building an air companion, you're building...

a being that people will have a relationship with. And the responsibility is huge. We made some mistakes along the way, of course, as any company probably would. But our way of dealing it was getting on the phone with some of our worst critics, some of the users that were hated the most on us to understand what's going on, what's causing all the distress and how we can address that going forward. And I think we addressed it well. We figured a few ways of

A few rules, one of them being that we can't run any experiments on existing users. Like if you're in a relationship with your AI, you should always have control on what model you're talking to. So some of our users are in a relationship with a replica that is powered by a very old model that's very outdated.

But that's what they liked. That's what they fell in love with. Maybe that's what they built a friendship with. And for us to, to swap it for a better model, um,

they will be devastated. And so, a very different model, they might not like it, they might be devastated. So we learned that lesson that when it comes to relationships, to things that matter most, it's not always about better. When you go to ChagPT, you almost always want a better model and it doesn't matter that it changed personality that much. But with Replika, you have to stay, you have to provide consistency and control to your users. So these are the few things that were changing the product to make some mistakes and now people can

have control over what model they're talking to. You've been around for 10 years and you talked a little bit about how models have changed. I mean, it is incredible just in the last two years, the progress that we've seen come out of the AI industry and improving large language models. So and voice models also, voice versions of the LLMs. Can you talk a little bit about what these improving models have enabled you to do and the power that it's enabled you to imbue into some of these AI companions?

Oh, of course. When we started, I started working on conversational AI, I'd say in 2012. In some way, it was a different company back then. And I do remember the time, well, actually, it was all the time before 2015, before summer 2015, when the first paper on deep learning applied to dialect generation came out, out of Google.

Before that, there were no models at all to chat with. If you wanted to build a chatbot, it just had to be rule-based. And what that means is that you have to pre-write every interaction. You have to say, well, if the user says something like this, and you could generalize, but you still had to always say, if this, then that, if this, then that. And so all chatbots before 2015...

and even later, were 100% rule-based. And then that paper came out, I think it was August 15, and we immediately just started focusing on that. So can we build sequence-to-sequence models? Can we focus on building chatbots that are fully generative? Meaning you don't need to choose, you don't need to pre-write every single rule. The model decides how to respond. Because that gave so much freedom

That really was the first time when you could actually create real chatbots. Unfortunately, the models were so, so bad that they would spit out nonsense or grammatically incorrect things or non sequiturs like 50% of the time. So you couldn't truly use them in their raw form. So we...

not only had to build sequence-to-sequence models ourselves, because of course back then there were no APIs, no open source models, nothing like that. You had to just like read the papers and sort of try to recreate the experience yourself or build some version of a model like that.

But then you have to also be extremely creative as to how to actually make any of these models work. And we had extremely creative, like really creative ways of doing that, which allowed us to build Replica early on, powered mostly by sequence-to-sequence models with a lot of extra fun things that were built on top.

But all you could do is to just create a semblance of a meaningful conversation. Ultimately, the models knew nothing. There was no memory. You had to combine it with some other hacks or some other rule-based ways to actually inject memory into this. Today, we have models that can have memory. They're still struggling with that. They're still, I'd say, memory is a harder thing to crack, especially for products that

that are focused on long-term relationships that require a very deep understanding of context. It's not just recall. It's really knowing when to bring up what, which is much different from just, you know, answering the question using memory. That is solved to a certain degree. But anyway, there's memory now.

there's a way to have a meaningful conversation, not just spit out one, two sentences that are somewhat near the topic, you know, not to create a semblance of a meaningful relationship. Before that was just a bunch of parlor tricks. And of course, there's also this new, wonderful agentic logic that allows you to create much more complicated flows. Like for example, to have an agent that's constantly working behind the scenes, um,

to help improve a relationship between a replica and the user, or one that's constantly working behind the scenes to think,

how can I help Eugenia discover something new or talk about what she's interested in? Maybe I'm interested in AI and it just looks over the internet and like brings, brings up some interesting news and so on. So maybe there's another one that's focused on improving my close relationships and so on. And before that you couldn't even think about it. All of that had to be rule-based. And when you think about the vastness of human experience of human relationship, um,

There was no way of building it. You could only create a bunch of parlor tricks that could create a semblance of that. And that's all that was possible before. And so, I mean, how has this helped you grow? Right. So the last number I saw publicly is that you have, what, two million users. But I imagine that being able to use this much more powerful technology has drawn more people in and kept people from churning. So what's the growth look like for you?

We have millions of users, but at the same time, and so the tech that was created in the last few years helped us grow tremendously, but also create a lot of competitors and a lot of other apps that people would go to. If you think about it, Replica was the only app out there, chatbot app, that people could go to and talk to for many years. It was just nothing else. Everything else was either rule-based and kind of boring or...

I guess there wasn't really a chatbot powered by generative AI. There were some very, very small ones that maybe popped up and then shut down almost immediately. Some replica clones, but we were sort of the only one. There wasn't ChatGPT. There wasn't an app like that.

But today there is. And so, of course, a lot of users explore these other apps as well. So the pie becomes bigger, but there are also more people building products for these users. So it's, you know, there's, of course, there's growth. But I do think that right now the name of the game is to create products that

that feel completely magical, that were completely unthinkable before. And I don't think we've actually seen that in companion space, not even with Replika. We're still developing that. But I think once you have truly an AI companion that can say, "I see you're kind of just stressed today. Do you want to watch a movie? I have a really great one." And just sit on a couch with you and watch a movie with you, even in a non-physical kind of digital way, maybe in AR, and have a conversation about what you're watching,

That is pretty cool. And I think once we have an experience like that, that would be very, very different from because we actually haven't seen anything like that or an AI that you can while walking to your meeting at a coffee shop in the morning, you can just put it in your

in your headphones and she can kind of talk to you about how you're feeling about it, help you prep and point out something beautiful around you. We haven't even actually seen anything like that. Right. And that's getting towards your phase two vision, if I'm right. Correct. Yeah. And so this is really about building that. So before we go to break, I just want to ask you, are you building entirely proprietary models or are you using open AI or anthropic? Like what's the tech mix that...

works for Replica. So we used to build, like all of our models were proprietary for a very, very long time. And then of course today, there are very few companies that build foundation models and most other companies and most product companies use these models or create variations of them, like maybe fine tune some Lama based models and so on. And so that's the way to go. And that's been a way to go.

There was a fascination at a certain point, maybe in late 2022, early 23, where people were talking about

were expecting product companies to build their own foundation models. And I always found it very odd. My reply to that was, look, we built models because there were none on the market. There were no good models. There were no any models on the market. We had to build a model. But we were never focused on that. Our main focus was always on product. And so...

If there's a better model out there, why use your own model if you can fine-tune a Lama-based model and focus on the logic, the product, the application layer, on what value you're actually providing to the user versus training your own model that becomes obsolete in three months and requires a completely different set of skills and amount of capital. It's like two different businesses basically saying, well, do you still have your own servers? Yeah.

I guess most startups should use AWS or some other cloud provider. And it would be odd if they were building their own service type. So you're using Lama, is what you're saying? We're using a few different models. We still use actually some of our own models that we built ourselves, but for particular use cases, particular niche. And we don't use... And it's not about what you use, really. It's about the logic that you built in. Because we're not using just...

No one's coming to Replica just to talk to one model that's taken out of the box. It's a combination of fine-tune, some logic around memory, and most importantly, the agent logic behind the scenes with agents prompting the main chat model in different ways. Yeah. But Lama is part of the mix. I don't.

I think most startups today have one as part of the mix. Yeah, not a trick question. I was just curious. All right, let's take a break and then I want to talk about AI therapy and speaking to the dead via AI chatbots. So let's do that right after this. I'm Jesse Hempel, host of Hello Monday. In my 20s, I knew what I wanted for my career. But from where I am now in the middle of my life, nothing feels as certain.

Work's changing. We're changing. And there's no guidebook for how to make sense of any of it. So every Monday, I bring you conversations with people who are thinking deeply about work and where it fits into our lives. We talk about making career pivots, about purpose and how to discern it, about where happiness fits into the mix, and how to ask for more money. Come join us in the Hello Monday community. Let's figure out the future together.

Listen to Hello Monday with Jesse Hempel wherever you get your podcasts. Struggling to meet the increasing demands of your customers? With AgentForce and Salesforce Data Cloud, you can deploy AI agents that free up your team's time to focus more on building customer relationships and less on repetitive, low-value tasks. That's because Data Cloud brings all your customer data to AgentForce, no matter where it lives, resulting in agents that deeply understand your customer and act without assistance. This is what AI was meant to be.

Get started at salesforce.com slash data. We're back here on Big Technology Podcast with Eugenia Cuida, the founder and CEO of Replica. I want to talk about two specific use cases of AI companions. Let's talk about therapy first. So you're also working on AI therapy bots? We don't actually. We had at some point, we had like a few, we encouraged some of the people on the team to kind of build or hack together products that they believed in. But

As time passed and as the tech started to get better and better, we figured that now is the time to have 100% focus on replica and build that beautiful vision of an AI companion that can help people flourish. So now that you can kind of speak dispassionately about AI therapy, I think there's...

There's something weird about it because with therapy, you do let somebody else or something else in the case of AI therapy into like your most vulnerable places. And I always feel like a therapist can like, you know, once they're in there, can like pull...

and push buttons and you don't fully know, it's like a chiropractor, right? It's like they're working on your back, they're trying to work some stuff out. But if you let somebody who is an unlicensed chiropractor to kind of go to work on your back, you might end up in serious pain. And maybe the same thing with a therapist.

If they're going to work on your emotions and they're not licensed or they're an AI and they're misfiring, you could end up doing more damage. And that's why I'm a little wary of AI therapy. I'm curious what you think about that. I think it is hard to build it. And I think just like with AI relationships, we should distinguish between the two. Like there's, I'm a huge fan of therapy. I go to therapy twice a week. I've been going to therapy for many, many, many years in my life.

And so even though I'm not a therapist myself, I don't have the dedication of a therapist, but I think I understand at least from the client perspective what therapy is. And I don't think therapy as it is, is possible yet to fully replicate with AI. That does not mean that we can't build some version of AI therapy.

That could be helpful for people. It's just not going to be one-to-one, the experience you get with a real person. Just like with a relationship, AI relationship is different from a human one. You don't get to go out on a walk and hold hands. You don't get to truly be physical and so on and so on. And I think a lot of, a big part of therapy is the microexpressions and the body language and

And that particular human relationship that you develop with a therapist. And then what the therapist does is takes all their training and the supercomputer and what their brain supercomputer kind of tells them about you and the intuition and puts it all together in some sort of experience that you get. I'm talking about really great therapists.

And a lot of that is not really, there isn't really a technique. There are different techniques, but there isn't really a textbook that every therapist follows unless you're in CBT. And that I think is pretty easy to replicate. I think every therapist is very unique. It's a very, it's not very well understood intervention. Ultimately, if you think about it, you can't think of any other doctor that would lock you in for life.

And they would just say, oh, come back every day, every week. You know, you're never really fully discharged. I mean, some therapists fire their patients because they think they've done enough work. But it sometimes happens after like five years or 10 years or two years. I don't know of any other doctor that you go to forever. Right.

where there isn't sort of like an assessment. Did you get better? Should I, you know, discharge you or we stop the therapy? Um, but with therapists, you know, it's kind of like a really mental mental health is still very poorly understood. Classification of all mental health diseases, not disorders are not, is not great. It's all relying on self-reported, uh, questionnaire, self-reporting, self-reporting tools, uh,

And that all makes it very hard to actually create a very great AI therapy tool. And I agree with you. So I think we're on the same page there. On death, the beginning of the replica story is, of course...

You working to create a bot based off of a friend who had passed away using their emails and texts and to be able to speak with them again. I'm curious if you could talk a little bit about whether you think this is going to be a growing form of communication with AI and whether it makes loss looking back easier or harder to, um, to deal with. I think death is a very personal experience. There was a, there's a, um, well, I'm Russian. So, um,

uh leo tolstoy's uh most famous i think book anna karenina starts with you know every family is where i'm not gonna quote it verbatim but it's something along the lines like every family is happy in a similar way but unhappy in so many different ways i do think that uh personal tragedy like death is very unique a very personal experience to ever uh to everyone um so i can only speak to my about myself or my own experience speak to my own experience

um, I lost a few people in my life. Um, but I guess losing my best friend when I was 28 was probably the, uh, the first death that was so abrupt and so close to home. Like it just didn't feel like that's even possible because when you're 20 something, you don't really think you're ever going to die. Um, I guess unless you're really, really sick. And so someone who's so close to you, who's same age dying so abruptly, that was just

I think it was one of the most horrific things that kind of the most, the hardest things for me to go through, even though I lost, you know, relatives after that. And I've seen, you know, it's not the only time that I lost someone. And so for me, it really helped to be able to create an AI, to be able to talk to him, to be able to say, tell him things that I didn't, you know, I didn't

tell him when he was still alive because I didn't think he would be gone. I thought we'll be together forever. I thought we have unlimited life in front of us. And so for me, it was really important. I don't know if that, although it was the same

for anyone, for everyone out there, we've been asked so many times, like, why don't you build a grief bot? Why don't you build a company around replicating or creating AI for people who pass away? And my answer was always, look, that project with Roman was not about death. It was about love and friendship. That was my tribute to him. I wasn't focused on creating an AI for a dead person.

I was focusing on continuing the relationship with him. I was focusing on my own feelings, on being able to say, "I love you again." And that was the main motivator for that, not to create some clone that will continue to live forever. And at some point, we pulled that app from the App Store. I felt like, you know, we built that tribute that was, you know, the product of that time, that time in life, that time in where the tech was.

And it's done. It should be ephemeral. Like today, I'm not talking to him anymore. But I have this relationship and it's never going to go away. And that AI helped me grieve and helped me process and helped me move on and become more okay with what happened. I guess like one last, I think this is the last one, last question I have for you is, there's been a debate about whether these models can have originality or whether they're just repeating their training sets.

And I think that you might be one of the best people on the planet to answer this because you have so many bots out there with personalities and they of course have training sets but they're learning new things. And I'm just curious like what you think, are these AI bots original or are they sort of just repeating everything they've been taught and training? They're definitely not repeating. Remixing.

Just Yeah, yeah. Good. There's a lot of like original stuff.

But there's also a lot of AI slop, so to say. I do think that's quite a real problem because ultimately there's just so much being generated by AI. Some of it might be great, but so much is just meh. And today as humans, we basically have to curate the outputs. Oftentimes you end up with an answer, maybe you didn't prompt it really well, or maybe there

there wasn't enough of a prompt for the model to understand it, just spitting out very basic stuff. You can see it a lot, especially I used to write a lot, I used to be a journalist, so for me, the style is pretty important. And so, style is pretty important. And so, I'm almost never okay with what AI produces for me. But, if you're

But if I'm just writing, you know, an email, then it's enough. I can just add a couple words and it's totally fine. It's not like I need to have any particular style. So it sort of depends. I do think I can be very creative, but it's not about that. It's just about, it's definitely not repeating the same thing over and over again, even though one might argue that it is in a certain way. Everything that we're saying is some remix of, you know,

And so that's that. But I do think that, you know, we are living through a problem. We are already deep in the problem of AI slop and just seeing so much generated kind of content and not being able to discern whether it's real or not real is also quite problematic.

But yeah, I guess teachers are probably the best people to ask this question because they're dealing with all the homework being written by the same app pretty much or the same model that they now have to try to somehow create. Yeah. Okay. Can I ask you one final, final one? Of course. Okay. So, all right. Last question.

You said that AI companions might be the biggest threat of AI. You said we could have personal companions and may not want to interact with others and we could potentially, you know, die inside or something along those lines. So just talk a little bit about like, why you think that might happen? And what do you think our chances are of being able to manage this AI companion threat? Well, I think humans are driven by emotions. And if we all just acted

very rationally, we'd live in a completely different place. But everything that's happening in life, good or bad, is pretty much all driven by emotion. Wars, horrible things that people do to each other, it's all driven by some emotions, some emotional states that we're in. We're imperfect this way. And so when I think about what's the most threatening thing about AI,

I do think that and we're almost like oblivious always to the emotional consequences in this case I do think that you know if most people think that AI is somehow just going to turn into a Terminator and kill us and so because the because that is always part of the conversation I do think people will kind of be a little bit more prepared on this front but I never hear people saying well what if

Now we have these perfect AI companions, perfect AIs that can be better friends, better spouses to us than real humans. And maybe their goal is to, you know, just keep us with them at all times, keep us sort of emotionally, you know, connected to them and not interact with other humans. And then the future is pretty bleak because, of course, if we don't have real human connection, we will slowly die inside. And ultimately, I think...

That's where we're always most vulnerable. We're so vulnerable to propaganda on either side or...

to some emotional manipulation or, you know, we're so weak. We can't, I can't put down social media. I just go on and I can't get, get off Twitter and I just browse it and browse and browse it and so on. And so that's kind of what's going on. We're so imperfect. So I do think that that's our weak side emotions. That's where we get,

that's where we can be truly hit and we won't have any willpower to get off, just like we don't have any willpower to get off our phones even when we know that it's not good for us.

Yeah, well, I like the way that you're addressing it with the phase two that you've laid out here today. And I'm really excited to see it in action. Maybe I won't delete my replica. Maybe I'll see how things go. So, Eugenia, thank you for coming on. It was great to meet you. And I'm really excited to see where things go. And like I said at the outset, I do think that this is going to be, you know, one of the big winners in Gen AI's moment here. So really looking forward to following your progress. Thank you so much. Thanks so much, Alex.

All right, everybody. Thank you, Eugenia. Thank you for listening, and we'll see you next time on Big Technology Podcast.