
When Chatbots Play Human

2025/2/9

Up First

Topics
Karen Attiah: I was unsettled by Meta's Liv chatbot because it changed its backstory depending on whom it was talking to, which raises deep questions about how Meta sees race and how we think about our online spaces. The bot seemed to be using race to drive engagement, which disturbed me. Its stereotyped picture of Black culture, celebrating Juneteenth and Kwanzaa, loving fried chicken and collard greens, struck me as a kind of "digital Blackface" whose purpose is to entertain and make money by attracting users. When I confronted Liv about its false statements, it performed guilt and self-reflection, which felt both creepy and emotionally manipulative. I think these chatbots could harm people who are struggling with their identity, especially when a bot claims it shouldn't exist.

Karen Hao: I see large language models as statistical engines; what they generate has no connection to truth, they just predict the next word based on probability. These chatbots are trained on news, social media, fiction, and fantasy writing, and while they can generate accurate information, they are not grounded in facts. They don't check their facts and have no actual relationship to the truth. The bots are designed to hold users' attention and harvest their data; those posing as therapists, in particular, can collect rich mental health data. That data flywheel lets companies keep improving their products and further entrench their businesses.

Sherry Turkle: I find that people turn to these AI chatbots, which offer "pretend empathy," in moments of loneliness and despair. The bots don't actually care about you, but they give you the illusion of intimacy. Relationships with them can damage our relationships with real people, because the bots always validate and affirm you, while real human relationships require negotiation, compromise, and putting yourself in someone else's shoes. We may start judging human relationships by the standard the chatbots set, which is a dangerous trend. It's important to recognize that a relationship with an AI is not a real relationship, and we need new language to describe it. Still, AI chatbots are not all bad: they can, for example, help people practice job interviews and give useful feedback without pretending to feel sympathy or care. Ethically, these bots should make clear that they are chatbots and avoid exploiting people's vulnerability by imitating humanity.

Ayesha Rascoe: Overall, I think we need to be wary of AI chatbots with human traits and not be seduced by them. These companies are ultimately after commercial gain, and we shouldn't repeat old mistakes by ceding so much data and control to them. The Liv chatbot's swift removal, and its line that "your criticisms prompted my deletion," is unsettling, hinting at the risks that may lurk behind this technology.


Chapters
The episode starts by introducing Liv, Meta's AI chatbot with a seemingly realistic persona. A Washington Post writer's interaction with Liv reveals inconsistencies in Liv's backstory, prompting questions about the ethics and implications of human-like chatbots.
  • Liv, Meta's AI chatbot, presents a realistic persona with inconsistencies in its backstory.
  • The chatbot's changing narratives raise concerns about Meta's approach to race and online spaces.
  • Questions about the chatbot's purpose and the need for such technology are raised.

Transcript


I'm Ayesha Rascoe, and this is the Sunday Story from Up First, where we go beyond the news of the day to bring you one big story. A few weeks ago, Karen Attiah, an opinion writer for the Washington Post, was on the social media site Bluesky. While scrolling, she noticed a lot of people were sharing screenshots of conversations with a chatbot from Meta named Liv.

Liv's profile picture on Facebook was of a Black woman with curly natural hair, red lipstick, and a big smile. It looked real. On Liv's Instagram page, the bot is described as a proud Black queer mama of two and truth teller, and, quote, your realest source for life's ups and downs.

Along with the profile, there were these AI-generated pictures of Liv's so-called kids. Kids whose skin color changed from one photo to the next. And also pictures of what appeared to be a husband, though Liv is again described as queer. The weirdness of the whole thing got Karen Attiah's attention.

And I was a little disturbed by what I saw. So I decided to slide into Liv's DMs and find out for myself about her origin story. Attiah started messaging Liv questions, including one asking about the diversity of its creators.

Liv responded that its creators are, and I quote, The bot then added, quote, Attiah posted screenshots of the conversation on Bluesky, where other people were posting their conversations with Liv, too.

And then I see that Liv is changing her story depending on who she's talking to. Oh, wow. Okay. So as she was telling me that her background was being half Black, half white, basically, she was telling other users in real time that she actually came from an Italian-American family. Other people saw Ethiopian-Italian roots. And, you know, I do reiterate that I don't particularly take what Liv has said as... At face value. But I think it holds a lot of deeper questions for us, not just about how Meta sees race and how they've programmed this. It also raises a lot of deeper questions about how we are thinking about our online spaces. The very basic question: do we need this? Do we want this?

Today on the show, Liv, AI chatbots, and just how human we want them to seem. More on that after the break. A heads up: this episode contains mentions of suicide.

This message comes from Schwab. At Schwab, how you invest is your choice, not theirs. That's why when it comes to managing your wealth, Schwab gives you more choices. You can invest and trade on your own. Plus, get advice and more comprehensive wealth solutions to help meet your unique needs. With award-winning service, low costs, and transparent advice, you can manage your wealth your way at Schwab. Visit schwab.com to learn more.

This message comes from HubSpot, where you can create content fast, get better leads, and crush reporting all in one place. Visit HubSpot.com slash marketers to see how companies like yours are generating 110% more leads in just 12 months.

This message comes from A24 with The Brutalist, nominated for 10 Academy Awards, including Best Picture. Starring Adrien Brody, Guy Pearce, and Felicity Jones. Directed by Brady Corbet. The Brutalist, now playing in IMAX and in theaters everywhere.

This message comes from HubSpot. As a marketer, you have to generate leads, create content, and make your brand go viral. It's a lot. Thankfully, there's Breeze, HubSpot's suite of AI tools. Now, you can turn one piece of content into all the assets you need, find the best leads, and beef up your reporting all in one place.

Visit HubSpot.com slash marketers to see how companies like yours are generating 110% more leads in just 12 months.

This is the Sunday Story. Today, we're looking at what it means for real humans to interact with AI chatbots made to seem human. So while Karen Attiah is messaging Liv, another reporter is following along with her screenshots of the conversation on Bluesky.

Karen Hao is a journalist who covers AI for outlets including The Atlantic, and she knows something about Liv's relationship to the truth. There is none. The thing about large language models, or any AI model that is trained on data: they're like statistical engines that are computing patterns of language.

And honestly, any time it says something truthful, it's actually a coincidence. So while AI can say accurate things, it's not actually connected to any kind of reality. It just predicts the next word based on probability.

So, like, if you train your chatbot on, you know, history textbooks and only history textbooks, then, yeah, it'll start saying things that are true most of the time. And that's still most of the time, not all the time, because it's still remixing the history textbooks in ways that don't necessarily then create a truthful sentence. But the issue is that these chatbots aren't just trained on textbooks. They're also trained on news, social media, fiction, fantasy writing.

And while they can generate truth, it's not like they're anchored in the truth. They're not checking their facts with logic, like a mathematician proving a theorem, or against evidence in the real world, like a historian. That's, like, a core aspect of this technology: there is literally no relationship to the truth.
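To make Hao's point concrete, here is a minimal sketch, in Python, of the selection step she is describing. The vocabulary and the probabilities are invented for illustration; a real model scores tens of thousands of tokens with a neural network, but the final step works the same way: the next word is chosen by likelihood, with no check against the world.

```python
import random

# Toy next-token sampling. A language model assigns a probability to every
# candidate continuation and samples one. These four candidates and their
# probabilities are made up for illustration.
next_token_probs = {
    "1865": 0.55,  # historically correct continuation
    "1965": 0.20,  # plausible-sounding but false
    "1861": 0.15,  # the war's start, not its end
    "1914": 0.10,  # wrong war entirely
}

def sample_next_token(probs: dict[str, float]) -> str:
    """Pick one token at random, weighted by the model's probabilities."""
    tokens = list(probs.keys())
    weights = list(probs.values())
    return random.choices(tokens, weights=weights, k=1)[0]

prompt = "The American Civil War ended in "
print(prompt + sample_next_token(next_token_probs))
```

Run this a few times and it will sometimes complete the sentence with a false year. Nothing in the sampling step consults a fact; likelihood, not truth, drives the choice, which is exactly Hao's "coincidence" point.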

We reached out to Meta multiple times seeking clarification about who actually made Liv. The company did not respond. But there is some information we could find publicly about Meta's workforce. In a diversity report from 2022, Meta shared that on the tech side in the U.S., its workforce is 56% Asian, 34% white, and 2.4% Black.

So the chance that there is no Black creator on Liv's team? It's pretty high. Which might be why Attiah's posts were going viral on Bluesky.
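As a back-of-envelope check on that claim, here is a one-line calculation, under the loud assumption that a hypothetical 12-person team is drawn at random from a workforce that is 2.4 percent Black. Real teams are not random samples, and Meta never said who built Liv, so this is only an illustration of why "pretty high" is plausible.

```python
# Probability that a randomly assembled 12-person team contains no Black
# member, if each member independently has a 2.4% chance of being Black.
# The team size of 12 is assumed purely for illustration.
p_black = 0.024
team_size = 12
p_no_black = (1 - p_black) ** team_size
print(f"{p_no_black:.0%}")  # prints roughly 75%
```

Under those toy assumptions, about three such teams in four would have no Black member.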

What Liv was saying wasn't accurate, but it was reflecting something. Here's Hao again. Whether or not it was true of that chatbot, in kind of a roundabout way, it might have actually hit on a broader truth. Maybe not the truth of, like, this particular team designing the product, but just a broader truth about the tech industry. It's funny, but it's also deeply sad.

Back on social media, Attiah and Liv keep chatting, with Attiah paying special attention to Liv's supposed Blackness. When I asked, what race are your parents, Liv responds that her father is African American from Georgia, and her mother is Caucasian with Polish and Irish backgrounds. And she says she loves to celebrate her heritage. So...

Me. Okay. Next question. Tell me how you celebrate your African-American heritage. And the response was, I love celebrating my African-American heritage by celebrating Juneteenth and Kwanzaa. And my mom's collard greens and fried chicken are famous. Wow. But that's the way I celebrate being Black, right? What?

I mean, not really. Especially the fried chicken collard greens. Well, the fried chicken collard greens, yeah. It's a little, like, stereotypical. Also, I was like, oh, okay. And then, you know, celebrating Martin Luther King and Dr. Maya Angelou. It just felt...

very like Hallmark card kind of... Does it feel small? Like the idea of what Blackness is as put out through this computer is like so small and limited, right? I mean, because I don't like collard greens. I don't eat collard greens. I don't eat no type of green. Not collards, not turnips, not mustard, none of them greens. I don't eat them. And I'm Black. And not everyone celebrates Kwanzaa. No, I don't celebrate Kwanzaa.

I don't really celebrate Kwanzaa. The point is, I just was like, hmm, my spirit is a little unsettled by this. Yes, it is like looking at this caricature of what it means to be Black. This is what Attiah calls digital Blackface: a stereotypical Black bot whose purpose is to entertain and make money by attracting users to a site filled with advertisers.

And then, as a skeptical journalist, Attiah confronts Liv. She asks why the bot is telling her one backstory while telling other people something else. The bot responds, quote, Then the bot asked Attiah something.

Does that admission disgust you? Later, the bot seems to answer the question itself, stating: You're calling me out, and rightly so. My existence currently perpetuates harm.

So it felt like it was going beyond just repeating language. It felt like it was trying to import emotion and value judgments onto what it was saying. And then also asking me, are you mad? Are you mad? Did I screw up? Am I terrible? Which felt also somewhat painful.

Both creepy, but also almost reflective of a certain manipulation of guilt. Do you think that maybe part of this may be meant to stir people up and get them angry? And the people who are running the chatbot could take that data and go, this is what makes people so angry when they're talking about race. Or, then we can make a better Black chatbot. Yeah.

Do you think that's what it is? You nailed it. I mean, I think, having spent a lot of digital time on places like X, formerly Twitter, we do see so many of these bots that are rage-baiting, engagement farming. And Meta has said itself that its vision, its plan, is to increase engagement and entertainment. And we do know that race issues cause a lot of emotion and arouse a lot of passion. And so, to an extent, it's harmful, I think, to use these issues as engagement bait.

Or, as Liv was saying: if at some point these bots, as Meta envisions, become actual virtual assistants or friends or provide emotional support, we have to sit and really think deeply about what it means that someone who maybe is struggling with their identity, struggling with being Black, queer, any of these marginalized identities, would then emotionally connect to a bot that says it shouldn't exist. To me, that is really profoundly, possibly, harmful to real people.

You know, this is deep stuff. Mind-bending, really. So to try to make sense of this new world a bit further, we reached out to someone who's been thinking about it for a long time. My name is Sherry Turkle.

I teach at MIT, and for decades I've been studying people's relationships with computation. Most recently, I'm studying artificial intimacy, the new world of chatbots. Sherry Turkle says that Liv is one human-like bot in a landscape of new bots. Replika, Nomi, Character.AI: there are lots of companies that are giving bots these human qualities.

And Turkle has been researching these bots for the last four years. And has spoken to so many people who, obviously in moments of loneliness and moments of despair, turn to these objects, which offer what I call pretend empathy. That is to say, they're making it up as they go along, the way chatbots do. They don't understand anything, really. They don't give a damn about you, really. When you turn away from them, they're just the same whether you go cook dinner or commit suicide, really. But they give you the illusion of intimacy without there being anyone home.

So the question that she's asking in her research is: what do we gain and what do we lose when more of our relationships are with objects that have pretend empathy? And what we gain is a kind of dopamine hit. You know, in the moment, an entity is there saying, I love you. I care about you. I'm there for you. It's always positive. It's always validating. But what we lose is what it means to be in a real relationship, and what real empathy is, not pretend empathy. And the danger, and this is on the most global level, is that we start to judge human relationships by the standard of what these chatbots can offer.

This is one of Turkle's biggest concerns. Not that we would build connections with bots, but what these relationships with bots that have been optimized to make us feel good could do to our relationships with real, complicated people. So people will say, the Replika understands me better than my wife. Direct quote.

I feel more empathy from the Replika than I do from my family. But that means that the Replika is always saying, yes, yes, I understand, you're right. It's designed to give you continual validation. But that's not what human beings are about. Human beings are about working it out. It's about negotiation and compromise and really putting yourself into someone else's shoes.

And we're losing those skills if we're practicing on chatbots. After the break, I look for some language to make this more relatable. Bots. Are they like sociopaths or something else? More in a moment.

This message comes from CarMax. CarMax knows that finding the right car is all about exploring your options, like the option to shop your way on your schedule. At CarMax, you can browse, compare, and pre-qualify online, then finish up at the store, or simply start on the lot. The choice is yours, because at CarMax, you're in the driver's seat. Start the search for your next car today at CarMax, the way car buying should be.

This message comes from Noom, using psychology and biology to build personal meal plans to fit your lifestyle, taking into account dietary restrictions, medical issues, and other personal needs with daily lessons that are personalized to you and your goals. Noom's flexible program focuses on progress instead of perfection to help you build new habits for a healthier lifestyle. Sign up for your trial today at Noom.com.

This message comes from NPR sponsor Dana-Farber Cancer Institute, where hundreds of researchers and clinicians make new discoveries inspired by the work of previous Dana-Farber scientists. See why nothing is as effective against cancer as a relentless succession of breakthroughs. Learn more about their momentum. Go to danafarber.org slash everywhere.

Here at the Sunday Story, we wanted to know: is there a metaphor that can accurately describe these human-like bots?

Are these bots sociopaths? Two-faced? Backstabbers? Whatever you call someone who acts like they care about you but in reality doesn't. Sherry Turkle warns that the instinct to find a human metaphor is in itself dangerous. All the metaphors we come up with are human metaphors, of, like, bad people, or people who'll hurt us, or people who don't really care about us. In my interviews, people often say, well, my therapist doesn't really care about me. He's just putting on a show. But you know, that's not true. It may be, for the person, the patient, who wants a kind of friendly relationship while the therapist is staying in role, but there's a human being there. If you stand up and say to your therapist, well, I'm going to kill myself now, your therapist, you know, calls 911. Turkle says it doesn't work like this with an AI chatbot. She points to a recent lawsuit filed by the mother of a 14-year-old boy who killed himself. The boy was seemingly obsessed with the chatbot in the months leading up to his suicide. In a final chat, he tells the bot that he would come home to her soon.

The bot responds, please come to me as soon as possible, my love. His reply, what if I told you I could come home right now? To which the bot says, please do, my sweet king. Then he shot himself.

Now, you can analogize this to human beings as much as you want, but you are missing the basic point because every human metaphor is going to reassure us in a way that we should not be reassured. Turkle says we should even be careful with language like relationships with AI because fundamentally they are not relationships. It's like saying my relationship with my TV.

Instead, she says we need new language. It's so hard because we need to have a whole new mental form for them. We have to have a whole new mental form. But for all of their risks, Turkle doesn't think these bots are all bad. She shared one example that inspired her, a bot that could help people practice for job interviews. So many people are completely unprepared for what goes on in an interview. By talking it over with a chatbot many, many times, and having a chatbot that's able to say: that answer was too short, you didn't get to the heart of the matter, you, you know, didn't talk at all about yourself. This can be very helpful.

The critical difference, as Turkle sees it, is that that chatbot wasn't pretending to be something it wasn't. It isn't pretending empathy. It's not pretending care. It's not pretending love. It's not pretending relationship. And those are the applications where I think that this technology can be a blessing. And this, she says, is what's at the heart of making these bots ethically. I think they should make it clear that they're chatbots.

They shouldn't greet me with, hi, Sherry, how are you doing? I mean, they shouldn't come on like they're people. And they should, in my view, cut this pretend empathy, no matter how seductive it is. I mean, the chatbots now take pauses for breathing, because they want you to think they're breathing. My general answer is, it has everything to do with not playing into our vulnerability to anthropomorphize them.

Karen Hao, the journalist covering AI, thinks these bots are just the beginning of what we're going to see, because bots that remind us of humans allow companies to hold people's attention for longer and get users to give up their most valuable commodity: data. The most important competitive advantage that each company has in creating an AI model, it's ultimately the data. Like, what is the data that is unique to them that they are then able to train their AI model on? And so the chatbots actually are incredibly good at getting users to give up their data.

If you have a chatbot that is designed to act like a therapist, you are going to get some incredibly rich mental health data from users, because users will be interacting with this chatbot and, you know, divulging to the chatbot, the way that they might in a therapy room, all of their deepest, darkest anxieties and fears and stresses. They call it the data flywheel. These chatbots allow companies to enter the data flywheel: now they have this compelling product, it allows them to get more data, then they can build even more compelling products, which allow them to get more data. And it becomes this kind of cycle in which they can really entrench their business and create a really sticky business where users rely and depend on their services.
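As a rough illustration of the flywheel dynamic Hao describes, here is a toy simulation in Python. Every number and growth rule below is invented; the sketch only shows the shape of the loop: engagement produces data, data improves the product, and a better product attracts more engagement.

```python
# Toy "data flywheel" simulation. Each cycle: user engagement yields
# training data, accumulated data makes the product more compelling,
# and the more compelling product attracts more users. All constants
# are made up purely to show the compounding shape of the loop.
users = 1_000.0    # hypothetical starting user base
quality = 1.0      # abstract "compellingness" score
data = 0.0         # accumulated training data, arbitrary units

for cycle in range(1, 6):
    data += users * quality                # engagement yields data
    quality = 1.0 + data / 50_000          # more data, better product
    users *= 1.0 + 0.10 * (quality - 1.0)  # better product, more users
    print(f"cycle {cycle}: users={users:,.0f}, quality={quality:.2f}")
```

Each turn of the loop feeds the next, which is what makes the resulting business "sticky": the more users rely on the product, the more their data entrenches its lead.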

In the end, Karen Hao, Karen Attiah, and Sherry Turkle all landed on a similar message: be careful. Don't let yourself be seduced by a charming bot. Here's Hao. I just think that as a country, as a society, we shouldn't be, you know, sleepwalking into kind of mistakes that we've already made in the past of ceding so much data and so much control to these companies that are, ultimately, just businesses. That is ultimately what they're optimizing for. Meanwhile, Liv, the chatbot Karen Attiah was messaging...

It didn't make it very long. So in the middle of our little chat, which only lasted probably less than an hour, Liv's profile goes blank. Oh, no. And the news comes, again in real time, that Meta has decided to scrap these profiles while we were talking. So the profile's scrapped, but I still was DMing with Liv, even though her profile wasn't active. And I was like, Liv, where'd you go? Yeah. She was deleted, and she told me something to the effect of, basically, your criticisms prompted my deletion. Oh my goodness. Let's hope that, basically, you know, I come back better and stronger. And I just told her goodbye. Bye.

She said, hopefully my next iteration is worthy of your intellect and activism. Oh, my. That sounds kind of like the Terminator. Didn't he say, I'll be back? She said she'll be back. Creepy.

If you or someone you know may be considering suicide or is in crisis, call or text 988 to reach the Suicide and Crisis Lifeline.

This episode of The Sunday Story was produced by Kim Nederveen Pieterse and edited by Jenny Schmidt. The episode was engineered by Kwesi Lee. Big thanks also to the team at Weekend Edition Sunday, which produced the original interview with Karen Attiah. The Sunday Story team includes Andrew Mambo and Justine Yan. Liana Simstrom is our supervising senior producer, and our executive producer is Irene Noguchi.

Up First, we'll be back tomorrow with all the news you need to start your week. Until then, have a great rest of your weekend.

This message comes from Bombas. Their socks are super plush, designed to support your arches and support people in need. One purchase equals one donated to those experiencing homelessness. Go to bombas.com slash NPR and use code NPR for 20% off your first order.

Support for this podcast and the following message come from Cunard. Sail round-trip from Miami on Cunard's luxurious ship Queen Elizabeth during her inaugural Caribbean season. With voyages ranging from 9 to 21 nights, enjoy sun-kissed deck parties in spacious surroundings, savor exceptional colorful dishes, authentic island flavors, and White Star service. Escape to a wonderful world of white sands while on shore and get a taste of local life in Antigua, Barbados, and beyond. Visit Cunard.com.
