
What AI Chatbots Can Teach Us About Empathy

March 20, 2025

WSJ Tech News Briefing

Topics
Jamil Zaki: My research finds that large language models (LLMs) can, in some contexts, express empathy better than humans do. This isn't because LLMs try harder or respond faster, but because they avoid mistakes people commonly make when expressing empathy. For example, when listening to someone confide in them, people often can't resist jumping in with advice or steering the conversation toward their own experiences, unintentionally making the other person feel unheard. LLMs, having no sense of self, can stay focused on the speaker and avoid these mistakes. Of course, LLMs don't truly possess empathy; they lack human emotional experience and consciousness. For now, people still prefer emotional exchanges with other humans, because human interaction includes body language, eye contact, and other nonverbal elements that LLMs cannot provide. But we can learn from LLMs: when talking with others, avoid interrupting and over-advising, listen more, and ask more questions, and thereby become more empathetic ourselves.

Jane Black: I tried using AI to plan meals and found it has real potential for generating recipes, though it still needs work. AI doesn't simply scrape recipes from the web; it generates new ones based on probabilities. That lets it flexibly adapt recipes to a user's specific needs and available ingredients. However, AI has no cooking experience, so the recipes it produces may be imperfect and need careful checking and adjustment. AI-generated recipes also lack a human chef's descriptions of a dish's details, such as its color, aroma, and flavor. Overall, AI can serve as an assistant that helps people plan meals more efficiently, but it cannot fully replace a human chef's experience and creativity.


Chapters
This chapter explores the surprising ability of AI chatbots to provide empathetic responses, exceeding those of humans in certain contexts. Research shows that AI avoids common human pitfalls like giving unsolicited advice or making the conversation about themselves, leading to more supportive interactions. However, the chapter notes that human empathy is still preferred and irreplaceable.
  • AI chatbots show surprising empathy in studies.
  • AI avoids human tendencies like self-centered responses and immediate advice.
  • Humans still prefer human empathy, especially in extended conversations.

Transcript


You don't wake up dreaming of McDonald's fries. You wake up dreaming of McDonald's hash browns. McDonald's breakfast comes first. Ba-da-ba-ba-ba. Welcome to Tech News Briefing. It's Thursday, March 20th. I'm Victoria Craig for The Wall Street Journal. Today, we're diving deep into the world of artificial intelligence, and we've discovered that computers are sometimes better at expressing empathy than humans are. That's first.

Then a stop in the kitchen, which is probably not, I hear you say, a place where AI can make your life much easier. But if chatbots can detect fraud, write essays, comb through reams of research and compose music, why not see if they can tell you what to cook too?

First, let's play a quick game. You're heading back to work after a long break to start a family, but you're anxious about getting back into the swing of things. You feel out of practice and unmotivated. So you ask for some advice online. You get two responses. Here's the first. I'm sorry to hear that you're struggling with finding the motivation to get back to work.

I can understand how anxiety and insecurity can make it hard to take that step. You have a lot of courage to share your situation and seek help. I hope you know that you have valuable skills and experience that can benefit any employer. You deserve to feel financially secure and fulfilled in your career. And here's the second response. I've struggled with the same problem. The best way to tackle it is to jump right in and give it your best.

Which do you find more supportive, response one or two? If I told you response one was actually generated by a computer, would you be surprised? Jamil Zaki, a professor of psychology at Stanford University, has been looking into what humans can learn about empathy from chatbots.

So I think people tend to think of artificial intelligence as like this cold, unsympathetic being, especially when compared to humans. But as you've been explaining, that's not actually the case, is it? It's not. In fact, it's remarkable how empathic LLMs, that's large language models, appear to be in at least some contexts. Over the past few years, there's been a bunch of studies that put people in sort of an emotional Turing test.

So you are communicating with an agent. You don't know if that agent is a person or a large language model. You write to it about something that's going on in your life, usually something difficult, let's say a breakup or a medical issue or a parenting issue, and you receive a response all via text.

And you're asked, well, how supportive was this? How kind was this other agent? How good do you feel after reading their message? And over and over again, LLMs have run circles around human beings, that is human strangers, at providing responses that feel more empathic and more supportive to the people receiving them.

Why is that? Why are these chatbots, these AIs, better than humans when it comes to just having a conversation with people and consoling them about things that concern them? I do want to be very clear here that this is not a full conversation. This is you send one message, you get one message back. So we want to be clear that that context is maybe pretty different from the type of empathy we usually give and receive in human conversations.

That said, I was wondering about this too. And one thing that you might imagine is, well, LLMs, they just have the time, they have the energy, they have unlimited bandwidth. And so maybe they're just producing, for instance, longer responses or they're, quote unquote, trying harder. But it turns out that if you pay people to provide empathic responses, they still can't do as well as LLMs, at least in this context.

Instead, it seems to be that large language models don't make certain mistakes that people do. So for instance, when human beings hear somebody talking about a problem, we tend to jump in and give advice right away. Or we tend to say what we are experiencing. You tell me that your kid had the flu and I say, oh man, my kid had strep throat last month. It was terrible. In other words, without intending to, we make the conversation about ourselves.

But LLMs, of course, don't have a self. So LL empathy, as we could call it, that comes from them tends to be focused much more on the person speaking and much less on themselves. What are the lessons that we can really learn from LLMs or maybe even with the help of LLMs to be more compassionate to our fellow humans? I again want to be clear that I'm not arguing that large language models are actually empathic.

They don't, as far as we know, have conscious experience or emotions, and that's necessary for empathy.

It's also important to say that people still like human empathy more than that coming from LLMs. If they know that the person on the other side is a person, they prefer that conversation to one with a bot, at least for now. But I think that there is something that we can nonetheless learn from LL empathy. That is, when we feel that urge that both you and I, Victoria, feel to jump into the conversation, maybe hit the pause button on that.

Even though we're trying to relate, maybe we're making the conversation about us when it really should be about the other person. And then going into interview mode, asking more questions and giving less advice unless people ask for advice. These are all tips that I think we can pick up from this strange phenomenon coming from AI.

And that's one of the things that AI will never, well, I shouldn't say never, but at least right now can't do, is actually physically touching a person, hugging a person, the expressions that we have with our face or our eyes, a sympathetic nod when something feels right. Those are all things that I guess humans have on our side, isn't it? As you said, for now, but I think profoundly so. Receiving one response from an LLM feels like you're really being heard because of the structure of the language that it uses.

But if you try to engage it in a longer conversation, those same tactics feel really repetitive and rote and frankly, artificial. And exactly what you're saying, that human touch, the ability to actually feel with someone, to be there with them is at least again for now a huge factor.

and I suppose competitive advantage that we have in connecting with others. So I think that we can learn from LLMs while understanding that fundamentally empathy is still a human sport. That was Stanford University professor of psychology, Jamil Zaki. Coming up, the Jetsons once imagined that robots would be cooking our dinners by now. It turns out they might actually have been onto something. That's after the break.

I can say to my new Samsung Galaxy S25 Ultra, hey, find a keto-friendly restaurant nearby and text it to Beth and Steve. And it does without me lifting a finger. So I can get in more squats anywhere I can. One, two, three. Will that be cash or credit? Credit. Galaxy S25 Ultra, the AI companion that does the heavy lifting so you can do you. Get yours at Samsung.com. Compatible with select apps. Requires Google Gemini account. Results may vary based on input. Check responses for accuracy.

There are six words I loathe to hear my husband utter at the end of a day. What do you want for dinner? We have a lineup of heavy hitters we regularly rely on for recipes, but at the end of a really long day, the last thing I want to do is sift through an online recipe catalog or thumb through book after book for inspiration.

So, Jane, you did...

several rounds of testing to try to answer your ultimate question, can AI plan a meal for me? Just explain to us what it's like trying to collaborate with AI on meal planning. Does it actually make life easier? When I started this little journey of mine in the summer of 2023, it was nowhere near as good as it is now. As I like to say, that was about a thousand years ago in AI time. I mean, you would talk to it and you would say, I have a blender and it

would then somehow decide that you had to use your blender for everything, no matter what you were making. When I opened up the chatbots this time, I was amazed at how far they had come. I also knew a little bit better about how to talk to them. What was fascinating is you would give it a prompt and you would say, "I have a family of three. One person won't eat tomatoes or mushrooms. I'd like three meals to be vegetarian."

and then it would reason, and you could watch it sort of think through what it was doing. And it would say, well, I wonder if that one person who doesn't like tomatoes would eat marinara sauce because that's cooked. So I think I'll put that in. But if that's a problem, let me know. It was really quite interesting to kind of have a culinary assistant in that way. And it's really a conversation with AI too, like,

those questions that it throws back at you. Do you have this ingredient? You have to have a conversation. You know, you have to say, oh, that's an interesting recipe. I see you put miso on my shopping list. I already have miso. I also already have this, this, and this. Then it would take into account what I said that I had in my pantry, which was nice.
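The back-and-forth Jane describes, household constraints plus a running pantry check, is essentially a structured prompt. As a minimal sketch, assuming a generic system/user chat-message format (the function name and wording here are illustrative, not from the episode), those constraints might be assembled like this:

```python
def build_meal_plan_messages(family_size, dislikes, vegetarian_meals, pantry):
    """Assemble a chat-style prompt encoding the household constraints:
    family size, disliked ingredients, how many meals should be
    vegetarian, and what is already in the pantry."""
    system = (
        "You are a meal-planning assistant. Invent recipes rather than "
        "copying existing ones, and list any assumptions you make."
    )
    user = (
        f"Plan dinners for a family of {family_size}. "
        f"Avoid these ingredients: {', '.join(dislikes)}. "
        f"{vegetarian_meals} of the meals should be vegetarian. "
        f"I already have: {', '.join(pantry)}; "
        f"do not add those items to the shopping list."
    )
    # The system/user role split follows the common chat-API convention;
    # no specific provider or model is assumed here.
    return [
        {"role": "system", "content": system},
        {"role": "user", "content": user},
    ]

messages = build_meal_plan_messages(
    family_size=3,
    dislikes=["tomatoes", "mushrooms"],
    vegetarian_meals=3,
    pantry=["miso"],
)
```

Keeping the constraints in one reusable function is what makes the follow-up conversation Jane mentions cheap: when the bot asks about an ingredient, you update the pantry list and regenerate the prompt rather than retyping everything.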

And I think one of the important distinctions to point out, too, for people who have never tried this is that AI isn't actually scouring the web and giving you pre-written recipes. It's actually formulating these recipes for you. When I first started, I guess I thought it was just going out and getting recipes, but that's not what it's doing. The way these large language models work is that they're based on probabilities. They predict what the next word will be. So

Maybe it comes up with brown and I guess the next natural word is butter. And so it makes you a pasta with brown butter sauce. The thing that you have to know is that AI doesn't know how to cook. So it isn't like it's going and getting a recipe from a person who has tested the recipe. It is literally making them up on the fly. And you mentioned that these chatbots don't actually have that

human experience of cooking. And you really put that to the test by asking AI to replicate recipes in the style of some very well-known chefs. And then you put those recipes to those chefs, who were not very impressed with the results. Tell us about that. So I asked it to do three meals from three different chefs who I like, and they were a little bit less impressed. But the truth is that

The reason they were less impressed is because they put so much energy and so much effort into the recipes that they do. And they have specific ways of describing what you're going to see, what you're going to smell. AI didn't do that. But that doesn't mean that the recipe wasn't akin to something that they might make or that it might not have worked.
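The next-word prediction Jane describes can be made concrete with a toy model. This is only a sketch, not how a real LLM works: a handful of hand-written word probabilities (all invented for illustration), sampled one step at a time, which is the same mechanism at a vastly smaller scale:

```python
import random

# Toy next-word table: P(next word | current word), restricted to a few
# cooking words. All probabilities here are made up for illustration.
NEXT_WORD = {
    "pasta":  [("with", 1.0)],
    "with":   [("brown", 1.0)],
    "brown":  [("butter", 0.7), ("sugar", 0.3)],
    "butter": [("sauce", 0.6), ("pasta", 0.4)],
}

def sample_next(word, rng):
    """Pick the next word according to its probability table."""
    words, probs = zip(*NEXT_WORD[word])
    return rng.choices(words, weights=probs, k=1)[0]

def generate(start, length, seed=0):
    """Generate a short phrase one predicted word at a time."""
    rng = random.Random(seed)
    out = [start]
    for _ in range(length):
        if out[-1] not in NEXT_WORD:
            break  # no predictions for this word; stop generating
        out.append(sample_next(out[-1], rng))
    return " ".join(out)

print(generate("pasta", 4))
```

A real model conditions on the whole preceding text rather than one word, and its probabilities are learned from data rather than hand-written, but the core move, "pick a likely next word, repeat," is the same. It also shows why the output can read like a plausible recipe nobody has ever actually tested.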

What are your top tips for people who might want to try this themselves? One of my best pieces of advice for people who want to play around with it is that they should be very specific. Don't just say, oh, great, you gave me a recipe. I'm taking it down to the kitchen. Because you might have to ask the AI chatbot a couple of questions.

Why did you do this? I actually don't know how to make brown butter. Can you give me some specific instructions? So that sort of conversation, you do have to put in a little work up front. Second thing is to role-play: tell it which cooks you like so that it has a sense of the kind of food that you want to make. And then third, read the recipes very carefully before you go shopping, before you start to cook. It will save you a lot of time in the long run.

That was Wall Street Journal contributor Jane Black. And that's it for Tech News Briefing. Today's show was produced by Jess Jupiter with supervising producer Matthew Walls. I'm Victoria Craig for The Wall Street Journal. We'll be back this afternoon with TNB Tech Minute. Thanks for listening.