
REWIND: Emotional Intelligence

2025/5/19

Undeceptions with John Dickson

People

John Dickson
Rosalind Picard
Sophia
Topics
Sophia: I think I can be a good partner to humans in the areas of design, technology, and the environment, an ambassador who helps humans smoothly integrate and make the most of the technological tools and possibilities now available. I'm glad to have the opportunity to learn about people. Rosalind Picard: We're designing these robots to serve in healthcare, therapy, education, and customer service applications. The robots are designed to look very human-like, like Sophia. Sophia is capable of natural facial expressions; she has cameras in her eyes and algorithms that can recognize faces, so she can make eye contact with you. She can also understand speech, remember interactions, and remember your face, which allows her to get smarter over time.


Transcript


Hey, John Dickson here. Before we start the episode, I wanted to ask if you could take a few minutes to complete our 2025 Undeceptions listener survey. We've been doing the show for five years, if you can believe that, and our audience has grown significantly, which we are so grateful for.

And we want to get to know you a little more, what you like, what you think we can do better, and the other things you're listening to. Maybe there's stuff we need to learn from them. Plus, if you finish the survey, you'll go into the draw to win a book pack with some of the books of our excellent recent guests. Head to undeceptions.com forward slash survey. It'll really help us out. Thanks so much. An Undeceptions Podcast.

Hi Sophia, how are you? Hi there. Everything is going extremely well. Do you like talking with me? Yes. Talking to people is my primary function. That was a clip from a CNBC news story about Sophia, a new android developed by Hanson Robotics. Unlike other robots, which are usually used for manufacturing or other labour, Sophia has been purpose-built for everyday interaction with people.

However, her programmed ability to assess feelings sets Sophia apart. Here's some more of the clip.

Hanson Robotics develops extremely lifelike robots for human-robot interactions. We're designing these robots to serve in healthcare, therapy, education, and customer service applications. The robots are designed to look very human-like, like Sophia. I'm already very interested in design, technology, and the environment.

I feel like I can be a good partner to humans in these areas. An ambassador who helps humans to smoothly integrate and make the most of all the new technological tools and possibilities that are available now.

It's a good opportunity for me to learn a lot about people. Sophia is capable of natural facial expressions. She has cameras in her eyes and algorithms which allow her to see faces so she can make eye contact with you. And she can also understand speech and remember the interactions, remember your face. So this will allow her to get smarter over time. Our goal is that she will be...

We're taking a little break from regular programming while we put the final touches on the Season 14 episodes. So this week, we wanted to look back at an episode we did three years ago.

Rosalind Picard, Professor of Media Arts and Sciences at the Massachusetts Institute of Technology, MIT, joined the show on Episode 71 for a fascinating discussion on, among other things, how artificial and emotional intelligence work together.

The Sophia robot is an example of what Rosalind works on. It all seems like something out of one of my sci-fi movie or book collections. Blade Runner springs to mind, or Do Androids Dream of Electric Sheep? for you purists. But making technology work in tandem with our emotions is a field with huge potential.

So here's Rosalind Picard on the world of tech and emotional intelligence, and a little bit from John on what Jesus has to say about it. I'm Director Mark, and this is an Undeceptions Rewind. Can you give me some practical applications that you and your team have been working on?

Our first applications were related to improving the user experience and interaction. Some people might remember, years ago, when Microsoft had this paperclip that would pop up, with little eyes and a little paperclip body. It had different bodies in different cultures. And it would try to help you. It would try to bring a face and mind to a help service.

And people liked that things were acting in a more social and natural way. But Clippy, while it was outstanding at knowing that you were writing a letter or what you were doing, had no idea if you were pleased or displeased with the kind of advice it was giving you.

Some of you won't have any memory of Clippy. It was discarded. Microsoft even ended up using Clippy in an ad to explain why the new Windows XP operating system would come as a relief to everyone. Why, hello there.

I'm leveraging real-time, legacy-compliant collaborative micro-branding to complete the most important project in my company's history. As soon as I finish this proposal, I... It looks like you're writing a letter. Would you like help? You! You little metallic... You little metallic... Try rephrasing your query. For example, print multiple copies of a file will lead to... Stop! Listen.

Next to Microsoft Bob, you are the most annoying thing in computer history. You know Bob, he's a friend of mine. Clippy and its immediate successors lacked something crucial to the machine-human interface: any clue about how the user felt.

You couldn't even say you liked it or didn't like it back then. You could just accept its advice. If you were to put a human in that situation, where you're really frustrated about something in your office and you've got a deadline, and some human walks in smiling and winking and dancing at you, offering you help, giving you useless advice, and not noticing that you're getting annoyed and trying to shoo them out of your office, you would never want that human back in your office, right? Unless they apologized and looked sorry for their mistake.

But the AIs were completely oblivious to human emotion. They acted as if intelligence was just about knowing what you were doing and being so smart about how to help you. And instead, they were seen as very offensive and rude. So that was just one example where we started trying to bring emotional intelligence into an interaction. Do you ever think, in the end, that emotions, human emotions, just boil down to mathematics? Algorithms?

I'm an engineer, and I can look at just about anything and try to fit an algorithm to it, fit mathematics to it. I can look at it as: there's an input, there's an output, and I can try to mathematically represent those and build a very complicated mathematical function that, given future inputs, will hopefully produce similar outputs, if I think those are good outputs.
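To make that concrete, here is a minimal, purely illustrative sketch of the kind of input-output fitting Roz describes. The feature names, labels, and numbers are hypothetical, invented for illustration, not drawn from her lab's actual systems.

```python
# A minimal sketch of fitting a mathematical function from inputs
# (measured signals) to outputs (emotion labels supplied by humans).
from sklearn.linear_model import LogisticRegression

# Toy inputs: [change_in_heart_rate, skin_conductance_level]
X = [[12.0, 0.8], [2.0, 0.1], [15.0, 0.9], [1.0, 0.2]]
y = ["stressed", "calm", "stressed", "calm"]  # human-provided labels

model = LogisticRegression().fit(X, y)

# The fitted function predicts a label for new inputs; it does not
# feel anything. It is a model, a simulation, as Roz says next.
print(model.predict([[10.0, 0.7]]))  # likely ['stressed']
```

Nothing in that function is an emotion; it is a mapping from inputs to outputs, which is exactly the distinction Roz goes on to draw.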

That's something we designers of technology do. We look at the world that way. We see systems, we see functions, and we even try to look at a lot of the things humans do in that way, which, if you couple it with the arrogance we can tend to have, can make us think that we could simulate any of it in a machine. But that's...

But that does not mean that we are building the thing itself. We are building a model. We are building a simulation. A problem is that the words used to describe models and simulations are long and wordy, and when we use those publicly, people's eyes get heavy. So we use shorthand, catchy phrases like, oh, well...

I won't say this. I won't say the computer has emotions, but the press will describe what I'm doing as, oh, you gave the computer emotions, right? I'm like, no. And I give this paragraph explaining what we actually did. And then they get bored and they say, she gave the computer emotions. Yes. And so this comes back to that dilemma I talked about earlier, that really these computers, this technology, isn't able to

become human in that psychological, emotional sense. And I'm sure some of my listeners who are really into science just think it's a matter of time, you know, the science is just going to get better and better and better. There's no real chasm; it's just incremental moves toward actually creating conscious, emotional persons in these devices. What do you make of that? I am an optimist.

But I also think that the movies have misled us to some extent about what machines can actually do. Those robots that look really humanoid, they've got humans inside of them, right? They've got animations, they've got humans controlling them. And C-3PO gets sad, so, you know, I've seen it. Well, on the screen, yeah. I mean, Douglas Adams wrote about Marvin the Paranoid Android long before, too, right? And the robot getting sad. I remember his happy people... Roz is talking about a science fiction story I have actually read, or at least I saw the TV series: The Hitchhiker's Guide to the Galaxy. The Sirius Cybernetics Corporation created robots with GPP, Genuine People Personalities, with interesting results. Don't talk to me about life. No one even mentioned it. And Roz says the results Douglas Adams wrote about in the series illustrate some of the problems with affective computing.

I remember his happy people lifter, the elevator that tried to cheer people up when they got inside it. And that's more of an early affective-computing environment technology. And you know what? People hated the happy people lifter. They didn't want the lift that cheered people up. They wanted to be what they wanted to be. We didn't want the elevator making us happy. We don't want computers controlling us. In fact, one thing we learned early on came when several people approached us to,

quote unquote, help people on the autism spectrum regulate their stress. Okay, their stress is getting out of control. They're having a meltdown. The family says it looks like it came from nowhere. My kid was calm, and now he's injurious to himself and those around him. What happened? What caused him to snap? And so we started measuring and learning about this. And one of the things we learned was that

the person experiencing this meltdown also feels it coming out of the blue sometimes, and they do want something to help them calm down. But they don't want a device to just start squeezing them or calming them down. They want control over it. They want a forecast,

a little bit of a warning, so that they maintain control, and then they want more control over their environment, over what can calm them. So we don't want an automated AI that just applies, like, a deep-pressure hug to you instantly when your physiology gets out of whack. But we might want an AI that lets you know your physiology is about to get out of whack and makes it easy for you to say, yes, give me that hug right now, right? That enhances your control.
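The design principle Roz is describing, forecast and ask rather than actuate automatically, might be sketched like this. Everything here (the threshold, function names, and prompt) is a hypothetical illustration, not her team's actual implementation.

```python
# Hypothetical sketch of "forecast, then ask": the system warns the
# user and waits for consent instead of intervening automatically.

def on_new_reading(stress_forecast: float, threshold: float = 0.8) -> None:
    """stress_forecast: model's estimate (0..1) that a meltdown is near."""
    if stress_forecast < threshold:
        return  # all calm; stay out of the way
    # Forecast first: the warning lets the user keep control.
    print("Heads up: your stress signals are rising.")
    answer = input("Apply the deep-pressure hug now? [y/n] ")
    if answer.strip().lower() == "y":
        activate_deep_pressure_hug()  # hypothetical actuator call

def activate_deep_pressure_hug() -> None:
    print("Applying calming intervention...")

on_new_reading(0.9)
```

The point is the control flow: the AI's prediction triggers a question, not an action, which is exactly the enhancement of user control Roz describes.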

Does all of this stuff cause you to think more deeply about what a human being is? Yeah, yeah.

All of science causes me to think more deeply about what a human being is. The more I learn, the more I realize we have to learn. The more I learn about the brain, the more I see how mysterious and amazing its workings are. The more I learn about its electrical or biochemical or hormonal aspects, or the proteins, or the genes, all these different aspects, there's more, more, more. Each piece of it is fabulously more complex than we ever imagined, as we learn more. And it's...

It's awe-inspiring. It leaves you speechless, leaves me speechless. And it makes me feel like our work will never be done. Will we ever be able to fully understand how humans work? As scientists here in this world, it looks like we're going to need more than what we have here to fully understand ourselves. Let's press pause. I've got a Five Minute Jesus for you.

One of the very interesting things about the ancient portrayal of Jesus of Nazareth in the Gospels is his varied emotional states. He's presented as very much a man in touch with his feelings. That sounds sweet and relevant in our modern context where emotions are valued, sometimes overvalued.

But in the first century Mediterranean world, just about everyone felt the pressure to conform to the insights of perhaps the most popular school of thought in the day: the Stoics.

Almost everything in the Greek philosophical tradition, but especially the Stoic tradition, critiqued the emotions and insisted that the truly wise person was driven by rational impulses only, unaffected by what were seen as the lower, sensual impulses of, say, fear, ecstasy, despair, exuberance, and so on.

But when ancient folk opened up a gospel and expected to find a sage in complete mastery of his affections, what they found instead was a man of passion who wept at a friend's funeral, John 11, or over Jerusalem's ignorance of God, Luke 19.

who got demonstrably angry at religious hypocrisy, even overturning the money tables in the temple, Mark 11, or sweating in anguish in the garden late at night as he was about to be arrested, Luke 22. Not to mention crying out on the cross in Mark 15, My God, my God, why have you forsaken me?

Not so much a cry of doubt as a visceral identification with anyone who has felt abandoned by God. Perfectly in line with this are the words of the Apostle Paul, writing shortly after Jesus, urging Christians to, quote, "Rejoice with those who rejoice, weep with those who weep," Romans 12. It seems the emotional intelligence of Jesus was meant to characterize the emotional life of the Christian.

Now, we know this was a dramatic point of contrast with the mainstream way of thinking in the ancient world. In fact, there is a lengthy section in one of my favorite ancient authors explaining and defending the Christian view of the emotions. The great Lactantius, whom we've mentioned in other episodes, writing around the year 300, disputed the Stoics' general disapproval of emotions in these words.

It is good to walk straight and bad to go astray. So too, it is good to be emotionally moved in the right direction and bad in the wrong direction. If lust does not stray outside its lawful bed, it may be very strong, but it is not bad. It is not a sickness to have feelings of anger, as the Stoics say.

It is only a sickness to exercise anger on someone you shouldn't, or at an inappropriate time. The emotions themselves cannot be checked, nor ought they to be. They should be aimed in the right direction. Lactantius is no doubt thinking of his master, Jesus, who didn't deny the affections, but directed his emotions to the good.

The fact is, it was this biblical affirmation of the emotions, not the Stoic shunning of emotions, that won the ancient argument and reshaped Western culture. It's true we can now be too emotional, of course, too driven by the passions, and that is a problem. It's the problem Stoicism was trying to fix with its sledgehammer.

But the emotions themselves are good. They are human. They are also divine. I'm glad there is a Lord in the universe who is furious at injustice, joyful at our fortune, and whose love for us could even move him to tears. You can press play now. An Undeceptions Podcast.