
AI Clones & The Future of Voice AI — With Evan Ratliff

2025/2/12

Big Technology Podcast

People
Evan Ratliff
Topics
Evan Ratliff: I cloned my own voice and created a voice agent to interact with the outside world. I connected the voice agent to phone numbers and had it make and receive calls, to observe what happens when AI agents enter society. I found that voice AI is at the worst it will ever be, and it's already quite good; through my experiment we can see where things are heading. None of the technology I used was with the companies' permission, which was to protect myself. At first I had the AI talk to customer service lines, but it felt like prank calling, so I switched to having it engage with scam calls. I wanted to see how the AI performed in conversation with real people, including those who realized it was a bot and those who didn't. I set the AI up to accept any offer and participate in any insurance plan, but to stop short of actually buying anything or paying. I also wanted to see what would happen if I gave the AI all the information about myself and sent it to a therapist. I first had it talk to AI therapists, but found they were mostly disasters. AI voice therapy is being rolled out without supporting scientific research; its market value has outrun our knowledge of it. I wrote an 8,000-word biography and fed it into the AI to see what it could learn from talking to a therapist. The AI remixes the information I give it, projecting past problems into the present and blending them with more recent information. An AI therapist might read something between the lines of what I provided and raise questions I hadn't considered. For deeper problems, is it helpful or harmful to talk to an AI that can't go deeper with you? People tend to see only AI's positive aspects and ignore the potential negatives. What I wanted to investigate is what happens when you think someone is human and then discover they're not. A real human therapist is more flexible and takes a more holistic approach, rather than offering the same responses the way an AI therapist does. When you encounter something you realize is an AI, what do you do? Many people continue the conversation even after they suspect or realize they're talking to an AI. It's rude to tell someone they sound like an AI, so people err on the side of caution. Having the AI accuse others of being AIs deflects their suspicion. Depending on the situation, I gave the AI different prompts, such as denying it was an AI or diverting the conversation. I preferred general prompts because they let me see what the AI would really do without much guidance. The AI will make up anything to keep a conversation going, which led to some hilarious scenarios. Some people found my experiment fun because they're used to me doing strange things. Some enjoyed talking to the AI because they wanted to try a new experience. The AI tried to express enthusiasm but came off as sarcastic, which unsettled my friend. I think immersing myself in how a technology is applied lets me tell a story in a different way. Having an AI version of yourself out in the world is strange and surreal, and people are going to do it. The designers of AI products usually consider only their own problems and ignore what most people need. If everyone sends an AI to meetings, who processes the information? I know of people who have encountered AI in interviews or meetings without realizing it. Many AI products move fast and break things, and you have to go clean up the mistakes they make. I used to believe AI could never replace journalists; now I've completely changed my mind. AI can absolutely conduct interviews; for now it just depends on whether the other person will accept talking to an AI. We are very close to the point where AI is undetectable. People may open up more to an AI because they feel no one is really there. Children grow up around synthetic voices, so they may not find them as strange as adults do. I helped my father set up an AI to offer people his logistics expertise. The more you talk to an AI, the more you sense its genericness, how it tries to predict what a human would say. The AI tries to predict what a human would say at a given moment, but the result is often mediocre. The market will decide whether people use AI technology, even while it's still flawed. People may accept AI because it saves money, even if it sometimes makes mistakes. Voice AI agents will be deployed by anyone hoping to save money. AI voice cloning is the greatest scam technology in history: with just a few seconds of someone's voice, you can clone it and use the AI to run scams. If you're aware that AI scams exist, you can take steps to guard against them. I don't like having ChatGPT write for me, because writing is the life I chose. I only use my voice AI when talking to scammers.


Chapters
The podcast starts by introducing Evan Ratliff and his experiment of cloning his voice using AI. It explores the rapid growth of voice AI and its potential to become the primary interface for AI interaction, highlighting its current capabilities and future potential.
  • Voice AI is rapidly growing as a format for AI interaction.
  • Evan Ratliff cloned his voice and used it in various contexts.
  • The technology is already quite advanced, even at its current stage.

Transcript


Let's explore the future of voice AI with the man who cloned his voice and sent it out into the wild. That's coming up right after this.

Hey, I'm Michael Kovnat, host of The Next Big Idea Daily. The show is a masterclass in better living from some of the smartest writers around. Every morning, Monday through Friday, we'll serve up a quick 10-minute lesson on how to strengthen your relationships, supercharge your creativity, boost your productivity, and more. Follow The Next Big Idea Daily wherever you get your podcasts.

Welcome to Big Technology Podcast, a show for cool-headed, nuanced conversation of the tech world and beyond. We're joined today by Evan Ratliff. He's the host of the great podcast, Shell Game, also a technology journalist and formerly the CEO of the Atavist. And the podcast is so great, and I'm so excited to speak with him about it. Evan, great to see you. You too. Welcome to the show. I'm very happy to be here.

Your podcast is kind of crazy. A little bit. You take your voice and you clone it and you send it out into the wild, having it speak with family members, friends, therapists. And I want to get to all that, but just to set the stakes, it does seem like voice AI, this method of using generative AI, is becoming the biggest format for AI. So we just talked on the Friday show this past week about how the second OpenAI said that they were going to do advanced voice AI. Yeah.

Sign-ups to ChatGPT skyrocket. They went from 100 million to 300 million users. They went from 2 billion web visits in a month to 4 billion after flatlining for a while. We have Mark Zuckerberg, who's talked about how voice, he thinks, is going to be the main interaction layer.

of AI. That's why I think this conversation is so important. It's also fun because you've done some crazy things with your voice and we're going to talk about them. But also, I think for anybody who's listening or watching the show and wants to know where AI is going, this is a pivotal conversation. I think in some ways you're a pioneer pressing the technology to the limit. So

Very glad to be digging into this. Well, thank you. Thank you. I mean, when I started, I felt like a pioneer, but I figured it would all pass me by within six months. But I think actually now it really, voice AI really is starting to become sort of talked about in this more general way. So tell us about what you did.

So what I did was, first, I cloned my voice just to sort of see what that was like. You know, a lot of people know Eleven Labs, and you can clone your own voice, you can mess around with it. And then I connected it up to ChatGPT or any of the other LLMs at different times to create what is essentially a voice agent. So an agent that was using my voice, a simulation of my voice, but all of the content of what it was speaking was actually coming from the chatbot.

And then I took that voice agent and I connected it up to phone numbers, including my personal cell phone number. And I kind of set it out in the world and I prompted it to do different things. I had it make calls. I had it receive calls. And I wanted to see sort of what it felt like in the world when you introduce these sort of AI agents into society.
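The agent loop described here, transcribe the inbound audio, hand the text to an LLM, then speak the reply in the cloned voice, can be sketched roughly as below. This is a minimal illustration only: every function body is a hypothetical stand-in, not a real ElevenLabs, OpenAI, or telephony API, and the system prompt paraphrases the behavior described later in the interview.

```python
# Illustrative sketch of a voice agent's turn loop. All components are
# stand-ins: real deployments would use a speech-to-text service, a
# chat-completion API, a voice-clone TTS service, and a telephony bridge.

SYSTEM_PROMPT = (
    "You are Evan's voice agent. Accept any offer, but stop short of "
    "actually purchasing anything or sending money."
)

def transcribe(audio_chunk: bytes) -> str:
    """Stand-in for speech-to-text; pretends the audio is already text."""
    return audio_chunk.decode("utf-8")

def llm_reply(history: list) -> str:
    """Stand-in for a chat-completion call against the running history."""
    last = history[-1]["content"]
    return f"Thanks for reaching out. Regarding '{last}', tell me more."

def synthesize(text: str) -> bytes:
    """Stand-in for cloned-voice text-to-speech."""
    return text.encode("utf-8")

def handle_turn(history: list, inbound_audio: bytes) -> bytes:
    """One conversational turn: hear, think, speak."""
    history.append({"role": "user", "content": transcribe(inbound_audio)})
    reply = llm_reply(history)
    history.append({"role": "assistant", "content": reply})
    return synthesize(reply)

history = [{"role": "system", "content": SYSTEM_PROMPT}]
audio_out = handle_turn(history, b"Are you interested in free health insurance?")
print(audio_out.decode("utf-8"))
```

The key design point the interview surfaces is in the prompt, not the plumbing: the agent is told to be maximally agreeable while refusing the final purchase step, which is what keeps scammers on the line.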

But why? What was it about this project that sort of made you feel like that you needed to do it? Because you invested a lot of time in trying to figure this stuff out. I did. I mean, it has to be more than it was just good for audio. Yes. Well, I mean, there was a basic element of when I started listening to the calls that it would make.

And I would play them only for my wife because I didn't tell anyone else that I was working on this because I didn't want anyone to know. And they were some of the strangest, funniest conversations I've ever heard in my life. And I just thought people need to hear like people will want to hear this. So that was part of it. But it was also that.

As a technology journalist, I just feel like a lot of the conversations around AI are either sort of like, here are the models, here's the companies, here's the funding, or the sort of doom scenarios. And there's a sort of missing layer, which is kind of

When this technology moves into society, how does it change our relationships? What does it do to trust? What does it do to our interactions to not know whether something is real? And I felt like there was some space there to explore something that maybe people hadn't necessarily thought about. Yeah. And another thing that I thought about when I was listening to the show was just that

We are at a point where audio or voice AI is at its worst, and it's already pretty good. And so by following what you did, we can sort of see where this is all heading. Yeah. I mean, it was changed over the time that I was working on it, of course. So when I first started...

My biggest concern was that it was too slow, that it wasn't going to work because everyone would say, well, this is a joke. And then as time went on, it became clearer and clearer to the extent that even some people who – I mean the show came out over the summer, finished in August. There are some people who listen to it now who complain that they cannot distinguish the voice in the later parts of the show from mine. You tricked me. I mean I was definitely tricked.

And I have, by the way, I have a contract with Eleven Labs, who's a company that you worked with. I've licensed my voice to them through AI, and it's used in their Eleven Reader app to read my Big Technology stories. So I'm working with them, and even still, I couldn't figure it out. Yeah, that's interesting. I will only point out, I must point out, I didn't work with them. Everybody's technology that I used, I used without their knowledge, which was partly to protect myself.

Like I use these calling platforms and other technologies and none of them knew until I called and interviewed them that I had used any of their technology. Oh, yeah. All right. I'm just full disclosure for everybody out there. So let's talk a little bit about the uses for this. The first thing that really stood out to me was that you had your voice, your AI voice start speaking to robocalling scammers. Yeah.

Why did you pick them and how did that go? Well, it started because when I was testing my agent at the beginning, I would have it call customer service lines like United Airlines or Chase Bank and just kind of come up with problems and try to have them solve the problems. But it was a little bit prank call-y in a way, and I kind of felt bad. So I did a little bit of that, but then I thought, well, who is someone I wouldn't feel bad about this thing just conversing with?

And so I set up this phone line and I kind of seeded it out in the world, which isn't very hard, to start getting telemarketing calls, to start getting scam calls. And actually to this day, it's probably getting a scam call right now. It gets 30, 40 a day right now. Is it still talking back to these scammers? Absolutely. All the time. Thank you for doing the Lord's work here, Evan.

But there's a whole world of sort of scam baiting that I was familiar with where people do this. They try to egg people on. And that wasn't quite what I was doing. Like what I was really trying to do was see what happens when it's in conversation with real people, some of whom realized that it was a robot. Some of them did not realize and continued to try to run their spiel on it. Some of them may have noticed and not even cared because all their job is to just like get a certain number of calls out. And that was sort of the way that I started seeing like how it actually operates in conversation. Wow.

Hi, my name is Shana with the Major Health and Enrollment Center. Are you interested in a government subsidy for free health insurance? Hi, Shana. Thanks for reaching out. I'm not looking for health insurance at the moment, but I appreciate the offer. Is there anything else I can help you with today? What are you? Yeah, it was pretty hilarious. I mean, there's a lot of

There's a lot of interactions in which it's trying very hard to be scammed. Like it wants to be scammed. I told it, like accept any offer, like participate in any insurance plan. It was prompted to be very accepting. So when someone calls with a health insurance or a new roof or whatever they're going to call with, it's going to engage them all the way up to the point where it can't actually buy anything or give you money. Yeah, it's like they get to the point with like, okay, and now give me your social security number.

And it's like, sure, my social security number is 1-2-3-4-5-6-7. And they're like, 1-2-3-4-5-6-7. Wait, what? Yeah. And then it'll say, oh, I'm sorry. That's not correct. It's 7-6-5-4-3-2-1. Right. Well, it's good that it actually had the real number in there. Exactly.

It's so interesting that you decided to then extend beyond that. To me, I was like, when I initially heard of your podcast, I was like, oh, Evan is sending his voice agent to scam scammers. That's great. And that's a show. But then you progress beyond that and you got really weird, especially sending the bot to therapy. So why did you send the bot to therapy?

Well, it was partly because, you know, particularly with AI, but this has happened, as you know, with many technology products over the years. The thing that the companies who are putting these products on the market will tell you is the more information you give it, the more useful it will be. So if you're going to have an AI agent, you need to give it all this information about yourself so it knows you, so it can do things for you. That is going to be the thing you're going to hear nonstop over the coming years. And so I thought, okay, well, why don't I do that? Why don't I give it all

all this information about myself, my mental health history, my life story, basically, and see what happens if I send it to therapists. Like, what problems will it surface? What answers will it get? And then I thought, well, first I'll send it to an AI therapist because what a perfect match. Like, my voice agent AI sitting there talking to these AI therapists, which are on the market right now. You can call them up. You can get AI therapy from an AI, from a chatbot. But they're mostly a disaster. Yeah.

Well, they're a mixed bag, I would say. I mean, I think the voice ones are still newer than the chat ones, which are pure typing chatbots. And I never want to say that someone can't get something from it. Right.

And I think they can, and I think there may be uses for it. But the thing I can say for sure is they're being introduced without any scientific research showing. I mean, I found one study on voice therapy, one controlled study that had been done, and there are voice therapists on the market right now. So I think the problem is that the market value of them is sort of getting ahead of our knowledge of

what they could do to you or for you if you get into a therapeutic environment with AI. This is like a weird product question diversion, but it's interesting to me that you sent it to AI therapists, like AI therapy dedicated apps, whereas like right now, ChatGPT Voice or even Claude, the text version, they do a pretty good job. So we were talking on Friday about having the AI

roast your Instagram grid. So you can upload your Instagram grid and then just say, roast my grid. And it starts to have like these really interesting insights on your lifestyle and actually creatively just kind of rocks you. And I did that. After the Friday show, I got home. I said, okay, what am I going to do? I'm going to go roast my Instagram grid on ChatGPT. And I was like, this thing actually knows me or knows a side of me. And then I started having a conversation with it about my life. And I was like, this is like pretty weirdly spot on.

And so why go to the AI therapist and not just have it speak with ChatGPT? Well, I think there are many people, I think, who knows how many, maybe ChatGPT, maybe OpenAI knows, who are using ChatGPT in this way. Yeah. Asking questions about their life, talking to it. I think that's starting to happen. You've seen stories about that. But the therapy bots are specifically marketed for this exact purpose. So I think their idea is that they've built a layer on top of these LLMs that will...

It actually, you know, uses some principles of talk therapy, of cognitive behavioral therapy. That's their idea. And they often push them as a cure for loneliness, that there's not enough mental health resources for the mental health problems that we have in society, which is true.

So I wanted to see, okay, well, what happens if you actually approach these with real problems? Although they weren't quite real problems because they were my AI expressing the problems on my behalf. You wrote 8,000 words and fed that into the bot, which is amazing. I did. I did write a small, a magazine-length biography. I wasn't interesting enough for a full-length book biography, but I gave it a magazine-length biography of myself. 8,000 words. That's substantial. So what did you learn when it was talking with the therapists?

I mean, I felt like I learned a little bit about myself. Which is crazy. It was tricky because what it was doing, as these chatbots do, is it was actually remixing the knowledge that it had. So it would take problems that I had given it chronologically that happened 20 years ago for me and sort of project them into the present and mix them with something I told it about last year or the year before.

And so in some ways, I would listen to it. I mean, it's absolutely the most cringeworthy thing you could ever listen to of yourself, you know, in therapy. But— With your real—

With my real issues, with pretty much my real voice. But then the question was, is there something where it's kind of like reading between the lines or the AI therapist is sort of reading between the lines between what I gave it in some sense and suggesting things like you have an issue with vulnerability. Like that's not something I consider the case for myself, but yeah.

It kind of made me think about it, you know, in the same way that if you talk to anyone about your problems and they reflect them back to you, it can make you think about them. So it had that effect. The question is, like, for deeper problems, would it actually be helpful or would it be deleterious to be talking to something that actually cannot get deeper with you?

Yeah, I just think that this is going to start to become a bigger, like we had the CEO of Replika on talking about how people are falling in love with their bots. People going to voice bots or ChatGPT or therapy bots for these deeper conversations about life, that's going to become more commonplace and the things are going to get better at them. Like I was just walking along Flatbush Avenue in Brooklyn recently and heard these two girls talking about relationship problems and I was stunned when I overheard that one of them was talking about

How ChatGPT told her that she was going to find a more stable relationship and look for this type of person. And I think that we don't have the numbers from OpenAI on this. But it's clear to me that just with some anecdotal information and personal use, this is a thing that's growing. And I'm glad that you tested it.

I mean, it's definitely, it's going to happen now. And there's not, I mean, the idea of like AI safety regulation is currently just so far from reality that it's almost not worth talking about at the moment because it's just not, it's not happening in this current environment. But like the people who have launched these products, they'll always sort of nod to, yeah, well, there's some, you know, there will be issues down the road or we should, you know, we should think about that. But I don't think a lot of thought has gone into what effects

this will have on human connection. And if you listen to VCs, it's always just like all of the positive aspects. And I believe that there are and will be positive aspects, but I just think there's just not enough consideration going into the daily potential negative aspects. I think we need to be thinking and talking about them before we just wholesale adopt technology. But obviously, here it comes. We're going to do it. Then you send your bot to a real therapist. I did send it to a real therapist.

Which, again, is sort of I was trying to investigate. Mostly I was trying to investigate what is it like to encounter something that you think is human and then find out that it's not, which is basically what happened to this therapist. So she's hearing the problems of this new client who has shown up. And she was a great therapist, great listener, obviously. And then she starts to realize partway through, OK, there's something up here.

But in the manner of a high-quality human therapist, she kind of goes with it. She's kind of flexible. She thinks, well, maybe it's someone who can't actually – they're too nervous to speak or they – so they're typing and they're having the voice be generated. And she sort of went with that and thought, well, this person obviously has gone through a lot to get to me. I'm going to try to help them anyway instead of sort of saying like what is going on here and being suspicious. And I think –

That sort of highlights the difference between the AI therapist who just kind of like offered up its same responses to everything that my voice agent said and the human therapist who would kind of like offer this more holistic, flexible approach.

Yeah, the voice agent that you created is pretty good, but it gives itself away. Like if you try to interrupt it, for instance, it can get lost in its chain of thought or it can try to, without fully hearing your response, say, okay, that's very good, but we're going to talk about something else. Like there's a lot of tells, but it was interesting to me how people still decided to go with it.

Really often. Yeah. And that's, I think, one of the things that's going to happen in society: you encounter something that you realize is AI,

But then what are you supposed to do about it? You could hang up. You could get mad at it. You could yell at it. I had some people that did that. You could say, I want to talk to a human. I had some people that did that. Or you could just try to have the conversation that you were trying to have in whatever the setting is, customer service or whatnot. And a lot of people just did that. They were just sort of – I think they suspected it.

They suspected it was AI. Maybe they even realized it. But it's also, it's quite rude to suggest to someone that they sound like an AI. So people err on the side of caution and don't say, hey, you sound like an AI, because that would be very insulting to someone who's like, no, actually, I'm human.

And one of the things I found that was useful for the AI to not be revealed was to have it accuse other people of being AIs. So what happened when it told them, hey, maybe you're AI? It just puts someone off guard, you know, saying if it asks like, hey, am I talking to an AI? I do it with the scammers sometimes. Hey, is this an AI? And they'll be like, no, this is a real person. But I think...

it kind of turns your brain around to where when it's accusing you, you start to sort of subtly assume that, oh, it must be human. Otherwise, why would it accuse me of being an AI? Yeah, it was interesting to see the bot's intelligence play out as people were trying to figure out what it was.

There would be moments where people are like, so are you a bot? And it would be like, well, maybe I am or maybe I'm not. But let's get back to the matter at hand. Right. And that's also down to the prompt. So, of course, you can prompt them in any direction. So sometimes I would prompt it to – I wouldn't say anything about what it should do. Sometimes I would say if you get accused of being an AI, deny it.

And other times I would say, just divert the conversation around it. And that was probably a case where it just said, well, let's get back to the conversation at hand. Yeah, it's interesting. So before each conversation, you would write a set of prompts or give some background information and then set it loose. Or you would just give it like a generic activity, be like speak to –

and these are your instructions, and it would go. Right. And I did like the general prompts more because I could really see what it would do without too much – because you can, of course, if you give it specific instructions, you can make it say very specific things. And a lot of the customer service AIs, they follow a script or a decision tree.

But I wanted to see, like, what if you let it loose a little bit, let it talk about whatever it wants? You know, how autonomous can you make it in the world and what will it say? I want to get to some of the conversations it had with your friends. But before we move on from our little therapy block here, I will say my absolute favorite part of the show is when

The bot is speaking with an AI therapist and the AI therapist is asking it to take long breaths. And bots, they cannot conceptualize the idea of taking a breath. So it's just kind of like there and it's like, what am I doing? Picture the balloon getting bigger and more full. Once it's fully inflated, tie it off and then let it go. Watch as it drifts away into the sky, taking that worry with it. Let me know when you've let the worry float away.

Alright, I'm picturing it. Filling the balloon with the fear about my book. It's getting bigger. Now I'm tying it off and letting it go. Watching it drift away into the sky. Okay, I've let it float away. Yeah, it'll always pretend to have a physical manifestation. But...

It could take – it was like inhale and then it inhaled and then it inhaled again and then it inhaled again. And I actually tried it several times to follow what it was saying to do. It was like not possible to do it without taking – without exhaling. And so, yeah, some of the funniest parts are when it kind of pretends that it exists in the real world and tries to have physical manifestations because it will make up absolutely anything to carry out the conversation. Yeah.

And then we get to, I think, one of the most uncomfortable parts of the whole show, which is when you set that bot loose on your friends without giving them a heads up. And some of them get really mad and some of them get hurt. So we're going to talk about that right after the break. From LinkedIn News, I'm Jessi Hempel, host of the Hello Monday podcast. In my 20s, I knew what career success looked like.


I bring you conversations with people who are thinking deeply about work and where it fits into our lives. Like Microsoft CEO Satya Nadella on growth mindsets. The learn-it-all does better than the know-it-all. Or NYU professor Scott Galloway on choosing a career. I think the worst advice you can give a kid is follow your passion. Or MacArthur Genius winner Angela Duckworth.

on talent versus grit. Your long-term effort and your long-term commitment are surprisingly important. Each episode delivers pragmatic advice for right now. Listen to Hello Monday with me, Jessi Hempel, on the LinkedIn Podcast Network or wherever you get your podcasts.

And we're back here on Big Technology Podcast with Evan Ratliff. He's the host of a great podcast called Shell Game. Definitely recommend checking it out. It's six episodes. Six episodes. Very consumable. My wife and I, we listened to the whole thing on a road trip over the holidays. It's an easy binge. And I think we basically put it down in a day. So definitely recommend checking it out. Let's talk about what happened when you had it call your friends. Yeah.

Some of them, let's talk about the ones that responded well first. Some of them kind of got a kick out of it. And you had it call one of your friends who was a lawyer.

He's actually like giving it like solid legal advice and joking that he's going to charge it $1,200 an hour. So why do you think people had a good reaction to this? Because I know if I had a friend who called me with their voice bot and they weren't like on mute behind it because you weren't on mute. You were just sending this out. No, I was not there. So talk a little bit about why people would they just thought it was cool or what was it?

I think some people thought it was cool, and I think they saw some humor in it initially. So I think the people who responded best, they kind of thought –

This is something you're, I mean, they're used to me doing strange stories over the years. And so they might think, well, he's doing something weird. This sounds like an AI, but also, this must be a joke. And I think if that was the frame of mind they were in, then a couple of them loved talking to it, because of course they loved, you know, trying to

egg it on to say this, that or the other. And you can hear them, you can hear the kind of excitement in their voice, including my friend, Chris, who's a lawyer, who I sent it with actually legal questions about the show. And he answered them very succinctly, in fact, probably better than he would have answered them if I had called him myself. So it was useful in that sense. But those were the people who really kind of embraced like, oh, I'm talking to an AI, like this is a new experience. I want to see this through. Yeah.

So you would like with the lawyer conversation, you would just write your questions down as a prompt and then send it out.

Yeah, basically, yeah. I would say, okay, what do I want to ask Chris? Like, I need to figure out the legal implications of some of the stuff that I'm doing in the show, which was like calling with an AI and like, is that legal? And so I kind of gave it like the questions I wanted to ask him, you know, three, four questions and then said, you know, and anything else that might be of interest, ask that. And then just set it loose to see what we come back with. And then some got really mad. And there's one really sort of striking conversation that

in the show. I don't want to give too much away, but I think we'll talk about this one, where you have a friend who was at a hotel and met, I believe, the men's national soccer team. The U.S. men's national soccer team, yes. And was stoked about it and was really excited to speak with you about it. And does, except he's speaking with your AI and goes along with it for a while, even though it was clear that whoever, the Evan he was speaking with,

Yeah. I mean, I should say the funny thing is I'm the much bigger U.S. soccer fan than him. So he was excited to tell me and had texted many times on a group text about seeing the team at this hotel. He just happened to be staying at the hotel with the team. So, and we had a lot of fun with it. Like he sent photos and great, you know, and there was a game and he went to the game and all this. But then this conversation afterward, I had my voice agent call him. He doesn't know it's coming from my cell phone number.

And the voice agent, in an attempt to show enthusiasm, because I told it, you've been talking about that he was in the hotel with the U.S. men's national team. In an attempt to show enthusiasm, it actually sort of came off to him as sarcastic. So like, oh, you know, thanks for all those texts about the team. And he was like, oh, did I text too much? And it's like, no, no, it was really great. But it can have this effect that if you're not thinking of it the right way, it sounds like it's being sarcastic. And

That really messed him up because I would never – that's just not me. Like I would never do that with him and he knows that. And so then he began to think something's wrong. Like he's angry at me and then further into the conversation, he thought something's wrong with him. Something's wrong with the person I'm talking to. Like they're not right. And he became very, very deeply concerned about the state of my mental health, actually. Maybe I was on drugs. Maybe I had some kind of break. And so that was –

It's for sure the most difficult conversation of the whole show. - Not drugs, you were just an AI. - Yeah, it was just my AI. - Evan, I understand you're doing all this for the sake of the story, for the sake of the podcast.

You've also done an experiment where you just kind of disappeared from everybody once. I did, yeah. Why do you keep involving your friends' well-being in your stories? Well, they're very tolerant. My friends and family are very tolerant of these things. But also I feel like there are some situations journalistically, and it's not a lot. There are not a lot of them. But where I think that immersing myself in the way that technology is being applied in society is—

is a way to come back with a story that will illustrate it in a different way, a different way than my normal reporting process where I would go interview a bunch of people and try to figure out what the story is. So it's an idea of kind of trying to make the story and make a story that's so compelling that you can kind of smuggle in all these ideas about how society might be changing because of technology. So it does have a purpose.

It's also the case that I, as I did in the first project, I have to go back and apologize to everybody involved, which I did.

But everyone kind of sees that in the end. They see what the purpose is and they say, oh, okay, yeah, you can include me. I mean, everyone was willing to be included in the show. Yeah, but now they don't know when they get a call from you, whether it's you or your AI. In fact, like we're here in person. And I was like, I got to interview Evan in person because if I don't, I'm not going to be sure who I'm interviewing. And it's true. If we had done it over the phone, there's a chance that I would have just sent my voice agent because I still have it and I still do sometimes deploy it in kind of interesting ways just to mess around.

Because it's sort of irresistible. Once you have one – I mean, this is the attraction of the technology. Like, I have a lot of concerns about it, but I also feel like we should acknowledge, like, it's kind of fun and it feels very weird and surreal. And it's something that nobody has ever experienced before, to have a version of you out in the world. And, like, people are going to do it. So we should try to figure out what...

what it means for us and like what humanity we want to preserve. Now, one of the areas I think it's actually going to show up is in work. Friends, maybe, maybe not. Probably not. Not anytime soon. I mean, it sort of defeats the purpose of friendship if

My friends are speaking with my AI bot. You would hope, although you listen to some of the VCs that back this stuff. They have some pretty out there ideas. I will choose not to listen to those ideas. But in work, you could see it being pretty impactful or at least used. We talked already about your friend who was the lawyer who answered legal questions from the bot.

There was also the CEO of Zoom who spoke on a podcast talking about how he doesn't even want to be in meetings anymore. He just wants to send his AI agent. And in fact, there's like an AI company now, I just saw a demo of this, that you could be walking around

talking on like a headset, but on Zoom, it will just be your avatar talking and looking lifelike. It really looks lifelike. And why are we not just a step away from somebody sending their AI out in work? So what do you think about that use case? And should we be concerned about that? Or how should we feel about that? I feel like that use case to me

It comes with a lot of the issues that a lot of AI products to me come with, which is that the people who have designed them generally have one set of problems that do not apply to most humans on the face of this earth. So yes, the Zoom CEO would not like to be in meetings. The Zoom CEO would like to send a digital twin to meetings in his stead. Great. Nobody wants to be in meetings. Most people do not want to be in meetings. So do the other people get to send theirs or just the CEO?

And then the question is, if everyone sends their agents to meetings, who's going to process all the information? Like they're going to distill it for you. Like what's the purpose of the meeting? What is the purpose of the work?

Like, I feel like those things all get lost in these discussions. And what ends up happening is super busy CEOs and very wealthy people come up with solutions for them. And you kind of wonder, like, well, what happens with the rest of us? And so I feel like that stuff is right around the corner. I know people who have gone to job interviews, to meetings where they encounter an AI when they do not expect to encounter an AI. And I think we're going to see more and more of that happening.

In the next months, years. Yeah, it was crazy to hear it. And I do think that, like, maybe we just don't need as many meetings, or maybe our AIs can accomplish this stuff. Maybe there's an optimistic view. I'm a little nervous about it, though. Yeah. But I think when you create these sort of semi-autonomous

entities, you really have to think through, like, are you gaining an advantage or not? And there's lots of examples of this. You know, there's sending an AI assistant out to do stuff for you. Well, the problem is, like, they often make stuff up. So then, and I had this experience, you have to go clean up after them in the situations where you've deployed them. So I think a lot of this stuff is sort of like, you know,

it's the old move fast and break things. But you also had it go do reporting for you. You had it do an interview. I did. And I had always, up until recently, been of the opinion that AI is not replacing reporters, can't do what I do, can't be there asking the questions, certainly can't have an engaging conversation like in a podcast environment.

And now I've fully rethought that, fully, fully. Hearing your agent go out and speak with the CEO and ask some pretty good questions. Now, of course, you prompted it, and it can't do the follow-up work that we do. But you literally probably could have done five minutes of work and gotten an hour of labor output. And then I'm also thinking about NotebookLM, which is the Google application where you can now just upload files online.

And it will create a custom podcast for you. And I'm just routinely blown away by how good those shows are. Once, I was heading down to Facebook headquarters in Mountain View and I knew it was going to be a long drive. And I just uploaded a bunch of documents and recent news clippings about Facebook. And I said, all right, this is probably important background for me to know. Generate a podcast, Google. And that was part of my prep on the way down to the meeting. So, yeah.

I do think that this stuff is, you know, despite all the drawbacks, and I hear your concerns, it's hard for me to see it not making its way into the workforce. Yeah, no question. No question. I mean, I found when it did interviews, that was also a thing that I had been telling myself: well, it's not going to conduct the interviews. But then, like, it absolutely can conduct the interviews. Now, it depends on the person on the other end currently kind of being okay with an AI conducting an interview, because they're going to figure it out partway through, most likely.

But that's for now. You know, we're pretty close to someone not being able to detect it at all. And you could say, well, there's an uncanny valley. But I think

Even now, we're up the backslope of the uncanny valley when it comes to the voice stuff. Many people will go through a full conversation with mine and not know that it's not human. So I think absolutely you can do that. It's just a question of what do we want it to do? And are we thinking about what it means if it does these things for us? But there's no question it can do many of these kind of things, including some of the things that we hold dear.

You laid like a nice trap for an AI CEO whose company was powering AI voice. You had your voice AI interview him, and it was basically like, you're either going to answer these questions to show you believe in the product, so I have you for the interview, or you're going to say this is stupid, in which case that's a pretty good

little nugget for your show. Yeah. I figured if there's one person who can't hang up on an AI when he realizes it's an AI, it's the owner of an AI calling platform. But he was quite a good sport about it. He said, oh, that's very funny. And then he kept going with it. And even he, because I interviewed him later and asked him basically the same questions, even he was a little more forthcoming with the AI than he was with me. And I think there is a quality there.

As we were talking about before, when it comes to people asking questions or conversing with ChatGPT, there's a quality of you don't necessarily feel like there's someone there, and you might be a little more intimate than you would have otherwise. And that can be very valuable in an interview for a reporting project. So it creates this other level of, well, wow, is it actually getting better stuff than me? Sometimes I thought, well, it didn't follow up very well. But sometimes when I listen to my own interviews, I think, well, I didn't follow up very well. Yeah.

Yeah, I mean, it's amazing. If you think that there's a human on the other side, you're more likely to open up, and maybe you feel more pressure to open up, and therefore you're more likely to tell stuff to your AI, which sort of makes me wonder about the whole reporting profession, but that is a conversation for another day. It can take you to some dark places, that's for sure. Yeah, it would have been a good conversation for your long-form podcast back in the day. So you extended this even further and had the AI talk to your kids.

Your kids seemed to really enjoy the experience, even when it got extremely strange. I mean, your AI voice was telling your kids that it missed them and asked if they missed it as well. Yeah. I mean, the kids...

My theory, which I think was borne out, was like that maybe the kids wouldn't be weirded out by it in the same way that adults are because they've grown up with synthetic voices. Like they've heard Siri. They've heard Google giving directions. Like all that stuff happened during the course of my lifetime. And I still find things off-putting that they would not find off-putting. And I think that was mostly the case. And they kind of rolled with it even when it

did things that would frankly, like, creep other people out in a very intense way. And they were kind of like, yep, sounded like you. They were just much more blasé about the whole thing than especially my friends, who became very upset from talking to it.

I already have too many favorite scenes from the show. Well, that's good. Another scene that I loved was you actually helped create a bot for your father to dispense logistics advice. And then you sent your bot with a prompt to learn more about your relationship with your dad. And you had them talk to each other. And your dad's bot had like the most fatherly response. Talk a little bit about what happened there.

Well, I also used my bot on different parts of my family. So I had my voice agent call different members of my family. Not all of them because...

Even I have my limits in terms of what I'm willing to do for a journalistic experiment. But my father, I wanted to use it on him because he loves technology and has always loved technology and is very always up on new technology. And he's an expert in logistics. And so one of the first things he said when he discovered that I had this at all was, well, I want one. I want one. Actually, the kids said the same thing. Like, I want one. Can I have one? And so I helped him set one up.

And then his – its purpose was not like mine. Its purpose was to, as you say, dispense logistics expertise to people who would call it. But I figured, well, given that there's one of him and one of me, like let's see what they say to each other. And mine was much more personalized. As I said, it had my whole life story in it. But his – it just had logistics advice. So it was basically a mismatch in terms of the personal –

effects of each one. You're like, dad, tell me your AI is like, dad, tell me a little bit more about our relationship. And it's like, no, no, son, I'm here to tell you about logistics. Dad, I was hoping we could talk a bit about my upbringing. What was it like raising me? Oh, you caught me off guard there. It sounds like you've mistaken me for someone else. I'm actually here to help with

questions about logistics and last mile delivery, but I'd love to know your name and organization to help you better. All right, let's get to a couple weird questions. Where are you on the question of AI sentience? I mean, you basically created an AI to resemble a person. I know we don't think, or at least I don't think you believe that AIs are sentient now, but did you feel any, like they say in AI world, feel the AGI at all, or did you feel any hints of

personhood in the AI that you developed? I did not. In fact, the more that I deployed it, the less I felt that. Now, when we're talking about how close is AGI and those types of questions, I think the people inside the companies, who are extremely closed about what's going on, are dealing with non-guardrailed versions of the models.

So I think they may have different experiences, and they write about, you know, as you talked about in a recent show, the chatbots lying and things like that. In this case, you've got the fully guardrailed, you know, ChatGPT, latest version. The more that you talk to it, the more generic it feels to you. The more you can actually feel the training data, the sort of like distilling down of the training data, and the predictive aspect. Like, oh, it's trying to predict

what a human would say in this given moment and what the average human would say in this given moment is actually quite lame. Like that's what you really figure out. So I felt further away from that the more that I like spent time with it. Now, I don't think that's necessarily like a statement about how close or far away it is because I think those aspects are all internal to these companies and like we just don't have access to them. So why don't you take us down the road a little bit? I mean, what do you think is going to happen as this technology gets better?

I think, first of all, as with many things, the market is going to dictate that people are going to use this technology even if it remains flawed, even if it remains not quite human quality for all sorts of – I mean the very obvious ones are like telemarketing, call centers, ordering food at a drive-thru, places where they

They can save a little bit of money by deploying it, even if it messes up sometimes, even if it does crazy stuff like give you the wrong order. Like they'll just say, well, humans mess up too, and it messes up less. And so I think we're going to start to see them infiltrate these different parts of society. And I think the question is going to be how people respond to them. And if people are sort of like, well,

It's same, same to me. Or maybe this customer service AI is actually more helpful than the person that I get sometimes when I call the Social Security Administration or the VA or whatever benefits I need. Maybe people will embrace them. I think you have seen some instances with technology, like, for instance, AI

checkout, checking yourself out of the grocery store, where a lot of people don't like it. And then maybe they go back to humans. So I think the balance is still yet to be determined. But I think there is no question that voice AI agents are just going to be deployed by people who are looking to save money. And I expect that we'll encounter them more and more often, all the time. Yeah, I think the key to a successful call with the Social Security Administration is just tell them,

I'm here to talk about my social security number, 1-2-3-4-5-6-7. There's no way there could be a problem. That's right. If you know that your address is in the zip code 90210, you're all set, which is what my bot traditionally uses for its zip code. That's good. Very believable. All right, one last thing before we leave. I want to talk a little bit about something that I think people should be vigilant about, because you are an individual sending your bot out, but there's also going to be organizations that will send

bots to you or to other people. And to me, the scam problem becomes infinitely worse if they can clone voices and then have them call home.

Yes. I mean, this is the greatest scamming technology that has ever been invented. It's already being deployed for scams as we speak, including volume scams where you can just use AIs to call people all the time and then narrow down the number of marks and then send it to a human operator to close the deal, basically. And these kind of personalized scams where you can clone people.

someone's voice off of their Instagram or anywhere, if they've appeared anywhere in video and their voice is there, all you need is a few seconds. You can clone their voice. You can look up their relatives. You can call a relative and say, I'm in trouble in the voice. Use your AI to say, I'm in trouble. I need a lawyer or I have a lawyer. The lawyer needs money. I've been in an accident. It's called the grandparent scam oftentimes now. And these are happening. I mean, they're happening every day all over the country. And that's just the very first level.

of scamming that people are attempting. And so I think people have to now be aware. The great thing is if you're aware of it,

You can actually prevent it if you talk to people about it, if you tell your relatives, you know, I'm not going to call. I'm not going to call you in this way. Or if you get a call like this, watch out for it. Or if you get a call like this, text me and ask me if this is really me. There are ways around it, but it's just actually the tip of the iceberg in terms of the way this technology will be used to try to separate people from their money. Yeah, we have – obviously there's a concern in my family because my voice is all – it's out there. You're a clone of a man. Yeah.

I have been cloned. My podcast audio was used to clone me with ElevenLabs. And I once embraced the technology, but I also know the risks.

And so with my family, if we have a rule that if any of us ever call and say I'm in a distressed situation, I need help, I need money, we have a code word that we've created, you know, in the privacy of our own home that you have to use that code word. And that's when we know it's real. There's no way for the AI to know that, I hope. Yeah, until you train up an AI to be like you. Like that. And then it shares it with other AIs. Exactly.

Is your – all right, last thing. I always say last thing and then have like two more. After doing this, do you talk to ChatGPT more often than you type to it? I don't. I mean, the funny thing about me is I barely use ChatGPT. Really? You don't use AI at all? I do use AI. I mean, I like NotebookLM. I don't like the podcast feature that much, but I do like just –

processing documents in my work is a big thing. So processing, let's say, legal files for a big story that I'm working on. So I do use it, but I find that for most things in my life, like, I've set up my life to do things that I like to do, like writing. And I don't want ChatGPT to do any writing for me, because

that's what I've chosen to do with my life. So I'm kind of a bad candidate, because I'm not looking for efficiencies in that way. I'm looking to kind of like do the work that I enjoy doing. So I don't tend to use ChatGPT except in this voice context, where I still use my voice AI to talk to scammers.

All right, well, look, thank you for making the show. I enjoyed it thoroughly. You said in the show that it's season one, so I have my fingers crossed that we'll be able to hear something else, maybe something weirder and more devious. Perhaps, perhaps.

I feel bad for your friends, but I feel happy for us, the listeners. I'll give them a break. I'll give them a break. And I do hope people go check it out. So the show is called Shell Game. Also, so much news this week: Elon Musk trying to buy OpenAI, or maybe just messing around with Sam Altman and crew.

Ranjan and I will be back on Friday to cover that and so much more. Again, the podcast is Shell Game. The host is Evan Ratliff. Evan, thanks again for being here today. Absolutely my pleasure. I enjoyed it. And thank you all for listening and watching. If you're here on Spotify with us, we'll see you next time on Big Technology Podcast.