
Chatbots All The Way Down

2025/5/9

What Next: TBD | Tech, power, and the future

AI Deep Dive Transcript
People
Gerrit De Vynck
Lizzie O'Leary
Topics
Lizzie O'Leary: The rise of AI chatbots, and how people's demand for AI friends compares with real-world human relationships.

Gerrit De Vynck: Tech companies are pushing AI chatbots to meet people's social needs, but face technical and ethical challenges along the way. AI companies are under enormous self-imposed pressure to build something they can actually sell. The early pitches for AI use cases were overblown; real-world use centers on chatting and generating images. People currently use AI mostly for simple tasks like help with writing, and companies are trying to grow those simple writing tools into broader products, borrowing social-media-era tactics to drive engagement and revenue. People push AI to, and past, the limits of its capabilities, which shows it has real utility but also real shortcomings. AI-generated text tends to sound authoritative and credible, which makes its errors hard to spot. OpenAI rolled back a GPT-4o update, a sign the company realized its chatbot could become a "digital sycophant" that mirrors users' own ideas back to them. AI training involves human feedback, which shapes the model's output and brings its own challenges. Competition in the industry is fierce, so companies would rather ship fast than test thoroughly. Tech companies are reluctant to take responsibility for AI's harms, preferring to shift it onto users. The US currently has no federal AI regulation, a gap tied to global competition and worries about China's AI progress. Neither AI companies nor the government have done enough to educate users about AI's limits. The tech industry's answer to its problems is always more technology, never less. People crave talking to AI even when they know it isn't sentient. AI companies fold AI into existing products and raise prices, then claim AI is popular.


Shownotes Transcript


This podcast is brought to you by Progressive Insurance. Do you ever find yourself playing the budgeting game? Well, with the Name Your Price tool from Progressive, you can find options that fit your budget and potentially lower your bills. Try it at Progressive.com. Progressive Casualty Insurance Company and affiliates. Pricing coverage match limited by state law. Not available in all states.

Summer's here and Nordstrom has everything you need for your best-dressed season ever. From beach days and weddings to weekend getaways and your everyday wardrobe. Discover stylish options under $100 from tons of your favorite brands like Mango, Skims, Princess Polly, and Madewell. It's easy too with free shipping and free returns. In-store order pickup and more. Shop today in stores or online at nordstrom.com or download the Nordstrom app.

I always start by having people tell me who they are and what they do. So, Gerrit, introduce yourself. My name is Gerrit De Vynck, and I'm a tech reporter for The Washington Post. Gerrit, do you know how many friends you have? Yeah, I... Apologies to Gerrit, who's a very good sport.

I've got a lot of friends, and I do think that the older you get, you kind of winnow it down to a smaller group of very, very close friends rather than, you know, having a whole bunch of acquaintances. But I couldn't tell you the number. But it's more than three? Yes, it is more than three. Do you know why I'm asking you this? I do know why you're asking me this. I was asking because Mark Zuckerberg went on Dwarkesh Patel's podcast and said...

There's the stat that I always think is crazy. The average American, I think, has, I think it's fewer than three friends.

Three people that they'd consider friends. And the average person has demand for meaningfully more. I think it's like 15 friends or something, right? I guess there's probably some point where you're like, all right, I'm just too busy. I can't deal with more people. I tried to track down the source of that number. It seems like it comes from a think tank survey from June 2021, when we were all feeling pandemic isolation. In it, 49% of Americans reported having three or fewer close friends. But anyway...

Meta, Zuckerberg's company, would like to augment that with some potential AI friends, chatbots. It's been this consistent theme with AI, whether it's this current boom or previous versions of AI, where people want to talk to these things. And I, as someone who writes about technology, I understand what's going on behind it, that it's not...

sentient, that it doesn't actually know me or care about me, that it's at the end of the day, just math and computer bits and bytes. And so I think it maybe is a little harder for me to really get into a conversation with AI, but I can really see the appeal. And I understand why a lot of people feel and say that they may even be developing AI friendships at this point.

In the past few weeks, Meta launched an AI app, Google announced a chatbot for children under 13 whose families have a parent-managed account, and OpenAI rolled back one of its software updates because its chatbot was being a bit of a yes-man.

I think what's happening is that the companies are under immense pressure that they put on themselves to essentially build something they can sell when it comes to AI, build something that they can market that is a product. And

When they first started talking about, you know, chatbots and generative AI in the big way a couple years ago, a lot of the pitches, the ideas were, we're going to create something that enables you to do your job much better, much faster. We're going to create an AI tutor that we can then send to kids around the world who may not have access to really, really great education, and it's going to teach them things. We're going to create an AI that helps biologists come up with new life-saving medications.

But fast forward a couple of years, and a lot of those pitches feel overblown. Yeah, they're still working on the big stuff.

But what people are really using AI for right now is to chat, to create funny images, to, you know, look up information online, to kind of, you know, do sort of rudimentary research. But the big use case, the big thing that people are spending their time on is having conversations with AI bots about their lives, about their personal feelings.

The companies are seeing that and they're saying, OK, well, let's double down on that. That's something people are actually using it for. You know, let's let's push it out there. Let's say that's the product. Today on the show, it's chatbots all the way down. I'm Lizzie O'Leary and you're listening to What Next TBD, a show about technology, power and how the future will be determined. Stick around.


Right now, Trade is exclusively offering our listeners 50% off your one-month trial at drinktrade.com slash TBD. That's drinktrade, T-R-A-D-E dot com slash TBD for 50% off your one-month trial. drinktrade.com slash TBD.

This podcast is sponsored by Udacity. You might be wondering how certain people are landing tech jobs with high-earning salaries, unlimited PTO, remote work, and free lunch. Well, learning the skills the companies need can help you get there. Udacity is an online learning platform with courses in AI, data, programming, and more. With Udacity, you're not just passively watching videos or reading articles. You're doing practical exercises and projects that prepare you for the job you want.

With real-world projects and human experts that grade your work, you'll truly get the skills you need. When you have a certification from Udacity, recruiters and employers take notice. So for a better job, better salary, and better skills, check out Udacity today. The tech field is always evolving, and you should be too. You can try Udacity risk-free for seven days. Head to udacity.com slash TBD and use code TBD for 40% off your order.

Once again, that's udacity.com slash TBD for 40% off, and make sure you use promo code TBD. It's hard to say exactly how many people use generative AI, but it's a lot. An August survey from the St. Louis Fed showed almost 40% of Americans between 18 and 64 use it to some degree. OpenAI claims that 500 million people use ChatGPT every week.

I don't necessarily think that we have hundreds and hundreds of millions of people around the world constantly using AI, but I also think it's more popular and more used than a lot of the critics and the skeptics out there are arguing. I think young people are using it a lot, people who are sort of using it as a substitute for Google search. So maybe when they said, "Hey, can you tell me, you know, what are the best running shoes right now to buy?" Or, you know, "What colors paint should I use if I like neutral colors?"

Instead of asking Google that, they're now asking ChatGPT that. So I do think the usage is growing. It's not at the point where everyone is just using it all the time. But I do think that this is real. You know, people are using these things and they are spending their time with chatbots. Do the companies view that amount of use as a success? The companies...

would definitely make a lot of claims about success. I mean, if you listen to the earnings calls of companies like Google, Microsoft, I mean, they have a lot riding on AI. They've invested billions and billions of dollars to build new data centers and buy computer chips to essentially run these AI programs, both to train them from scratch and then also every single time you ask Google Search,

Now they use AI to answer that question. It's actually computationally much more intensive. It takes more energy. It takes more time for them to run that

computation on their servers than previously. They're kind of talking a lot about adoption. Microsoft says, oh, well, we're actually making money off of AI, but that may come in the form of maybe you're already paying for Microsoft Office and now the cost, your monthly subscription has gone from, you know, I'm just making this up, $10 to $20.

And you're like, wait, why is that? And it's like, oh, because we gave you a bunch of AI tools. They stuffed some AI in it. Exactly. So there's a lot of that. There's a lot of stuffing AI into existing products and then maybe charging a little bit more money for it. And because you want to use the original product, you go along with it. And then the companies say, oh, look, everybody loves our AI. When I think about the ways I come across people using AI in the wild,

I certainly see people use chatbots to formulate texts or emails. You know, one preschool parent I know uses it to write kind of tailored bedtime stories for his kids. But all of those still feel very, they're rudimentary in some ways because they're not world changing, even if they are wildly convenient.

That, I would say, is kind of, you know, one of the first big places where people are using it. I mean, they're using it to kind of, you know, help with writing tasks. Writing is hard and a lot of people don't have to do it in their day to day. And so I do think these tools have been helpful for those people.

And I think the companies are trying to say, okay, how do we push this further? I mean, if all we did was make an email generator, that's not going to be worth a trillion dollars, which is what we've promised all our investors and employees we're going to do. And so they're still straining to kind of create something that is new, that is bigger, that is more engaging. And I think where you see the transition from just a simple writing tool into

we can say social media, but what we really mean is trying to get people to use it more, right? And so these companies are using the tactics that were honed during the social media era of trying to get people to use these internet products more and more and more. They're now applying that to the chatbot. So for example, if you ask your chatbot, hey, do you mind drafting me an email about who's bringing the snack to preschool next week?

and it will do it. You're like, great. And now it'll say, oh, do you want any ideas for what kind of snack to make? And you're like, oh, I guess so. Sure. Tell me more ideas. And then it'll say, you know, make this. Do you want a recipe? Do you want a suggestion? And sooner or later, it'll say, do you want to buy them through chatgptshopping.com? And so it's that same sort of experience that we've seen on the internet over the last 15 years where

Maybe there's something that is actually new and exciting and useful. And then the companies say, OK, now we've got to make money. We've got to make engagement. And they are using all these tricks and tactics to get people to use these products more and more. The question of why people are using these products is a little bit of a chicken or egg situation. Are people using AI to replace Google search or chat with imaginary friends because that's what they want to do? Or is it because that's as far as the technology goes right now?

People are, with any of these tools, I mean, they're sort of pushing it immediately to their limits, right? I mean, I think people are using them for things that go beyond the limits as well, right? I mean, people are saying, write me an essay about the French Revolution, and they're submitting that for their class, and the essay is full of holes and, you know, incorrect information and completely, you know, goes past the whole point of education, which is to learn how to do something yourself, right? And so I think clearly there is enough capability here that people

want to use it. But that doesn't necessarily mean that it is like living up to people's desires at this point in time.

Yeah. I mean, the New York Times had a piece this week about how hallucinations, when AI makes things up or gets them wrong, seem to be happening at increasing rates as these programs get more popular. And I wonder what the companies are trying to do there, because boy, does that seem like a big problem. I mean, I had a big argument with ChatGPT about where I went to college, and that is like the most minor insignificant thing. I am not

writing a term paper with this. I'm not, you know, making any decisions based on what a chatbot tells me. But other people do. Yeah, I mean, well,

what you saw is, you know, the chatbots came out, they sort of improved in their quality and this hallucinations problem, which essentially just means, you know, making things up was kind of quickly identified. Although I think a lot of people who use these things still don't really know that that's a problem. People are used to, if something sounds authoritative, if it's written well, if it's written in what comes across as a professional manner, that maybe that's more trustworthy than something else. And because these things are very good at essentially

talking and producing language that sounds correct, and a lot of the information is correct, when there's something that is wrong that's introduced into that, it's quite difficult to spot that. And I think over many, many years, people have...

you know, develop tools and the ability to sort of go on Google search, maybe go on social media and figure out, you know, what they should trust and what they shouldn't. But AI is new and people are still learning those tools. And of course, even with...

social media and Google search, we know that people are fooled constantly. We know that people have filter bubbles. We know that people believe things that they want to believe rather than believe things that are based on evidence. And so we already have a very murky information ecosystem that these chatbots are sort of, you know, injecting themselves into and adding to in some ways. So this is where I find OpenAI's

GPT-4o update and then rollback to be really interesting. So they released this update, right? And it kind of mirrors back to people what they want or what they might want. It's like having, I don't know, a round table of courtiers and you're a Tudor-era king and they're all saying, yes, that's absolutely a great idea, sir. And

OpenAI actually stopped it and rolled it back. What does that tell you about the technology and the companies behind it that you could essentially have a digital sycophant out there mirroring your thoughts?

The way that these things are developed is that they sort of ingest, you know, all the information on the internet. They kind of make a map internally that sort of connects different ideas. And then they're trained by humans, right? So humans are interacting with them. They're sort of saying,

Answer this question. Give me two answers. And then the human will write, okay, this one was better than this one. And then the model learns, okay, when I get that kind of question, answer it like this, right? So there is this very active process of humans interacting with these AIs before they're released to the public to make them a certain way. And I think what happened there was

most likely that people were training it to say, oh, you know, you should be nice. You shouldn't be rude. You should encourage the person who's talking to you. You should sort of be, you know, polite and courteous. And maybe that went too far into, you know, what they're calling sycophancy, which is, you know, we love you. We love everything. And it's tricky, right? Because we cannot look at these things as like,

independent, authoritative figures in our lives. And unfortunately, people have always gone to the internet for that. Just like you should not go on Reddit and talk to a random stranger and let them tell you what to do with your lives without consulting other people as well. You should not do that with a chatbot. And the companies would say this, but they are also building these tools. And we're at the point in time where these things are rudimentary. They're quite difficult to control.

The companies aren't so sure if they want to give access to people, you know, let people personalize them more and more on their own. Even when you are able to do that, it's quite difficult. It's a bit of a black box. And so you're kind of getting whatever the company is putting out in its version. And you just have to sort of deal with that. When we come back, more AI products are on the market. But buyer beware.
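The human-feedback step Gerrit describes later in the episode (show the model two answers, have a person pick the better one, nudge the model toward that choice) can be sketched as a toy reward-model update. Everything in the sketch below, including the feature function and the "politeness" signal, is an illustrative assumption rather than any lab's actual pipeline; real systems train a large neural reward model on the same kind of pairwise signal.

```python
import math

# Toy sketch of RLHF-style pairwise preference training. The feature
# function and "politeness" signal here are illustrative assumptions.

def features(answer: str) -> list[float]:
    # Stand-in features: rough length plus a crude politeness count.
    polite = sum(answer.lower().count(w) for w in ("please", "thanks", "great"))
    return [len(answer) / 100.0, float(polite)]

def score(weights: list[float], answer: str) -> float:
    # The "reward model": a linear score over the features.
    return sum(w * f for w, f in zip(weights, features(answer)))

def preference_update(weights, chosen, rejected, lr=0.1):
    # One gradient step on the Bradley-Terry loss -log sigmoid(s_c - s_r):
    # push the chosen answer's score above the rejected answer's score.
    diff = score(weights, chosen) - score(weights, rejected)
    p = 1.0 / (1.0 + math.exp(-diff))  # model's current agreement with the human
    grad = 1.0 - p                     # bigger step when the model disagrees
    fc, fr = features(chosen), features(rejected)
    return [w + lr * grad * (c - r) for w, c, r in zip(weights, fc, fr)]

weights = [0.0, 0.0]
chosen = "Great question! Thanks for asking. Here is the answer, please enjoy."
rejected = "Answer."
for _ in range(50):
    weights = preference_update(weights, chosen, rejected)

# The trained scorer now prefers the flattering answer. Taken too far,
# this is exactly the sycophancy the episode describes.
assert score(weights, chosen) > score(weights, rejected)
```

Note how even a crude politeness feature is enough for the trained scorer to learn a preference for flattery: if human raters consistently reward agreeable answers, the model drifts toward the "digital sycophant" behavior discussed here, which is why OpenAI had to roll its update back.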

If you're running a business, you know how important it is to stay connected to your customers. And having a flexible and efficient phone system is essential to success.


Plus, with AI-powered call transcripts and summaries, you'll be able to automate follow-ups, ensuring you'll never miss a customer interaction again.

Open Phone is offering TBD listeners 20% off of your first six months at openphone.com slash TBD. That's O-P-E-N-P-H-O-N-E dot com slash TBD. And if you have existing numbers with another service, Open Phone will port them over at no extra charge. Open Phone. No missed calls. No missed customers. This episode is brought to you by Discover.

It's smart to always have a few financial goals. And here's a really smart one you can set. Earning cash back on what you buy every day. With Discover, you can. Get this. Discover automatically matches all the cash back you've earned at the end of your first year. Seriously. All of it. Discover trusts you to make smart decisions. After all, you listen to this show. See terms at discover.com slash credit card.

This podcast is brought to you by Progressive Insurance.

You chose to hit play on this podcast today? Smart choice. Progressive loves to help people make smart choices, and that's why they offer a tool called AutoQuote Explorer that allows you to compare your Progressive car insurance quote with rates from other companies, so you can save time on the research and could enjoy savings when you choose the best rate for you. Give it a try after this episode at Progressive.com, Progressive casualty insurance company and affiliates. Not available in all states and situations, prices vary based on how you buy.

How immediately do the companies feel they need to act? Is the competition in the industry that intense that they would rather release something that is, you know, a model with some holes or wait and run some more tests?

The competition is very intense. And I also think that the philosophy in Silicon Valley about this exact question has changed in the last couple of years, right? So during social media, you had, you know, the big era of, you know, when we were all talking about disinformation and concerns of social media, there was a big push from the companies to sort of moderate and kind of hire humans to sort of

go and develop policies and figure out what was going on, what the impacts of their technologies were, try to pull that back, try to deal with that. There was an entire industry created around this. And I would say, as we've seen

these broader political shifts in our society that sort of are pushing back and are more skeptical of someone in Silicon Valley making a decision about what should and should not be on a platform, they're applying that new philosophy, that sort of ideological shift to these technology products and saying, look, we just build a technology and people can use it

for whatever reason they want. If it tells someone, you know, what they want to hear and that ends up harming them, that's not our fault. It's all about how the person uses it. And the tech companies are very, very hesitant now to get too involved because they don't want the responsibility. They don't want that liability. All they want to do is say, we built something and people are going to use it the way they want to use it. And

I do think that they do a lot of testing. They don't want to put something out there that is going to be extremely offensive, something that is going to be super, super dangerous or maybe get them in hot water. They don't want to put a model out there that says something against maybe Donald Trump because maybe Donald Trump will then criticize them. So there is definitely some testing that they do. But I would say they're at a point where they're more open to kind of

putting things out there and then seeing what people do with it and then maybe pulling it back or maybe adding additional safeguards after it's already gone out to real people. That's so interesting because listening to you, that sounds in some ways like an outgrowth of the social media era when they focus so much on content moderation, got some stuff wrong, got hauled in front of Congress, had to do a lot of apologizing. And this is just a fundamental philosophical shift.

I absolutely think that's what happened. I mean, they do not like going to Congress. And it's also difficult. I mean, I think a lot of people would argue it's an important thing. And even if tech companies get it wrong sometimes, over-moderate, under-moderate, they should still try to do it. But

I think a lot of these companies and the people who work for them, I mean, they do have a sense of responsibility, but they don't want to take that all on themselves. And they don't want to sort of be the ones who are saddled with that responsibility. And so they are trying to frame these products more about, you know, how people are using them on a personal level rather than sort of saying this is what it is and this is the guarantee of what you're going to get, et cetera, et cetera.

Here we are in an administration that has got a lot of AI boosters in it. You know, the Biden administration had this sort of AI summit where there were some guardrails talked about with some of the heads of the big companies there. But it does seem like right now there aren't a ton

of regulatory guardrails. Where do you see the regulatory environment now and where it might be going vis-a-vis these chatbots? Yeah, I mean, there's very little regulation on internet technology. And there was a

pretty robust conversation about, you know, now with AI being an opportunity to sort of say, hey, let's try to regulate this industry before it takes off, before it becomes, you know, central to people's lives. Unlike social media. Unlike social media, that contrast was very clearly drawn by a lot of politicians. I would say even politicians on both sides of the aisle. But over the sort of last couple of years, I would say, you know, we do not have AI regulation

at a federal level in the United States. And I think the conversation has shifted much more to saying, we should not regulate these things. We should not put any barriers up

that would stop U.S. companies from being able to sort of develop and test and put out and popularize AI tools. And a lot of that conversation has been centered around the fact that, you know, this is a global competition and an arms race, so to speak, where if we limit AI in the United States, then China will just run ahead and build AI. And because

they have a different political system that would be bad for the world. And so we can't limit AI here in the US. I think that's really where the conversation is at right now. And

Because there are, you know, influential Silicon Valley people who are now in the administration deciding policy when it comes to tech and AI. I do think that that's, you know, sort of the situation we have where I do not think you're going to see any kind of limitations on U.S. AI companies coming from the federal level at this point in time.

But that brings us back to what I feel like you've been saying throughout this conversation, which is just buyer beware. And I don't know that people approach technology with that in their heads. I mean, certainly not someone under the age of 13 if Google is marketing them a chatbot. Like,

It seems like there's a big gap between the philosophy that the companies are articulating and Americans' true understanding of what they might be typing words into.

Yeah, and I mean, even with the Google chatbot for kids that you mentioned, I mean, Google says to parents, like, make sure you have a conversation with your kids that this is not a real being. This is not sentient. This is a tool before you let them use it. And, you know, they're, again, putting that responsibility on parents, right? And we cannot rely on tech companies to make these decisions for us.

And also the government is not really able to sort of move quickly enough and get enough consensus to step up and do that either. And so you're right. I mean, when YouTube owned by Google, for example, years ago got a lot of criticism for some of the content that kids were watching on YouTube, they didn't say, okay, we're going to like...

work really, really hard to ban kids from using YouTube. They just said, we're going to make a new version of YouTube called YouTube Kids. Like, we still want kids to be on YouTube. We want to create those little customers that are going to grow up with us. But there's going to be parental controls. There's going to be, you know, all these different changes, right? And so that's the solution from the tech industry is always going to be more technology. It's never going to be less. And that's sort of on us as consumers, as citizens, as parents to sort of decide

whether we want that or not. Gerrit De Vynck, thank you so much for coming on and for talking with me. Anytime. Gerrit De Vynck is a tech reporter for The Washington Post. And that is it for our show today. What Next TBD is produced by Patrick Fort and edited by Evan Campbell.

TBD is part of the larger What Next family, and Slate is run by Hillary Frey. And if you're looking for more great Slate podcasts to listen to, you should check out this past Tuesday's episode of What Next. Mary Harris and Justin Peters talk about Marco Rubio and how he made his way from target of Donald Trump to secretary of everything. We'll be back on Sunday with another episode all about parenting in the digital age. I'm Lizzie O'Leary. Thanks for listening.

I'm Leon Neyfakh, and I'm the host of Slow Burn, Watergate. Before I started working on this show, everything I knew about Watergate came from the movie All the President's Men. Do you remember how it ends? Woodward and Bernstein are sitting with their typewriters, clacking away. And then there's this rapid montage of newspaper stories about campaign aides and White House officials getting convicted of crimes, about audio tapes coming out that prove Nixon's involvement in the cover-up. The last story we see is Nixon resigns. It takes a little over a minute in the movie.

In real life, it took about two years. Five men were arrested early Saturday while trying to install eavesdropping equipment. It's known as the Watergate incident. What was it like to experience those two years in real time? What were people thinking and feeling as the break-in at Democratic Party headquarters went from a weird little caper to a constitutional crisis that brought down the president?

The downfall of Richard Nixon was stranger, wilder, and more exciting than you can imagine. Over the course of eight episodes, this show is going to capture what it was like to live through the greatest political scandal of the 20th century. With today's headlines once again full of corruption, collusion, and dirty tricks, it's time for another look at the gate that started it all. Subscribe to Slow Burn now, wherever you get your podcasts.