
Help! My friend won’t stop using AI

23 June 2025

What in the World

People
Antonio Weiss
Hannah Gelbart
Liv McMahon
Topics
Hannah Gelbart: I've noticed friends becoming over-reliant on AI chatbots, turning to AI for every problem they run into. This has caused issues, including arguments sparked by AI giving wrong information, and frustration at people's over-dependence on AI. I think overusing AI is not only annoying but also has an environmental impact. So we need to explore healthier ways of using it and bust some myths about AI chatbots.

Liv McMahon: I think the claims about AI chatbots' water consumption have some basis in fact, but the exact amount varies with many factors and is hard to pin down precisely. OpenAI's boss Sam Altman has said that people's polite phrasing to ChatGPT has cost OpenAI millions of dollars in computing power. Whether AI chatbots are more accurate than a Google search is still an open question: AI-generated answers can be too ready-made, and chatbots can fabricate information. I think entering personal health or financial data into an AI chatbot carries risks, because that information may be used to develop the system further. AI chatbots are very good at sounding human, but this can create friction in human relationships. Beyond AI chatbots, there are other options worth considering, such as using search engines, reading books, or talking to friends and family.

Antonio Weiss: I think before using AI you should ask whether AI can do the job better than you can, and whether the costs and potential benefits outweigh the risks. There is a risk in people using AI unthinkingly; we may end up with everyone doing things that bring us no real benefit. AI is designed for people, and if it isn't benefiting you, you shouldn't be using it.


Do you have a friend who relies on an AI chatbot for absolutely everything? Like every time they've got a question or there's a problem, they just reach for their phone and get an AI generated answer. And of course, sometimes it gets it wrong. And this can lead to all kinds of arguments or that feeling like you're being ganged up on by something that isn't even human.

It can also be so annoying to get an email that has clearly been written by a robot. And there's the environmental impact of using AI for every little thing. So today we're going to be busting some myths about AI chatbots and finding out if there is a healthier way for us to use them. I'm Hannah Gelbart, and this is What in the World from the BBC World Service.

Let's hear more about this now from BBC tech journalist Liv McMahon. Hello. Hi, Hannah. I want to play a game of true or false, because AI is fast evolving and there are so many unknowns about how ChatGPT and other chatbots work. Here are some of the rumours that I have heard flying around. So, true or false?

That writing one prompt into ChatGPT uses up 500 millilitres, half a litre, of water. It's a tricky one because it has some truth, but there are also some caveats to it. So the claim that ChatGPT uses 500 millilitres, which is basically the same amount as a small bottle of water,

came from research carried out by the Washington Post and researchers in California last year. It determined that an AI chatbot using OpenAI's model GPT-4 would need a little more than one water bottle to generate a 100-word email. But in actual fact, the amount of water

needed to power one single query for an AI chatbot can really vary based on a whole range of factors, from the nature and length of the query itself to where the servers processing it are located and how much water is needed for cooling down the data centres in those locations. And this is why you see so many researchers and experts still struggling to identify a precise figure for how much is used. What we do know is that an AI chatbot will

likely require more water to process a request than an average Google search would, because it requires so much more computing power. OpenAI's boss Sam Altman has said that the amount needed for an average ChatGPT query amounts to roughly one fifteenth of a teaspoon. But there's not a huge amount of detail on how he got to that calculation. So is it true that saying please and thank you to ChatGPT costs tens of millions of dollars?
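As an aside, the gap between those two public figures is easy to quantify. The snippet below is a rough sketch, assuming a US teaspoon of about 4.93 millilitres; it simply compares Altman's per-query estimate with the researchers' 500 ml per 100-word email figure.

```python
# Rough comparison of the two public estimates of ChatGPT's water use.
# Assumption: a US teaspoon holds about 4.93 millilitres.
TEASPOON_ML = 4.93

# Sam Altman's figure: roughly one fifteenth of a teaspoon per average query.
altman_estimate_ml = TEASPOON_ML / 15

# The Washington Post / California researchers' figure: a little over a
# small bottle (500 ml) to generate a 100-word email.
study_estimate_ml = 500.0

ratio = study_estimate_ml / altman_estimate_ml
print(f"Altman's estimate: {altman_estimate_ml:.2f} ml per query")
print(f"Research estimate: {study_estimate_ml:.0f} ml per 100-word email")
print(f"Difference: roughly a factor of {ratio:,.0f}")
```

The two estimates differ by a factor of roughly 1,500, which is part of why, as Liv says, researchers still struggle to pin down a precise number.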

It's true in the sense that it's something Sam Altman has said about the cost to his company of people's politeness to ChatGPT. He said it basically costs OpenAI, the creator of ChatGPT, millions because of the computing power that's needed to generate responses to people saying please and thank you. Chatbots are very eager to please and so will always be prompted to reply when you say anything to them.

And in fact, this has led to some people advocating that we should be less polite when using AI chatbots, so that we're not using more energy and are helping to reduce their environmental impact. But at the same time, some experts also say that, you know, we shouldn't descend into being rude or impolite to AI chatbots, out of concern that this could seep into how we act in society more widely. All right, what about this one? AI chatbots are not as accurate as a Google search.

I'm going to be annoying again and say this one is still up for debate. So a Google search will kind of generally give us a wider set of results. And often these will prioritize authoritative and trusted sources in response to kind of certain queries.

And also experts I've spoken to in the past about, you know, what the era of AI means for web search have said that searching for info using a search engine can often be quite good for digital literacy. We have to kind of pick out certain sources and use our kind of critical thinking skills to really know what we're looking for and from where and when it's reliable and when it isn't.

And there are some concerns that, with AI responses, you're getting a lot of information served up to you ready-made, even if it often contains references and citations. They can also do that thing of making things up in a very convincing way, or, as the industry likes to call it, hallucinating.

But I will also say that doing a Google search now often means being met with AI-generated responses right at the top of search results, thanks to Google's rollout of its AI Overviews feature, which summarises results using AI. And these have been seen to create inaccuracies themselves. So that's another layer of complexity to what should be quite a simple question.

So, building on from that, here's another one I've seen a lot of people writing about: AI chatbots are going to become conscious or sentient. This one, I would say, is mostly false. There are many out there who do believe that sentience, this idea of AI chatbots becoming in some way alive and having their own independent thoughts and feelings, is...

a possibility, or even in fact on the horizon. We've seen some engineers at tech firms like Google make big claims in the past that it might be here already. And it's important to say that those claims have been firmly dismissed, and they can often be a result of the hallucinations I spoke about before, where AI can make things up, and that can include giving itself a human-like quality and talking as if it's alive.

AI firms continue to strive towards creating really super-intelligent AI that can handle many different tasks at once, in a way that matches or even surpasses human abilities. And some experts do believe that this idea of developing consciousness is a possibility.

But what does that consciousness even look like? Because it wouldn't be a human consciousness, because at the end of the day these are computer systems that have been taught to simulate human behaviour. And this is a personal bugbear of mine: you can always detect if something has been written with AI. I'd say this one is false. There's a lot of discourse out there

about certain features or indicators. So for example, things like em dashes perhaps being hallmarks of AI generated text.

But all of these ideas about things that are routine and that you'll definitely see in an AI-generated response aren't ever really the full picture. And it's becoming increasingly difficult to know for certain if something has been written with AI. So that has cleared up some things, but it's still a bit of a murky area. And I want to ask you a little bit more, Liv, about how we're using AI chatbots, because a lot of people that I know are...

relying on them really heavily, and they're using them not just for research and job applications but also for solving all kinds of personal problems.

How common is it for people nowadays to be that heavily reliant on it? Well, I think there is still such a buzz around chatbots at the moment. ChatGPT came on the scene in late 2022, and now that chatbots have become more normalised, and especially as we've seen more companies start to introduce them for employees and for work tasks, it's also starting to become a bit more commonplace for people in general in our everyday lives, in the way that

maybe having a voice assistant like Amazon Alexa once wasn't very normal and now is quite commonplace.

But it's important to bear in mind that not everyone has access to these tools, and there's still a lot of wariness around them. And while people are starting to become increasingly reliant on them, there are also concerns about what increasingly using AI tools might mean for our own personal development and how we learn. And that will evolve alongside, you know, AI chatbots themselves and their capabilities. Yeah.

Now, I probably shouldn't have done this, Liv, but I used ChatGPT to check a bunch of blood test results and to get some recommendations for supplements. And I did double-check them with someone in my family who works in the field, because I was just really curious as to what it would recommend. What is the worst thing that could happen if I put my personal health data, or even, you know, my financial data, into ChatGPT or an AI chatbot?

My kind of rule of thumb is that if I wouldn't put it on like Instagram or on social media, it also probably shouldn't go into an AI chatbot like ChatGPT. But the reason as to why is slightly different because

there are concerns about the fact that, under the privacy policies attached to AI chatbots, companies can often use information that's entered into them to help develop their systems further. So that means that if you're putting information in there that is in some way personally identifiable, or that contains sensitive information or personal data, that is then

out of your hands and into the hands of the company that powers the AI chatbot. And you don't necessarily then have control over the way in which it is used, or how it's used to further develop the system.

And many do have controls in place. So, for example, I think with ChatGPT you can chat with it in a way where it won't retain information about prompts you put into it or information you share with it. But as a key rule of thumb, there are massive concerns around where this information ultimately goes. Yeah, I remember the days of the photo dump. I'm a bit more careful these days. But

One of the reasons I decided to use ChatGPT for this and why I use it so regularly is because I feel like it has kind of gotten to know me and it remembers all that stuff about me and it also talks to me like a human in a very eager to please way as you've said. What are the pros and cons of this? The kind of the style, the conversational tone, that really human voice that it seems to mimic

How is that affecting the way that we interact with it, but also the way that we interact with each other? Well, I think it's a double-edged sword. From what I've seen, there are a lot of people who find solace in using AI chatbots precisely because of what you said there. They are very good at sounding human, at often saying the right thing, at being very supportive. And they're often trained in that way.

Maybe if you're speaking to a friend or a family member or someone who knows you very, very well in real life, they might know you well enough to call you out on certain things or share their true opinion. AI chatbots don't have opinions. So you are talking to what is sometimes, I think, best thought of as a very supportive,

interactive wall. When you are speaking with a chatbot, I think you always have to keep in perspective that ultimately you're talking to something that

is very good at appearing and sounding human, but also isn't. I've seen that ChatGPT in particular, as well as other chatbots, is also causing friction between people. Some people are getting annoyed at their partner or their friends over how reliant they have become on just turning to ChatGPT for absolutely everything. Some people are like, hey, have you forgotten how to think for yourself?

So, just asking for a friend here, next time I reach for my AI chatbot, what other options should I be considering instead? People are increasingly urging others to go back to basics a little bit

as an alternative to using AI tools. That might mean using search engines, as I was saying before, to keep on selecting and curating information, picking it out and learning from it, but also picking up a book to do research or,

in our more social and personal lives, taking the scary but, you know, sometimes rewarding step of reaching out to friends, family or support networks to talk about

private matters, especially when sharing any sensitive information. Yeah, I think you're right. It's about keeping those social networks active and having your inner circle, whether it's friends or family, the people that you can go to to bounce ideas off. But I don't know about looking things up in a book. That would involve going to the library, Liv. I know, it's very old-fashioned, but yeah, call it nostalgia. Liv, thank you so much for joining us. Thank you for having me, Hannah, it's been lovely.

And before we go, no, you shouldn't be using AI as a replacement for your doctor. But there are some other things to consider when it comes to how much we use it and what we use it for. Here is Antonio Weiss, an AI and digital expert. Is AI going to be better than what I can already do myself?

and do the costs and the potential benefits outweigh any of the risks? And that's quite a hard question. But, you know, the risks are that it gets it wrong. Or say you're reaching out to somebody who you've never spoken to, and they say, hang on, have you just used a robot to talk to me? Well, that's socially embarrassing, and it could also put you in an awkward position. What are the risks of doing this? And do those potential benefits outweigh all of those risks?

My own research shows that these models are getting really quite good at medical use cases. They're still wrong a proportion of the time. That proportion of time is, generally speaking, going down. But the risk of...

getting the wrong answer on something slightly more personal but really significant, the risk of that, I think, is really high, and I think that is problematic. In all of this, the risk is that people continue to use AI unthinkingly, and then we find ourselves in a place where everyone's doing something without really realising how it's benefiting us. At the end of the day, AI is designed by humans, so it should be for human benefit, and if it isn't for your benefit, you shouldn't be using it.

That's it for today. I hope this has been useful for you. It definitely was for me. We've also covered things like can AI save dating apps from doom and are AI influencers the future of social media? You can find those wherever you get your BBC podcasts. I'm Hannah Gelbart. This is What In The World from the BBC World Service and we'll see you next time.