
AI Is No Substitute for the Human Brain

2025/5/13

WSJ Tech News Briefing

People
Christopher Mims
Nicole Nguyen

Topics
Nicole Nguyen: As a personal tech columnist, I focus here on the privacy issues around AI tools. Privacy laws extend to AI tools, particularly public-facing ones. Enterprise versions are typically compliant with privacy regulations such as HIPAA or GDPR. For consumer versions such as ChatGPT, replacing student names and scrubbing personal information is good practice. In many AI tools you can opt out of AI training, but you have to mark it in settings. Using temporary chat mode keeps information out of AI training, but it may still be subject to human review. We trust that companies have reasonable policies, but in these early days of AI there is uncertainty, so be careful about the personal data you enter.

Transcript

In case you missed it, YouTube is the number one streaming platform in watch time in the U.S., ahead of Netflix, Disney, and Prime Video for the second year in a row. There's only one YouTube. Hey, TNB listeners. Before we get started, heads up. We're going to be asking you a question at the top of each show for the next few weeks.

Our goal here at Tech News Briefing is to keep you updated with the latest headlines and trends on all things tech. Now we want to know more about you, what you like about the show, and what more you'd like to hear from us. So our question this week is, what kind of stories about tech do you want to hear more of? Business decision making? Boardroom drama? How about peeking inside tech leaders' lives or tech policy?

If you're listening on Spotify, you can look for our poll under the episode description, or you can send an email to tnb at wsj.com. Now on to the show.

Welcome to Tech News Briefing. It's Tuesday, May 13th. I'm Victoria Craig for The Wall Street Journal. You asked, now we're answering all of your burning questions about AI-powered chat tools and how to keep your personal data safe while using them. Then we're looking under the hood of those chatbots because the companies that make them say they get smarter the more we use them. But can they really think like humans?

But first, every day millions of people turn to AI chatbots for solutions to problems large and small. What kind of home gym mat should I buy? How can I best showcase my job experience on a new resume? Can you help me draft this tricky work email or craft an itinerary for a summer getaway? But what happens to all of that search data? Who owns it? And how much can you trust the answers that AI gives you?

WSJ personal tech columnist Nicole Nguyen explored these questions in her recent series called Chatbot Confidential. She asked you to send your questions about data privacy on platforms like ChatGPT or Claude, and now she's here to answer some of them. Hey, Nicole. Hi. So let's just start with a voicemail that we got from Daniel Stewart.

I work for a community college and was wondering when we discuss AI privacy issues, how does it relate to FERPA? I normally have to replace student names just to be safe, alter situations when I use...

AI just to be safe. I wonder if there's also similar issues with HIPAA. So for our listeners who may not know those abbreviations, FERPA and HIPAA are both federal laws that govern privacy. The former affords privacy to students and parents over education records. The latter, medical patients' privacy over medical records.

So Nicole, how does AI factor into all of these concerns over privacy? Privacy laws extend to AI tools, particularly if you're using public-facing AI tools rather than the enterprise versions commissioned by your company. Those enterprise versions are typically compliant with privacy regulations such as HIPAA, GDPR in Europe, or California's CCPA.

So if you're using the consumer-grade version of, say, ChatGPT or Anthropic's Claude, then what Daniel is doing, which is replacing student names and scrubbing as much personally identifiable and sensitive information as possible, is great.

That's the right move, that's a good idea. We've also got another question from Mitch. He's a photo archivist, and on X, he asked if personal family photos uploaded to AI chatbots could be stored, used to train AI models, or accessed by other people later on. So this is a complicated answer. But in many AI tools, for example, ChatGPT and Gemini, you can opt out of AI training.

but you have to mark that in settings. You can also use what's called temporary chat in ChatGPT, which is like an incognito mode for ChatGPT. It does not use that information for AI training. It deletes the conversation immediately, but there's a caveat there. There is always a possibility if you're using these AI tools, because we are in the early days of generative AI, that your inputs and outputs could be subject to human review or

stored for a longer time. And that's because the systems mark anything that is potentially harmful so that they can review and learn from those types of responses.

We don't exactly know what is harmful and what isn't, but we trust that these companies have reasonable policies. You know, if you use Google Drive or Google Photos, for instance, we trust that Google has reasonable policies around what is flagged and what isn't. So that's where I'll leave my answer. You can opt out of AI training with the caveat that in some instances it could be reviewed.
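
Before moving on, here is a minimal Python sketch of the scrubbing Daniel and Nicole describe. It is an illustration only, not a WSJ, OpenAI, or enterprise tool; the roster, patterns, and placeholders are hypothetical, and real redaction of personal data is considerably harder than this.

```python
# Minimal, hypothetical sketch: scrub a prompt before pasting it into a
# consumer chatbot by swapping known names for placeholders and masking
# obvious identifiers. Treat this as a starting point, not a guarantee.
import re

STUDENT_NAMES = ["Jamie Rivera", "Priya Shah"]  # hypothetical roster

def scrub(text: str) -> str:
    # Replace each known name with a neutral placeholder (Student 1, Student 2, ...).
    for i, name in enumerate(STUDENT_NAMES, start=1):
        text = re.sub(re.escape(name), f"Student {i}", text, flags=re.IGNORECASE)
    # Mask email addresses and US-style phone numbers.
    text = re.sub(r"[\w.+-]+@[\w-]+\.[\w.]+", "[email]", text)
    text = re.sub(r"\b\d{3}[-.\s]?\d{3}[-.\s]?\d{4}\b", "[phone]", text)
    return text

if __name__ == "__main__":
    prompt = ("Draft an email to Jamie Rivera (jamie.rivera@example.edu, "
              "212-555-0123) about a missing FERPA consent form.")
    print(scrub(prompt))  # name, email, and phone come out as placeholders
```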

That was WSJ personal tech columnist Nicole Nguyen. If you have more questions, throw them at us. You can send us an email at tnb at wsj.com or you can leave us a voicemail at 212-416-2236. Coming up, we've been promised that AI chatbots will take on human-level smarts, but the list of skeptics is growing. We'll dig into that after the break.

Did you know that every day people watch on average more than 1 billion hours of YouTube on their TV screens? That's because YouTube is where people go deep on all the content they love. There's only one YouTube.

Can AI chatbots actually, at some point, solve problems in the same ways that humans can? In the industry, that ability is known as AGI, or artificial general intelligence. And increasingly, the research says AI models can take on more information to solve problems, but they do not think like humans. My colleague Julie Chang spoke to WSJ tech columnist Christopher Mims about what that means exactly.

So Christopher, you're essentially saying that we're nowhere near AGI. Is that right? That's correct. That's the right takeaway. We are definitely nowhere near AGI.

And anyone who tells you differently, I think, honestly just hasn't looked that deeply into what intelligence actually is. It turns out that with today's transformer-based AIs, and that's the kind of AI that underlies ChatGPT and a lot of other generative AIs,

the way that they work is just that they have this kind of almost infinitely long list of little rules of thumb that they apply. And so to give you a concrete example, one reason historically these models have been really bad at math, even if you show them a million math problems and their correct answers,

is that they learn weird stuff. You know, if you ask them to multiply two numbers and one of them is between 200 and 211, they have a different set of little rules of thumb they use for multiplying those numbers than they do for any other numbers anywhere else on the number line. So this is how today's AIs simulate intelligence.
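
As a deliberately toy illustration of that "bag of heuristics" picture, here is a short Python sketch. It is entirely hypothetical, not how a transformer actually computes: one multiplier is stitched together from narrow, memorized rules, including a special case for the 200-to-211 band mentioned above, plus a crude fallback, while the general algorithm is a single rule that works everywhere.

```python
# Toy sketch only: a multiplier built from narrow, memorized rules of thumb
# plus a crude fallback, contrasted with the single general algorithm.
# This illustrates the "bag of heuristics" idea, not how an LLM actually works.

SMALL_TIMES_TABLE = {(i, j): i * j for i in range(10) for j in range(10)}

def bag_of_heuristics_multiply(a: int, b: int) -> int:
    """A patchwork of local rules, each covering only the cases it 'learned'."""
    if 200 <= a <= 211:
        # Special-cased band (echoing the 200-211 example in the episode):
        # decompose around 200 instead of using the general algorithm.
        return 200 * b + (a - 200) * b
    if a < 10 and b < 10:
        # Memorized times table for small operands.
        return SMALL_TIMES_TABLE[(a, b)]
    # Crude fallback for everything else: round to the nearest ten and hope.
    return round(a, -1) * round(b, -1)

def general_multiply(a: int, b: int) -> int:
    """One rule that works everywhere."""
    return a * b

if __name__ == "__main__":
    for a, b in [(205, 37), (7, 8), (123, 456)]:
        print(a, b, bag_of_heuristics_multiply(a, b), general_multiply(a, b))
```

On inputs that hit a memorized rule, the patchwork looks exactly right; outside them it quietly drifts, which is the gap between simulating intelligence and having a general method.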

And a lot of people have pushed back and said, oh, well, isn't this how people think? Like, we're just a big pile of rules of thumb. No, you know, sorry, you're actually way more complicated than that. Humans have spatial, three-dimensional models that include causality and other things. As for today's transformer-based AIs, this idea that if we just make them big enough and show them enough data, they will spontaneously generate,

in their cybertronic brains, the machinery of thought, that seems to be nonsense. In your column, you bring up this Manhattan map example. Can you talk about that and how it explains the bag of heuristics theory? So researchers think that the way that modern AIs work is what's called a bag of heuristics model.

This just means a really long list of literally millions of rules of thumb. And so, for example, one researcher took a traditional large language model and gave it turn-by-turn directions from every point in Manhattan to every other point in Manhattan and discovered that it could then regurgitate directions between any two points on the island of Manhattan with 99% accuracy. Then they probed this model

to look at the map of Manhattan that it had generated, the map it was reasoning from, if you can use that word,

to give back these directions when you ask it for directions on the island of Manhattan. And the map it regurgitated, or that they were able to extract from it, looked totally crazy. Streets that are very far apart and diagonal to one another were connected. It seemed to think that there were streets that jumped over Central Park, and all this other craziness.

And what this showed was that the AI had managed to learn a sort of mental model of what Manhattan streets work like or look like that could generate

accurate directions when asked, but in no way resembled the actual street map of Manhattan. And so this just shows you how strange, weird, and simulated the quote-unquote understanding of an AI really is. Okay, but humans wouldn't be able to recreate a map either. Yes, there is some truth to that. The thing that reveals how weird the AI-generated map of Manhattan is, is detours. So if I told you,

okay, you're trying to get from one place in Manhattan to another, and one block of 7th Avenue is suddenly blocked,

and you were an expert at navigating Manhattan, would you have any trouble just going over another block, like taking a detour? No, you'd have no trouble. And that's because you have some kind of explicit understanding of, oh, Manhattan is a grid, and if I'm on a grid of streets, I can just go around a detour, and that kind of understanding would be something that you

had acquired in the course of learning your way around Manhattan. But the AI, when you block even 1% of the streets in Manhattan,

it just completely breaks down. It also shows how self-driving systems can get completely thrown by just the smallest thing, which would never throw a human being.
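
Here is a small, purely illustrative Python sketch of that detour failure; the grid, the lookup table, and the blocked segment are stand-ins of my own, not the researchers' actual experiment. A table of memorized turn-by-turn routes, like the regurgitated directions described above, becomes useless as soon as one segment on the route is blocked, while an explicit grid model reroutes without trouble.

```python
# Illustrative sketch only: memorized routes vs. an explicit grid model.
from collections import deque

N = 6  # a 6x6 grid of intersections standing in for Manhattan blocks

def neighbors(node, blocked):
    x, y = node
    for nxt in ((x + 1, y), (x - 1, y), (x, y + 1), (x, y - 1)):
        if 0 <= nxt[0] < N and 0 <= nxt[1] < N and frozenset({node, nxt}) not in blocked:
            yield nxt

def shortest_path(start, goal, blocked=frozenset()):
    """Breadth-first search over the grid: reroutes around any blocked segment."""
    queue, seen = deque([[start]]), {start}
    while queue:
        path = queue.popleft()
        if path[-1] == goal:
            return path
        for nxt in neighbors(path[-1], blocked):
            if nxt not in seen:
                seen.add(nxt)
                queue.append(path + [nxt])
    return None

# Stand-in for the "bag of heuristics": a lookup table of memorized routes.
memorized_routes = {((0, 0), (5, 5)): shortest_path((0, 0), (5, 5))}

if __name__ == "__main__":
    route = memorized_routes[((0, 0), (5, 5))]
    # Block one street segment that sits on the memorized route.
    blocked = frozenset({frozenset({route[2], route[3]})})
    usable = all(frozenset({a, b}) not in blocked for a, b in zip(route, route[1:]))
    print("memorized route still works?", usable)                                    # False
    print("grid model finds a detour?", shortest_path((0, 0), (5, 5), blocked) is not None)  # True
```

The memorized table only ever knew answers; the grid model knows the structure those answers came from, which is what lets it handle even a small fraction of blocked streets.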

So is AI intelligence plateauing then? Yes. I should amend that by saying that the general abilities of these AIs have definitely hit a ceiling, where, for example, the latest reasoning models from OpenAI are actually worse at some tasks in a lot of ways. It seems like the reinforcement learning that they do to make them better at coding and mathematics makes them more likely to hallucinate and worse at other things. So just throwing more data at it is not improving these models.

We can make them better in general by going in and manually tinkering or giving them access to... So for example, you can take a large language model and give it access to an explicit mathematical application that has been programmed by human beings. And then it can do math more like the way a person would if they were given access to a calculator. But the sort of important distinction there is

Now it's just software again. So we're just having to go in and put all this scaffolding around the AI because it's really not that capable at the end of the day. That was WSJ tech columnist Christopher Mims. And that's it for Tech News Briefing. Today's show was produced by Julie Chang with supervising producer Emily Martosi and additional support from Melanie Roy. I'm Victoria Craig for The Wall Street Journal. We'll be back this afternoon with TNB Tech Minute. Thanks for listening.

The world's biggest creators, the world's biggest moments, all delivered to the world's biggest collection of passionate fans, providing unparalleled opportunities for your brand. There's only one YouTube.