Listener supported. WNYC Studios. This is the New Yorker Radio Hour. I'm David Remnick. So the way the brain works is this.
Neurons get inputs, and if they get enough input, they go ping. The input to a neuron comes either from the senses or, for most neurons, from other neurons. And so neurons receiving these pings from other neurons... Geoffrey Hinton has been thinking about how brains work for a very long time. Hinton is a computer scientist who's been called the godfather of artificial intelligence, AI. For decades, he worked on building computers...
that would work in a way analogous to the human brain itself. It's an approach known as neural networks. This was an obscure and seemingly fruitless effort for a long while, but eventually it paid off beyond anybody's imagination.
That work on neural networks led to incredibly intricate machines like DALL-E, which will take your prompts and make you a beautiful picture. Or ChatGPT, which in the last year put AI on everybody's radar. Well, that was a future that nobody expected. These are machines that learn, and perhaps even think. ...Strengths are associated with each incoming ping...
so it can decide if it got enough input for it to go ping. And that's all there is. That's all you need to know to know how the brain works. It's very clear that an AI revolution is at hand here and now, and it's going to reshape our world profoundly. But Geoffrey Hinton, the foremost pioneer of neural networks, has come to have concerns about what AI can do. Very serious concerns. Joshua Rothman, the New Yorker's ideas editor, recently talked with Hinton in depth.
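Hinton's thumbnail sketch of the neuron, by the way, maps onto a few lines of code. Here is a minimal illustrative sketch (ours, not Hinton's): a simulated neuron sums its weighted incoming pings and fires when the total crosses a threshold.

```python
# A minimal sketch of the neuron Hinton describes: weighted incoming
# "pings" are summed, and the neuron fires if the total crosses a threshold.
def neuron_fires(incoming_pings, weights, threshold=1.0):
    """Return True (the neuron goes ping) if weighted input exceeds threshold."""
    total_input = sum(p * w for p, w in zip(incoming_pings, weights))
    return total_input > threshold

# Three upstream neurons either fire (1) or stay silent (0); the learned
# connection strengths decide whether this neuron goes ping.
print(neuron_fires([1, 0, 1], [0.7, 0.2, 0.6]))  # True: 0.7 + 0.6 = 1.3 > 1.0
```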
And we'll hear some of their conversation today. Josh, The New Yorker has just published an entire issue on artificial intelligence. And at the very center of it is your profile of Geoffrey Hinton. Why is he so important? Why is he such a crucial figure? So he's followed the arc of the tech from the very beginning all the way to now. He's 75.
And during a period of time when nobody thought this technology would work, he continued to work at it. And he believed in it. And he's ultimately been proven right. And he's now said that he's scared about the tech that he worked on for his whole life.
He doesn't regret what he did, but he says we need to be realistic about what's been invented, which is a machine that can think the way we can. Tell me a little bit about what it's like to spend time with Geoffrey Hinton. He's a very emotionally rich as well as intellectually rich personality as you portray him. What was Hinton actually like as a person? A delightful person.
He's a little bit from a prior world. He's not a Silicon Valley techno overlord. He's not an eccentric egomaniac. He's a highly intelligent, basically humble person who's worked on this technology for a long time, who got used to being a regular computer science professor until he was in his 60s when this technology really started to take off. I was pretty intimidated by Jeff. Our first interaction, he sort of
gave me a quiz on various subjects in philosophy of mind, I think, to sort of like confirm that we were going to be on the same wavelength. And I have to say, I'm basically a regular Joe. I don't really understand. Like, I did a Khan Academy course on linear algebra. I did some things to get ready. You prepared to do this piece by taking a course online in linear algebra. I did. Yeah.
Wow. The strangeness of AI is striking. You know, it combines physics and math and neuroscience and computers, and psychology and ideas about learning and all this kind of stuff. It's a weird discipline. What is AI and what are its implications? Because they seem so varied, so vast,
to some people so scary and to many, many other people so filled with possibility. So I think the question of what it is is a little bit of a contested one. But the best way for me to understand it, as a mere mortal, is to go back to the beginning. The way this all started was with the idea that, you know, our brains are powered by neurons that are connected in a network. And that led to a field of computer science called neural networks,
in which computer scientists would create networks of simulated neurons inside computers. Back in the day, in the 60s, 70s, 80s, you couldn't do that at any real scale; big networks were impossible to simulate in a computer. But you could build small networks, and they could learn small things. They could learn to recognize handwritten digits, for example, like, say, in a zip code on an envelope.
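To make that concrete, here is a hedged sketch of the kind of small network Rothman is describing, using scikit-learn's built-in 8x8 digit images; the particulars (layer size, iteration count) are illustrative, not historical.

```python
# A small network of simulated neurons learning to recognize handwritten
# digits, in the spirit of the early zip-code work mentioned above.
from sklearn.datasets import load_digits
from sklearn.model_selection import train_test_split
from sklearn.neural_network import MLPClassifier

digits = load_digits()  # 1,797 tiny 8x8 images of the digits 0-9
X_train, X_test, y_train, y_test = train_test_split(
    digits.data, digits.target, random_state=0)

# One hidden layer of 32 simulated neurons: tiny by modern standards.
net = MLPClassifier(hidden_layer_sizes=(32,), max_iter=500, random_state=0)
net.fit(X_train, y_train)
print(f"test accuracy: {net.score(X_test, y_test):.2f}")  # typically ~0.95
```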
But over the last many decades, computers have gotten bigger and bigger. They've gotten literally a billion times faster. The number of neurons that can be simulated inside a computer has grown by that scale. Now, these neural networks, they're not yet as complicated as the ones in our heads.
But they're really complicated. And they're capable of doing something that certainly looks from the outside to many people like reasoning. Jeff Hinton thinks that they're understanding just like we do. They're reasoning just like we do. We should take their mental lives seriously, as it were, and we should take their intelligence seriously. You spent quite a lot of time with him at his house on an island in Lake Huron. Let's hear some of your interview.
A lot of people struggle to understand how an AI mind is similar to or different from a human mind. And they can't decide or they don't know whether to think of today's AIs as similar to the computer programs they've used their whole lives or similar to the people that they converse with.
Like, how do you think about that? So I think today's big AI systems, like GPT-4, are much more similar to people than they are to computer programs. So computers were designed so people could program them. That is, they do exactly what you say. And they didn't have things like intuition. But now if you look at what it took to make computers good at chess…
You had to give computers intuition. A computer had to be able to look at a board and think, oh, that will be a good move, the way a grandmaster does. In fact, they're better than grandmasters at that now. And using neural nets, we could get computers to learn intuitions. And that's very different from logical reasoning.
And when you're becoming an expert in a domain, to begin with you have to do reasoning. A real expert in the domain can just do it intuitively. That's what doctors call clinical experience. They just look at this patient and they know what they've got. And they didn't do a lot of reasoning. It's just obvious to them. But if they're good doctors, occasionally it turns out the patient hasn't got that, and they'll use that to revise their intuitions. It sort of seems like when I write, and I'm a writer, I'm the one doing it
from the top down, using my intelligence. It's not like driving a car or riding a bike. It's something I'm choosing to do, and those are my ideas coming out of my ego, as it were. And that's what I think of when I think of high-level intelligence: me writing an article or writing an email or something. It doesn't necessarily seem like it's connected to the world of learned, intuitive...
But if you think when you're writing, suppose you're halfway through a sentence and now you have to choose the next word. So there'll be a word that comes to mind because it sort of fits nicely there. Why did it come to mind and how did you decide it fitted nicely? You have no idea. Retrospectively, you can make up a story.
which has probably got some element of truth, but definitely not the whole story. And really what's happening is these big patterns of neural activity that you've learned are in effect implementing analogies with lots of different stuff you know, so that that word seems right there. Just the process of selecting the next word when you're writing,
which you might say is just you doing autocomplete, involves more or less everything you ever learned. To do autocomplete properly, you have to understand how the world works. And that's what you're doing. You're just an autocomplete device.
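As a toy illustration of "choosing the next word": the sketch below is vastly simpler than anything Hinton is describing; it just counts which words follow which in a scrap of text and picks the most frequent continuation. A real model does this with a neural network drawing, as Hinton says, on more or less everything it has learned.

```python
# A toy "autocomplete": tally which words follow which, then predict
# the most common continuation. Real language models learn far richer
# patterns, but the task -- pick the next word -- is the same shape.
from collections import Counter, defaultdict

text = "the cat sat on the mat and the cat slept on the rug".split()
following = defaultdict(Counter)
for word, nxt in zip(text, text[1:]):
    following[word][nxt] += 1  # count each observed continuation

def next_word(word):
    """Return the most frequent word seen after `word` in the text."""
    return following[word].most_common(1)[0][0]

print(next_word("the"))  # 'cat' (seen twice after 'the')
```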
So Josh, Hinton is teasing you here. He's saying that your excellent writing is just autocomplete. At least I hope he's teasing, because he's wrong. But what are the implications of a machine with intelligence? What could it bring to society in the most positive sense? And what should we fear?
So on the positive side, there's ways in which these digital minds are different from ours, and I should say usefully different from ours. So if you think about what ChatGPT is, I think a lot of us have used ChatGPT at this point. One of the striking things is it seems to
have ready access to a huge amount of knowledge, like more than we do. It doesn't mean that it's smarter than us exactly, but if you ask it to translate something between languages, or to try to solve an equation, or to discourse on the history of economics, it can do that, because these artificial intelligences are really good at working with huge amounts of data. Is this capacity because...
a machine, an artificial intelligence, can have in its head, as it were, all of Google, all of Google Translate, and then begin to work with it, whereas our minds do not have that capacity? Is it a capacity question? Yeah, it's partly a capacity question, as I understand it.
I mean, I think it's different. Like, you think about what our minds are doing right now, for example. You know, you and I are having this conversation. We're moving towards a deeper level of mutual understanding. All the while, maybe there's some part of our brains thinking about what we're going to have for lunch. You're a very busy man. I'm sure you have a lot going on. You're thinking about all that stuff, too. And AI is different in a way that Jeff talked about in my piece, which I thought was fascinating. You know, if I learn something, how do I communicate it to you?
I have to write a 10,000-word article for The New Yorker magazine. And you have to read it and try to stay awake during it. If an AI learns something and it wants to communicate it, it just downloads the information, and it can be uploaded into another AI. The first thing that comes to mind is that AIs have mastered a level of conversation: not just responding to what you say, but understanding your intent.
There's a piece in the issue that's about this, that's about how AI is affecting coders. One of the ways that ChatGPT is very powerful is that if you're sufficiently educated about computers and you want to make a computer program and you can instruct ChatGPT in what you want with enough specificity, it can write the code for you. It doesn't mean that every coder is going to be replaced by ChatGPT.
But it means that a competent coder with an imagination can accomplish a lot more than she used to be able to. Maybe she could do the work of five coders. So there's a dynamic where people who can master the technology can get a lot more done. So there's economic consequences that are real. And then there's this bigger fear, a sort of science-fictional fear, which Hinton shares. There's a whole bunch of risks that concern me.
And other people have talked about these much more than I have. I'm a kind of latecomer to worrying about the risks because very recently I came to the conclusion that these digital intelligences might already be as good as us.
They're able to communicate knowledge between one another much better than we can. So that's what made me feel I needed to speak out about the existential threat that these things will get to be smarter than us and take over. And that's because an AI can just copy its learned knowledge out of itself and give it directly to another AI. So you can have, say, 10,000 different copies of the same neural network.
Each can be looking at different data. And when one copy learns something from one part of the data, it can convey it to all the other copies that haven't seen that data, simply by telling them how to update their weights, these synapse strengths inside. Now, you and I can't do that, because my brain's wired differently from your brain. And if you told me the synapse strengths in your brain, it wouldn't do me any good. Right. How does that relate to this set of risks? Okay, so that relates to the
existential threat that these things will become smarter than us, and not just a little bit smarter but a lot smarter, and will also decide to take over. They'll decide to take control. That's the existential threat. And why would they decide to do that? A very senior official in the European Commission, who I was talking to, said, "Well, people have made such a mess of things, why wouldn't they?"
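A side note on the copy-sharing Hinton describes: this is, in spirit, how distributed training works. Identical copies of a network learn from different slices of data and stay in sync by pooling their weight updates. A minimal sketch under those assumptions (the names and the toy update rule are illustrative, not a real training step):

```python
# Identical model copies learn from different data shards, then pool what
# they learned by averaging their weight updates -- the trick Hinton
# describes, which brains with different wiring cannot use.
import numpy as np

weights = np.zeros(4)  # one shared set of "synapse strengths"
shards = [np.random.randn(100, 4) for _ in range(3)]  # different data per copy

def local_update(data):
    """Toy stand-in for each copy's learning on its own shard."""
    return 0.01 * data.mean(axis=0)  # not a real gradient, just a placeholder

# Each copy computes an update from data the others never saw...
updates = [local_update(shard) for shard in shards]
# ...and every copy stays identical by applying the averaged update.
weights += np.mean(updates, axis=0)
print(weights)
```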
Computer scientist Geoffrey Hinton. He's speaking with The New Yorker's Joshua Rothman, and their conversation will continue in a moment. This is The New Yorker Radio Hour.
This is the New Yorker Radio Hour. I'm David Remnick.
One year ago, the future arrived, loudly. ChatGPT launched at the end of last November, and it was all anybody could think about for a while. Suddenly, artificial intelligence wasn't just a tool for advanced scientific research; it was entering all of our lives, right down to your kids cheating on their homework. Our current issue of The New Yorker is all about this explosion in artificial intelligence, the mind-boggling advances, and some of the terrifying possibilities.
And as part of that project, our ideas editor, Joshua Rothman, sat down with the so-called godfather of AI, Geoffrey Hinton. We'll continue with that conversation now. Hinton has spent a lifetime helping to teach machines how to learn. And now he believes that he's succeeded all too well. He's scared of what may happen when machines are smarter than people and have their own ideas about what to do.
Suppose it's a chess-playing computer. It wants to win the game. It doesn't have anywhere inside it an ego which thinks, "I want to win the game." It's wired up in such a way that it's trying to win. So I think the idea they don't have intentions and they don't have goals is just wrong.
Is the idea that people in charge of these systems will give them goals that will start us down this path? I mean, is that what we're envisioning? Or where would the goals come from that would start the whole problem? Okay, there's two sources of worry, and they're very distinct. They have quite different solutions. So one worry is bad actors. You can probably imagine Putin giving an autonomous lethal weapon the goal of killing Ukrainians.
And you can probably imagine he wouldn't hesitate. So that's the bad actor scenario. But there's another scenario, which is if you want a system to be effective, you need to give it the ability to create its own sub-goals. Like if you want to get back to the US, you're going to need to get to an airport. And so you have a sub-goal, get to an airport. That's a sub-goal. It's created in order to achieve a bigger goal.
So if you give an AI some goal and you want it to be effective, it's going to work by creating some sub-goals that will allow it to achieve the goal. Now, the problem is there's a very general sub-goal that helps with almost all goals. And the AI will certainly realize this very quickly. The very general goal is get more control. If I just get more control, it's going to help with everything. And these AIs are going to realize that.
And pretty soon they're going to realize, well, if these are my goals, the best thing to do is stop humans interfering and just get on with it and do it in a sensible way that these stupid humans don't understand. Whatever goals they do have were given to them by us. And a big question, called the alignment problem, is: can we give them goals such that they do useful things for us and they never, ever want to take over?
Nobody knows how to do that.
It's no use thinking we could air-gap them so they can't actually pull levers or press buttons, because they could simply convince us to do it, because they're much more intelligent than us. I mean, is that a technical problem? Yes. And it's also a governance problem. It's a technical problem and a governance problem, but we don't even know how to solve the technical problem, even if we could do the governance right. Even if we could make a law that said you're not allowed to make an AI that can go wrong, we wouldn't know how to do that yet. Exactly. Exactly.
Imagine that you're not a central figure in the history of machine learning, but you're just a regular person. And now some of the world's biggest companies are saying, "We've developed this technology. It promises all sorts of benefits that you don't really want: it can drive your car, it can do your job, it can do everything you need to do in your life. We don't know how to control it. It might take your job, or it might take over.
It might help solve some scientific problems that you don't care about." I think that regular person might just say, why don't we just unplug it? I mean... Why don't we just stop the development? Why don't we just unplug it? We don't need this. You know, we don't need... We already have intelligences, human beings. We don't need artificial ones. It's not unreasonable to say we'd be better off without this. It's not worth the risk.
Just as we might have been better off without fossil fuels. We'd have been far more primitive, but it may not have been worth the risk. But it's not going to happen. Because of the way society is, because of the competition between different nations, no one nation could stop it. If you had a very powerful world governance, if the UN really worked,
possibly something like that could stop it. Although even then, it's just so useful. It has so much opportunity to do good, like in medicine. You just aren't going to stop it. It's also got so much opportunity to give advantage to a nation via autonomous weapons. I mean, the US is already developing autonomous weapons. So yes, that might be a sensible move, but it's far too late. Yeah.
So my last question in this vein is, what should we do? I don't know. It would be great if it was like climate change, where someone warning about climate change could say, look, we either have to stop burning carbon, or we have to find an effective way to remove carbon dioxide from the atmosphere. One of those two is essential. And you know what the solution looks like. It's just a question of the political will to actually implement something, because it's going to be painful. Here, it's not like that. I have no advice. All I'm doing is just warning that this may well be coming. Smart young people should be thinking hard about, is it possible to prevent it ever wanting to take over? Josh, we know the names Bill Gates, Steve Jobs, Alan Turing,
But only very recently have we learned the name Geoffrey Hinton. And we've learned it, first, as somebody who is a great innovator in AI, known very commonly as the godfather of AI, but now also as the apostate of AI. Why has he only emerged now? I mean, the first question you want to ask is, it's 2023. So how come in all the decades you didn't freak out before? And he told me, yeah,
you know, first, no one thought this would work as quickly as it did. If you rewound the tape to the 80s, the 90s, the 2000s, people thought, you know, maybe in 50 years' time, he thought maybe in 100 years' time, we would reach the place where we currently are. And so his view was like,
why worry about it? You know, there's plenty of time to try to sort this out down the road. A lot of people who work in AI have mentioned this to me: when you start using ChatGPT or another modern AI, the first thing you notice is all the ways it's bad. So the first thing you see is the way it's not really human, or the mistakes it makes, the things it's not capable of. And your early impressions are, gee, you know, it's a lot of window dressing.
But the more time you spend, the more you get impressed. I mean, it reminds me a little bit. I have two small kids. I have a five-year-old son. And when you spend a lot of time around one kid, you notice every little improvement in their mind. You're always noticing things they're learning. But if you just visit with friends who have a kid, you're not impressed. You say, that's a little child. Yeah.
Josh, in this issue of the magazine, along with your profile of Geoffrey Hinton, there's a piece by Eyal Press about AI and facial identification, which seems extremely fraught, quite dangerous. What's the issue at stake there? Well, the issue is that an AI that provides quote-unquote evidence to the police could very well be wrong.
And the people who are using these systems might not fully understand how they work or what their limitations are. But once a computer system that appears objective, appears powerful, once it makes a judgment, it can be pretty hard to contravene the judgment.
It's a problem in policing. It could be a problem in medicine. Imagine you're a doctor. An AI system delivers a diagnosis. You think it's probably wrong, but are you going to go up against the computer and go on the record and say, I disagree with the neural network with billions of artificial neurons running in a huge data center? You might not say that.
And so the larger question is, as these systems get more useful, as they get more integrated into real-world contexts, and as real people have to start being responsible for them, have to start either disputing what they think or acting on what they think, on what the systems think,
it's going to require a level of literacy and of nuance around the technology that we're just really not equipped to have at the moment, and that we need to start developing if we don't want really negative consequences. Josh, recently I had a conversation on this show with Sam Altman, one of the people behind ChatGPT. And in an almost bland way, he said that
AI could put millions and millions of people out of work, which would cause the government to have to come around and start giving us all universal basic income while the machines do the work and the thinking and so on. He delivered this all in a very, I have to say, matter-of-fact way. Is this a crucial concern for Geoffrey Hinton? He's certainly worried about what the young people of today will do for work.
How will the world of work be transformed? Well, can I back up and say one thing broadly? Sure. There's huge disagreement about this. You know, no one has a crystal ball.
And technologists obviously are in love with what they've made. They really see the potential. They live in the future. That's why they do what they do. They're in love with it or they stand to make an immense fortune? Yes, and they stand to make immense amounts of money. And I think Hinton is a little different from Altman because he doesn't have any real financial skin in the game. He's not a businessman. He's an academic researcher fundamentally, although he worked for Google for a period of time.
And walked away, as I recall, with tens of millions of dollars, no? Yeah, he sold a company to Google in the early days of the current AI boom for $44 million. But it is something, job loss is something that Hinton is very worried about. But I think he's more worried about, I guess you could just say chaos, right?
His view is the technology is out there, bad people will learn how to use it, and the next period of history will be destabilized by this technology. And there isn't an obvious way to solve the problem. Isn't that a little easy? We've been through this now with the internet. Arguably. He didn't say this to me, but this is an argument many people make. The best path to understanding and controlling AI is simply to keep working on it. That sounds a little fatalistic.
Yeah, I think the word he would use is stoic. I think he sees his position as a realist one. And he thinks that these are problems that aren't going to be worked out through people sitting around and talking about them. They're problems that are going to have to be worked out through the actual development of the technology.
And a lot of people say one thing and do another thing. So it's kind of like they say, the technology can't be stopped. And then at the same time, they say, let's sign a letter proposing that we stop it for six months.
There's a sort of wanting to have it both ways. And I don't mean this to say that they're being cynical. I think these are just two impulses when you look at a technology like this. One impulse is to say... You're talking about people like Sam Altman, to be clear. One impulse is to say, you know, we can control this. And the other is to say, you know, what do we mean by we? There's endless researchers around the world. This technology is available. It's visible.
Hinton's view, it's alarming. It freaked me out personally. It also struck me as at least consistent. And I think if you look at the history of what happened with, say, nuclear weapons, we do now have a regulatory regime around them. But that took a while to build. And bad things had to happen first.
Yeah, and you also have a Russian leader who, if he wants to, can threaten to use them in Ukraine. Right. So we've never really solved that problem. Do the benefits of AI outweigh the existential threat to humanity, as Hinton himself has posed it? Before I wrote this piece, I think I had an idea of what AI was, which was that it was just statistics. It was just number crunching.
And learning more about the history of the field and learning more about the innovations that Hinton helped create and that other people helped create, I feel like this technology is incredible. I guess my overall feeling is people are really smart. If we can build it, we can hopefully control it in some way. I left the piece even more impressed by what AI can do, by what it really does. I think it's worth it.
I think some critics of AI or skeptics of AI feel that the endeavor is somehow anti-human. Like saying that what the AI does and what we do is the same, for example, is sort of to diminish what the mind does, the human mind. Right. If you want to be mystical about it and think that humans have some mystical special property that a machine could never have,
obviously it does diminish that. I don't believe humans have mystical properties that machines couldn't have. I believe we're just wonderful and very complicated machines. I shouldn't even use the word just. We're wonderful and very complicated machines. And trying to understand how these machines work gives us much more insight into what we are. It tells us about our true nature. I think this is giving us enormous insight into the kinds of machines we are.
And it's clearly a huge revolution. I mean, the Industrial Revolution was when we could replace physical labor with machine labor. And now we can replace intellectual labor with machine labor. And it's a revolution of at least the same scale. Do you think we'll have to think differently about what's valuable about ourselves and what makes us unique? That question has an assumption in it, which is what makes us unique. Maybe we're not.
You can read Joshua Rothman's profile of Geoffrey Hinton in The New Yorker. And you can find my earlier conversation with Sam Altman of OpenAI, the company behind ChatGPT, at newyorkerradio.org. I'm David Remnick, and thanks for listening today. See you next time.
The New Yorker Radio Hour is a co-production of WNYC Studios and The New Yorker. Our theme music was composed and performed by Merrill Garbus of Tune-Yards, with additional music by Louis Mitchell. This episode was produced by Max Balton, Adam Howard, KalaLea, David Krasnow, Jeffrey Masters, and Louis Mitchell,
with guidance from Emily Botein and assistance from Mike Kutchman, Michael May, David Gable, and Alejandro Decke. We had additional help this week from Jared Paul. The New Yorker Radio Hour is supported in part by the Charina Endowment Fund.