Should We, and Can We, Put the Brakes on Artificial Intelligence?

2023/6/2

The New Yorker Radio Hour

People
David Remnick
Sam Altman
Leads OpenAI toward AGI and superintelligence, redefining the path of AI development and driving the commercialization and application of AI technology.
Yoshua Bengio
Topics
David Remnick: Artificial intelligence could lead to mass unemployment and poses a potential threat to democracy. He stresses the importance of putting appropriate rules and regulations in place early in AI's development, so as not to repeat the mistakes made with social media. Sam Altman: Acknowledges that AI may eliminate certain jobs, but believes most people will still have work, because the nature of work and society's needs change as technology advances. He advocates economic policies such as universal basic income to manage that change, and emphasizes the importance of international cooperation in regulating AI so that its benefits are distributed fairly to people around the world. He also sees AI as a powerful tool that can greatly amplify human capability, but one that needs constraints to prevent misuse. In his view, today's AI systems are text-processing systems rather than autonomous, conscious beings, though he concedes that future AI could develop autonomy, which warrants vigilance. Yoshua Bengio: Is concerned about the rapid pace of AI development, which he believes could disrupt democracies in the short term and lead to a loss of control over AI in the longer term. He argues for an adaptive regulatory framework, with specific rules set by experts rather than politicians, to protect the public interest. He also emphasizes the importance of international cooperation in regulating AI to avoid catastrophic outcomes, and believes that military use of AI carries serious moral risks that could lead to large-scale dangerous consequences. David Remnick: The rapid development of AI is alarming, with potential risks comparable to nuclear weapons or pandemics. Some pioneers of the field are also worried about how fast the technology is developing, even suggesting that unchecked AI could be as dangerous as nuclear weapons or a pandemic.

Chapters
Sam Altman discusses his early interest in AI, his journey into the field, and the unexpected breakthroughs that led to the development of neural networks and large language models.

Transcript


Listener supported. WNYC Studios. This is the New Yorker Radio Hour, a co-production of WNYC Studios and The New Yorker. Welcome to the New Yorker Radio Hour. I'm David Remnick. Every technological revolution has frightened people, particularly people who've got something to lose. When Gutenberg began printing with movable type, he was a man of his word.

Religious and political authorities wondered how to confront a population that had new access to information and arguments to challenge their authority. So it's not surprising that artificial intelligence is now causing grave concerns because it will affect every one of us. Perhaps the biggest nightmare is the looming new industrial revolution, the displacement of millions of workers, the loss of huge numbers of jobs. Congress has a choice now.

We had the same choice when we faced social media. We failed to seize that moment. What is surprising is that some of the very same people who have been racing to develop AI now seem deeply alarmed at how far they've come. In March, not long after ChatGPT began captivating and terrifying us all at once,

over a thousand technology experts signed an open letter calling for a six-month moratorium on certain AI research. And some of those experts say that unchecked AI could be as dangerous to our collective future as nuclear weaponry or pandemics. So we're going to talk today about AI. How could it change the world? And how concerned should we be?

I'll start with Sam Altman, the CEO of OpenAI, the company that's been releasing ever more sophisticated versions of ChatGPT. Years ago, when the internet was in its earliest stages, we were surrounded, or at least I felt surrounded, by a sense of internet euphoria. And anyone who raised doubts about it was considered a luddite or ignorant or a charmingly fearful person past his sell-by date. Now, with the rise of AI...

We're hearing alarm from many quarters. So what I want to try to accomplish here is to have a rational discussion that at once gives a factual picture of where we are, where you think we're going, and at the same time airs out the concerns. So let's just start with the most basic thing. You've been working on AI for nearly a decade. How did you get into it and what were your expectations?

I mean, this was what I wanted to work on from when I was like a little kid. I was like a very nerdy kid. I was very into sci-fi. I sort of never dreamed I'd actually get to work on it. But then I went to school and studied AI. And

One of the memorable things I was told was the only surefire way to have a bad career in AI is to work on neural networks. And we have all these other ideas, but this is the one that we've proven doesn't work. In 2012, there was a paper put out. One of the authors was my co-founder, Ilya Sutskever, which was a neural network doing an amazing thing that performed extremely well in a competition to categorize images. And

That was amazing to me, given that I had sort of assumed this thing wasn't going to work. After that, a company called DeepMind did something with beating the world champion at Go. At the end of 2015, we started OpenAI. One of our first big projects was playing this computer game called Dota 2. And I got to watch that neural network, that effort, that system sort of grow up, number one.

We truly, genuinely, no parlor tricks, had an algorithm that could learn.

And it got better with scale. It took us a while to discover this current paradigm of these large language models. But the fundamental insight and the fundamental algorithms were all right from the beginning of the company. So GPT suddenly appeared on the scene. And you have talked a lot about its potential. And at the same time, you've, well, let's put it this way, you freaked a lot of people out. What do you see as its potential? And do you understand why people are

unnerved about it. First of all, even the parts that I don't agree with about what people are freaked out about, I empathize with. To the degree we are successfully able to create a computer that can one day learn and perform new tasks like a human. Even if you don't believe in any of the sci-fi stories,

You could still be freaked out about the level of change that this is going to bring society and the compressed timeframe in which that's going to happen. Well, let's slow down for a second. What does this imply in the much broader sense about what change is coming down the road in concrete terms?

I think it means that we all are going to have much more powerful tools that significantly increase what a person is capable of doing, but also raise the bar on what a person needs to do to be sort of a productive member of society and contribute. Because these tools will do, eventually, they will augment us so powerfully that they'll change what

one person or one small group of people can and do do. A lot of writers I know have naturally gotten very interested in ChatGPT and they somehow think it's going to eliminate them. I have to admit, I've used your latest version of ChatGPT to try and emulate my writing.

And without being overproud about it, it kind of didn't. What came out was like an encyclopedia entry with nouns that were subjects that I was interested in. So tell me where ChatGPT is in its development now.

But should I basically pack it in in a couple of weeks when ChatGPT is all the better? We get excited about where things are, but we also try always to talk about the limitations and where things aren't. And maybe a future version of GPT will replace bad writers. But it's very hard for me.

Looking at it now, uh, every time I talk to someone like you, I say, this is, you know, this is really not it. Yeah. And you think we're being defensive? No, no, no, I think you're right. I think in the sweep of emotion about ChatGPT and this new world, it is so easy to say the writing is on the wall, there's going to be no role for humans, this thing is going to take over

And I don't think that's going to be right. I don't think we are facing this total destruction of all human jobs in the very short term. And I think it's difficult and important to balance that with the fact that some jobs are going to be totally replaced by this in the very short term. What jobs do you think will get eliminated pretty quickly, in your view?

I think a lot of customer service jobs, a lot of data entry jobs get eliminated pretty quickly. So this is maybe useful. The thing that you do right now where like you go on some website and you're trying to return something and you like chat with somebody sitting on the other side of a chat box and they, you know, send you a label and blah, blah, blah. That job, I think, gets eliminated.

Also, the one where you call and talk to someone, that takes a lot longer. But I think that job gets eliminated too. But I don't think that most people won't work. I think for a bunch of reasons, that would be

to a lot of people. Some people want work, for sure. I think there are people in the world who don't want to work and get fulfillment in other ways, and that shouldn't be stigmatized either. But I think many people, let's say, want to create, want to do something that makes them feel useful, want to somehow contribute back to society. And there will be new jobs or things that people think of as jobs that we today wouldn't think of as jobs in the same way that

maybe what you do or what I do wouldn't have seemed like a job to somebody that was like,

doing an actual hard physical job to survive. As the world gets richer and as we make technological progress, standards change and what we consider work and necessity and a whole bunch of other things change too. So I think that's going to happen again with AI. I realize that some of this draws on your essay that was published a couple of years ago, Moore's Law for Everything. You suggest economic policies like a universal basic income policy.

taxes on land and capital rather than on property and labor. And all of those things have proven impossibly difficult to pass even in the most modified form. How would they become popular in the future? I think this stuff is really difficult, but A, that doesn't mean we shouldn't try. And the way things that are outside the Overton window eventually happen is more and more people talking about them over time. And B,

When the ground is shaking, I think, is when you can make radical policy progress. So I agree with you. Today, we still can't do this. But if AI stays on the trajectory that it might, you know, perhaps in a few years, these don't seem so radical. And...

We have massive GDP growth at a time where we have a lot of turmoil in the job market. Maybe all this stuff is possible. And the more time up front we have for people to be studying ideas like this and contributing new ones, I think the better. I believe we have a real opportunity to shape that if you take something, a good that has been super expensive and limited and important, and make that

easy to access and extremely cheap. I believe that is mostly an equalizing force in the world. And we're seeing that with ChatGPT. One of the things that we tried to design into this, and I think is an exciting part of this particular technological revolution, is...

Anyone can use it. Kids can use it. Old people can use it. People that don't have familiarity with technology can use it. You can have a very cheap, cheap mobile device that doesn't have much power and still get as much benefit out of this as someone with the best computing system in the world. My dream is that we figure out a way to let the governance of these systems, the benefits they generate, and the access to them be equally spread across every person on Earth.

This is the New Yorker Radio Hour, and I'm talking today with Sam Altman, the CEO of OpenAI, which developed ChatGPT and GPT-4. Sam, talk to me about artificial general intelligence, which seems to be a step even past what we've been talking about. I think it's a very blurry line. I think artificial general intelligence means to people

very powerful artificial intelligence. It's sort of shorthand for that. My personal definition is systems that can really dramatically impact the rate that humans make scientific progress or that society makes scientific progress. Other people use a definition like systems that can do half of the current economically valuable human work.

Others use a system that can learn new things on its own. That latter point is the thing that creates anxiety, isn't it? That it's a system that can operate beyond the bounds of human influence.

Well, there's two versions of that. There's one that causes a lot of anxiety even to me, and there's one that doesn't. The one that doesn't, and the one that I think is going to happen, is not where an AI is off writing its own code and changing its architecture and things like that, but that if you ask an AI a question that it doesn't know the answer to,

It can go do what a human would do. Say, hey, I don't know that. I'm going to go read books. I'm going to go call smart people. I'm going to go have some conversations. I'm going to think harder. And now I'm going to have some new knowledge stored in my neural network. And that feels fine to me. Definitely the one where it's off like writing its own code and changing its architecture. Very scary to me.

AI systems have already developed skills that their creators didn't expect or prepare for: learning languages they weren't programmed to learn, figuring out how to code, for example. So the worry is that AI could break free from its human overseers and wreak havoc of one kind or another. The fundamental place that I find myself getting tripped up in thinking about this, and that I notice in others too, is, you know, is this a tool or is this a creature? And I think it's so tempting. That's fair.

to project creatureness onto this because it has language and because that feels so anthropomorphic. But what this system is, is a system that takes in some text, does some complicated statistics on it, and puts out some more text. And amazing emergent behavior can happen from that, as we've seen, that can significantly influence a person's thinking. And we need a lot of constraints on that. Um,

But I don't believe we're on a path to build a creature here. Now, humans can really misuse the tool in very big ways. And I worry a lot about that, much more than I worry about currently the sci-fi-esque kind of stuff of this thing, you know, wakes up and loses control. Sam, you've had quite a few conversations lately with lawmakers. You testified in front of a Senate subcommittee, and that was widely reported. But before that, you had a private meeting at the White House.

Tell me, who was there and what was the conversation about? It was a number of people from the administration led by Vice President Harris and then the CEOs of four tech and AI companies. And the conversation was about, as we go heading to this technological revolution, what can the companies do to help ensure that it's a good change and help sort of reassure people that we're going to

get the things right that we're able to get right and then we need to in the short term. And then what can the government do? What are the kinds of policy ideas that might make sense as this technology develops? One area in particular that I am worried about in the short term

is provenance of generated content. We've got an election next year. Already, image generation is incredibly good. Audio generation is getting very good. Video generation will take a little bit longer, but will get good too.

I'm confident that we as a society with enough time can adapt to that. You know, we've learned when Photoshop came out, people were really tricked for a little while and pretty quickly people learned to be skeptical of images and people would say, oh, that's Photoshopped or that's doctored or whatever. So I'm confident we can do it again. But we also have a different playing field now. And there's sort of Twitter and these telegram groups and however else this stuff spreads.

There's a lot of regulation that could work, and there are technical efforts like watermarking images or shipping detectors that could work in addition to just requiring people to disclose generated content. And then there's education of the public about you've got to watch out for this. Ultimately, who do you think were the most powerful people in the room, the people on the government side or the people heading the tech companies? That's an interesting question. I think the government certainly is...

more powerful here in even the medium term. But the government does take a little bit longer to get things done. And so I think it's important that the companies independently do the right thing in the short, in the very short term.

But you understand that, again, years ago, tech on the cover of Wired, there was a kind of euphoria attached to technology that in the past several years... Doesn't feel like it this time around, does it? No, it doesn't feel that way at all. And not because I relish it, but the public images of places like Facebook and Google are not what they were. And I think...

trust in those companies to get things right. So when we hear about a conversation at the White House between the vice president and her colleagues and the heads of tech companies, we want to intensely know

What is going on, what the conversation is like, and what it's leading toward, who's in charge? It would be really good to know the details of that. The right answer here, very clearly, is for the government to be in charge. And not just our government. I think this is one of these places where, and I realize how naive this sounds and how difficult it's going to be, we need international cooperation. The example that I've been using recently is I think we will need something like the IAEA,

that we had for nuclear for this and it was going to require... Atomic weapons, obviously, and atomic energy. And I think that's so difficult to do. It requires international cooperation between superpowers that don't get along so well right now. But that's what I think the right long-term solution is, given how powerful this technology can grow. I'm actually optimistic that it's technically possible to do. I think...

The way this technology works, the number of GPUs that are required, the small number of people that can make them and the controls that could be imposed on them to say nothing of the energy requirements for these systems. Like, it is possible to internationally regulate this. So I think the government has got to lead the way here. I think we need...

serious regulation from the government setting the rules. I think it's good for the tech companies to provide input, say where we think the technology is going, what might work technically and what won't. But the government and really the people of the world have got to decide. Sam Altman, thank you very much. Thank you. Sam Altman is the CEO of OpenAI, which created ChatGPT.

We're going to continue on the risks and benefits of AI in just a moment. This is the New Yorker Radio Hour. ♪


This is the New Yorker Radio Hour. I'm David Remnick. We're talking today about the promise and the danger of artificial intelligence. Computer scientist Yoshua Bengio began working on AI in the 80s and the 90s, and he's been called one of the godfathers of AI. Bengio focused specifically on neural networks. That's the idea that software can somehow mimic how the brain functions and learns. The brain itself is a kind of network.

Now, at the time, most scientists thought this would never really work out. But Bengio and a few others persevered. Their research led to advances in voice recognition and robotics and much more. In 2018, Bengio received the Turing Award, kind of the Nobel Prize of Computing, alongside Geoffrey Hinton and another colleague.

ChatGPT is also built on the foundation that Bengio helped to build. It's a neural network. But Bengio, instead of celebrating this remarkable achievement in his field, has had quite a different reaction. So in March, a group of very prominent people in tech signed an open letter that said that all AI producers should stop training their systems for at least six months. And you signed that letter. Even Elon Musk, who's not known for his

overweening sense of caution also signed the letter. Please tell me how that letter came about and what was the motivation. We saw the unexpected and rapid rise of the abilities of AI systems like ChatGPT and then GPT-4. We didn't ask to stop every AI research and development and deployment, only those very powerful systems that are of concern to us. I believe there is a

a non-negligible risk that this kind of technology in the short term could disrupt democracies and in the coming years with advances that people are working on could yield to

loss of control of AI, which could have potentially even more catastrophic impacts. So I just spoke to Sam Altman and I asked him about what seems to be the most frightening concern of all, that an AI entity could basically become a sentient creature that could rewrite its own source code

And somehow, as if in a horrifying science fiction movie, break free from human control. Altman assured me this is very unlikely. What do you think? Did he say it was unlikely with the current systems or in the future? The current systems, to be sure. Yes. So I agree with him.

But what about in the future? For the future, yes, there is a real risk. It's a risk we don't understand well enough. In other words, you could see experts like my friend Yann LeCun saying one thing and other experts like Geoff Hinton and I saying the opposite. The scenarios by which bad things can happen haven't been sufficiently discussed, studied. A lot of people talk about AI alignment. In other words, the fact that you may ask a machine to do something and it could act in a different way that could be dangerous.

There is an alignment problem of a different kind between what's good for society and the general well-being of people and what companies are optimizing, which is profit and their constraints of being legal.

It's actually interesting because I find that as an inspiration to better understand what can go wrong with AI. So you can think of corporations as special kind of intelligences that are not quite completely artificial because there are human beings in there, but that can behave in a similar way. We try to bring corporations

back into alignment with what society needs, with all kinds of laws and regulations. And in particular in the case of AI, I think we need regulation, a regulatory kind of framework that's going to be very adaptive because technology moves quickly, science moves quickly. We don't want Congress or parliaments in other countries to be the ones dictating the details. They want to assign some

more professional body, not politicians but experts, to find the best ways to protect the public. Well, how would you and Geoff Hinton and others describe a very bad outcome? What is the scenario that you envision is at least possible and unpredictable and dangerous? Imagine that in a few years...

scientists figure out how to build an AI system that would be autonomous and could be dangerous for humanity because it would have its own goals that may conflict with ours. And maybe we even also have figured out how we can build safe AI that wouldn't behave like this. The problem is we have that choice. And maybe those scientists in the labs would choose the good AI solution.

But there could be somebody anywhere in the world, if they have access to the required compute, which right now isn't that much. So think about ChatGPT. You don't need to retrain it. You just need to give it the right instructions. Anybody can do that. What is the scenario that you see in specific terms as a possibility that you are trying to prevent? There's a project called AutoGPT, which arose just in the last few weeks or months,

that made it possible to turn something that has very little or no agency, like ChatGPT, into something that actually pursues goals that the human would type, creating its own sub-goals to achieve those. It's increasing the

chances of an AI system becoming dangerous for humanity because they are connecting, for example, that system to the internet. It could ask people to do things for money through existing facilities for this. If we had, instead of ChatGPT, something

that's smarter than humans, which may arrive in as few as five years, I don't know, then that could become catastrophic. You've raised the idea of AI being exploited in military use. How should the military use artificial intelligence, if at all? What are the dangers? Well, the danger is first that we're putting a lot of

difficult moral decisions in the hands of machines that may not have the same understanding of what is right and wrong as we do. You may know about the story of the Russian officer who decided not to press on the button in spite of the instructions that would have led to probably catastrophic nuclear exchange because he thought it was wrong. And it was a false alarm, right?

If we build AI systems with agency and autonomy and they can kill because they're weapons, it just makes the likelihood of something really catastrophic happening larger. But then there's just like simple, let's say, Putin wants to destroy Western Europe and take advantage of AI technology to do it in a way that might not be possible otherwise. If AIs are embedded into the

weapons, the military, then it just gets easier to have large-scale dangerous impacts. I have to ask you, you've been working in this field for many years. Why is it suddenly... Decades. Decades. Suddenly everybody's very concerned about it. There have been rumblings about it over the years, not only in the field, but beyond the field. It's exploded, this level of concern. What happened? And why wasn't it foreseen a little earlier?

Well, it was foreseen by some, as you said, kind of a marginal group. So there's the fact that most of us in AI research did not expect that we would get to the level of competence that we seem to see in ChatGPT and GPT-4. We expected something like this, that level to come maybe in 10 or 20 years. Yeah.

and the human level intelligence to come maybe in 50 years. I mean, our horizon for risk just got much shorter. If you're working on a topic, it's more psychologically comfortable to think that this is good for humanity than to think, oh, gee, this could be really destructive. And we have these natural defenses as part of the problem with humans. We're not always rational. Is there a possibility that

AI leads to an even greater disparity, social disparity, income disparity? What prevents a scenario where the benefits of AI are concentrated among a very small slice of the population and vast numbers of people

experience dislocation, unemployment, and actually get poorer? In general, if you think about what AI is, it's just a very powerful tool. If you just think about having very powerful tools, it can clearly be used by people who have power to gain even more power. What prevents that tends to be governments, taxation, for example, and services offered by governments to everyone and so on.

So as to balance things out. Are you concerned that the warnings coming from Geoff Hinton, from Steve Wozniak, come across to some people as the warnings of an old guard complaining about a new generation of scientists? No. I mean, my students are concerned and there are young people who are concerned. I think that the battle that is...

shaping up in a way has a lot of points in common with the concerns and the requirement for policies about climate, climate change. And a lot of young people are fighting to preserve our, you know, the interests of future generations. I think something similar is at stake with AI. Yeah.

One of the confounding things about confronting the climate emergency is the requirement for coordinated international effort. With AI, you not only have that, but you also have, I think it's fair to say, a level of understanding of the basics of AI that's very low. In other words, people can understand dried-up rivers, rising temperatures, rising sea levels, and all the rest.

The complications of AI and predicting those complications are even more complex, don't you think? So I think what may bring countries to this international table that is needed, indeed, is their self-interest in avoiding catastrophic outcomes where everyone loses. So a good analogy is what happened after the Second World War between

mostly the US and the USSR, and then to some extent China, to come up with agreements to reduce the risks of nuclear Armageddon. It's, I think, in good part thanks to these international agreements that it has been okay. So the comparison is to arms control, nuclear arms control? Yes. It's not exactly the same thing, but I think it's a good model. Mr. Bengio, thank you so much. I appreciate your time. My pleasure. Thanks for having me.

Yoshua Bengio is the scientific director of the Montreal Institute for Learning Algorithms. I'm David Remnick, and that's The New Yorker Radio Hour for today. Thanks for listening. I hope you'll join us next time. The New Yorker Radio Hour is a co-production of WNYC Studios and The New Yorker. All right.

Our theme music was composed and performed by Merrill Garbus of Tune-Yards, with additional music by Louis Mitchell. This episode was produced by Max Balton, Brita Green, Adam Howard, Kalalia, Avery Keatley, David Krasnow, Jeffrey Masters, Louis Mitchell, and Ngofeen Mputubwele, with guidance from Emily Botein and assistance from Harrison Keithline, Michael May, David Gable, and Alejandro Teket.

The New Yorker Radio Hour is supported in part by the Charina Endowment Fund.