
AI Hype and Skepticism: Economist Paul Romer

2024/5/14

Me, Myself, and AI

People
Paul Romer
Shervin Khodabandeh
Topics
Paul Romer: I'm cautious about the hype around artificial intelligence. I think the real revolution we're living through right now is the advance of sensors and devices, which give people capabilities they never had before. The focus of AI should be on augmenting human abilities, not fully replacing humans. Take autonomous driving: it has not been a successful application of AI, because full automation is very hard and the public has concerns about the safety of self-driving cars. We should focus on using AI to improve education and raise people's skill levels so they can cope with the challenges that technological progress brings. Skill-building on the job matters as much as schooling, and perhaps more. Governments can use policy tools such as taxes to subsidize employment or moderate income gaps in response to the employment challenges AI creates. If democracy and the rule of law come under threat, slowing technology down may be necessary, but the priority should be sharing the gains from technological progress fairly and avoiding deeper inequality. Sam Ransbotham & Shervin Khodabandeh: We hold different views on where AI is headed. Shervin believes AI is advancing quickly and may surpass human abilities in more domains within a few years, changing how people work. Sam argues that although AI has made rapid progress in some areas, relying too heavily on early rapid progress is dangerous, because in many cases fully replacing humans remains very difficult. We should pay attention to how AI can improve education, especially in measuring learning outcomes, which is critical to AI's social acceptance.


Chapters
Paul Romer discusses his skepticism about AI hype and the potential for AI to remove humans from the loop, emphasizing the importance of keeping humans in the loop and enhancing human capabilities with technology.

Transcript


Today, we're airing an episode produced by our friends at the Modern CTO Podcast, who were kind enough to have me on recently as a guest. We talked about the rise of generative AI, what it means to be successful with technology, and some considerations for leaders to think about as they shepherd technology implementation efforts. Find the Modern CTO Podcast on Apple Podcasts, Spotify, or wherever you get your podcasts.

What does a Nobel Prize-winning economist think the future of AI holds? Find out on today's episode. I'm economist Paul Romer, and you're listening to Me, Myself, and AI. Welcome to Me, Myself, and AI, a podcast on artificial intelligence and business. Each episode, we introduce you to someone innovating with AI. I'm Sam Ransbotham, professor of analytics at Boston College. I'm also the AI and business strategy guest editor at MIT Sloan Management Review.

And I'm Shervin Khodabandeh, senior partner with BCG and one of the leaders of our AI business. Together, MIT SMR and BCG have been researching and publishing on AI since 2017, interviewing hundreds of practitioners and surveying thousands of companies on what it takes to build, deploy, and scale AI capabilities and really transform the way organizations operate.

Hi, everyone. Today, Shervin and I have a fun treat for you. We're joined by Paul Romer, winner of the 2018 Nobel Prize in Economics. He's a professor at Boston College and directs its new Center for the Economics of Ideas. Paul's got a fascinating background, including time as the chief economist at the World Bank, founding multiple companies, and contributing avidly to public policy discussions everywhere.

He's really at the forefront of thinking about how the economics of ideas differs from the traditional economics of scarce physical objects.

And beyond that, he's just kind of interesting and a nice person. So we're thrilled to have you with us today. Thanks for taking the time to talk with us. Well, I'm glad to be here. It's always fun to talk. All right, let's go ahead and start off with the elephant in the room. There is a whole lot of talk about artificial intelligence right now. If we think about the spectrum of technologies and how they'll affect society, you know, we might have like wheel and fire and electricity and steam engine on one side and maybe transistors over there.

On the other hand, we have something like Segways and Clippies. Where are we in this spectrum with artificial intelligence, do you think? Let me start by just repeating something I said in class yesterday, because my students are very excited about AI.

I'm a little more skeptical. I think there's a lot that will be possible because of AI, but I think people are buying a little bit too much of the hype and they're losing perspective. I told them that I don't think AI is actually the big revolution that we're living through right now. The real revolution, I think, is stuff we're seeing coming out of the war in Ukraine. And the nature of that revolution is that sensors and devices are

giving people the ability to do things that they could never do before. So the problem with the way most people are framing AI is they're thinking about autonomous vehicles where you're

taking the human out of the loop. The technology is the replacement for the human. That is not working that well. The autonomous vehicles were supposed to be the killer application of AI, and it's turning out to be a bust. There's a reason Apple just canceled its car project.

But if you go back to using technology to enhance what people can do, you have an interface that lets a human interact with data from a lot of sensors and software that then translates what the human does in the interface into things that are happening out in the world. Incredible things are going to come out of that. But this kind of just turn it over to the machine and let go, I think that's going to turn out to be very disappointing. Yeah.

Part of the way I tried to explain to the students how big this revolution is, if you look at an organization like the Marines, the Marines recognized a while ago, the tank is just history. Weapons like the tank are not going to be important going forward. And they're thinking about things like electric motorcycles and portable missiles.

The Air Force just canceled a big project that they had where they were building helicopters. They've decided the same thing. Helicopter just is not going to work in this new world of sensors and devices. So very big things are happening. I still think that this idea of you just train the machine, use unsupervised learning, it just gets really smart and it does things for us. I just don't think that's where the action is going to be. So I agree with you, Paul, that...

today you can't hand it over to the machine and expect it to take over. And I do also believe that we humans have this amazing ability to make everything into complete black and white. It's either AI or nothing, or human or nothing. Totally aligned with you there. You also made the point that

there is a real revolution going on, which is the proliferation of information and data, and just our ability in instrumentation, all of the measurement. Where I might disagree with you is that when, Sam, you started a parallel to other technologies, you know, the wheel and the electric motor and all those things were disruptions, and disruptions

changed the way of working. They changed how humans maybe worked in a factory, like the electric motor changed actually how people worked, whether all aligned with one main shaft or in departments or compartments. What I would say is AI has begun to do that. And the reason I believe that is if I look at

things that were uniquely human, what we would call cognitive abilities, whether it's vision or logic or creating images and things like that. We're seeing evidence that AI is doing those.

We're also seeing evidence that AI is doing them now much faster than it used to do before. So whereas it maybe took 10 years for computer vision to surpass human ability, it took maybe one year or two. Of course, it built on decades of research, but since its introduction, it maybe took a couple of years for AI to be able to summarize books and text or create images and content.

So I guess where I would debate what you're saying is, don't you think that if you play forward this evolution, maybe another three or five or ten years...

that there will be a world where AI will do much more of the things that humans had a monopoly on. Yeah. So to recap, and it's good to disagree. I mean, this is how we academics make our living: you disagree with people, and it's the conversation which converges towards some notion of truth.

So I'm arguing the way it's always been, and in fact, the way it will always be, is that you keep the human in the loop and the tools make the human more powerful. Whereas you're saying, no, we're starting to see domains where you don't even need the human in the loop, the machines take over.

No, actually, I would agree with you that we always need a human in the loop. Like, that part, I agree with you. But I also would say that we need a lot less of a human in the loop. Therefore, that human could do a lot more other things, whatever those things might be. But I actually agree with you that you cannot have AI completely replace humans. Like, on that, I agree with you. But I...

But I also mean this not for, like, ethical reasons or to kind of supervise, but just to get the job done. I think you're going to need the human in the loop. And so I guess let me try and restate the distinction: you're thinking there are things that people could just offload. We let the machine do it and the human can just build on that. There are some places where we're close to that.

I think one where I've actually used it myself is Whisper, an AI that will give you a text transcript of an audio recording. I don't know, maybe you produce text from this podcast. The machine is pretty good at that. There's still mistakes. I still have to fix up those transcripts, but they're pretty close to being things you could just turn loose and not participate in.
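For readers who want to see what that workflow looks like, here is a minimal sketch using the open-source openai-whisper Python package; the model size and audio file name are illustrative assumptions, not details from the conversation.

```python
# Minimal sketch: machine transcription with the open-source openai-whisper package.
# The model size and audio file name are assumptions for illustration only.
import whisper

model = whisper.load_model("base")        # smaller models are faster; larger ones are more accurate
result = model.transcribe("episode.mp3")  # hypothetical audio file
print(result["text"])                     # raw transcript; as noted above, it still needs a human pass
```

As Romer notes, the output is close to usable but not error-free, so a human editing pass typically remains part of the loop.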

But they still make mistakes, and so I still have to do a few fixes there. And I think that's still going to be a lot of the experience. And I think that what we've seen repeatedly is that very rapid progress early on makes people think, oh, we're going to very soon get to point X.

We fail to anticipate how quickly the diminishing returns or the extra effort kicks in. Autonomous vehicles made rapid progress initially. Now they're finding that the few rare cases that come up that are very difficult to anticipate are very hard to solve. And so I think we've got to be careful about this extrapolation from rapid progress. And we've got to look and say,

Where are the cases where we've actually gotten all the way to the point where we've just taken the human out of the loop? And if it's hard to get there in most cases, we're going to make systematic mistakes if we're betting based on the high rate of initial progress.

Yeah, I totally agree with you. And I think that there might be just even no-fly zones altogether on areas where this expectation that AI is 100% removing human is a false expectation. But let me drill in on this. Do you think that autonomous vehicles are going to work? Like, is there money to be made investing in autonomous vehicles? Well, I don't know the full economics of it, right? But from a capability perspective, I would say that there is a future for

Maybe it's in 10 years, maybe it's in five. It's not in one or two, but not in 50. I think there is a future where there will be autonomous vehicles that in aggregate would make far fewer errors and cause far fewer fatalities than human drivers do. Yeah, sure. And that's an important point. Our benchmark isn't perfect humans. We had 43,000 deaths last year in the U.S. from human drivers.

One way to think about this would be to separate out the problem. For example, think about the difference between long-haul trucking and driving around narrow Boston streets. It's much easier for me to see that we'd make progress on long-haul trucking first. We talk about these problems in big clumps, but we just don't have to.

I think I'm still a lot more skeptical than you about the autonomous vehicles. I think we'll actually create some new domains where semi-autonomous or fully autonomous vehicles can operate, like tractors. You know, John Deere is pursuing this. But I think it's revealing that Apple's just decided there's no money to be made in this line of investment after investing a bunch of money.

And I also think it's important to remember that what matters here is the social acceptability of these technologies. It isn't just whether somebody can argue in statistical terms that this is safer than what we usually see with people. Look at airplanes, for example.

You could argue that we're making airplane travel way too safe because it's much more dangerous to ride on a bus or take a train. But it's very clear the public does not want aircraft travel to be more dangerous. So we have to just accept that even if, on average, the autonomous vehicles are, say, safer than people, human voters are not going to accept machines that kill people.

Or the same could be said in medicine, right? I mean, there are examples of AI in radiology where it is performing better than radiologists. I would still like a real radiologist and a real doctor. But, Shervin, you want the doctor because you live in...

Los Angeles, where you've got access to great medical facilities, and so do we in Boston. If we had this conversation with some different people, we might hear a different perspective. Yeah, that's a good point, too. But the question for me is when I think about most of what humans do, when I think about

the rest of everything that people do in their regular jobs, right? Which is sort through documents, summarize things, search knowledge. And more and more of those things can arguably be done better with AI. Yeah. I think my answer is, well, let's take it one step at a time. Let's start with the things where I think we can make some adjustments, say, manufacturing.

But watching that alone may not be enough. Then I think there are tools we could use. You could, for example, use the tax system to subsidize employment of workers.

say, manufacturing workers. So the trade-off for any one firm might be, okay, well, I can get subsidized workers, or I can just pay the full freight. Or you could tax higher-income workers, subsidize the lower-income workers. Or you could tax the machines or tax the transistors, but use the revenues to then, say, subsidize the employment of the workers. Do you think one of these tools could be or should be in regulation of...

And look, I mean, I'm hardly the person to advocate something against innovation, right? If you just look at like my own, like passion and my own career. And so I don't mean it in a way of completely stifling innovation. But do you think that there is a time and place for governments to play a much heavier, stronger sort of role in innovation?

No-fly zones with AI, things that you can and cannot do. I mean, not that different from what we did with some of the genetic sciences, right? There are certain things we just don't allow people to do, even though we can do them. Right, right. Let me get at what I think is part of your question, but even sharpen it.

Many people... Like, I was just on this idea of allowing the globalization of ideas and the sharing of ideas; I was saying we're still committed to the discovery of new ideas and to innovation. Other people are saying, well, we not only want to limit how you turn those ideas into physical products, we may even want to steer innovation in a particular direction, like steer it away from some things and toward other things. But we might even get to a point where we say,

No, we just want to slow the innovation down. We want to slow down progress because it's just too hard to cope with the stresses it will cause. I haven't gotten to that point yet where I say, as a voter, I'm ready to slow things down. But I can see the possibility of coming to that conclusion. If you think the choice is, well, we're going to lose democracy and rule of law,

or we're going to slow down technology. I wouldn't have much trouble making that choice. I don't think we're there. That's not the first thing we should try, but I think we should think carefully about what our priorities are. And I don't think technology for its own sake is the be-all and end-all. I mean, I like the idea of pausing, and it's always appealing. I mean, I have...

teenage kids, sure, and as with teenage kids, the idea of pausing and enjoying that period longer seems great, but I don't know how realistic that is. You know, my background was working with the United Nations and the weapons inspectors in the weapons of mass destruction phase. And

We've done a very good job of limiting that. We haven't had any big explosions since 1945. So, you know, but that had physical goods to it. It had a physical thing that you could control. And the same thing is true with, for example, food safety. We have inspectors. We have supply chain regulation. We have things we can regulate. But do we have the same tools that we had in prior iterations of technology, like biotech?

Yeah, I was thinking the same thing before Sam spoke, which is that even if I agreed, well, things have gotten so serious, I'm ready to try and slow down discovery, you have to ask: how? Is it even feasible for us to slow down? And that's going to be a real constraint. So some of your ideas here about jobs seem like they depend on the idea that, oh, there's still going to be this Zeno's paradox of approaching

full automation. And as long as that exists, we'll still have a human role. So that's a pretty appealing argument as a human. I mean, I like that idea. Yeah. But if I go to this older idea of the race between education and technology, I think one of the things we have to keep in mind is that

We could slow down the technology, but we could also speed up education. And we should think really hard about what do we do to keep raising skill levels. Traditionally, we did that through things like extending the period of life that somebody spends in school. That's getting harder and harder to do. We might get better at getting more productivity out of the years that people spend in school. But a lot of people put work into that, and we haven't made a lot of progress there either.

The point that I got focused on at the World Bank was that a huge amount of skill acquisition actually comes on the job.

If you look at a typical society, it may produce about as much skill through learning on the job as it produces in its school system. The average amount of skill produced per year by one person in school is higher than the amount produced per year by one person at work. You don't learn as much when you work, but there are a lot more people working than in school. So you can end up producing as much human capital on the job.
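To make that aggregate comparison concrete, here is a tiny back-of-the-envelope sketch; every number in it is an invented placeholder chosen only to illustrate the per-person versus population effect being described, not an estimate from the episode or from Romer's work.

```python
# Back-of-the-envelope sketch of the point above: per-person learning is higher in school,
# but far more people are working, so aggregate skill production can be comparable.
# All numbers are invented placeholders for illustration only.
skill_per_student_year = 1.00   # arbitrary units of skill gained per person-year in school
skill_per_worker_year = 0.25    # assume on-the-job learning per person-year is lower
students = 50_000_000           # hypothetical number of people in school
workers = 160_000_000           # hypothetical number of people at work

school_total = skill_per_student_year * students
job_total = skill_per_worker_year * workers
print(school_total, job_total)  # roughly the same order of magnitude
```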

And I think we should be paying much more attention to the potential for jobs to actually enhance skill. And this is actually one of my biggest concerns about some of the applications of AI right now. If you had Sal Khan on your podcast, you know who he is from Khan Academy.

I haven't spoken to him ever, but I know he's an optimist about how AI might actually be able to improve the rate of learning. We did have Duolingo. Yeah. Duolingo was a great example of exactly that. Yeah, they're another one. And they've kind of delivered, I think, on that. So those are the people who are optimistic about AI who I frankly put some stock in what they're saying. And if we took that...

seriously and said that the only way this will be socially acceptable is if we do a better job of educating. We better put a lot of resources in to figure out how to use AI to improve education and to measure so we know that we're really improving. And if we did that, we might come out okay.

We have to be careful because we're university professors here. But, yeah, the idea of this micro learning is a big deal. We've somehow gotten wrapped up in this idea that a 15-week semester is the magic unit of time and three hours per week is magic. And there's nothing that says that that's the case. Yeah. Yep.

My wife is a big user of Duolingo, and she keeps learning another new language, and it seems to work for her. But I'd love to have some more evidence about how Duolingo or how Khan Academy can work. Right now, the big successes in those areas seem to be in enhancing opportunities for people who are already pretty good at learning languages or learning skills. So what would really be great is to see how we use AI to

help the bottom half of the class in school. But maybe that's the optimistic message to come out of this. It's a little bit like what I was saying: instead of trying to slow down technology, let's point it in a better direction.

I'm really impressed with what the military is doing in trying to understand what you can do with these technologies to meet their mission. If we were as serious about using AI to improve education as the military is... Part of what I like about the military is they know they can't survive on hype. It's really got to deliver or they're going to fail. But if we were that serious about using AI...

and developing it to help us in education, then we might actually end up with the better technology and benefits that are more widely shared. And now the question is, how can we use the capability

and direct it in that way. And I think what I'm taking away from this conversation is it's just not going to happen automatically. We need to, because automatically what would happen is probably what you said, which is more efficiency, think more widgets per hour, fewer skills. That's probably a given. And more inequality. Yep. It's interesting as I listened to you, I

used to say that I was the most optimistic economist I knew. Because back in the 1980s, as we were coming out of the inflation of the 70s and limits to growth and just this doomster kind of mindset, I was saying, no, look, technology can go like gangbusters. And I was kind of right about that. What I didn't anticipate, though, is that it could also, if we didn't take the right supporting measures, could lead to a lot of inequality. Right.

And I'm really disturbed by the degree to which US society has both benefited from very rapid technology, but has opened up these big growing inequality gaps. And it's very hard to get at this with just the straight income inequality data. But you look at life expectancy. People who are just high school educated

are not increasing their life expectancy the way people who are college educated have. They're even suffering real decreases in life expectancy. And so life is really not turning out as well for them. I really feel like the failure was not we didn't have enough technological development, but we didn't share the gains widely enough. And we've got to work hard at figuring out how to do a better job with that going forward. Yeah, well said.

We close this show by asking five questions. And our first question is usually, what's the biggest opportunity with AI? I think you've just mentioned learning, so we'll take that as your first answer. But what's the biggest misconception about AI that people have? Well, I think this idea that AI can write software. The idea that it can write code, I just think that's claiming way too much. What's the first career you wanted to do?

Well, I wanted to be a physicist. I switched from physics as an undergrad to economics because I thought physics would be too hard. And it was also a time when the U.S. was closing down the space program and there just weren't as many jobs in physics and so forth. But for most of my life, I'd been saying to myself that I wanted to be a software developer. And now I've had a chance to play at that for five years. Maybe, gosh, maybe I could try and be a physicist in another five years.

When are we using too much AI? When do we need less AI?

Well, you and I have talked about this, but I think we've got to make documentation easier for students to access, particularly the Python documentation. Because if it were easier, a little bit more responsive, I think they'd spend more time checking out the documentation. And they'd learn a lot from that. Just asking one-off queries like, how do I do this? How do I do that? I don't think they're seeing the big picture. So we're doing too many ChatGPT-type questions about Python and not enough

students reading good documentation. Yeah, just for the backstory: I spend a lot of time arguing about Python and how to help people get sharper on Python. What's one thing you wish the AI could do that it can't? Well, I wish it could help me manage my time better.

I think a lot of what we learn is not so much facts we retain, but we learn to keep going even when things are going badly. Like we're hitting a wall, we're stuck, and how do you keep going on things? So I wish AI could help me almost like a coach. I wish I had an AI coach that helped me know, because it's a very subtle decision. Sometimes you got to just drop it. It's not going to work. Give up on that path. Go do something else. I was going to ask, would you listen to it? Yeah.

If I controlled it, if I could control it, yeah, maybe. Exactly. Yeah, I mean, I think that's part of the big, we didn't even get into that, but the whole thing about the domination by the tech giants here. Paul, it's been great. I know we're running short on time here. It's been great talking to you. Thanks for taking the time to talk with us today.

Well, maybe we should make this an annual thing. Have me back again in a year and we can say, okay, let's update. Yeah, we'd love that. Which direction have things gone? Have they gone the way we thought or what was new? Or at the rate it's going, maybe it should be semi-annual. Yeah, maybe so. Thanks for joining us today. Next time, Sam and I meet Mario Rodriguez at GitHub. Talk to you then.

Thanks for listening to Me, Myself, and AI. We believe, like you, that the conversation about AI implementation doesn't start and stop with this podcast. That's why we've created a group on LinkedIn specifically for listeners like you. It's called AI for Leaders. And if you join us, you can chat with show creators and hosts, ask your own questions, share your insights, learn more about AI,

and gain access to valuable resources about AI implementation from MIT SMR and BCG. You can access it by visiting mitsmr.com forward slash AI for Leaders. We'll put that link in the show notes, and we hope to see you there.