
Babbage: Sam Altman and Satya Nadella on their vision for AI

2024/1/24

Babbage from The Economist

People
Ludwig Siegele
Sam Altman
Leads OpenAI's pursuit of AGI and superintelligence, redefining the path of AI development and driving the commercialisation and application of AI technology.
Satya Nadella
Nearly a decade into his tenure, he has transformed Microsoft through innovation and partnership and driven a dramatic increase in the company's value.
Topics
Sam Altman: I think the most important improvement to ChatGPT is that the underlying model will keep getting smarter, with capabilities improving across the board; that matters far more than any single new feature or tool. GPT-5 (or whatever it ends up being called) will be smarter and more capable. The specific improvements are hard to predict, but gains in the model's overall ability will bring improvements everywhere, for example in writing code. Voice mode will also improve, but the model's general capability is what counts. I am cautiously optimistic about AGI. I believe we will one day create something that meets the definition of AGI, but it will not appear suddenly; it will be a gradual process. The world may have a brief freakout, but people will soon get on with their lives. AGI will arrive sooner than most people think, yet in its first few years its impact on the world may be smaller than people expect, and greater over the long term. Safety is not a simple binary choice but a continuous process of evaluation and adjustment, weighing risks against benefits and adapting through iterative deployment. We constantly delay or decide not to ship products, which reflects how seriously we take safety. We will always find new things to do; deep human motivations will not go away.

Satya Nadella: This new generation of AI improves non-linearly across the board and can be offered to knowledge workers as a general-purpose technology, dramatically raising productivity. Not since the personal computer have we seen a technology that makes work more efficient and reduces drudgery like this. AI lets knowledge workers complete tasks faster, such as writing code, drafting documents and doing data analysis. Our goal is to make AI a general-purpose technology that gives knowledge workers easy access to expertise, as if they had "expertise at their fingertips". I hope AI will benefit every country, especially developing countries like India, and avoid repeating the Industrial Revolution, when some countries were left on the sidelines. AI can markedly improve healthcare and education, offering everyone in the world personalised medical advice and tutoring. AI will eliminate some jobs and push down some wages, but it will also create new jobs and raise wages for certain roles. Labour markets are highly dynamic; people adapt to change, and by learning new skills with AI they can switch careers more easily.

Zannie Minton-Beddoes: At Davos, AI was the focus of everyone's attention. The arrival of ChatGPT made people realise the enormous potential of generative AI, and companies began thinking about how to use the technology. Public concern about AI coexists with corporate interest in it, which has created a huge wave of attention. In the conversation between Altman and Nadella, both emphasised the across-the-board improvement in AI's capabilities, which was striking. Nadella argues that AI can improve non-linearly in every respect. On AGI, Altman believes its arrival will be a gradual process rather than a sudden one, and people's reaction may be only a brief panic. Personally, I think there is something to Altman's view: we have grown used to AI's constant progress, so when AGI actually arrives we may not be too surprised. But we also need to watch the risks AGI may bring, and governments' capacity to deal with them.

Ludwig Siegele: Altman and Nadella both believe AI will become a general-purpose technology with all sorts of emergent properties. But this is still a vision; for now, AI cannot give useful medical advice or personalised tutoring. AI has enormous potential, but realising these goals requires overcoming many obstacles, such as internet access and incentives. On AGI, I think Altman's predictions are contradictory: he both overstates and understates its impact, depending on which suits him better. Governments are more worried about AGI than they were about social media or the internet, which has prompted early efforts to build regulatory frameworks worldwide, and that is to be welcomed. Still, we need to watch the regulation and development of open-source models, which will shape competition and risk in the AI field.


Shownotes Transcript

Hello, this episode of Babbage is available to listen for free. But if you want to listen every week, you'll need to become an Economist subscriber. For full details, click the link in the show notes or search online for Economist podcasts.

And now, a next-level moment from AT&T business. Say you've sent out a gigantic shipment of pillows, and they need to be there in time for International Sleep Day. You've got AT&T 5G, so you're fully confident. But the vendor isn't responding, and International Sleep Day is tomorrow. Luckily, AT&T 5G lets you deal with any issues with ease, so the pillows will get delivered and everyone can sleep soundly, especially you. AT&T 5G requires a compatible plan and device. Coverage not available everywhere. Learn more at att.com slash 5G network.

The Economist. At the start of this year, people from all over the world gathered at a ski town high in the mountains of the Swiss Alps. There, they shared their thoughts on the big problems, everything from the world's economic outlook to future prospects for trade and business, and of course, the various geopolitical tensions simmering around the world.

The World Economic Forum's conference at Davos has long been a magnet for world leaders and would-be world leaders.

This year, alongside the usual politics and business, there was another topic on delegates' minds. Artificial intelligence. AI, AI, AI this week in Dublin. This is going to be the most transformational moment. How can you implement it with the right safeguards? We'll see immense productivity increases. Some of the most profound social challenges. If you are unlucky, your job is gone.

One of the attendees at Davos last week was Sam Altman, the boss of OpenAI. OpenAI is, of course, the company that built ChatGPT. Over the past 18 months, it's become the breakout star of the AI world, a consistent, innovative leader. Which is why the turmoil at the company a few months ago rocked the world of technology.

A reminder for those who don't follow boardroom battles, OpenAI was thrown into chaos after Sam Altman was suddenly fired by its board. He was then rehired a few days later, after most of the company's staff had threatened to resign in protest. Yet another twist in the Sam Altman saga. He is back at the top of OpenAI. ChatGPT remains a hugely popular product.

And it has a major partner in Microsoft, which is already inserting GPT-4, OpenAI's large language model, into many of its products from Word to Windows. Microsoft's boss, Satya Nadella, has been supportive of Sam Altman and his company. Since their agreement started in 2019, Microsoft has already invested billions of dollars into OpenAI.

As companies and countries grapple with how to reap the benefits of generative AI, whilst also minimising the risks of the machines going rogue, we thought it would be instructive to hear from the technologists who are right at the tip of the spear.

In one of their first interviews together since the drama of 2023, Sam Altman and Satya Nadella sat down with Zannie Minton-Beddoes, The Economist's editor-in-chief at Davos. If you were ever in that room and you thought to yourself, this could actually have consequences that I would not want upon the world, would you then shout stop?

I'm Alok Jha and this is Babbage from The Economist. Today, two of the leading figures in the development of AI outline their vision for the future.

Freshly returned from Davos, joining me for today's show is Zannie Minton-Beddoes, our editor-in-chief. Thanks for joining me, Zannie. I think it's your first time on Babbage, is that right? It is. I'm singularly ill-equipped to appear on your show. I'm a bit nervous about this. You don't have to be nervous at all. You are actually the expert in this. Also with me from Berlin is Ludwig Siegele. He's been on Babbage many times before and he's The Economist's senior editor in charge of AI initiatives. Hi, Ludwig.

Hi, Alok. Yes, I've been on Babbage too many times. Too many times. No, that's never possible. All right. Well, look, we're going to start with Zannie. It was perhaps inevitable, Zannie, that AI was a big topic of conversation at the World Economic Forum in Davos this year. Just paint me a picture of how prominent a stage this was for AI. What were the highlights?

Well, you know, the thing about the World Economic Forum is that there's always something that is the kind of hype subject of the moment. You know, one year, actually several years, it was sustainability; another year I remember it was crypto. And this year it was absolutely AI. Everything was AI obsessed. There were endless panels on AI. They were completely packed. There were lots of companies touting their AI credentials in the windows of the storefronts that they'd taken over to be their offices.

And I'm not surprised, right? It's just over a year since the arrival of ChatGPT kind of exploded into our consciousness and suddenly we realized the dramatic potential of generative AI. And I think this year we're now at the stage where businesses across the board are thinking differently.

What does this mean for us? How can we ensure that we harness this technology? How are we going to do it? And at the same time, in the public consciousness, there's a sort of concern about does this mean the end of jobs? Are we going to have computers that are more intelligent than all of us? And so this combination of public concern and corporates' real interest and concern about it is just the perfect combination for massive hype. And there was at Davos. It was all AI all the time.

Yeah, I completely agree with Zannie. I mean, I've covered technology for more than 20 years and I've never seen a period that busy. So I suppose it's a good time then to talk to people who are right at the forefront of it. And Zannie, you spoke to Sam Altman and Satya Nadella, the bosses of OpenAI and Microsoft, respectively.

What were you hoping to get out of the conversation? So I was, first of all, just fascinated to talk to the two of them. This was the first time the two of them had been on stage, I think, for an extended conversation in a while. They are obviously two people at the center of AI. Sam Altman, you know, founder and CEO of the leading company doing this transformative technology. And then Satya Nadella, CEO of Microsoft.

a company which, you know, depending on the day of the week is or is not the world's most valuable company, but is certainly a company that has taken a bold bet on AI at the center of its future strategy. And I was interested in two areas, one just to see how the two of them got on. It's worth remembering that Sam Altman once called the relationship between OpenAI and Microsoft the best bromance in tech.

And that bromance obviously had a bit of a hiccup in November when the OpenAI board booted him out and then reinstated him again. So there was a little bit of drama. But more importantly, I think these two individuals thinking about where they think the technology is going, what it means for productivity, how to regulate it, all the really big questions. But I think the first thing I really wanted to establish was what was actually kind of happening in the here and now and what should we expect in the next year or so. Sam, let's start with you.

What are the most important capabilities that ChatGPT will develop in the next year? The fundamental thing that I think doesn't get enough attention relative to how important it is, is we're just going to keep turning the crank and the model that underlies this is going to get smarter and smarter.

generally across the board. So more than any one new feature or new capability or new tool, it'll be the GPT-5 or whatever we end up deciding to call that, will be much smarter, much more capable. But be a little bit more specific. I mean, you've talked about multi-modalities, so audio and video. Is that coming? What does that mean for the election and all of those risks? What about reasoning?

So reasoning I put in the category of the model's capability, and that's an important improvement to this general purpose ability to solve tasks for you. And maybe the model can't just write five lines of computer code, but can write 50 or it can write five very difficult lines. That's a big improvement. Yet voice mode is something people have loved, so we're going to do like much better audio as a modality. So there'll be new things like that. But I totally get the push on specifics.

But I really think the magic of this is the generality. Do you know the specifics or is GPT-5 going to surprise you too? Yeah, I mean, one of the, I think, surprises of all of this is how much improving on the general metrics improves it in every specific case. But let's say it'll be much better writing computer code. It'll be much better writing computer code. Okay, I'm now going to turn to you, Satya. You've started introducing generative AI into all of your products.

What are the customers in this room, the many, many, many customers you have going to be able to do in a year's time that they can't do now? The thing about this generation of AI is its ability to generalize and generalize better with capability. We've not seen anything that has that sort of get better at everything capability.

by sort of non-linear ways, right? That's the fascinating thing, whether it's code or whether it's writing an essay or whether it's doing some analytics or what have you. Our focus is to really take this and then say, make it the general purpose technology that's available. For knowledge work productivity,

I would claim since the PC, we have not had sort of the real driver of getting more things done with less drudgery. In fact, my first belief in this was Sam and his team were the ones who came to us and convinced us, saying, hey, this

this thing will be fantastic in writing code. I mean, you think about it, we're a company that was founded as a tools company. And so when they first said that, nobody believed it. So code is the one area where everyone says it's incredibly powerful already. Where else? It started in code, and now it's for every knowledge worker. If I look at inside of Microsoft, right, Bill in 93, I think, gave this speech at Comdex, if I remember right, called information at your fingertips. That was his metaphor of that age.

I think this is about expertise at your fingertips. I like that because that's what's happening. Like one of the workflows which I love a lot are our supply chain folks. They would usually wait till end of the quarter to get insights on what they got wrong because finance will analyze all of their decisions and what have you. Through the quarter now, they're able to essentially summon the expertise of finance, essentially interrogating the model by giving it all sorts of unstructured data.

That productivity driver, I have not seen anything like this since the PC. Since the PC, right. Computer interfaces started off very difficult to use. You were arranging punch cards. Apparently if you dropped them, you had to reorder thousands. Like it took you all night and it was this big thing. Then we had the command line, then we had Windows, then we had touch. And computer interfaces have gotten more and more natural. And now all of a sudden, we can talk to computers.

Like we talk to a person and they can understand us. They can do limited cognition. That's going to get much better. But it is a new way to use a computer. It may eventually become a new operating system. But the fact that so many people are able to use it for productivity in their workflow in so many different ways. People can use it to learn things, change education, healthcare practitioners use it in all these ways. That's the power. And I know it's not as satisfying as saying here's each thing it will do. But it's that it becomes a companion for knowledge work. It becomes a way to use a computer.

So Zannie, how satisfying were those answers from Altman and Nadella? It sounded like you had to push for specifics.

Journalists always push, right? So it's sort of yes and no is the answer. I mean, no, of course, I would have wanted more specifics about what to expect in the next year. But yes, they were satisfying in the sense that what I was struck by was that both of them laid real emphasis on something that is clearly true, which is that it is the across-the-board nature of the improvements in these capabilities that is so striking. And I think Satya Nadella put it very well. He said...

The thing about this technology is that it gets better at everything in non-linear ways. Ludwig, do you agree with Sam Altman's view that products like ChatGPT might end up being a kind of de facto operating system for the digital future? So as more and more AIs exist doing all sorts of different things, the ability of normal humans to interact with them will be mediated by another AI, basically.

Yes, I mean, that's certainly one possible outcome. And you would see lots of different AI services built on top of these big models, which are then de facto operating systems. But it's also a bit self-serving for him to say that. That's what he wants to happen. And that's what also Nadella wants to happen, because if you're the operating system of a technology, you make a lot of money.

And OpenAI is trying to build such a platform. Just recently, they opened the marketplace for what they call GPTs. It's almost like apps in an app store. And so they're trying to become kind of like what Apple's app store is to apps for smaller AI services.

But the reality could also turn out to be different. You could see a much more fragmented system where you have smaller models which serve specific purposes, and it's not going to be kind of the emergence of this intergalactic platform run by OpenAI or whatever company. And Ludwig, what do you think is actually the next stage for this technology of generative AI as far as you can see?

Well, if I knew that, I wouldn't be a journalist anymore. Probably would be an investor. But what I found interesting is Altman and Nadella in different ways talked about the magic of generality, that these systems get better all the time on all levels. And if he avoids going into specifics on that, it's not that he doesn't want to. Probably he doesn't know.

Because the more compute you put in, the more data you train them on, all of a sudden they can do things they couldn't do before. So he probably doesn't really know what's going to happen with GPT-5 or whatever they're going to call it. That said, I hope to learn more from him what he thinks they should do or they could do or what he wants them to do.

And he apparently wasn't willing to do that. Not sure why, but again, they're trying to be the platform. They're trying to build an operating system. So it's perhaps not so surprising that he would emphasize the magic of generality of these models. Well, both Nadella and Altman seem to be adamant that generative AI will be a kind of general purpose technology and there will be all sorts of emergent properties and things that will come from it. And then they've compared it to the advent of the computer or even the early days of the Internet.

And some of the concerns with the current state of generative AI are similar to the deployments of those technologies, that perhaps it won't reach the parts of the world that could benefit most from it. Zannie, I'm wondering, how are tech leaders thinking about that particular problem in your conversations with them?

I think it's not just tech leaders that are thinking about that challenge. I think with every new technology, there is a question about who will benefit? Will those gains be equally distributed? Who will lose out? And I was struck actually last week during the World Economic Forum, the International Monetary Fund, the IMF, came out with a report that said 40% of jobs around the world, they estimated, would be affected by AI.

And that advanced countries had both greater risks of job displacement, but also greater potential benefits. And poor countries, there would be fewer disruptions, but less ability to harness the benefit. And so the IMF concluded that there was a risk that over time, quote, the technology could worsen the inequality among nations.

And I was struck by that when I was talking to Satya Nadella, because I remembered that he had said that it still haunted him that the country of his birth had been bypassed by the Industrial Revolution. And it really stuck with me. You know, he was born, he grew up in India, and that his hope was that this new technology, which was more powerful than the Industrial Revolution, could actually have the opposite impact and be one that countries like India could harness from the very beginning.

We've not had this happen where there has been some general purpose technology whose diffusion happened instantaneously everywhere. The thing that struck me when I went to India in January of 23 and saw a rural farmer use a WhatsApp bot

to ask a question in a vernacular language of a government program around subsidies. And I said, I've never seen anything like this. You know, something developed in the West Coast of the United States, suddenly available to a developer in India that then can be sort of composed with some digital public good being built in India to impact a rural Indian farmer's life.

That is my dream, right? That this time around, yes, I can't predict for certain, but I can tell you it's within our grasp. And here's another thing. Take personalized health advice and personalized tutors. In any place, healthcare and education are most of the government's spend.

You now have the ability to give every student and every citizen of the globe better health advice and a better personalized tutor. No other technology ever. So the potential is really there this time. And the economics also work. And the diffusion is there. So that, to me, is a magical moment. And I hope we as a global community grab onto it.

Ludwig, let's be clear here that what Satya Nadella was just talking about there is still very much a vision. ChatGPT is not yet a personalized tutor and it can't actually give useful health advice as much as we would like that to be the case. No, it can't do that. That said, I mean, a lot of firms are working on these problems. And I don't doubt that in due course we'll have chatbots that are good at answering health questions or very good AI tutors.

But the question is, will that really improve situations in poor countries? And I think, yes, it could potentially. It could be huge. But there are many other barriers or problems that have to be solved before that. And one, of course, is access to the Internet, which is not pervasive in many poor countries. And the other thing is also incentives. I mean, do they want that? And I'm not sure that's going to happen with AI. I mean, AI or these services may be used for entertainment more than asking them what to do with health.

So yes, the potential is huge, but it's not clear to me that it's going to happen quickly. Zannie, I'm wondering from the conversations you were having at Davos, did you get a sense about whether AI access could be as far reaching as it needs to be to achieve the things that Satya Nadella was just talking about?

I think the potential is certainly there. And I think there's going to be a lot of discussion about how to ensure that access. And actually, spoiler alert, this is one of the reasons that we've put the whole issue of what will AI mean for poor countries on the cover this week. I think it's a really, really important question. And so we've got a couple of very interesting articles coming. Obviously, Ludwig is right that there are limitations to what ChatGPT can do right now. But the potential is clearly there, both in health and education.

In the short term, I guess this is going to, as with most technologies, take longer than we think. It's not going to happen by the next Davos. But in the medium to long term, I really do think it can have a huge potential. And then it's really up to whether developing countries can harness the technology, can put in place the means needed to make the most of it. And that's often down to things like institutions. It's down to governments. But the potential is clearly there.

Ludwig, what do you think AI companies, governments and others have learned on the issue of access from the introductions of previous waves of technology such as the internet? Much more has to happen for this huge potential, as Zannie rightly says, to kind of

do its magic. And I'm a bit hesitant to say, oh, yeah, we have AI now and intelligence is free now, and that will change the world. I think there are so many barriers in many countries, in terms of education or what have you, that these also have to be solved. This is where Ludwig is always playing his, you know, role of being the voice of sober reason when I'm getting massively enthusiastic. The same happens in our internal working groups on AI when we're thinking about AI for The Economist, you know, I've got some mad idea and then

Ludwig brings me back down to earth fast. Of course, you're right. There are lots and lots of barriers. People said the same about MOOCs, massive open online courses. There have been lots of moments when people said,

you know, we've reached the technology that will solve the development problem. So I absolutely take your soberness and it's probably right to be sober. Zannie, while we're on access and economics, was there a lot of talk at the conference as well about AI's impact on labour markets? Yes. Although actually, interestingly, I think the conversation was still at the big picture excitement of the technology. But yes, there is obviously a real question about

disruption of jobs. And the IMF, as I mentioned earlier, had this report that said 40% of jobs would be affected globally. And I asked Sam Altman and Satya Nadella about the impact on wages. And Satya put it in a very polite way. He said that some wages would go up and others would be, quote, commoditized, which I guess means go down. Or disappear. Or disappear. But there was actually an interesting other framing that they had, which was

that it was clear that a large and growing number of tasks would be done better by generative AI, but that didn't necessarily translate into jobs. Take a frontline worker. Like my best examples are, we used to have a platform pre-generative AI called Power Platform, right, where if you essentially had Excel-level skills, you could create apps. Now, with just natural language prompting, you can create an app.

So if you're a domain expert in frontline auto, frontline retail, frontline healthcare, you don't have to wait for the IT backlog to get cleared for you to be able to digitize something, automate some workflow. So that is a new task and it's a new job.

Guess what? It also comes with better wage support, after all, because that would have driven productivity up. And so if it's going to drive productivity up for the company they work for, then they should get better wage support. So I do think there will be new jobs. There will be displacement in the labour market, but the labour market is much more dynamic. The adjustments will cause perhaps some wages to go up, some wages to get commoditised perhaps. Here's another thing. Take one of the classic issues we have, which is mid-career transitions. How do I pick up new skills? Guess what? The thing that

that, in fact, this generation of AI does is the learning curve comes down. So we can tackle that a lot more effectively. So I think there'll be more dynamism in people being able to even switch jobs. Sam, do you agree with that? There will be jobs for people? There will be jobs. We always find new things to do. And yet, it does seem somewhat different if AI can have more cognitive power than any of us. I think one thing I say a lot is no one knows what happens next. I think

I think this is like a good modest way to live your life and try to make predictions. And I can't see to the other side of that event horizon with any detail, but

It does seem like the deep human motivations will not go anywhere. This is when people start getting alarmed. Why? That we have no idea. Well, I think that's just... We're going to have some intelligence that is more intelligent than all of us across every... And we have no idea what happens. No, no. I think when we invented computers, we had no idea. Like, one thing I love to do is go back and read about the contemporaneous accounts of technological revolutions at the time. And the expert predictions are just always wrong.

We always find our way through it. We always figure out how to manage it. I think what's important and why we believe in this strategy of iterative deployment of this technology is we need to have these discussions. The world is now paying a lot of attention. The more people that work on this together, the more likely I think we are to get to a good place. And so...

We're very happy this conversation is happening, but the expert predictions about every previous technological revolution have been totally wrong. And you need to be modest about that. You want to have some flexibility in your opinions and have a tight feedback loop with how it's going with the world.

Okay, Zannie, quick question. Are you still alarmed after that exchange? So actually, to be clear, I said people are alarmed. I didn't say I was alarmed. I'm definitely at the non-alarmist end of the spectrum. I think this is still going to take years rather than, you know, jobs gone by the end of the year. And so we will have time to adjust. But having said that,

A number of people who I respect disagree and say that when we get to AGI, however we define it, when computers are better than humans at everything, put it that way for us, a very broad definition, then the world will be very different because everything can be done better by AI. And I'll just give you an example, which was not from this conversation, but it was from a conversation that

Bill Gates had on his podcast with Sam Altman, and I did a panel, another conversation with Bill Gates in Davos, and we spoke about it. And the way he put it, which I thought was wonderful, he said, When the machine says to me, Bill, go play pickleball. I've got malaria eradication. You're just a slow thinker.

And I'm still kind of getting my head around that. So I don't completely discount that. I also don't discount that even if the diffusion of this technology is relatively slow, that governments will screw up how to prepare people for it. Because if you look at what happened with globalization over the last 30 years, it's a similar kind of thing, right? Globalization happened, a large number of jobs were displaced. It was perfectly possible to prepare people for

for future jobs and indeed compensate the losers. And yet governments across the rich world failed to do it well enough to prevent a massive political backlash. So we could mess up again.

But I'm not sure that I'm in the camp that all jobs are going to disappear and we're going to have huge numbers of people sitting around angry with nothing to do. Well, you've brought up the topic of AGI, which is something that causes a lot of alarm, but also excitement. I mean, AGI being artificial general intelligence, which is a goal that a lot of companies in this area have. And it's generally defined as where computers are able to perform

a wide range of intellectual tasks as well as humans, the kind of artificial intelligences you see in science fiction films. In your conversations, Zannie, what was your sense of how OpenAI and Microsoft are approaching that particular challenge?

Sam, certainly relative to things that he had said in the past, was I think really trying to emphasize that AGI would arrive sort of gradually and wouldn't be a sudden, aha, big impact moment. And so it was both more imminent than people thought, but more gradual. The world had like a two-week freakout with GPT-4, right? This changes everything. AGI is coming tomorrow. There are no jobs by the end of the year. Right.

And now people are like, why is it so slow? And I love that. I think that's a great thing about the human spirit, that we always want more and better. And I think that's why we're never going to run out of things to do. And we're never going to run out of desire to create and ways to do useful things for people and feel useful and play silly status games and whatever. But GPT-4 was a big deal in some sense and did not change the world as much as everybody, in their collective meltdown, thought it would.

I believe that someday we will make something that qualifies as an AGI by whatever fuzzy definition you want. The world will have a two-week freakout, and then people will go on with their lives. We are making a tool that is impressive, but humans are going to do their human things, and society has a lot of inertia. And I

I think like we will both invent AGI sooner than most of the world thinks and in those first few years it will change the world much less, and then in the long term it'll change it more. Someone just said the world will only have a two-week freakout when we get to AGI. That's quite a statement to make. It might be longer. It's a directional statement.

Okay, Zannie, that particular exchange from your interview seems to have generated a lot of interest. Sam Altman said that GPT-4 didn't change the world as much as everyone thought. I mean, that's true, but also it's only been a year, so there's lots of time. And there's an oft-quoted aphorism in the tech world, isn't there, that people overestimate the impact of new technology in the short term and underestimate it in the long term. And it felt to me anyway like his answer was kind of an easy way to bat away some serious concerns that people might have about these

rapidly developing technologies. I mean, now that you've had a bit of time to reflect on what Sam Altman said about this two-week freakout, have you had any further thoughts?

You're right. It was quite a statement. When he actually said it, I thought, you know: what, a two-week freakout? And then what? Do we all go back to normal and just carry on? Yeah, but actually, you know, I've been reflecting on that. And at one level, it was a very deft way of amusing the audience, having a clever one-liner and failing to answer the question. So 10 out of 10 on all counts. But as I've thought about it a bit more, actually...

Actually, and I'd be interested to hear what Ludwig thinks on this, but I think he may be onto something because, for example, the AI that is embedded in the technology that we use all the time, we don't really think about that anymore. It's perfectly normal. We just assume that our phones are capable of the things that they're capable of doing now, whereas, you know, 10 years ago, it would have been absolutely extraordinary. So I think he's probably right.

We won't suddenly go from where we are today to AGI overnight. This will be an incremental process with incremental increases in capability. And at some point, some learned person will say, on every possible human capability, the AI now exceeds humans.

The day before that, we'll have been very close to it. And you and I, we won't suddenly think, what's the point of our existence? I mean, we'll say, oh, gosh, wow, we've reached AGI. And then we'll go on. And whether it's a two-week freakout or not, I think he's probably right that we will be so used to near AGI capability that when it's officially AGI, we won't be so surprised.

At the time, I kind of laughed and raised my eyebrows. But actually, you know, now with some days reflection, I think he's onto something. And Ludwig, the easy question for you. How far off is AGI? You know, I have no idea. And I also don't know what it is. That's the best answer to any question I've ever asked. Yeah.

Yeah, on what Zannie just said, yes, it's a bit like boiling the frog. It will be very incremental. At the same time, to compare that to like a two-week freakout, I think that's a bit Pollyannish. I think you're taking him too literally. This is Ludwig, serious

doubter of things and precision, meets Sam Altman. I mean, I think this was a metaphor, the two-week freakout. Okay, somebody has to. But life always goes on with new stuff. But that doesn't mean that we will like the outcome. I think there's a difference between something like the browser and AGI, and the impacts will be much more important. And yes, we've had moral panics before when it comes to technology, like the train. People were supposed to get dizzy watching the train. Didn't happen, of course. But AGI

AGI will have quite significant consequences. And we just talked about one, which means, for example, what happens to the economy kind of if the human work is commoditized, so to speak. So yes, it's kind of a joke, the two-week freakout. But I think this stuff is much more serious. But perhaps again, I'm too German here. No, I think your seriousness is warranted. I think before our listeners think I'm, you know,

horrifically mean to you. I must tell everyone that I'm also half German, so I have some excuse to be able to make these claims. Okay, well, we'll join you both again in just a moment to explore more about how soon to expect artificial general intelligence, or AGI. And we'll also dive into how Sam Altman and Satya Nadella think about the looming regulatory challenges that face generative AI.

First, though, a quick reminder that this is a free episode of Babbage. To listen to us every week, you'll need a subscription. To learn more, just search the web for Economist Podcasts.


Today on Babbage, we're reflecting on an interview that our Editor-in-Chief Zannie Minton-Beddoes hosted with OpenAI's Sam Altman and Microsoft's Satya Nadella at the World Economic Forum recently. With me is Zannie herself and The Economist's in-house AI mastermind Ludwig Siegele.

Zannie, what were the people at Davos saying about the timeline towards artificial general intelligence? You know, I kind of discounted what most people said because there were an awful lot of people who had absolutely no idea holding forth, which is actually why I... Didn't stop them from holding forth, I guess. No, but I don't put too much weight on that. But that is why I asked Sam Altman and Satya Nadella the question, because they are actually two people whose opinions on this I valued and I wanted to ask them specifically how far off they thought it was.

The issue is I don't think anybody agrees anymore what AGI means. And so what I think happens instead is it's just this surprisingly continuous thing where every year, you know, we put out a new model. It's a lot better than the year before. If you hold an iPhone 1, the thing from like 2007 in one hand, and iPhone 15 in the other, they are like unbelievably different. And you're like, wow, this thing was so bad. And at the time, it felt like such a revolution.

But I never remember any one year in there where I was like, the phone went from so bad to so great. And yet when I get to see the whole thing start to finish, it's amazing. And that's kind of what I think about with AI is GPT-2 was very bad. Three was pretty bad. Four is bad. You know, five will be okay. Eventually we'll have a good one. Ludwig, AGI will surely be a bigger moment than Sam Altman.

kind of flippantly made out? Or are we just overthinking everything? We never do. But to be honest, I don't quite understand Altman. So earlier last year, he signed some statement warning that AI could lead to the extinction of humanity. And now he's telling us it's going to lead to a two-week freakout. So I think he's both overhyping and underhyping this thing, depending on what is convenient. I mean, I'd love to have a serious discussion with him about what he actually thinks here and what needs to be done.

But Ludwig, it's quite clear that governments are worried about AGI and AI. What's the sort of thinking around things like regulation and where that should go?

I think governments are worried, and they're certainly more worried than they were about social media or the internet. And I think that's a good thing. I mean, the comparison is imperfect, but I think they should be worried about AGI at least as much as they are worried about nuclear proliferation. I'm actually quite heartened that we see early on kind of the emergence of a global regulatory regime. I mean, it's going to take some time to build it, but the first AI safety institutes are being created. And I think that's pretty good. I mean,

At the same time, there is, of course, the danger that you regulate too early and you kind of stymie innovation and you do too much. But I think at this point, it looks to me it's quite reasonable. And Zannie, where are the companies on this? Are they keen to be regulated? So I think the leading companies are keen to be having this conversation and keen to be part of a regulatory regime, in part because they have the most powerful models, in part, frankly, because it builds barriers to broadening the technology.

Like Ludwig, I am heartened by the fact that there is so much focus so early on in the creation and diffusion of a technology on its safety. For me, the nuclear analogy only goes so far because with nuclear weapons, the difficulty of actually building these things and creating them was enormous, whereas with open source technology,

LLMs, you actually do have a much lower barrier to creating them. The question for me, or I guess there's two. One is, do we have a suitable regulatory regime for the really powerful models, which is still only a handful? They tend to be the non-open source ones. And I think there we are making progress. The second order question is, what about the proliferation of open source models? And are we going to have some kind of accident in

in that area. And I use another nuclear analogy, which is a civil one. I sort of say, are we going to have an AI Three Mile Island? And that would be something bad happening that's not sort of civilization destroying, but bad enough that it scares everybody, understandably, in the case of Three Mile Island, and sort of stymies the evolution of a technology, in this case, in a particular part of the world. We need to think about those two separately. I think the companies are thinking about it. And when I talked to Satya Nadella, he made that very clear.

The good news, if there is one, is that the governments of the world are interested, civil society is interested, academics are interested. So if I look at what the White House EO is or what the UK Safety Summit is, what's happening in Europe, what's happening in Japan, they are going to have a say. Nation states are absolutely going to have a say

on what is the regulation that controls any technology development. Most importantly, what is ready for deployment or not? And so I feel like we will all be subject to those regs, and that's what will govern even this fuzzy definition. Nobody knows how to define AGI. And the people who are good at defining things are people who think about safety, whether it's in car safety or airline safety or health safety.

Ludwig, there's a debate in the field, isn't there, about open source models and how they could play into the questions around safety. Zannie talked about this a little earlier. Talk me through what you see the outlines of that debate are. Yes, open source models. These are models that are freely available and can be modified by whoever has the right expertise.

And that, of course, means that it's easier for these models to fall into the wrong hands or be abused, either to, let's say, build bioweapons if they're good enough, or at least to kind of create lots of deepfakes, disinformation and all of that. And so that's the debate. And lots of security types, they say we need to regulate those in the same way as, or even more strongly than, proprietary models.

Now, to some extent, I buy that argument, but it's only half the story. Because having open source models, for instance, makes AI more transparent. So the proprietary models, the ones provided by OpenAI and Anthropic and Inflection and the like,

They are black boxes, so we don't know how they actually work, which makes it more difficult to assess how dangerous they actually are. Having open source models helps that. We learn more about the technology and perhaps one should also add we have more competition. Just taking that conversation on then, Zannie, where does OpenAI sit on this sort of quite complicated field of deciding how you sort of think about safety versus regulation? That's a very good question, Alok. That's exactly what I asked Sam Altman.

My claim is not that no harm at all will come from the open source models. It is that safety is always a sort of negotiation between different stakeholders. And it's, you know, is it safe enough? What does it mean to be safe enough? Are the upsides worth it? We think of airplanes as generally very safe things, much safer than they used to be. But it doesn't mean no one ever gets injured or dies on an airplane. And I think this is an example of why it's important to put technology out in the world early. By the time we get to an AGI-like thing,

I think there will be very comprehensive and thoughtful regulation about how a system like that gets deployed in the world. And it won't be our decision, which I think is a good thing. There will be international standards. But on the other hand, we can see what happens in some cases where technology gets over-regulated too early. And we've got to look at the risks of AI. And for sure, there are some big ones and small ones. And we need to address those. But it

it would be very easy to just sort of say, we're going to stop progress. And I think given the tremendous upside, what this will do for education, healthcare, pick your category, it'd be a real mistake to stop that. And in the history of the computer industry, open source has played a very important role.

So Ludwig, both Sam Altman and Satya Nadella seem to be confident that good enough regulation will be in place by the time AGI turns up, whenever that is. First of all, do you think that that's the case? And what do you make of the safe enough argument? No, I kind of have to contradict myself here. I'm still quite pessimistic that we'll get to a good regime by the time it really counts.

Because even if these companies like OpenAI or Microsoft and Google argue in favor of regulation, the question is always what type of regulation do they want? How far, for example, do they want to go in terms of transparency, revealing what training data they use and all of that? So I think we'll get to good regulation, quote unquote, only once we have something like a Three Mile Island for AI,

with a case of real damage. So there might have to be something like that to drive people to action. I mean, even if regulation is on its way and things are happening immediately,

It strikes me that there's still quite a lot of power concentrated in some very small number of hands. Zannie, I wonder if you talked to Sam Altman about that issue. It's still up to him in some respects where this goes. You're right. We are now seeing governments talk more, but it is true that right now these decisions are all still in companies' hands. And I pushed Sam Altman

a bit on this because it struck me that there was an enormous responsibility for people like him. - If you were ever in that room and you thought to yourself, this is getting dangerous and this could actually have consequences that I would not want upon the world, would you then shout stop and would you stop? - We delay things or decide not to ship things all the time. From when we finished GPT-4 to when we shipped it, I forget now, but I think it was like eight months, something like that, maybe seven in that range.

I think people think of safety as this binary thing. And in practice, it's like we built this thing. Let's study it. Let's take enough time. Let's figure out new research to do safety mitigations. So I think that the hard question for me is not-- there's no one big magic red button we have that blows up the data center, which I think some people assume exists. But the non-cartoon version of that is how you decide when something is safe enough.

How you decide how to predict what the risks of the future are going to be, how to mitigate those. Also, when you need to relax them, I mean, because you're stopping good use cases. So I think it's not this binary go-stop decision. It is the many little decisions along the way about allow this, don't allow this, set this new value here, things like that.

Zannie, what were you thinking when Sam Altman was talking just there? There's quite a lot that could go wrong. And again, it's up to a small group of individuals to decide how far to go. When he was actually saying that, I was looking at him thinking, there is this man right in front of me who has got the mental image of a big red button. It was a kind of, you know, James Bond-y image going through my head, blowing up the data centre, as he put it. More seriously, I was comforted

by his emphasis that they are incremental and that there is a benefit, I think, to the step forward, test, see what the dangers are, see what the benefits are, another step forward. I also think that it's all but certain that there will be bad things that come from this. But the reason I'm basically optimistic is that I think the potential benefits that could be harnessed far, far outweigh that if we do it right.

But I guess my main takeaway from that conversation was that the progress of this technology is actually unstoppable. And so the question is not, should it happen? Will it happen? Will we do it? But how do we do it? And there I'm a little alarmed that it seems to be a handful of people who are essentially shaping the future of humanity. But I think...

I think they take that responsibility seriously and there will be more and more people. And on balance, I'm very much a glass half full person on this. I think the potential is really extraordinary and I'm excited to, you know, I'm still young enough. I hope that I will see the benefits of all of this. Ludwig, just give us listeners a guide of what to look out for the next year or so of what we should be thinking about as this technology keeps going.

Personally, I think what people should look out for is what happens with open source models. That's a really important thing. If these get over-regulated, that will limit competition and that would be a bad thing. On the other hand, if you really let them loose, that could increase risks. And so you would want a much more diverse ecosystem of models rather than something controlled by just a few very, very big companies in Silicon Valley? Yes, definitely. We'd like this to be more fragmented, more varied and more competitive. Okay.

Okay, Ludwig, Zannie, thank you both very much. Now, before I let you go, I've got to ask you, from our recent content on The Economist, what have you watched or listened to or read that you would like to recommend to listeners? Ludwig, why don't you go first? If I may, I'm going to plug my last piece, which is a profile of ASML, a maker of chipmaking gear and Europe's most valuable tech firm.

I think it's proof that Europe, after all, despite all the criticism, is able to do big tech. You absolutely can recommend that. Thank you, Ludwig. And Zannie? Well, I would second Ludwig's recommendation of his own work. It's an excellent piece. Do read it. And it's exactly the kind of detailed inside

reporting that you would come to expect from Ludwig. I'm actually going to highlight another package, which is the Trump and business package, which was the package that was on the cover of The Economist last week. Henry Tricks, our Schumpeter columnist, spent many weeks talking to business leaders in the US about how they were thinking about the prospect of a Trump presidency. And it perfectly captured

the mood of the CEOs that I spoke to in Davos. He got it absolutely spot on and it's definitely worth reading. He got it absolutely right. Okay, Zannie, Ludwig, thank you both very much for your time today. Thank you. It's great to be here. Thanks, Alok.

If you're an Economist subscriber, you can find all those articles that Zannie and Ludwig mentioned on your Economist app, along with Zannie's conversation with Sam Altman and Satya Nadella in full, where they discuss the commercialisation of AI technologies and reflect on Microsoft and OpenAI's relationship after a testy end to 2023. It's definitely worth a watch.

And don't forget that if you're not yet a subscriber, but you want to listen to Babbage every week or any of our other specialist weekly shows for that matter, click the link in the show notes or search the web for Economist podcasts. Babbage this week was produced by Jason Hoskin with mixing and sound design by Nico Rofast. The executive producer was Marguerite Howell. I'm Alok Jha and in London, this is The Economist.
