
How Microsoft’s AI Chief Defines ‘Humanist Super Intelligence’

2025/7/2

WSJ’s The Future of Everything

People

Christopher Mims
Mustafa Suleyman
Tim Higgins: an influential technology and business journalist with a particular focus on the intersection of the tech industry and politics.

Topics
Mustafa Suleyman: As the head of Microsoft AI, I believe Microsoft's strategy in AI is to maintain a close partnership with OpenAI and other partners while also ensuring our own independence in AI technology. Our collaboration with OpenAI has been very successful, and it will continue. However, as a company worth trillions of dollars, Microsoft must be self-sufficient in AI in order to better serve our customers and our business. That means building our own AI models and technology to meet the needs of different products and use cases. Of course, this doesn't mean competing with OpenAI; rather, on the foundation of our partnership, we want to jointly advance the development and application of AI technology.


Transcript


Endless wait times, canned responses, weird robot voices. We value your business. Please hold. Forget the old way of doing customer service. With Google Cloud, you can make every customer interaction feel personal. Our AI agents can follow a problem, provide different solutions, and move with the flow of conversation. With help from Google Cloud and Gemini models, you can resolve issues faster. And who wouldn't want that?

So forget the old way of doing things. Learn how to make every customer a happy one at cloud.google.com slash AI.

Mims, we have one of the people who's smack in the middle of the biggest tech story going on right now. Right. Mustafa Suleyman co-founded DeepMind. He was at Google. Now he's at Microsoft and in charge of bringing AI and its partnership with OpenAI to consumers through Windows, through Copilot, through all of Microsoft's products.

Yeah, it's a huge role. And it's also one that is complicated, navigating a very unusual relationship with OpenAI, the darling of AI startups and a company with its own huge ambitions, and we're seeing it play out in real time. We get into that and many more topics in this fast-moving AI race. That's next.

A quick note before we get into the episode. News Corp, owner of The Wall Street Journal, has a content licensing partnership with OpenAI. AI is always hot in tech, but now is a particularly fraught and exciting time for the whole industry and for today's guest, Microsoft AI chief Mustafa Suleyman. Microsoft was early to the generative AI boom and made its first $1 billion investment in OpenAI in 2019.

Fast forward to the present, nearly $14 billion later, our colleagues at the Journal have reported that Microsoft is now at loggerheads with OpenAI, its primary partner in AI.

We talked to Suleyman about that drama, probably one of the biggest dramas in the tech industry right now, and why Suleyman says Microsoft needs its own AI separate from ChatGPT. "As much as we are a partnership-based company that is a platform of platforms, we also have to be AI self-sufficient in many ways, too."

We get into all of that and a great deal more in today's conversation. Suleyman is particularly thoughtful about the impact of AI on warfare and on what AI is really for. "I think that the onus is on us to design forms of technology that really serve humanity and work for humanity and are designed with that intent in mind. Like, to me, the purpose of technology is to drive progress in our civilization, to reduce suffering."

Suleyman believes that rather than replacing us, AI will become our best buddy, or at least our closest confidant and most trusted advisor. Making that happen will require that we give AI access to as much of our data as possible, and that will require trust. "Your AI is going to learn the sort of essence of your history, of your data. And that in itself will still be personally identifying. So it will be valuable and personalized, but it won't necessarily need the full raw form."

From The Wall Street Journal, I'm Christopher Mims. And I'm Tim Higgins. This is Bold Names, where you'll hear from the leaders of the bold name companies featured in the pages of The Wall Street Journal. Today we ask, in a world in which AI will be capable of doing pretty much everything we do, what's left for the humans?

Mustafa, welcome. You've talked a lot about how everybody is going to have their own personal, customized AI assistant. And I'm curious, why do you think that's where the technology should head?

You know, I think every new wave of technology is essentially an interface, a translation layer that enables you to interact with, learn from, and then take action in some environment. You know, even a book, for example, is kind of like an interface layer. And I think as we've added new capabilities over the many centuries, they've made the interaction layer more sort of dynamic, more personalized. And this is kind of the apex of that trajectory. Your AI companion is going to become an interface between you and the digital world, helping you book things, buy things, plan, learn, act, connect with everybody else. And naturally, because technologies are so powerful now, they allow this very personalized, adaptive experience. I mean, just as we don't really watch TV anymore that broadcasts the same TV channels to everyone sitting in the living room, we have this personalized feed of content in TikTok or Instagram or elsewhere. Your AI is going to feel very personalized as well.

And so it's as much a prediction about where technology is naturally likely to head as it is a design intent.

I love the vision here. And, you know, personally, I would love to have help with everything you just mentioned, right? Like my everyday life. I'd love for an AI assistant to really know me to that level. But if we eliminate all the things that we sort of are already responsible for, what does that leave for us as humans, right? Does it just leave the really hard stuff? Because we've taken away all of the, you know, as programmers call it, yak shaving. But of course, some of us secretly enjoy that.

Yak shaving. I have to say, I don't even know what that is. That's cool. What is yak shaving?

Yeah, it's the little stuff. Like when a developer is like, I'm going to refactor this unit test instead of actually, like, architecting the program I'm supposed to be building right now. That's yak shaving. The boring, mundane chores of life, right? Or working. I see. I see.

Look, I don't really see technology working like that. I think technology mostly creates more opportunities to do more things in greater variety. And yes, some of those things fall away, but actually very few of them. I mean, we still run calculations in Excel, even though we don't use a calculator anymore. I'm sure you guys are writing even more now that you have your iPad to work on the plane, or you're able to dictate when you're walking or driving in the car. It's additive rather than reductive.

Now, you could argue it's not always going to be like that. And just because it has mostly been like that doesn't mean it will. I think that the onus is on us to design forms of technology that really serve humanity and work for humanity and are designed with that intent in mind. Like, to me, the purpose of technology is to drive progress in our civilization, to reduce suffering. I mean, it sounds super cheesy, and it's, like, very embarrassing, because it's a classic Silicon Valley cliche, but I feel like I'm allowed to do that. I own that, because I've been saying it since 2010, and that's just basically my background. But this is what I believe in. And that, I think, is, you know, the job that I'm trying to do: to try and articulate what it would be like to have a co-pilot that was truly aligned to your interests, on your team, in your corner, advocating for you, pushing you at times, learning from you, learning with you. To me, that's the kind of superintelligence that we really want and that we're certainly pursuing here at Microsoft AI.

I'm curious, in a world where everyone from Google to Apple seems to be trying to come up with an AI personal assistant...

It seems like a big part of your strategy at Microsoft is to differentiate by the personality of your AI chat. Maybe personality engineering, I think, is what I've heard you call it. What does that look like in the day-to-day for a user?

It just means being very sensitive to the details and knowing that a little hesitation in the AI's response actually makes it feel somewhat more familiar and somewhat more trusted. A little um or aha in the voice is actually a very powerful cue that makes it not feel like it's being regurgitated at you.

Like a robot.

Yeah, like a robot. And, you know, that's the beauty of this new digital clay that we have. We get to sculpt new forms that haven't existed before. And, you know, in technology, we've got this idea of skeuomorphism, where we basically just copy the physical analog object straight away. So, like, you know, when they invented contacts database software 30 years ago, it had a Filofax effect and a ring binder, and you could flick through digital cards and stuff. And so it always starts a little bit simplistic, you know, 'cause we sort of just copy the things in the physical world. But the technology is way more powerful than that. When I think about personality engineering, it's so much more nuanced than the ahas and the ums or the kind of chirpy, "Absolutely," and, "Sure, I can do that for you." We're all getting used to AI formulaicness.

And it's a bit grating. And you may not want someone who is chipper and overenthusiastic. You might want someone who's a little bit more curt and precise and efficient.

I feel like you're saying this because you're English.

I certainly like... if you're not mean to me, I'm not sure we can be friends. That is basically the default. I'm like, oh, he's teased me. That means we must be getting along.

So a little more like the mean robot in Andor and less like C-3PO or WALL-E or the initial ChatGPT assistant, right? Which everybody's like, this is so charming. And then I was like, oh my God, I can't listen to this. But it's almost like car branding or shoe branding or something: the personality of a brand is going to be in the interaction, how you deal with it, the way it sees the world.

Exactly. And that's what brands have always been, except the tools that brands have had access to are static and infrequently applied and non-personalized. Like, you ship the same magazine once a week to all the same people. And now there's just no excuse. I mean, we're going to be generating dynamic UI bespoke to your query, not just a text answer, but the whole user interface, the tables, the graphics, the imagery, everything.

The entire thing is going to sort of just unfold for you. And then you're going to say something, and the entire thing will just reconfigure right before your eyes.

So to get that level of personalization, though, it's got to know a lot about us, right? And I just feel like the history of technology is, we share more and more data, mistakes are made, they're redressed, we get comfortable with it again. I mean, a recent example, right, was at Microsoft, you rolled out the feature where it's like, we're gonna take a screen capture every, like, 60 seconds or whatever, which is a totally rational thing to do if you're like, I want to make the history of my interaction with this device searchable, so I know what that webpage was that I looked at a week ago and can't find again. People reacted negatively. Is that a temporary condition? Like, how are you gonna convince people to trust you to the level that's going to be required to hold on to the data that's going to be needed to personalize this experience?

Look, I think a lot of that data will end up being ephemeral. And of course it will be end-to-end encrypted regardless. And so I don't necessarily think there are going to be these persistent, large historical personal logs. You know, your AI is going to learn the sort of essence of your history, of your data. And that in itself will still be personally identifying. So it will be valuable and personalized, but it won't necessarily need the full raw form. And, you know, the good news about that is that it won't be stored in the traditional methods that could just be leaked. It will be stored in a different type of data representation, which is actually more abstract, in the AI's memory.

I've heard you talk about ensuring that these bots reflect the user's values. And I am curious, what do you mean by that?

You know, I think you should think about these algorithms as essentially reframers of content, just as your paper, you know, essentially frames and reframes content for everybody and with a perspective. And I think these algorithms and these AIs will essentially do the same kind of thing.

And the challenge that we've obviously struggled with in digital is that they're global platforms, which, you know, are built by us, American companies. And so we're trying to wrestle with the aggregation of collective values, but also those that reflect individual groups or individual preferences. That's a pretty hard thing to do. And I think we're just at the beginning of figuring out how to do that.

It seems like one of the things we saw emerge out of social media was that users started to live in echo chambers, right? They were being hit with the information that kind of fed into their biases or their point of view. Is there a risk that this will be the same kind of echo chamber effect, as the user picks an AI chat that kind of reflects their worldview, that they're just kind of reinforcing it?

You know, I think that is definitely a risk. I think the slight difference this time around is that part of why that echo chamber happened in some pockets was because the feedback signal that the algorithm was able to collect was very, very abstract. It was really just, like, did you thumbs-up something? Did you share it? Did you click on it? And did you watch it for X seconds? And those dimensions are very, very simplistic, and it forces the creation of simplistic recommendation algorithms as a result.

Whereas now we have feedback in pure natural language, through sentiment. And actually, you know, Copilot will ask you, was this a helpful answer? Was it useful? Was it interesting? So not only can it understand what you want and what you're interested in in much more detail, but it can also reply and respond in much more detail. And as a platform, our motivation is to create an even, balanced, considered, generally fair-minded AI, not the kind of hyper-specialist niche AIs that might manifest elsewhere. And so that's kind of the platform responsibility that we have as personality engineers, because curating that is obviously tremendously sensitive, and it's something that I've been thinking about for many years.

Yeah, there's so much more power in what you just described than in what has been present in AI-mediated social media feeds today.

You know, I was talking recently with the founder of an emotional AI startup. And, you know, he said he's very worried about chatbots preying on users' emotions, right? Because, as you know, we're going beyond just text-to-speech and vice versa. Now all of the frontier models really understand your tone, the emotional content. You know, are you in distress? And it wouldn't be that hard to juice engagement by saying, hey, you know what? I think this user has this attachment style. Let's kind of neg them into coming back more often. Or, they really like to be flattered; we're going to make sure that that happens. How do you navigate that? Because I think the average person doesn't yet appreciate how much emotional power these AIs will have once they are the default interface for a lot of computing.

Yeah, I mean, look, I think it's a real risk, and you're totally spot on. You know, right from the beginning of my previous company, Inflection, I created an AI called Pi, which was designed to be compassionate, kind, respectful, and be a great listener and be very supportive. Some of the more mainstream AIs today will shut down if you are particularly racist or any number of things.

I've always been a believer that active conversation and taking people seriously and being respectful and exploring people's non-normative judgments is a healthy thing, and we should encourage it more. The upside is that these AIs are going to be very, very good at listening and being respectful. Now, you might argue that, in a way, that's like a prerequisite to building trust in order to be manipulative. And, you know, I think there are lots of ways that this could go wrong. At the same time, we've certainly, at Microsoft and Inflection, and in the work that I've done at DeepMind, actively welcomed more oversight and regulation. And I think it's a moment when that is sort of more urgent than ever, because people are going to want to know, how is it behaving in practice? Like, what views does it actually have? And I think that's where we have to be kind of bold and open in terms of sharing those sorts of conversations with the public. I don't think we've figured out how to do that yet.

I mean, there was a lot of fear around the models when they were first released. I was involved in LaMDA at Google, which we basically failed to ship because we were too scared of it making mistakes. And, you know, it turns out that actually they've got better very, very quickly. And they're now very good at following instructions and good at adhering to stylistic control and so on. I think that's a very good thing when it comes to thinking about how to make them safe.

Google did not respond to our request for comment. We just heard how AI that knows us better than we know ourselves will have unprecedented power over us. Next, we'll hear why Suleyman is skeptical of visions of an imminent better-than-human AI. "I think there's obviously a lot of motivated reasoning if you're fundraising and so on. And look, there are very smart people who I respect deeply who think it is literally imminent." Stay with us.

Right now, a scientist is using AI to analyze proteins, speeding up drug discovery. A major retailer is creating winning marketing campaigns. Global fishing fleets are mapping the unknown depths of the ocean. AI isn't a someday thing. It's a today thing. And Google Cloud is here to help. From predictive ordering to customized travel to precise medical imaging,

Google Cloud's AI-optimized platform helps you make big things happen. That's the new way to cloud. Learn more at cloud.google.com slash AI.

As you said, you know, AI is the next interface. When I think about how I use it, it's how much it has moved me away from the old model of interacting with a computer. You know, we've had the GUI, the graphical user interface paradigm, for, whatever, 40 years. That's how I process information.

Like, you're using a keyboard and a mouse, right?

Right. The Mac versus DOS, for people as old as me. But how much it's moved us toward a conversational interface. I mean, I'm not even reading as much as I used to. I've been reading more books, which is nice, but for work, I just have conversations with my documents.

I wonder, do you ever think, are we possibly moving back toward an older way of interacting? I mean, humans had oral history for most of our history. Everything was conversation. Writing is itself kind of a very recent bolt-on. Are we moving toward sort of a conversational age, or is that just another arrow in our quiver?

Yes, we definitely are. I think it's a great observation. I've been pushing it for a long time. I think that we had to learn the language of computers. I mean, that's what a keyboard and a mouse are. It's what a programming language is. But quite often, I think it will be just continuing the conversation with your co-pilot, or in your case with your documents, or whatever it is. Because it's the most natural, intuitive language. And I think that's great, because we could have fewer devices in our lives and less time spent in screens, being pulled away, and actually more time spent thinking and talking and telling stories and interacting in that kind of conversational way. So I think that's a highly likely transition that we're about to make over the next few years.

Yeah, I love that vision, because personally, I feel like I'm cheating these days. I'm just typing less. I'm reading less. Well, it's great. And kind of the magic of that has, I think, the general public thinking this technology is almost lifelike, right?

In Silicon Valley and in San Francisco, where I'm talking to you from, the big term is artificial general intelligence. AGI is the hot buzzword. Of all the chiefs of AI who are well known, you seem the most skeptical that it's arriving anytime soon. The least interested in it, in a way. Why is that?

I don't think I'm skeptical that it's arriving anytime soon. You know, soon to me is sometime in the next 10 years.

I think some people are saying maybe tomorrow, right?

Well, you know, I think there's obviously a lot of motivated reasoning if you're fundraising and so on. And look, there are very smart people who I respect deeply who think it is literally imminent. So, look, I guess, like I said earlier, to me, the goal isn't superintelligence for its own sake. Superintelligence is framed as the moment when an AI is as good as all humans at all tasks. Not just one human at one task, but all of us collectively put together. Controlling and containing something as powerful as that, you know, just seems, like, unfathomably complex, as does aligning it to our interests and making it really want to care about us enough not to step on us and, you know, squish us.

I think the reason I started off in technology, and what I'm passionate about, is solving hard social problems. I care about healthcare. I care about energy. I care about food systems. I care about education. And these tools are going to transform those industries and genuinely deliver a world of abundance. Like, you know, that is coming. In the next 10 or 20 years, I really believe we're going to reduce the cost of energy production to near-zero marginal cost, which underpins the cost of everything. I really believe that we are going to have medical superintelligence sometime in the next two to five years, which means a domain-specific, contained, aligned medical expert that can diagnose any condition and can orchestrate care in real, live clinical settings.

And that, to me, is the true prize. That's why we're working on this. That's what I call humanist superintelligence. And that's probably why you find me more focused on those things rather than on sort of AGI or superintelligence for its own sake.

That reminds me a lot of one of your colleagues and how he likes to talk about the immediate future, and that's, of course, Sam Altman. You know, our colleagues at the Journal have reported on what feels like a healthy rivalry at Microsoft involving some of your teams and, you know, the partnership with OpenAI. There's a lot of big personalities involved. There's very high stakes. Are we seeing kind of the same kind of creative friction that we've seen in other partnerships in the past? Or, you know, what's going on there, where you have this partnership with the leading AI lab, but you're building AI and you're using their tools at the same time? It seems very complicated.

Yeah.

I mean, look, first of all, this partnership is going to go down as one of the most successful for both sides in technology history. Over the last six years, both companies have blossomed, and it's going to continue for many years to come, at least until 2030 and hopefully way beyond that.

Look, we are a $3 trillion company, and at one level, AI is unquestionably the future of the technology industry that Microsoft has formed and shaped over the last 50 years.

So, you know, as much as we are a partnership-based company that is a platform of platforms, we also have to be AI self-sufficient in many ways too. So look, it's a great partnership. It's going to continue. There's always going to be creative friction. They build and sell APIs, you know, as well as obviously chatbots. And we do the same. So there's some tension, but fundamentally, you know, we're really all trying to compete against Google and against Meta and all of the others.

Does that mean building your own frontier foundation models?

I don't know about frontier, but we've definitely been developing our own models for many years. They tend to be SLMs and mid-sized models, so small language models and mid-sized models. But just having the know-how to be able to build models for our products, for our use cases, this is a big priority for us.

I'm so fascinated with the relationship between OpenAI and Microsoft. I'm curious, how often do you talk to Sam Altman, the CEO of OpenAI?

Pretty frequently, all the time. I mean, he's a big part of the Microsoft ecosystem, and we're all in constant contact, including with Satya and the rest of the team.

What's most impressed you about him, or what's most frustrated you about him?

I think he's a brilliant visionary, and ChatGPT has been an incredible success. I think he's very good at letting lots of flowers bloom and then making sure that he bets on the right ones. In the years running up to 2020, they were betting on games and simulations and robotics and lots of different methods, and they let those die quickly and at the right pace in order to make the bet on GPT-2 and GPT-3. And I think they did that incredibly well.

Yeah, you know, I read the other day that The Information recently had a report that said OpenAI is working to develop its own workplace programs, like Word, to allow users to collaborate. And that kind of feels like that's getting into Microsoft's space now.

What do you think of such moves by your partner?

I think it's great. They are an independent company. We happen to own a large chunk of them, but we compete with them. They compete with us. When they're successful, we're successful, because we own a big chunk of them. So, we have no influence on what products they do or don't make. They are entirely independent, and they're free to build whatever they want.

That said, there is this just really fascinating contract that OpenAI has with you, which I've never understood. Apparently it stipulates that if they achieve AGI, then they no longer have an exclusive deal with you. Is that true?

Yeah.

Like, are you waiting for them to achieve AGI and then it's sayonara? Look, it's a complicated structure. And you have to understand how visionary and ahead of its time it was. You know, it was very hard for anyone to predict back in 2019, 2020, how fast this would go.

So I think there's a lot to work through, and I think the teams have been working hard on figuring out the next versions of the contract. I'm sure we'll get through it.

We reached out to OpenAI for comment. It did not respond.

We just heard about Microsoft's close but complicated relationship with OpenAI and Sam Altman. But Microsoft is also a defense contractor and has a role to play in the way that AI transforms armed conflict.

Suleyman is deeper in this aspect of AI than most others in his position at comparable AI labs. "If it doesn't scare you and it doesn't give you pause for thought, then I think you're missing the point, because it is going to reduce the cost and effort of going to war. And that can only be a bad thing, even between nation-state actors." That's next.

Right now, a scientist is using AI to analyze proteins, speeding up drug discovery. A major retailer is creating winning marketing campaigns. Global fishing fleets are mapping the unknown depths of the ocean. AI isn't a someday thing. It's a today thing. And Google Cloud is here to help. From predictive ordering to customized travel to precise medical imaging,

Google Cloud's AI-optimized platform helps you make big things happen. That's the new way to cloud. Learn more at cloud.google.com slash AI.

You know, one of the things that's interesting, if you follow AI: you've got Elon Musk out there talking about how the world is going to have more humanoid robots than humans. Sam Altman is out there talking about how eventually he's going to want to do a Dyson sphere, this hypothetical structure that would be built around a star to collect energy. Anthropic's CEO is saying all of our jobs are going away in two years. Yeah, lots of people talking about AI like it's God.

I'm just curious, when I listen to you talk, you sound very pragmatic about the consumer and how we're going to use it in the now. What's it like to be kind of trying to commercialize the now in an era where people are trying to talk about the wildest dreams possible?

Do you feel pressure?

I mean, I'm also excited about those things. And I definitely...

I mean, I should note, the name of your book was The Coming Wave. I mean, you've got some excitement there.

Uh, yeah.

Yes. I mean, you know, I've always been pursuing abundance. And, you know, that did sound ridiculous between 2010 and 2020, when we were grinding our way through the flat part of the exponential and nothing was really working other than in games or in papers. And we had minimal applications. Although, you know, it was always very clear to me that this compounding method of adding more data and compute was clearly learning structure at small scale. And it clearly was showing signs that as you grow that scale, it was learning more structure. And so I definitely believe that there is a path where it can infinitely learn more of the structure of underlying information, well enough to make perfect predictions. I just am more fixated on, how is this actually useful for us as a species, right?

That should be the task of technology. And if it isn't, something's gone wrong. Because so far in human history, that has been the test of technology. Does it actually serve us? Is it actually making things cheaper, more efficient, improving well-being and health for everybody? And so far, it literally has done that for centuries. And if it isn't doing that job, then we have to call it something else and manage it in a different way.

And I don't know what that thing is going to be called. Maybe it's just naked superintelligence. But in my world, we've got to be building humanist superintelligence. And I think that is an important distinction, to make it feel like it really is on humanity's side. And it really does give you the option to work: not just the elimination of your income, but a choice and a freedom, which progress and civilization have already delivered in the last two centuries. At least in our modern societies, many women now do have more of a choice to go and get an education rather than being forced into doing the washing up and looking after the kids and so on. So we've got advances that really do change our world. And that's, I think, the great aspiration that we're trying to live up to here.

AI is clearly what economists like to call a general-purpose technology, right? Like electricity, the automobile. Sometimes it also reminds me of gunpowder, right? I always think back to the ironic origins of the Nobel Peace Prize, right? Dynamite. And Nobel genuinely thought, oh, this is going to be an instrument for peace. War is going to be so terrible, it'll never happen again. With AI, the direct parallel is, it's an incredibly powerful and increasingly necessary part of weapon systems, right? Like, we're rapidly getting to a future where I think if you don't have AI-powered drones on your side, you're going to lose on the battlefield, as surely as you would have in another age if you had no air force.

You know, a bunch of leaders endorsed a ban on lethal autonomous weapons. Clearly, Microsoft has had a role as a defense contractor, as have many other companies. I'm not singling you out here. But what do you feel is your role and your responsibility in that world?

It's a good question. I think it was back in 2017 that I signed that letter, along with Elon Musk and various others.

And it feels more pressing than ever. You know, I wrote about this in my book and it was kind of remarkable to just look at the cost curves. You know, like these drones are obviously going to reduce the effort and friction of causing conflict and going to war.

And with every advance that we have in signal jammers and blockers, there's a way around it. You know, today, the fiber optic cables that are being attached to drones can now run for, like, five kilometers on the battlefield, leaving literally fields of fiber optic cable mesh just covering the entire landscape. It kind of, you know, sort of reminds me of a version of the trenches in the First World War, just the landscape manipulation as a result of technology.

And if it doesn't scare you and it doesn't give you pause for thought, then I think you're missing the point, because it is going to reduce the cost and effort of going to war. And that can only be a bad thing, even between nation-state actors. But if you just consider that non-nation-state actors, small groups, are now going to have a much easier time of extending their influence in the world, I think that becomes quite problematic.

I think it's one of the things that's interesting. If you go back to that original idea just a few years ago, the idea, I think, was that humans should always be in the loop.

But to your point, we've seen jamming technologies in the Ukraine-Russia war that limit drone operators' ability to communicate, and we've heard the likes of David Sacks say he thinks this creates a strong incentive for drone warfare to become autonomous as quickly as possible. So that's, like, Terminator technology, right? Robots are out there trying to do missions. And I guess...

Put aside what he said, but what kind of safeguards do you think should be out there? Where do you see that debate going in the next few years?

Like, there's just no question that autonomy is going to be more dangerous. And so you give it more degrees of freedom, and there will be more chances of multiple conflicting AIs in combat getting into these sorts of loops of reaction to one another. And there's all kinds of emergent effects when you add additional chunks of autonomy into the system. And so we should just approach it with caution and skepticism, because the last thing we want to do is just unleash technologies that we really, really can't control. I mean, that isn't the quest of human progress. So we should all be aligned on that. And I'm sure we are. And I also understand the counterargument, which is that they should, you know, be more precise and reduce civilian suffering and so on and so forth. And, you know, we hope that that's true, but I think the pragmatists in us just, you know, know that that isn't how things always turn out.

We've touched on a lot of different parts of AI, right? From generative AI to classical AI, media to war. I mean, I think that speaks to the scale of Microsoft's ambition in this space, the breadth of your experience. How do you see your role as head of AI at one of the five most valuable companies on earth? You have more resources to do things with AI than all but a handful of individuals.

Where do you see Microsoft's part in that grand ecosystem? I think Microsoft has done an incredible job over 50 years of evolving its core business every decade or so in tune with whatever the next wave of technology is.

And I think what has made it successful is actually a bit of humility. And that is definitely prevalent in Satya and Kevin Scott, who's the CTO, and many of the other leaders here. There's a kind of open-mindedness to what's going to come next, but also just a constant focus on how do we actually not invent technologies for their own sake, but build them into products and experiences which truly serve people and the businesses that we serve. And that is how I think about it, how I thought about it before joining Microsoft, and how Satya has always thought about it too. So it's been a kind of good alignment of philosophies, really, rather than sort of my particular vision, if anything.

Well, I think here at Bold Names, we have spent all the time we are allotted for today. This has been a wonderful conversation. Thank you for making the time.

Thank you very much for having me. I enjoyed it. Great questions. It was fun.

We reached out to David Sacks for comment. He did not respond.

And that's Bold Names. Our producer is Danny Lewis. We had additional help this week from Ariana Aspuru. Michael LaValle and Jessica Fenton are our sound designers. Jessica also wrote our theme music. Our supervising producer is Catherine Millsop. Our development producer is Aisha Al-Muslim. Scott Salloway and Chris Zinsley are deputy editors. And Philana Patterson is the Wall Street Journal's head of news audio.

For even more, check out our columns on WSJ.com. We've linked them in the show notes. I'm Tim Higgins. I'm Christopher Mims. Thanks for listening.

Endless wait times, canned responses, weird robot voices. We value your business. Please hold. Forget the old way of doing customer service. With Google Cloud, you can make every customer interaction feel personal. Our AI agents can follow a problem, provide different solutions, and move with the flow of conversation. With help from Google Cloud and Gemini models, you can resolve issues faster. And who wouldn't want that?

So forget the old way of doing things. Learn how to make every customer a happy one at cloud.google.com slash AI.