Nadella recognized the secular shift towards mobile and cloud computing as the future of technology. He simplified his initial vision of 'ambient intelligence and ubiquitous computing' to 'mobile first, cloud first' to make it more accessible. This strategy aimed to leverage Microsoft's structural position and assets to capitalize on the growing demand for cloud services and mobile solutions.
Nadella introduced the concept of a 'growth mindset' as a cultural meme for Microsoft. He encouraged employees to move from being 'know-it-alls' to 'learn-it-alls,' fostering a culture of continuous learning and adaptability. This mindset was crucial for navigating technological transitions and maintaining a competitive edge.
Microsoft invested in OpenAI because of its groundbreaking work in language models and transformers, which aligned with Microsoft's core focus on natural user interfaces and information management. The partnership allowed Microsoft to gain a competitive edge in AI, particularly in areas like GitHub Copilot and enterprise AI applications.
Nadella believes that while there will be fierce competition among major players like Google, Amazon, and Meta, the AI landscape will not be winner-take-all. He sees multiple winners at different layers of the stack, including infrastructure, models, and applications. Microsoft's structural advantage lies in its enterprise-focused cloud infrastructure and global data residency capabilities.
Bing faces the challenge of transitioning from legacy search models to AI-driven conversational agents like ChatGPT. While Bing still benefits from distribution advantages on platforms like Apple and Android, the shift towards stateful agents that provide direct answers threatens the traditional search business model. Microsoft is managing this transition by integrating Bing, MSN, and Copilot into a unified ecosystem.
Microsoft is using AI to streamline processes across various departments, from customer service to software development. For example, GitHub Copilot has transformed the workflow for engineering teams, while M365 Copilot is enhancing productivity in areas like document preparation and data analysis. The goal is to create operating leverage by reducing costs and increasing efficiency through AI-driven automation.
Nadella believes that AI agents will become more autonomous, capable of handling tasks like booking travel or managing supply chains. However, he emphasizes the need for verifiable actions, memory systems, and governance to ensure safe and effective use. The co-pilot approach remains central, allowing users to interact with AI while maintaining control over exceptions and permissions.
Microsoft is managing CapEx by focusing on efficient utilization of its hyperscale infrastructure, which includes data centers and AI accelerators. The company is building out its capabilities to meet demand for AI inference, ensuring a balance between supply and demand. Nadella emphasizes the importance of software to enhance the return on investment (ROIC) from capital expenditures.
Nadella views open source and closed source as different tactics for creating network effects rather than religious battles. He supports Meta's approach of commoditizing complements to drive convergence. However, he also believes that safety and regulatory concerns are orthogonal to the open vs. closed debate, with governments playing a crucial role in ensuring safe AI development.
I think the company of this generation has already been created, which is OpenAI in some sense. It's kind of like the Google or the Microsoft or the Meta of this era.
Well, it's great to be with you. You know, when Bill and I were talking, Satya, and looking back at your tenure as CEO, it was really quite astonishing. You know, you started at Microsoft in 1992, for those who may not know. You took over online in 2007. You launched Bing Search in 2009. You took over servers and launched Azure in 2011.
and you became CEO in 2014. And it was just before that that a pretty now well-known essay entitled The Irrelevance of Microsoft had just been published. Now, since then, you've taken Azure from 1 billion to 66 billion run rate. The total revenues of the business are up 2.5x. The total earnings are up over 3x. And the share price is up almost 10x. You've added almost $3 trillion of value
to Microsoft shareholders. And as you reflect back on that over the course of the last decade, what's the single greatest change that you thought you could do then to unlock the value, to change the course of Microsoft, which has been just an extraordinary success? Yeah. So the way I've always thought, Brad, about sort of that entire
period of time is, in some sense, from '92 to now. It's just one continuous sort of period for me, although obviously 2014 was a big event with the accountability that goes with it.
But what I felt was essentially pattern match when we were successful and when we were not, and do more of the former and less of the latter. I mean, in some sense, it's as simple as that.
I've sort of lived through, when I joined in 92, that was just after Windows 3.1 was launched. I think Windows 3.1 was, you know, May of 92, and I joined in November of 92. In fact, I was working at Sun, and I was thinking of going to business school.
And I got an offer at Microsoft and I said, ah, maybe I'll go to business school. And then I somehow or the other, the boss who was hiring me convinced me to just come to Microsoft. And it was like the best decision because the thing that convinced me was the PDC of 91 in Moscone Center when I went and saw the...
basically Windows NT. It was not called Windows NT at that time. And x86. And I said, God, this, you know, what's happening in the client will happen on the server. And this is a platform company and a partner company, and they're going to ride the wave. And so that was sort of the calculus then. Then, of course, the web happened.
We managed that transition. We got a lot of things right. Like, for example, I mean, we recognized the browser. We competed and got that browser thing eventually right. We missed search, right? We sort of felt like, wow, the big thing there was the browser because it felt more like an operating system, but we didn't understand the new category, which is the organizing layer of the internet happened to be search.
Then we kind of were there in mobile, but we really didn't get it right. Obviously the iPhone happened and we got the cloud right. So if I look at it and then we are here, we are on the fourth one on AI.
In all of those cases, I think it's about not doing things just because somebody else got it and we feel we need to do the same. Sometimes it's okay to fast follow and it worked out, but you shouldn't do things out of envy. That was one of the hardest lessons I think we've learned. Do it because you have permission
to do this and you can do it better. Like both of those matter to me, the brand permission. Like, you know, Geoffrey Moore once said this to me, which is, hey, why don't you go do things which your customers expect you to do? I love that, right? And cloud was one such thing. In fact, when I remember first showing up in Azure, people would tell me, oh, it's winner-take-all, it's all over, and Amazon's won it all.
I never believed it because after all, I'd compete against Oracle and IBM in the servers. And I always felt like, look, it's just never going to be winner take all when it comes to infrastructure. And all you need to do is just get into the game with a value proposition.
In some sense, a lot of these transitions for me have been about making sure you kind of recognize your structural position, you really get a good understanding of where you have permission from those partners and customers who want you to win, and go do those obvious things first. And I think you could call it, hey, that's the basics of strategy.
But that's sort of what I feel, I think, at least has been key. And there are things that we cultivated, to your point, Brad, which is there's the sense of purpose and mission, the culture that you need to have,
All those are the most, I would say, those are the necessary conditions to even have a real chance for shots on goal. But I would just say getting that strategy right by recognizing your structural position and permission is probably what I have, you know, hopefully done a reasonable job of. Satya, before we move on to AI, I have a couple of questions about the transition and just
Echoing what Brad said, I mean, I think that it's definitive that you may be the best CEO hire of all time. I mean, $3 trillion is unmatched. So one, I read an article that suggested, and maybe this isn't true, so you tell us, that you wrote a 10-page memo to the committee that was choosing the CEO. Is that true? And what was in the memo?
Yeah, it is true. Yeah, because I think our CEO process was pretty
open out there. And at that time, quite frankly, it was definitely not obvious to me. One, in the beginning, remember, I never thought that first Bill would leave and then second Steve would leave. It's not like you join Microsoft and think, oh, yeah, founders are going to retire and there's going to be a job opening and you can apply for it. I mean, that was not the mental model
growing up at Microsoft. So when Steve decided to retire, I forget now, I think in August of 2013, it was a pretty big shock. And at that time, I was running our server and tools business, as it was called, in which Azure was housed and so on. And I was having a lot of fun. And I didn't even put up my hand first saying, oh, I want to be CEO, because it was not even a thing that I was thinking would happen.
And then eventually, the board came around and asked, and there were a lot of other candidates at that time, even internally at Microsoft. And so, at some point in that process, they asked us to write. And quite frankly, it's fascinating. That memo
Everything I said in it, right, you know, one of the terms I used in that memo, which I subsequently used even in the first piece of email I sent out to the company, was ambient intelligence and ubiquitous computing.
And I dumbed it down to mobile first, cloud first later, because my PR folks came and said, what the heck is this? Nobody will understand what ambient computing is, or rather ambient intelligence and ubiquitous computing. But that was the mobile first, cloud first: how do you really go where the secular shift is? Then understanding our structural position, thinking about the Microsoft Cloud, what are the assets we have? Why is M365? In fact, one of the things I've always resisted
is thinking of our cloud the way the market segments it, right? The market segments it, oh, here is IaaS. Even Brad, the way he described Azure. To me, I don't allocate my capital thinking here is the Azure capital, here is the M365 capital, here is gaming. I kind of think of, hey, there's a cloud infra,
That's the core theory of the firm for me. On top of it, I have a set of workloads. One of those workloads happens to be Azure. The other one is M365, dynamics, gaming, what have you. And so in some sense, that was all in that memo and pretty much has played out. And one of the assumptions at that time was that this was, you know,
We had a 98%, 99% gross margin business in our servers and clients. And people said, oh, you know, good news. You now can move to the cloud and maybe you'll have some margin. And so that was the transition. And my gut was, it is going to be less GM, but the TAM is bigger.
We'll sell more to small businesses. We will sell more in aggregate in terms of even upsell. The consumption would increase because we had sold a bit of Exchange. But if you think about it, Exchange, SharePoint, Teams, now everything expanded. So that was the basic arc.
that I had in that memo. Was there any element of cultural shift? I mean, think about the number of CEO hires; there are CEO hires made in this country,
in the world all the time. And many of them fail. I mean, Intel's going through a second reboot here as we speak. And as Brad pointed out, there are people that are arguing, oh, Microsoft's the next IBM or DEC, that its better days are over. So what did you do, and what would you advise new CEOs that come on to kind of reboot the culture and get it moving in a different direction?
Yeah, one of the advantages I think I had was I was a consummate insider, right? I mean, having grown up pretty much all my professional career at Microsoft. And so in some sense,
If I would even criticize our culture, it was criticizing myself. So in an interesting way, the break I got was it never felt like somebody from the outside coming in and criticizing the folks who are here; it was mostly about pointing the finger right back at me, because I was pretty much part of the culture, right? I couldn't criticize anything that I was not part of. And so I felt like, to your point, Bill,
I distinctly remember, I think the first time Microsoft became the largest market cap company, I remember walking around the campus, all of us, including me, we were all strutting around as if we were like, you know, the best thing to humankind, right? And it is all our brilliance that's finally reflected in the market cap. And
And somehow it stuck with me that, God, that is the culture that you want to avoid, right? Because as I always say, from sort of ancient sort of Greece to modern Silicon Valley, there's only one thing that brings civilizations, countries, and companies down, which is hubris.
And so one of the greatest breaks is my wife had introduced me to a book by Carol Dweck a few years before I became CEO, which I read on growth mindset more in the context of my children's education and parenting and what have you. And I said, God, this thing is like the best book.
You know, all of us are always talking about learning and learning cultures and so on. And this was the best cultural meme we could have ever picked. So I attribute a lot of our success culturally to that meme because it is not... The other nice thing about that, Bill, was it is not trademarked, you know, Microsoft or it's not some new dogma from a CEO.
It's a thing that speaks to work and life. You can be a better parent, a better partner, a better friend, a neighbor, and a manager and a leader. So we picked that. And the pithy way I've always characterized it is, hey, go from being the know-it-alls to learn-it-alls.
And it's a destination you never reach because the day you say, I have a growth mindset means you don't have a growth mindset by definition. And so it has been very, very helpful for us.
And, you know, it's like all cultural change. You got to give it time, oxygen, breathing space. And it's both top down and bottom up and it middles out, right? Which is, there's not a single meeting that I do with the company or even my executive staff or what have you, where I don't start with mission culture. Those are the two bookends. And I've been very, like the other thing is, I've been super disciplined on my framework. To your point about that memo, right?
Pretty much for the last now close to 11 years, the arc is the same. Mission, culture, it's the worldview, right? That ambient intelligence, ubiquitous computing, and then the specific set of products slash strategies. That frame, I pick and choose every word. I'm very, very deliberate about it. I repeat it until I'm bored stiff, but I just stay on it.
Well, speaking of that, you've, you know, you mentioned the phase shifts that we've been through. And I've heard you say that
As a large platform company, most of the value capture is determined in that first three or four years of the phase shift when the market position is established, Satya. I've heard you say you basically, Microsoft was coming off of having missed search, having largely missed mobile. And I've heard you say caught the last train out of town on cloud.
So as you started thinking about the next big phase shift, it appears that you and others in the team, Kevin Scott, sniffed out pretty early that Google was likely ahead in AI with DeepMind. You make the decision to invest in OpenAI. What convinced you of this direction versus the internal AI research efforts that you had underway? Yeah, it's a great point because...
There are a couple of things there, right? One is we were at it on AI for a long, long time. Obviously, you know, when Bill started MSR in 1995, I think, you know, the first group, I mean, he was always into this natural user interface. I think the first group was speech.
Rick Rashid came. In fact, Kai-Fu worked here. We had a lot of, I'd say, focus on trying to crack natural user interface.
Language was always something that we cared about. In fact, even Hinton worked, like some of the early work in DNNs happened when he was in residency in MSR, and then Google hired him. So we missed, I would say even in the early 2010s, some of what could have been doubling down
at around the same time that Google doubled down and bought even DeepMind, right? And so that actually bothered me quite a bit.
But I always wanted to focus, like for example, Skype Translate was one of the first things I focused on because that was pretty cool. Like that was the first time you could see transfer learning work, right? Which is you could train it on one language pair and it got better on another language pair, right? That was the first place where we could say, wow, machine translation with DNNs, it's just different.
And so ever since I've been obsessed with language and along with Kevin,
In fact, the first time, actually, Elon and Sam, they were looking for, obviously, Azure credits and what have you, and we gave them some credits. And that time, they were more into RL and Dota 2 and what have you, and that was interesting. And then we stopped for, I forget even exactly what happened, and then they, I think, went to GCP. And then they came back,
to talk about sort of what they wanted to do with language. That was the moment, right, which they talked about transformers and natural language. And because I always felt like, look, because that's, to me, our core business. And it goes back a little bit to how I think, which is what's our structural position? I knew always that if there was a way to have a nonlinear breakthrough in terms of some model architecture that sort of exhibited, you know, like one of the things that Bill –
You know, you'd always say throughout our career here was there's only one category in digital. It's called information management. The way he thought about it was you schematize the world, right? Take people, places, things, you know, just build a schema, right? We went down many ways. You know, there was this very infamous project called WinFS at Microsoft, which was all about schematize everything. And then, you know, you'll make sense of all information.
And this was-- it's just impossible to do. And so therefore, you needed some breakthrough. And we said, maybe the way to do that is how we schematize. After all, the human brain does it through language and in a monologue and reasoning. And so therefore-- anyway, so that's what led me to OpenAI and, quite frankly, the ambition that Sam and Greg and team had.
And that was the other thing, right? Scaling laws. In fact, I think the first memo, weirdly enough, I read on scaling was written by Dario when he was at OpenAI and Ilya. And that's sort of what, like I said, let's take a bet on this, right? Which is, hey, wow, if this is going to have exponential performance, why not go all in and give it a real shot? And then...
And then, of course, once we started seeing it work on GitHub Copilot and so on, then it was pretty easy to double down. But that was the intuition. One of the things that...
has happened, I think, in previous phase shifts is some of the incumbents don't get on board fast enough. You even talked about Microsoft perhaps missing mobile or search or that kind of thing. I could argue, especially since I'm old and I've seen these shifts, that everyone's awake on this one. It's the most awake,
heavily choreographed. Everyone's maybe at the starting line at almost the same time. I'm curious if you agree with that and how you think about the key players in the race: Google, Amazon, Meta with Llama, Elon has entered the game. Yeah, it's an interesting one. To your point, I always think about it, right? If you sort of say, take the late 90s, there was Microsoft,
And there was daylight. And then there was the rest. Interestingly enough, now, you know, people talk about the Mag 7. There is probably more than that, even to your point about everybody's awake to it. They all have amazing balance sheets.
There is even, if you think about OpenAI, in some sense, you could say it's the Mag 8, because I think the company of this generation has already been created, which is OpenAI in some sense. It's like the Google or the Microsoft or the Meta of this era. There are a couple of things. Therefore, I think it's going to be very competitive.
I also think that I don't think it's going to be winner-take-all, right? Because there may be some categories that may be winner-take-all. For example, on the hyperscale side, absolutely not, right? I mean, the world will demand, you know, even ex-China,
multiple providers of frontier models distributed all over the world. In fact, one of the best structural positions that I think Microsoft has is, you know,
Because if you remember, the Azure structure is slightly different, right? We built out Azure for enterprise workloads with a lot of data residency. We have 60 plus regions, more regions than others. So it was not like we built our cloud for one big Azure.
We built cloud for a lot of heterogeneous enterprise workloads, which I think in the long run is where all the inference demand will be, with nexus to the data and the app server and what have you. So I think there are going to be multiple winners at the infrastructure layer.
The same is going to be true even in models: each hyperscaler will have a bunch of models, and there will be an app server around them. Like every app today, even including Copilot, is just a multi-model app.
And so there's, in fact, a complete new app server. Like there was a mobile app server, there was a web app server, and guess what? There's an AI app server now. And for us, that's Foundry, and we're building one, and others will build theirs. There'll be multiple of those. Then in apps, I think there will be more, you know, I would say network effects are always going to be at the software layer, right? So at the app layer.
There'll be different network effects in consumer, in the enterprise and what have you. And so to your fundamental point, I think you have to analyze it structurally by layer. And there is going to be fierce competition between the seven, eight, nine, 10 of us at different layers of the stack. And as I always say to our team, watch for the one who comes in and
adds to it. That's the game you're all in, where you're always looking at which new entrepreneur will come out of the blue. At least I would say OpenAI is one such company, which at this point has escape velocity. Yeah. If we think about the app layer for a second, start with consumer AI a little bit here, Satya. Bing's a very large business.
You and I have discussed 10 Blue Links was maybe the best business model in the history of capitalism, but it's massively threatened by a new modality where consumers just want answers, right? For example, my kids, they're like, why would I go to a search engine when I can just get answers? So do you think, you know, first, can Google and Bing continue to grow the legacy search businesses in the age of answers?
And then what does Bing need to do or your consumer efforts under Mustafa need to do in order to compete with ChatGPT, which really looks like it's broken out from a consumer perspective? Yeah. I mean, I think the...
The first thing is what you said last, which is chat meets answers. And that's ChatGPT, both the brand, the product. And it's becoming stateful, right? I mean, like ChatGPT now is not just, you know, in fact, search was a stateless product.
There was search history, but I think more so these agents will be a lot more stateful. In fact, that's why I was so thrilled. I've been trying to get an Apple search deal for 10 years.
And so when Tim finally did a deal with Sam, I was like the most thrilled person, which is it's better to have ChatGPT get that deal than anybody else because we have that commercial and investor relationship with OpenAI. So to that point, the way I look at it and say is at the same time,
Distribution matters. This is where Google has an enormous advantage. They have the distribution on Apple. They're the default. They are obviously the default on Android. They touch, so therefore I think... And the habits don't go away. The number of times you just go to the browser URL and just type in your query. Even now, even though I want to go to Copilot, my usage is mostly Copilot.
And like if I have to think about Bing versus Copilot, it's kind of interesting, right? Some of the navigational stuff, I go to Bing. Pretty much everything else, I go to Copilot, right? That shift I think is what's happening universally.
And we are maybe one or two of these agents, for shopping or travel, away from even some of the commercial queries migrating. That's the time when the dam breaks, I think, on traditional search, when some of the commercial intent also migrates into the chat. Right now, mostly the business has withstood it because the commercial intent has not migrated. But once commercial intent migrates, that's when it suddenly moves. Yeah.
And so I think, yes, this is a secular shift. The way we are managing it is we have three properties in Mustafa's world, right? There is Bing, MSN, and Copilot. So we think, in fact, he's got a crisp vision of what these three things are. They're all sort of one ecosystem. One is a feed, one is search in the traditional way, and then the other is this new agent interface.
And they all have a social contract with content providers. We need to drive traffic. We need to have paywalls. Maybe we need to have ad-supported models, all of those. And so that's what we're trying to manage. We have our own distribution. The one advantage we do still have is Windows. We get to relitigate. We lost the browser. Even Chrome became the dominant browser, which is a real travesty because we had won against Netscape only to lose to Google. And we are getting it back.
Now, in an interesting way, both with Edge and with Copilot, guess what? Now even Gemini has to earn it. The good news about Windows, for us at least, is it's an open system. ChatGPT has a shot. Gemini has a shot. You don't have to call Microsoft. You can go do your best work and go over the top. But that also means we also get to. Having lost it is great sometimes because you can win it all back.
And so to me, even Windows distribution, I mean, I always say Google makes more money on Windows than all of Microsoft. I mean, literally. And I say, wow, this is the best news for Microsoft shareholders, that we lost so badly that we can now go contest it and win back some share. Hey, Satya, one thing that everybody's talking about is these agents, and if you just
kind of think forward in your mind a bit, you can imagine all kinds of players wanting to
enact action on other apps and other data that may be on a system. And, you know, Microsoft's in an interesting position because you control the Windows ecosystem, but you have apps on, like, the iPhone ecosystem or the Android ecosystem. And how do you think about, and this is, you know, partially a terms-of-service question, partially a partnership question, will
Apple allow Microsoft to control other apps on iOS. Will Microsoft let ChatGPT instantiate apps and take data from apps on Windows OS? I mean, you get the question. It goes all the way to when you start thinking about search and commerce, like, will booking.com let Gemini run transactions on it without their permission or knowledge?
Yeah, I think that this is the most interesting question, right? I mean, to some degree, it's unclear exactly how this will happen. There is a slightly very old-school way of thinking about some of this, which is, if you remember, you know, how business applications of various kinds
managed to do interop, right? They did manage that interop using connectors, and people had connector licenses. So there was a business model that emerged, right? I mean, SAP was one of the most classic ones, where you could say, hey, you can access SAP data as long as you had connectors. So there's a part of me which says something like that will emerge when agent-to-agent
interfacing occurs. It's unclear exactly what happens in consumer because, in consumer, the value exchange was a lot of
advertising and traffic and what have you, and some of those things go away in an agentic world. So I think the business model is slightly unclear to me on the consumer side. But on the enterprise side, I think what will happen is everybody will say, "Hey, in order for you to either action into my action space or to get data out of my sort of schema, so to speak, there is some kind of an interface to my agent
that is licensed, so to speak." And I think that that's a reason. Like today, for example, when I go to Copilot at Microsoft, I have connectors into Adobe, into my instance of SAP, obviously our instance of CRM, which is Dynamics. So it's fascinating. In fact, I...
When was the last time any of us really went to a business application? We license all these SaaS applications. We hardly use them. And somebody in the org is sort of inputting data into it. But in the AI age, the intensity goes up because all that data now is easy. You're a query away. I can literally say, hey, I'm meeting with Bill. Tell me about all the companies Benchmark has
invested in. It's both taking the web, anything that's in my CRM database, collating it all together, giving me a note, what have you. So to some degree, all that, I think, can be monetized by us and by even these connectors. But more explicitly, like the thing that could happen really quickly, because there's been talk about it, like would you allow ChatGPT
on the Windows OS to just start opening random apps and take-- Now that's an interesting one, right? So that over-the-top computer use, who is going to permit that, right? Which is, is it the user or is it the operating system? Like on Windows, there is quite frankly not anything I can do to prevent that other than some security guardrails, right? Because, like, one of my big fears is the security risk.
If the malware got downloaded and that malware started sort of actioning stuff, that's when it's really dangerous. So I think those are the ones that we will build into the OS itself, which is some elevated access and privilege that this computer use stuff happens. But at the end of the day, the user will be in control on an open platform like Windows. And I'm sure Apple and Google will have a lot more control, so they won't allow it.
And so that's, in some sense, you could say that's an advantage they have or, you know, depending on how antitrust rules on all of those, you know, ultimately, it'll be an interesting thing to watch. We can flip that around and then we can move on. But like, would you allow the Android OS, or let's just call it the Android AI or the iOS AI, to...
read email through a Microsoft client on that smartphone. Yeah. I mean, for example, today, one of the things I always think about is, I don't know whether that was value leaking or did it actually help us, right? Which is we licensed
the sync for Outlook to Apple for Apple Mail. It was an interesting case. And I think that there was a lot of value leaked perhaps, but at the same time, I think that was one of the reasons why we were able to hold on to Exchange. It would have been doubly problematic if we had not done that. And so one of the things I think is
Going to your point, Bill, we are building out. The reason we are going to do this is we have to have a trust system around Microsoft 365. We just cannot sort of say, hey, any agent comes in and does anything because, after all, first, it's not our data. It's our customers' data, right? And so, therefore, the customer will have to permit it. The IT folks in the customer will have to permit it. It's not like some blanket flag I can set.
And then the second thing is it has to have a trust boundary. So I think what we will do is, in an interesting way, kind of like what Apple Intelligence is doing: think of it as we will do that around M365. I played with it a lot today. I'd highly recommend people download it. It's super interesting. Yeah.
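To make that trust-boundary idea concrete, here is a minimal sketch, in Python, of how a tenant-granted permission check might gate an agent's request before any data crosses the boundary. The names and scopes are hypothetical illustrations, not Microsoft Purview or Microsoft Graph APIs.

```python
# Minimal sketch of a tenant-controlled trust boundary for agent access.
# All names (TenantPolicy, AgentRequest, the scopes) are hypothetical
# illustrations, not Microsoft Purview or Microsoft Graph APIs.
from dataclasses import dataclass, field

@dataclass
class TenantPolicy:
    # Scopes the customer's IT admins have explicitly granted to each agent.
    granted_scopes: dict = field(default_factory=dict)

    def allows(self, agent_id: str, scope: str) -> bool:
        return scope in self.granted_scopes.get(agent_id, set())

@dataclass
class AgentRequest:
    agent_id: str
    scope: str          # e.g. "files.read", "mail.read"
    query: str

def handle(request: AgentRequest, policy: TenantPolicy) -> str:
    # No blanket flag: every request is checked against what the tenant permitted.
    if not policy.allows(request.agent_id, request.scope):
        return f"denied: {request.agent_id} lacks scope '{request.scope}'"
    # Inside the boundary we would fetch and ground tenant data; here we just echo.
    return f"granted: running '{request.query}' under scope '{request.scope}'"

policy = TenantPolicy(granted_scopes={"third-party-agent": {"files.read"}})
print(handle(AgentRequest("third-party-agent", "mail.read", "summarize my inbox"), policy))
print(handle(AgentRequest("third-party-agent", "files.read", "find the Q3 deck"), policy))
```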
So, Satya, you know, clicking on this, you know, Mustafa has said that 2025 will be the year of infinite memory. And Bill and I have talked a lot, dating back to the start of this year, that we think the next 10x function, you know, it sounds like you agree on ChatGPT, is really, you know, this persistent memory combined with being able to take some actions on our behalf. So,
We're already seeing the start of memory, and I'm pretty convinced as well that in 2025, you know, it seems like that one's pretty well solved. But this question of actions: when am I going to be able to say to ChatGPT, book me the Four Seasons in Seattle next Tuesday at the lowest price?
Right. And Bill and I, you know, have gone back and forth on this one. And it would seem that computer use is the early, you know, test case for that. But do you have any sense? Is it, you know, does that seem like a hard one from here to you? Yeah. I mean, the most open-ended action space is still hard, but to your point, there are two things or maybe three things that are really exciting beyond, I'll just say,
I'm sure we'll talk about it, the scaling laws themselves and the capabilities of the raw models. One is memory. The other is tool use, or actions. And the other one I would say is even...
entitlements, right? Which is, you know, what can you like, you know, one of the most interesting products we have even is Purview inside of Microsoft, because increasingly, what do you have permissions to? What can you get? You know, you have to be able to access things in a safe way. Somebody needs to have governance on it and what have you. So if you put all those three things together and this agent is going to then be more governable,
and when it comes to actions, it is verifiable, and then it has memory, then I think you're off to a very different place for doing more autonomous work, so to speak. I still think one of the things I always think is I like this co-pilot as the UI for AI because even in a fully autonomous world,
From time to time, you'll raise exceptions, you'll ask for permission, you'll ask for invocation, what have you. And so, therefore, this UI layer will be the organizing layer. In fact, that's kind of why we think of Copilot as the organizing layer for work, work artifacts, and workflow. But
To your fundamental point, I don't think the models, like take even 4o, right? Not even going to o1. 4o is pretty good with function calling. So you can do significantly more in an enterprise setting, more so than consumer, because consumer web function calling is just hard,
where at least in an open-ended web, you can do it for a couple of websites. But once you say, "Hey, let's go book me a ticket on anything," and if there are schema changes on the backend and so on, it'll trip over.
And you can teach it that. That's where I think o1 can get better, if it's a verifiable, auto-gradable sort of process on rails. But I think we are maybe a year to two years away from doing more and more. But I think at least from an enterprise perspective,
going and doing, here's my sales agent, here's my marketing agent, here's my supply chain agent, which can do more of these autonomous tasks. We built 10 or 15 of them into Dynamics, right? Even looking into sort of my supplier communications and automatically handling them, updating my databases, changing my inventories, my supply. Those are the kinds of things that you can do today, I would say.
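As a rough illustration of why enterprise function calling is more tractable than open-ended web actions, here is a sketch of an agent tool call validated against a fixed, declared schema before execution. The tool names and fields are invented for illustration; they are not Dynamics or any specific model API.

```python
# Sketch of schema-constrained function calling for an enterprise agent.
# Tool names and fields here are invented for illustration only.
import json

TOOLS = {
    "update_inventory": {"required": {"sku", "quantity"}},
    "reply_to_supplier": {"required": {"supplier_id", "message"}},
}

def dispatch(call_json: str) -> str:
    call = json.loads(call_json)
    name, args = call["name"], call["arguments"]
    spec = TOOLS.get(name)
    if spec is None:
        return f"rejected: unknown tool '{name}'"
    missing = spec["required"] - set(args)
    if missing:
        # A fixed, declared schema lets bad calls be caught and retried,
        # instead of silently "tripping over" a backend change.
        return f"rejected: missing fields {sorted(missing)}"
    return f"executed {name} with {args}"

# Simulated model output; in practice this would come from a function-calling model.
model_call = json.dumps({"name": "update_inventory",
                         "arguments": {"sku": "A-113", "quantity": 40}})
print(dispatch(model_call))
```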
Mustafa made this comment about near infinite memory and I'm sure you heard it or hear it internally. Is there any clarification you can offer about that or is that more to come?
I think that, I mean, at some level, the idea that you have essentially a type system for your memory, right? That's the thing, right? Which is it's not like every time I start, you have to organize. I get the idea. He made it sound like you guys had an internal technical breakthrough on this front.
There's an open source project even. I think it's, I forget, it's the same set of folks who did all the TypeScript stuff who are working on this. So what we're trying to do is essentially take memory and
schematize it and sort of make it available such that you can go. Like each time I start, let's just imagine I'm prompting on some new prompt. I know how to cluster based on everything else I've done. And then that type matching and so on, I think is a good way for us to build up a memory system.
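A toy sketch of what a typed, schematized memory might look like: past interactions are stored as typed records, and a new prompt is clustered against those types before anything is retrieved. The record kinds and the keyword matcher below are invented for illustration, not the open-source project alluded to here.

```python
# Toy sketch of a typed ("schematized") memory store: each memory carries a
# kind, and a new prompt is clustered against those kinds before retrieval.
from dataclasses import dataclass

@dataclass
class Memory:
    kind: str       # e.g. "customer_meeting", "travel_preference"
    content: str

STORE = [
    Memory("customer_meeting", "Met Contoso in March; open question on data residency."),
    Memory("travel_preference", "Prefers aisle seats and early flights."),
]

KIND_KEYWORDS = {
    "customer_meeting": {"customer", "meeting", "account"},
    "travel_preference": {"flight", "hotel", "travel", "book"},
}

def recall(prompt: str) -> list:
    words = set(prompt.lower().split())
    kinds = {k for k, kws in KIND_KEYWORDS.items() if words & kws}
    return [m for m in STORE if m.kind in kinds]

# A new prompt pulls in only the memory kinds it clusters with.
for m in recall("prep me for the customer meeting tomorrow"):
    print(m.kind, "->", m.content)
```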
So shifting maybe to enterprise AI, Satya, you know, the Microsoft AI business is already reported to be about $10 billion. You said that it's all inference and that you're not actually renting raw GPUs to others to train on because your inference demand is so high.
So as we think about this, there's a lot of, I think, skepticism out there in the world as to whether or not major workloads are moving. And so if you think about the key revenue products that people are using today and how it's driving that inference revenue for you today and how that may be similar or different from Amazon or Google, I'd be interested in that.
Yeah, I think that's a good-- so the way for us this thing has played out is you got to remember most of our training stuff with OpenAI is more investment logic. So it's not in our quarterly results, it's more in the other income based on our investment. So that means the only thing that shows up in-- Or loss, is it maybe? Or loss. Other income or loss, right?
That is right. That is right. Right now, that's how it shows up. And so, most of the revenue or all the revenue is pretty much our API business or in fact, to your point, ChatGPT's inference costs are there. So, that's a different piece. And so, the fact is the big hit apps of this era
are what? ChatGPT, Copilot, GitHub Copilot, and the APIs of OpenAI and Azure OpenAI, right? So in some sense, if you had to list out the 10 most sort of hit apps, you know, these would probably be four or five of them. And so therefore, that's the biggest driver.
The advantage we have had, and OpenAI has had, is that we've had two years of runway, right? Pretty much uncontested. To your point, Bill made the point about, hey, everybody's awake. And it might be, but I don't think there will ever again be a two-year lead like this.
Who knows? You say that and somebody else drops some sample and suddenly blows the world away. But that said, I think it's unlikely that that type of lead could be established with some foundation model. But we have that advantage. That was the great advantage we've had with OpenAI. OpenAI was able to
to really build out this escape velocity with ChatGPT. But on the API side, the biggest thing that we were able to gain was, you know, take Shopify or Stripe or Spotify. These were not customers of Azure. They were all customers of GCP or they were customers of AWS. So suddenly we got access
to many, many more logos who are all "digital natives" who are using Azure in some shape or fashion and so on. That's one. When it comes to the traditional enterprise, I think it's scaling. Literally, people are playing with Copilot on one end and then are building agents on the other end using Foundry.
But these things are design wins and project wins, and they're slow, but they're starting to scale. And again, the fact that we've had two years of runway on it, I think I like that business a lot more. And that's one of the reasons why: the adverse selection problem here would have been lots of tech startups all looking for their H100 allocation in small batches, right?
Having watched what happened to Sun Microsystems in the dot-com era, I always worry about that, which is, whoa, you just can't chase everybody building models. In fact, even on the investor side, I think the sentiment is changing, which is now people are wanting to be more capital light and build on top of other people's models and so on and so forth. If that's the case--
you know, everybody who was looking for H100s will not be looking for them anymore. So that's kind of what we've been selective on. And your sense is that for the others, that training of those models and those model clusters was a much bigger part of their AI revenue versus yours.
I don't know. I mean, this is where I'm speaking for other people's results. I don't know. I mean, it's just, I go back and say, what are the other big hit apps? Right? I don't know what they are. Like, I mean, where do they, like, what models do they run? Where do they run them?
And I would imagine-- that's kind of-- I mean, obviously, Google's Gemini-- I don't know. When I look at the DAU numbers of any of these AI products, there is ChatGPT. Right. And then there is-- even Gemini, I'm very surprised at the Gemini numbers. I mean, obviously, I think it will grow because of all the inherent distribution. But it's kind of interesting to say that there are not that many. In fact, we talk a lot more about AI
scale, but there are not that many hit apps, right? There is ChatGPT, GitHub Copilot, there's Copilot, and there's Gemini. I think those are the four, I would say, in a DAU sense. Is there anything else that comes to your mind?
Well, I think there are a lot of these startup use cases that are starting to get some traction, kind of bottoms up. A lot of them build on top of Llama, but, oh, that's Meta. But if you said there are 10 more, what are the apps that have more than $5 million? Wow.
I think Zuckerberg would argue Meta AI certainly has more, et cetera. But I think you're right in terms of the non-affiliated apps, if you name them. And Zuck's stuff all runs on his own cloud. I mean, he's not running on public cloud. That's right, yeah.
Satya, on the enterprise side, obviously, the coding space is off to the races, and you guys are doing well, and there's a lot of venture-backed players there. On some of the productivity apps, I have a question about the copilot approach. And I guess Marc Benioff's been kind of obnoxiously critical on this front and called it Clippy 2.0 or whatever. Yeah.
Do you worry that someone might think kind of first principles AI from ground up and that some of the infrastructure, say in an Excel spreadsheet, isn't necessary to know if you did an AI first product? And the same thing, by the way, could be said about the CRM, right? There's a bunch of fields and tasks that may be able to be obfuscated for the user.
Yeah, I mean, it's a very, very important question. The SaaS applications or biz apps, so let me just speak of our own dynamics thing. The approach at least we're taking is, I think the notion that business applications exist, that's probably where they'll all collapse in the agent era. Because if you think about it, they are essentially CRUD databases with a bunch of business logic.
The business logic is all going to these agents.
And these agents are going to be multi-repo CRUD, right? So they're not going to discriminate between what the backend is. They're going to update multiple databases, and all the logic will be in the AI tier, so to speak. And once the AI tier becomes the place where all the logic is, then people will start replacing the backends, right? In fact, it's interesting.
As we speak, I think we are seeing pretty high rates of wins on Dynamics backends and the agent use. And we are going to go pretty aggressively and try and collapse it all, right? Whether it's in customer service, whether it is in, you know... by the way, the other fascinating thing that's increasing is not just CRM, but even what we call finance and operations,
because people want more AI native biz apps, right? That means the logic tier can be orchestrated by AI and AI agents. So in other words, co-pilot to agent to my business application should be very seamless. Now, in the same way, you could even say, hey,
Why do I need Excel? Like, interestingly enough, one of the most exciting things for me is Excel with Python is like GitHub with Copilot, right? That's essentially, so what we have done is when you have Excel, like this, by the way, would be fun for you guys, right? Which is you should just bring up Excel, bring up Copilot and start playing with it because it's no longer like, oh, you know, it is like having a data analyst, right?
And so it's no longer just making sense of the numbers that you have. It will do the plan for you, right? It will literally, like how GitHub Copilot Workspace creates the plan and then executes the plan. This is like a data analyst who is using Excel as a sort of row-column visualization and analysis scratch pad.
So it's kind of tool use. The Copilot is using Excel as a tool, with all of its action space, because it can generate code and it has a Python interpreter.
That is, in fact, a great way to reconceptualize Excel. And at some point you could say, hey, I'll generate all of Excel. And that is also true. After all, there's a code interpreter, right? So therefore you can generate anything. And so, yes, I think there will be disruption. But so the way we are approaching at least our M365 stuff is one is build Copilot as that organizing layer, UI for AI.
get all agents in, including our own agents. You can say Excel is an agent to my Copilot; Word is an agent. They're kind of specialized canvases, which is, I'm doing a legal document, let me take it into Pages and then to Word and have the Copilot go with it. Go into Excel and have the Copilot go with it. And so that's sort of a new way to think about the work and workflow.
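For a flavor of the "data analyst with a Python interpreter" pattern described above, here is a generic sketch: the agent writes a plan, then executes it with pandas over a spreadsheet-like table. This is not the actual Excel-with-Python or Copilot integration, just an illustration of the plan-then-execute loop.

```python
# Generic sketch of the "plan, then execute with a Python interpreter" pattern
# over spreadsheet-like data; not the actual Excel/Copilot integration.
import pandas as pd

# A small table standing in for an Excel range.
sales = pd.DataFrame({
    "region":  ["EMEA", "EMEA", "Americas", "Americas"],
    "quarter": ["Q1", "Q2", "Q1", "Q2"],
    "revenue": [120, 135, 200, 190],
})

# Step 1: the plan an agent might write down before touching the numbers.
plan = ["pivot revenue by region and quarter",
        "compute quarter-over-quarter growth",
        "flag declining regions"]
print("Plan:", plan)

# Step 2: execute the plan as code, the way a code-interpreter tool would.
pivot = sales.pivot_table(index="region", columns="quarter", values="revenue")
pivot["qoq_growth"] = (pivot["Q2"] - pivot["Q1"]) / pivot["Q1"]
print(pivot)
print("Declining:", list(pivot[pivot["qoq_growth"] < 0].index))
```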
You know, one of the questions I hear people wringing their hands about a lot today, Satya, is the ROI people are making on these investments. You know, you have over 225,000 employees.
Are you leveraging AI to increase productivity, reduce costs, drive revenues in your own business? If so, kind of what are the biggest examples there? You know, and maybe to a finer point on that, you know, when we had Jensen on, I asked him, you know, when he two or three X'd his top line, what did he expect?
his headcount to increase by. And he said 25%. And when asked why, he said, well, I have 100,000 agents helping us do the work. So when you two or three X your revenue for Azure, do you expect to see that similar type of leverage on headcount? Yeah, I mean, it's...
It's top of mind and top of mind for both us at Microsoft as well as our customers. Here's the way I come at it. I love this thing of, I've been going to school on learning a lot about
What happened to industrial companies with lean? It's fascinating. They're all GDP plus growers. It's unbelievable. The discipline they have in how the good industrials can literally say, "Hey, I'll add 200 to 300 basis points of tailwind just by lean," which is increased value, reduced waste. That's the practice.
So I think of AI as the lean for knowledge work. You know, we are really going to school on it, like, which is how do we really go look at, that's why I think, you know, the good old, you know, we remember in the 90s, we had all this business process reengineering, I think it's back.
in a new way, where people who can think end-to-end process flows say, hey, what's the way to think about the process efficiency? What can be automated? What can be made more efficient? So that's a little bit of it, I think. So customer service is the obvious one. Like, we are on course. We spend around $4 billion or so. This is everything from Xbox support to Azure support.
This is really, I mean, this is a serious one, because of the deflection rate on the front end. Then the biggest benefit is the agent efficiency, right, where the agent is happier, the customer is happier, and our costs are going down.
And so that's, I think, the most obvious place, and we have that in our contact center application, which is also doing super well. The other one is obviously GitHub Copilot. And with GitHub Copilot Workspace, right, that's the first place where even this agentic sort of side comes in, right? You go from an issue to a spec, right,
to a plan, and then multi-file edits, right? So it just completely changes the workflow for the eng team. As I said, and then O365 is the catch-all, right? So the M365 Copilot is where, I mean, just to give you a feel, like even my own, right? Every time I'm meeting a customer,
I would say the workflow of the prep of the CEO office has not changed since 1990. Basically, I mean, in fact, one of the ways I look at it is just imagine how did forecasting happen pre-PCs and post-PCs, right? There were faxes.
Then inter-office memos, and then PCs became a thing. And people said, hey, I'm just going to put an Excel spreadsheet in email and send it around, and people will enter numbers, and we will have a forecast.
The same thing is happening in the AI era right now all over the place, right? I prep for a customer meeting where I literally go into Copilot and I say, tell me everything I need to know about the customer. It tells me everything from my CRM, my emails, my Teams meetings, and the web, right? It grounds it.
I put it into pages, share it with my account team in real time. So just imagine the hierarchy, this entire thing of, oh, let me prepare a brief for the CEO goes away. It's just a query away. I generate a query, share a page if they want to annotate it. So I'm reasoning with AI and collaborating with my colleagues.
That's the new workflow. And that's happening all over the place. Somebody gave me this example from supply chain. Somebody said, supply chain is like a trading desk, except it doesn't have real-time information. That's kind of what it is. So you wait for the quarter to end, and then the CFO comes and bangs you on the head about all the mistakes you made. What if
that financial analyst could essentially be available to you in real time, saying, oh, you're doing this contract for this data center in this region, you should think about these terms. All that intelligence in real time is changing the workflow and the work artifact. So lots and lots of use cases all around. And I think your fundamental point is,
Our goal is to create operating leverage through AI. I think headcount will-- in fact, one of the ways I look at it and say is our total people costs will go down, our cost per head will go up, and my GPU per researcher will go up. That's the way I look at it. That makes sense.
Hey, let's shift ahead to something that you referenced earlier, just around what we're seeing out of model scaling and CapEx generally. You know, I've heard you talk about, you know, Microsoft's CapEx. I imagine in 2014, when you took over, you had no idea that the CapEx would look like it does today. In fact, you've said these companies increasingly look more like industrial companies in their CapEx
than traditional software companies. Your CapEx has gone from about $20 billion in 2020 to maybe as high as $70 billion in 2025.
you've earned a pretty consistent return on that CapEx. So there's actually a very high correlation when you look at your CapEx to revenue. Some people are worried that that correlation will break. And even you have said, maybe at some point CapEx is going to have to be spent ahead of the revenue. There may be an air pocket. We have to build for this
resiliency. So how do you feel about the level of CapEx? Does it cause you any sleepless nights? And when does it begin to taper off in terms of this rate of growth? Yeah, I mean, a couple of different things, right? One is
This is where being a hyperscaler is, I think, structurally super helpful, because in some sense we've been practicing this for a long time, right? Which is, you know, hey, data centers have 20-year life cycles; power you pay for only when you use it.
The kit depreciates over six years. You know how to sort of drive utilization up. And the good news here is it's capital intensive, but it's also software intensive. And you use software to bring the ROIC of your capital higher, right? That's kind of like when people even in the early days said, hey, how can a hyperscaler ever make money? Because what's the difference between the old hosters
and the new hyperscalers? It is software, right? And that, I think, is what's going to apply to this GPU physics even, right? Which is, yeah, you buy the leading node, you build it out. In fact, one of the things that's happening right now is what I'll call catch-up, right? Which is we built, after all, over the last 15 years, the cloud. Suddenly, a new meter showed up in the cloud. It's called the AI accelerator.
Because every app now needs a database, a Kubernetes cluster, and a model that runs on an AI accelerator. So if you sort of say, oh, I need all three, you suddenly had to build up these AI accelerators in order to be able to provision for all of these applications.
So that will normalize. So the first thing is the build-out will happen, the workloads will normalize, and then you will just keep growing like the cloud has grown. So that's sort of the one side of it. And that's where avoiding some of these adverse selection issues, making sure it's not just all supply side, you know, everybody's sort of building, only hoping demand will come, just making sure that there is real, diverse
demand all over the world, all over the segments. I watch for all of that. So I think that that's, I think, the way to manage the ROIC. And by the way, the margins will be different, right? This goes back to the very early dialogue we had on
when I think about the Microsoft Cloud, the margin profile of a raw GPU versus the margin profile of Fabric plus GPU or Foundry plus GPU or GitHub Copilot add-on to M365. So they're all going to be different. And so having a portfolio
matters here, right? Because if I look at even the Microsoft, why does Microsoft have a premium today in the cloud? We are bigger than Amazon, growing faster than Amazon with better margins than Amazon because we have all these layers. And that's kind of what we want to do even in the AI era. So I'd say there's been a lot of talk about model scaling. And obviously, there was talk
about kind of 10x-ing the cluster size that you might do over and over again, not once and then twice. And xAI is still making noise about going in that direction. There was a podcast recently where they kind of flipped everything on its head and said, well, if we're not doing that anymore, it's way better because we can just move on to inference, which is getting cheaper, and you won't have to spend all this CapEx. I'm curious.
Those are kind of two views of the same coin. But what's your view on large LLM scaling and training costs and where we're headed in the future? Yeah, I mean, you know,
I mean, I'm a big believer in scaling laws, I'll sort of first say. And in fact, if anything, the bet we placed in 2019 was on scaling laws, and I stay on that, right? Which is, in other words, don't bet against scaling laws. But at the same time, let's also be grounded on a couple of different things. One is...
these exponentials on scaling laws will become harder just because, as the clusters become bigger, the distributed computing problem of doing large-scale training becomes harder.
And so that's kind of one side of it. So there is, but I would just still say, and I'll let the OpenAI folks speak for what they're doing, but they are, you know, continuing to, you know, pre-training, I think, is not over. It sort of continues.
But the exciting thing, which, again, OpenAI and Sam have talked about, is what they've done with O1. This chain of thought with auto-grading is just fantastic. Basically, it is test-time compute, or inference-time compute, as another scaling law. So you have pre-training, and then you have effectively this test-time sampling that creates the tokens that can go back into pre-training, creating even more powerful models that then run on your inference. That, I think, is a fantastic way to increase model capability. And the good news of test-time or inference-time compute is that there are two separate things going on. Sampling is kind of like training when you're using it to generate tokens for your pre-training. But also, customers, when they are using O1, are using more of your meters, and so you are getting paid for it. So there is more of an economic model, right? Therefore, I like it. In fact, that's where I said I have a good structural position, with 60-plus data centers all over the world.
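As an illustration of the loop being described here, the following is a minimal sketch, in Python, of test-time sampling with auto-grading. The `generate` and `grade` functions are hypothetical stand-ins for a real model call and a real verifier; the point is only that extra inference-time compute buys multiple graded attempts, and that the best-scoring traces can be recycled as training data.

```python
import random

# Hypothetical stand-ins for a real model and an automatic grader/verifier.
def generate(prompt: str, temperature: float = 1.0) -> str:
    """Sample one chain-of-thought answer from a model (placeholder)."""
    return f"reasoning trace for: {prompt} (t={temperature}, seed={random.random():.3f})"

def grade(prompt: str, trace: str) -> float:
    """Auto-grade a trace, e.g. by checking its final answer (placeholder)."""
    return random.random()  # pretend score in [0, 1]

def best_of_n(prompt: str, n: int = 8, threshold: float = 0.8):
    """Test-time compute: spend n samples per prompt, keep graded winners.

    High-scoring traces can be both (a) returned to the user and
    (b) recycled as synthetic data for further pre- or post-training.
    """
    scored = [(grade(prompt, t), t) for t in (generate(prompt) for _ in range(n))]
    best_score, best_trace = max(scored)
    keep_for_training = [t for s, t in scored if s >= threshold]
    return best_trace, keep_for_training

if __name__ == "__main__":
    answer, new_training_data = best_of_n("plan a 3-leg trip under $900")
    print(answer)
    print(f"{len(new_training_data)} traces kept for the next training run")
```

The economics follow from the same loop: every extra sample is billed inference, while the retained traces feed the next round of model improvement.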
It's a different hardware architecture for one of those types of scaling versus the other, for the pre-training versus- Exactly. I think the best way to think about it is that it's a ratio. Going back to Brad's thing about ROIC, this is where I think you have to really establish a stable state. In fact, whenever I've talked to Jensen, I think he's got it right, which is, look, you want to buy some every year, not buy it all at once.
Think about it. When you depreciate something over six years, the best way is what we have always done, which is you buy a little every year and you age it, you age it, you age it. You use the leading node for training, and then the next year it goes into inference.
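To make the "buy a little every year and age it" idea concrete, here is a toy Python calculation under assumed numbers: equal yearly tranches, six-year straight-line depreciation, the newest tranche on training and older tranches on inference. None of the figures are Microsoft's; it is purely illustrative arithmetic.

```python
# Toy model of a GPU fleet bought in equal yearly tranches and aged through
# a six-year straight-line depreciation schedule. The newest (leading-node)
# tranche does training; older tranches serve inference. Numbers are made up.
LIFE_YEARS = 6
YEARLY_SPEND = 10.0  # capex per tranche, in arbitrary units

def fleet_snapshot(current_year: int, first_year: int = 0):
    training, inference, book_value = [], [], 0.0
    for bought in range(first_year, current_year + 1):
        age = current_year - bought
        if age >= LIFE_YEARS:
            continue  # fully depreciated, off the books
        book_value += YEARLY_SPEND * (1 - age / LIFE_YEARS)
        (training if age == 0 else inference).append(bought)
    return training, inference, book_value

for year in range(8):
    train, infer, nbv = fleet_snapshot(year)
    print(f"year {year}: training tranches={train}, "
          f"inference tranches={len(infer)}, net book value={nbv:.1f}")
```

After the first six years the snapshot stabilizes: one tranche training, five serving inference, and a roughly constant net book value, which is the "stable state" being described.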
And that's sort of the stable state I think we will get into across the fleet, for both utilization and the ROIC, and then demand meets supply. And basically, to your point about everybody saying, oh, wow, have the exponentials stopped? One of the other things is that economic realities will also sort of set in, right? I mean, at some point, everybody will look and say, what's the economically rational thing to do? I agree.
which is, hey, even if I double capability every year, what if I'm not able to sell that inventory? And the other problem is the winner's curse, right? Which is,
You don't even have to publish a paper. The other folks just have to look at your capability and do a distillation, and it's just impossible to stop. It's like piracy: you can have all kinds of terms of use, but it's impossible to control distillation. That's one. The second thing is, you don't even have to do that. You can just reverse engineer that capability and do it in a more compute-efficient way.
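A minimal sketch of why API-level distillation is so hard to police, with hypothetical `query_teacher` and `fine_tune_student` placeholders; nothing here is a real vendor API. The point is simply that observable outputs alone, collected at scale, are enough to transfer much of a frontier model's capability to a smaller student.

```python
import json

# Hypothetical stand-ins: any hosted frontier model reachable over an API,
# and any local supervised fine-tuning routine for a smaller student model.
def query_teacher(prompt: str) -> str:
    """Call the stronger model's public endpoint (placeholder)."""
    return f"teacher answer to: {prompt}"

def fine_tune_student(dataset_path: str) -> None:
    """Run supervised fine-tuning on the collected pairs (placeholder)."""
    print(f"fine-tuning student on {dataset_path}")

def distill(prompts: list[str], dataset_path: str = "distill.jsonl") -> None:
    """Black-box distillation: no paper or weights needed, only API access.

    The student never sees the teacher's parameters; it imitates the
    teacher's observable outputs, which is why terms of use are hard to
    enforce technically.
    """
    with open(dataset_path, "w") as f:
        for p in prompts:
            f.write(json.dumps({"prompt": p, "completion": query_teacher(p)}) + "\n")
    fine_tune_student(dataset_path)

if __name__ == "__main__":
    distill(["summarize this contract", "write a SQL query for churn"])
```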
And so, given all this, I think there will be a governor on how much people will chase right now. A little bit of it is that everybody wants to be first, and that's great. But at some point, all the economic reality will set in on everyone,
and the network effects are in the app layer. So why would I want to spend a lot on some model capability if the network effects are all in the app layer? What I heard you say, I believe-- So Elon has said that he's going to build a million-GPU cluster. I think Meta has said the same thing.
I think the pre-training-- I think he said 200 and then he kind of joked about a million, but I could be wrong. I think he joked about a billion. But the fact of the matter is: versus the start of the year, Satya, based on what you've seen around pre-training and scaling, have you changed your infrastructure plans around that? And then I have a separate question with regard to O1.
I am building to what I would say is a little bit of the 10x point, which is, hey, how do you-- we can argue the duration. Is it every two years? Every three years? Every four years? There is an economic model. This is where I think you need a little bit of a disciplined way of thinking about how you clear your inventory,
such that it makes sense, right? Or the other way to look at it is the depreciation cycle of your kit, right? There is no way you can just sort of buy and buy, unless you find that the physics of the GPU works out where suddenly it flows through my P&L at the same or better margin than a hyperscaler. That's simple. So that's kind of what I'm going to do. I'm going to keep going and building, basically, to: hey, how do I drive inference demand and then keep increasing my capability
and be efficient at it. Now, Sam may have a different objective, and he's been open about it, right? He may say, hey, I want to build because I have deep conviction on what AGI looks like or what have you. And so be it. So that's where a little bit of our tension is, even. But to clarify something, I heard Mustafa say on a podcast that Microsoft is not
going to engage in the biggest model training competition that's going on. Is that fair? Well-
What we won't do is do it twice, right? Because after all, we have the IP from-- Right, right. Fair, fair. It would be silly for Microsoft today, given the partnership with OpenAI to do two unnecessary-- You're not going to create a redundant training set. Correct. So we are very-- and that's why we have-- and by the way, that's the strategic discipline we have had, which is-- that's why I always stress to Sam, we bet the farm on OpenAI.
and said, hey, we will concentrate our compute. And we did it because we had all the rights to the IP. And so that's sort of the give-gets on it, and we feel fantastic about it. And so then what Mustafa is basically saying is, hey, a lot of the focus on our end will also, in fact, be on post-training and even on the verification or what have you. So that's a big thing. So we'll focus a lot of our compute resources on adding more model adaptations and capabilities that make sense, while also having a principled pre-training team that gives us the capability internally to do things. We anyway have different model weights and model classes for different use cases that we will continue to
go ahead and develop as well. - Does your answer to Brad's question about the balancing of GPU ROI also answer the question as to why you've outsourced some of the infrastructure to CoreWeave in that partnership that you have?
That we did because we all got caught with the hit called ChatGPT and OpenAI APIs. Just too much, too much. Yeah, we were completely-- I mean, it was impossible. There's no supply chain planning I could have done in-- Makes sense. What is it? None of us knew what was going to happen, what happened in November of '22.
That was just a bolt from the blue, right? So therefore, we had to catch up. So we said, hey, we're not going to worry too much about inefficiency. So that's why, whether it's CoreWeave or many others, we bought all over the place. Fair enough. Makes sense.
Yeah, and that was a one-time thing, and now it's all catching up; that was just more about trying to get caught up with demand. Are you still supply constrained, Satya? I am power constrained, yes. I am not chip supply constrained. We were definitely constrained in '24.
What we have told the Street is that that's why we are optimistic about sort of the first half of '25, which is the rest of our fiscal year.
And then after that, I think we'll be in better shape. Going into '26 and so on, we have a good line of sight. So I'm hearing that with respect to this level-two thinking, the O1 test-time compute, the post-training work that's being done on it is leading to really positive outcomes. And when you think about that, that's also pretty compute intensive. Right.
Because you're generating a lot of tokens, you're recycling those tokens back into the context window, and you're doing that time and time again. And so that compounds very quickly. Jensen said he thought, looking at O1, that demand for inference was going to go up a million or a billion X, that it was going to go up dramatically. In that regard, do you feel like you have the right long-term plan to scale inference to keep up with these new models?
Yeah, I mean, I think there are two things there, Brad. In some sense, it's very helpful to think about the full workload. Like, in the agentic world, you have to have the AI accelerator, but one of the fastest-growing things at OpenAI itself, in fact, is the container service. Because after all, these agents need a scratch pad for doing some of that auto-grading even,
to generate the samples, right? And so that is where they run a code interpreter. And that, by the way, is a regular Azure Kubernetes cluster. So in an interesting way, there's a ratio of what is regular Azure compute, its nexus to the GPU, and then some data service. So to your point, when we say inference, it's not separate; people think about AI as separate from the cloud, but AI is now a core part of the cloud. And I think in a world where every AI application is a stateful application, an agentic application where the agent performs actions, then the classic app server plus the AI app server plus the database are all required.
And so I go back to my fundamental thing, which is, hey, we built these 60-plus AI regions, I mean, Azure regions, and they all will be ready for full-on AI applications. And that's, I think, what will be needed.
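A rough sketch of that "full workload" shape, purely illustrative: a model call (the AI accelerator), model-written code executed in a sandbox (ordinary container compute, which in practice would be a per-session container on a Kubernetes cluster rather than a local subprocess), and a database that keeps the agent stateful. All names and the sandboxing approach are assumptions, not Azure or OpenAI APIs.

```python
import sqlite3
import subprocess

# Hypothetical pieces of the "full workload": a model endpoint (GPU-backed),
# a code-interpreter scratch pad (ordinary container compute), and a database
# for agent state. Endpoint names and the sandbox command are illustrative.
def call_model(prompt: str) -> str:
    """Ask the model for the next step; in practice an HTTPS call to an
    inference endpoint backed by AI accelerators."""
    return 'print(41 + 1)  # model-proposed scratch-pad code'

def run_in_sandbox(code: str) -> str:
    """Execute model-written code in an isolated interpreter; in practice a
    per-session container on a Kubernetes cluster, not a local subprocess."""
    out = subprocess.run(["python", "-c", code], capture_output=True, text=True)
    return out.stdout.strip()

def remember(db: sqlite3.Connection, step: str, result: str) -> None:
    """Persist agent state so the application stays stateful across turns."""
    db.execute("INSERT INTO steps (step, result) VALUES (?, ?)", (step, result))
    db.commit()

if __name__ == "__main__":
    db = sqlite3.connect(":memory:")
    db.execute("CREATE TABLE steps (step TEXT, result TEXT)")
    code = call_model("check the arithmetic in this expense report")
    remember(db, code, run_in_sandbox(code))
    print(db.execute("SELECT * FROM steps").fetchall())
```

Each of the three pieces maps to a different meter: GPU inference for the model call, regular compute for the sandbox, and a data service for the agent's memory.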
That makes a lot of sense. So let's talk a little bit-- we've talked around OpenAI a lot during this conversation, but you're managing this balance between a huge investment there and your own efforts. At Ignite, you showed a slide highlighting the differences between Azure OpenAI and OpenAI Enterprise.
And a lot of those were about the enterprise grade, you know, things that you bring to the table. So when you look at that tension, you know, the competition that you have with OpenAI, do you think about them as ChatGPT is likely to be that winner on the consumer side? You'll have your own consumer apps as well. And then you'll divide and conquer when it comes to enterprise. How do you think about competing with them?
The way I think about it at this point: OpenAI is a very at-scale company, right? It's a really very successful company with multiple lines of business, if you will, and segments and what have you. And so I come at it in a very principled way,
like I would with any other big partner, right? I think of them as, hey, as an investor: what are their interests and our interests, and how do we align them? I think of them as an IP partner, because we give them systems IP and they give us model IP, right? So that's another side of it, where we are very deeply interested in each other's success.
The third is I think of them as a big customer. And so therefore, I want to serve them like I would serve any other big customer. And so
then the last one is the coopetition, whether it's Copilot in the consumer space, Copilot with M365, or whatever else; we say, hey, where is the coopetition? And that's where I look at it and say,
Ultimately, these things will have some overlap. But also in that context, the fact that they have the Apple deal is, in some sense, accretive for the MSFT shareholder,
right? Even on the APIs, to your point about the API differences: hey, you choose, right? Customers can choose which API they front with, and there are differences, right? Azure has a particular style, and if you're an Azure customer and you want to use other services of Azure, then it's easiest to have an Azure-to-Azure match. But if you are on AWS and you want to use
just the API in a stateless way, great, just use OpenAI directly. So I think, in an interesting way, sometimes having these two types of distribution is also helpful to the MSFT cause. Satya, the...
I would say the kind of curious part of the Silicon Valley community, and even, writ large, the entire business community, is, I think, infatuated with the relationship between Microsoft and OpenAI. I was at DealBook last week, and Andrew Sorkin pushed Sam
really hard on this. I imagine there's a lot you can't say, but is there anything you can say? There's supposedly a restructuring, you know, a conversion to for-profit. I guess Elon's launched a missive in there as well. What can you tell us?
Yeah, I mean, I think those, Bill, are obviously all for the OpenAI board and Sam and Sarah and Brad and that team to decide what they want to do. And we want to be supportive. I mean, so this is where we are an investor. I would say the one thing that we care deeply about is OpenAI continues to succeed, right? I mean, it's in our interest.
And I also think it's a company that is an iconic company of this platform shift and the world is better with OpenAI doing well. And so therefore, that's sort of the fundamental position.
Then after that, the tension, to your point, comes from, like in all of these partnerships, some of it is that competition tension. Some of it is, you know, Sam is somebody who is an unbelievable entrepreneur with a great amount of vision and ambition and
the pace with which he wants to move. And so therefore, we have to balance that all out: what he wants to do, I have to accommodate so that he can do what he needs to do, and he needs to accommodate the discipline that we need on our end, given the overall constraints we may have. And so I think we'll work it out. But I mean, the good news here, I think, is in this construct--
We have come a long way. I mean, these five years have been great for them, and it's been great for us. And at least for my part, I'm going to keep going back to that, and I want to prolong it as long as I can. It will only behoove us to have a long-term stable partnership.
When you think about, you know, the separate funding and untangling the two businesses, Satya, are you guys motivated to do that relatively quickly? I've talked about thinking that the next step for them would be to become a public company. It's such an iconic business, an early leader in AI. Yeah.
Is that the path that you see, you know, for these guys on the way forward? Or do you think it stays kind of in the relationship that we have today?
And that's the place where, Brad, I want to be careful not to overstep, right? Because in some sense, we're not on the board. We're investors like you. And at the end of the day, it's their board's and their management's decision. And so at some level, I'm going to take whatever their cues are. In other words, I'm very clear that I want to support them in whatever decision they make.
To me, perhaps even as an investor, it's that commercial and IP partnership that matters the most.
We want to make sure we protect our interests in all of this. And if anything, bolster them going forward. But I think at this point, people like Sarah and Brad and Sam are very, very smart folks on this. And what makes the most sense for them to achieve their objectives on the mission is what we would be supportive of. Well, maybe we should wrap. And thank you for so much time today. But
I want to wrap on this topic of open versus closed.
and how we should cooperate to usher in safe AI. And so maybe I'll just leave it open-ended to you. Talk to us a little bit about how you think about some of these differences and debates and the importance of doing this. And one anecdote I would just throw out there: Reuters recently reported that Chinese researchers developed an AI model for potential military use on the back of Meta's Llama,
right? And, you know, there are a lot of supporters of open source, like Bill and me, but we've also heard critics, and you said everybody can distill a model out there. So we are going to see some of these put to uses that we're not going to be happy about. So how do you think about us coming together, really, as a nation and as a collection of companies to usher in safe AI? Yeah, I think two things. I think that
I always have thought of open source versus closed source as two different tactics in order to create network effects, right? I've never thought of them as...
just religious battles. I've thought of them as more like, hey, two different tactics. I mean, that's why I think what Meta and Mark are doing is very smart, which is, in some sense, he's trying to commoditize even his complement. It makes a ton of sense to me. If I were in his shoes, I would do that, which is get the entire world converged. I think he talks openly and very eloquently about how he wants to be the Linux of LLMs. And I think it's a beautiful...
In fact, there is even a model there, I think, going back to some of your economics questions: game-theoretically, a consortium could quite frankly be a superior model to any one player trying to do it.
Not unlike the Linux Foundation, where the contributions were mostly apex contributions; I always say Linux wouldn't have happened but for them, and in fact, Microsoft is one of the largest committers to Linux.
And so was IBM, and so was Oracle, and what have you. And I think there may be a real place for that, and open source is a beautiful mechanism for it, right? Which is when you have multiple entities coming together and so on. And it's a smart business strategy.
And then closed source may make sense in other cases; after all, we have had lots of closed-source products. Then safety is an important but orthogonal issue, because regulations will apply and safety will apply to both sides. And one could make arguments that, hey, if everybody is inspecting it, there will be more safety on one side or the other. So I think these are perhaps best dealt with
in capitalism, at least, by having multiple models and letting there be competition, and different companies will choose different paths. And then we should be pretty hardcore on safety, and the governments will demand that. I think in tech now, there's no chance of saying, hey,
we'll see what happens with the unintended consequences later. I mean, no government, no community, no society is going to tolerate that. So therefore, these AI safety institutes all over will hold everyone to the same bar,
and also on national security, to your point: if there is national security leakage or challenges, people will worry about that too. Therefore, I think states and state policy will have a lot to say about which of these models and what the regulatory regime will look like. Well, it's hard to believe that we're only 22 months into the post-ChatGPT era.
But it's interesting, when I reflect back on your framework around phase shifts, you have put Microsoft in a really good position as we emerge into the age of AI. And so congrats on the run over the last 10 years. It's been really a sight to behold. But it's great; I think both Bill and I get excited when we see the leadership,
you, Elon, Mark, Sundar, et cetera, really forging ahead for Team America around AI. I think we both feel incredibly optimistic about how we're going to be positioned vis-a-vis the rest of the world. So thanks for spending some time with us. Yeah, can't thank you enough for the time, Satya. Really appreciate it. Thank you so much. Thank you, Brad and Bill. Take care, Satya. Cheers. Cheers. Cheers.
As a reminder to everybody, just our opinions, not investment advice.