Elon Musk predicts that every cognitive task that can be done by a human will be able to be done by AI within three to four years, potentially replacing trillions of dollars in human labor.
Interest rates and inflation are seen as potential risks because unexpected rises in rates, combined with high valuations, could dampen market performance. The 10-year rate has already risen by nearly 100 basis points, contrary to expectations of rate cuts.
The MAG-5 companies (Microsoft, Meta, NVIDIA, Amazon, and Google) are expected to see earnings growth drop from 44% in 2024 to 21% in 2025, reflecting a deceleration as they face harder comparisons.
Large tech companies are increasing their CapEx to invest in AI infrastructure, particularly in training and inference compute. This is driven by the belief in high returns on these investments and the need to stay competitive in the AI race.
Inference scaling is crucial as it is expected to increase by a million to a billion times, requiring significant new compute infrastructure. This scaling is driven by advancements in chain-of-reasoning models, which are compute-intensive but essential for AI's future.
There are concerns that federal spending, combined with potential tax cuts, could lead to higher inflation and interest rates. The market is skeptical about Congress's ability to cut deficits, which could further pressure the economy.
State-by-state AI regulation could create a patchwork of rules, increasing complexity and slowing innovation. This could harm the U.S.'s competitive position against China, as it would add regulatory hurdles for AI companies.
Reasoning models are expected to see significant scaling advantages in 2025 and 2026, with breakthroughs in coding and enterprise applications. These models are becoming more capable, blending pre-trained and O-series models for broader use cases.
Elon said he thinks that every cognitive task that can be done by a human will be able to be done by an AI within three to four years. Yeah. If you believe that to be true, the value of all that human labor that you're replacing is measured in trillions. ♪
Good to see you, Brad. Happy New Year. Happy New Year. You probably can't tell, Bill, I have on a blue shirt today, not a black shirt. I have on my green pants a little Notre Dame spirit for the big Notre Dame game tonight. I know you have a big Texas game coming up. Yeah, that's tomorrow in Dallas. We'll have to see if both those teams could win and advance. That'd be pretty spectacular. You and I were talking about some of this insanity going on in L.A., and I know you're
We both have a friend who's lost a house and there's a lot of this...
debate going on online, whether or not any of the policies that we've had are proximate causes, at least to the severity of the thing that's going on. And one of the things that I know you and I both really appreciate is, as we're entering 2025, just the opportunity, I feel like the conversation going on online, the debate going on online is more robust than it's been
And I think it rubs a lot of people the wrong way, but I tend to be in the camp that the more open debate and accountability, it just the better. Right. And no matter what side of the political, you know, aisle you end up being on, like we should all be in favor of just like doing better.
And when you look at how horrific these scenes are in L.A., it just you have to ask the question, like, what could we have done better? Yeah. And I totally agree with you. And I have this framework in my head that I think about in relation to these types of situations, which is, you know, a lot of people, I think, evaluate.
politicians and policies based on what they think the intent of the decision was. And they fail to follow up and then see if the output or the outcome is identical to what they thought the original intent was. And I think all too often the original intent of something may have sounded good or
or you might be voting for someone because you agree with their point of view. But if the policy fails to achieve that, or in many, many cases achieves the exact opposite of that, then you really have to ask yourself, what's the point? And so, yeah, I hope that this type of accountability and transparency, you know, shining a light in
could lead to better outcomes in the future. And there's always trade-offs, obviously. Like, you can't have everything. Yeah, for sure. For sure. Well, I thought we could do something unique and different if you're up for it. What?
related to this time of year. So obviously, large investment funds like yourself are typically operating on a calendar cycle. Sometimes, you know, the reporting is certainly looked at annually and sometimes even some of the fees and whatnot are calculated on an annual basis. And so I'm sure that creates an annual cycle for you. And
And as you've been going through that process, I thought it'd be really cool to, you know, expose the listeners both to the analysis that you're doing now and a window into that process. And so, you know, how does someone, a large professional investor like yourself, think about this time of year and specifically what are you looking at now and give everyone a peek inside? Yeah.
Yeah, no doubt about it. And you know, we've been, I think, having this conversation for 20 years. But, you know, half of our business, the venture capital part of our business has a much longer cycle time. What informs how we think about the annual cadence of our public market positioning, which is the other half of our business, is a lot of times these big trends that, you know, we're having debates on. We ended last year in this great conversation with Dylan,
Right. About this tension or debate in the world about how much compute the world needs, all the investment going on, et cetera, that we'll get into today. But it's all those things, including a major political change.
right, going on in this country that all impact how we think about this year. And so, you know, at the end of every year, I not only look at kind of errors and omissions, like what could we have done better in our public trading and our public investing for 2024? And we had a great year and I'm proud of the team or whatever, but there's a lot we can always do better.
And then I try to look ahead, not at January, but I try to put myself in the shoes of where I think the world will be in December of the following year.
So you really have to get in the habit of being an analyst, a forecaster, a prognosticator, thinking about the big trends, but also thinking about all of the competing things going on, interest rates, inflation, et cetera, the backdrop. And so we go through that exercise. I journal to myself, and I make everybody on the team do this independently, Bill, and
And then we get together. And this started at the beginning of December, but certainly informs how we're positioned at the start of the year. What's top of mind? What's top of mind? What are the big things you're thinking about, the big blocks? There are a lot of exciting things that can cause you to believe that this is the golden age, right? That 2025 could just be a really phenomenal year. Lower taxes, lower regulations, GDP growth is strong, employment looks good.
And at the same time, we have these megatrends that have been percolating for a decade that really seem to be bearing fruit, right? Obviously, AI being the biggest megatrend, but ancillary things, you know, like robotics and self-driving cars and all these things that are yielding productivity improvements to the economy, which, as we know, is a huge driver of GDP, right?
So there's a lot to be positive about. On the other side, you got to look at, you know, around the world and you say, well, we got a situation in the Middle East. Could that get better or worse? The situation with China, better or worse? Situation in Ukraine, better or worse? Those things can break both ways, but I could make an argument how all those things get better. And so I think on the one hand,
There's a lot of enthusiasm. At the same time, valuations, Bill, are quite high relative to where we started 23 and relative to where we started 24. So the world assumes that things are going to be better.
And as we look at it, I kept asking the team, what is the boogeyman? What's the thing that we're not thinking about that could go wrong here? And if you look at the start of this year, I would say that the boogeyman that's out there, you and I have a lot of smart friends, and some of them have been shorting the US 10-year. That is, they expect rates to go up. And I really think that it's this inflation and higher rates,
combined with higher valuations that could put a damper on at least the market in the first part of this year. And again, this is just like one of the potential outcomes, but we've seen the 10-year go up almost 100 basis points
And people didn't expect that. They thought the Fed was going to be cutting rates and the 10-year would come down and that would lead to more housing sales, more sales of a lot of things, be a tailwind for the economy. The exact opposite has happened. And so I think we have to explore why is that happening? Where do we expect that to go? So that would, I guess, be the overall framework. Let's dive in on the positive side and specifically the enthusiasm around AI. I would...
share with you, like just as an observer rather than as a participant. When I listen to you talk, when I just listen to the short interview that Elon did at CES, I'm hearing a level of enthusiasm that feels somewhat unprecedented to me, like over the past three decades.
in terms of just how much excitement there is about what's possible and how much investment's going into it. Why wouldn't that just be, well, first of all, do you agree with that? And then second, how do you interpret it from an investor point of view?
Well, certainly there's enthusiasm, but I would say there's plenty of wall of worry as well. For every conversation I have with people in Silicon Valley who say they're going to invest more in compute, this is the year of agentic AI that we may see AGI in the next 18 months, I have another call saying,
maybe with a big fund in New York who says, listen, this is going to lead to the telecom bust of 2000. This is Cisco all over again. There's a bubble, et cetera. And so there are the words that everybody says, Bill, and the byproduct of those words is the valuations for companies. So maybe the place to start is just like looking at where the valuations are for big tech as we start the year. And when I look at that, the S&P 500-
recently peaked at about 22 times. So in Q4 21, it was about 22 times earnings. It troughed in 22 at 16 times. And now we're at 23 times. So the S&P 500 as a whole is actually near its recent peak valuation. If you look at companies like Meta, they're trading at 23 times.
If you look at Google, trading about 21 times. If you look at Nvidia, trading
It peaked at 66 times. It's now at 36 times consensus estimates. And I would tell you closer to 28 or 30 times our estimates. And so I look at the valuations for these businesses, Bill. But growing at a rate that's unprecedented for a company of this size. Correct. Unprecedented. Correct. And so like, and we'll put all these charts in, but at the highest level, I look at that and I say, is that a bubble?
right? Does this feel like a bubble? I say, no, it's not as good a deal as you got at the start of 23. And it's not as good a deal as you got at the start of 24. But if we achieve the growth expectations we expect out of these businesses, then I don't think it's overly onerous. But the question really is, and you brought up, what does the market expect in terms of earnings growth? First, what was earnings growth last year?
And then what does the market expect for earnings growth this year? So here's a chart that shows the mega cap earnings growth broken out from the rest of the S&P 500. And in 2024, the big five, so in this case, Microsoft, Meta, NVIDIA, Amazon, and Google grew earnings at 44% in 2024. Unbelievable.
Stocks were up a lot, but, Bill, earnings were up tremendously. 44% is extraordinary. If you look at the S&P 500 as a whole, because of the tailwind of the MAG5, the S&P 500's earnings as a whole were up 9%. If you look at the other 495 companies in the S&P, their earnings only grew 2%.
So it was pretty anemic earnings growth in the S&P 500 in 2024 relative to the MAG-5. If you look at it now, what's expected, what the consensus forecast is for 2025, we see the MAG-5's earnings growth comes from 44% down to 21%. How much of that has already started, Brad? Do you know what Q4 was?
That deceleration. I'm just curious. There's certainly a deceleration in these businesses. Remember, they're coming off of easy comps. And by the time you get to 2025, it's very hard comps. And still growing 20% on the massive, massive earnings and revenue base of these companies. It's pretty extraordinary. Five years ago, nobody thought they would be growing that fast.
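For intuition, here is a rough back-of-envelope on how those component growth rates blend into the index-level figure. The MAG-5 earnings weight is an assumption chosen to be consistent with the numbers quoted above, not a figure from the episode's chart.

```python
# Rough sketch, not the chart's data: blend MAG-5 earnings growth with the
# other 495 names to see how the index-level ~9% figure can emerge.
mag5_growth = 0.44      # 2024 MAG-5 earnings growth quoted above
rest_growth = 0.02      # 2024 earnings growth for the other 495 names
mag5_weight = 0.17      # assumed MAG-5 share of S&P 500 earnings (hypothetical)

index_growth = mag5_weight * mag5_growth + (1 - mag5_weight) * rest_growth
print(f"Implied 2024 S&P 500 earnings growth: {index_growth:.1%}")   # ~9%

# Same blend with the 2025 consensus figures (21% MAG-5, 11% for the rest, discussed below)
blend_2025 = mag5_weight * 0.21 + (1 - mag5_weight) * 0.11
print(f"Implied 2025 blend: {blend_2025:.1%}")                        # ~13%
```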
But the interesting thing here that kind of worries me a little bit about the market, Bill, is that we expect non-tech earnings to grow from 2% to 11%. So a big acceleration in non-tech earnings. And so I asked myself, well, where does everybody think this is going to come from? And the next slide is the answer to that. So...
The yellow shaded box shows you that people expect healthcare earnings to go from plus 4% to plus 20%. They expect industrials to go from negative 4% earnings growth to plus 16%. Materials to go from negative 10% to plus 17%. Those are huge turnarounds, right? Huge accelerations in earnings. Now, a big component of that is associated with the tax cuts, but I think that only accounts for about 30%
of that bump in earnings growth. So if you said to me again, Brad, where is there a potential boogeyman? Well, if those earnings aren't delivered, if you don't see this acceleration in non-tech earnings in the S&P 500, then it's hard for me to see how you can have a big year. I don't think we're going to see a year that looks anything like 2023 or 2024 anyway. But if you just said, can we get to 10% or 15% on the market?
A couple of things have to happen. You have to see this acceleration in earnings, number one. And number two, you really have to see interest rates not go above 5%. I think if interest rates go up a lot, that becomes a big albatross
on market performance in 2025. Let's come back to the industry thing. I want to stick with the large cap companies for a second. You and I were having a discussion about their CapEx trends. And this is something that's remarkably different than any window of past tech investing, right? This level of CapEx wasn't a part of the equation other than
you know, maybe in a manufacturing company. So why don't you set this up? I know you put together a slide, like what's happening with CapEx in these large companies?
CapEx was growing before the ChatGPT moment, but nothing like we've seen it grow over the last two years and we're forecasting it to grow for the next couple of years. So this first slide just shows the combined CapEx of Google, Meta, Amazon, Microsoft, Apple, and Oracle, both what they did in 2024, which you can see that huge step up in 2024. Yeah, from about 160 billion to 260-ish billion.
Exactly. And Bill, it consumed the vast majority of the incremental free cash flow of those businesses. So they clearly are all betting and believe that those are really NPV positive investments and we can dig into that.
So you heard Satya say, I think, in their most recent quarter they did about $20 billion in CapEx, Bill. And you just, you know, he said earlier in the year, or a couple of days ago, which caused a bit of a kerfuffle, that they expected to spend $80 billion in CapEx this year. Which is run rate, basically. Yeah, it's just the run rate that he was on. But it is, when people hear the number, it's still pretty shocking. Yeah, on the CES interview, Elon goes crazy.
he said, yeah, that's a big number in anyone's universe. Coming from his point of view, he's like, yeah, that's big. Right. And so if you go company by company,
The one thing, you know, there's debate. A lot of people were debating. We talked about it with Dylan in December. Will people actually spend this money, right? Like, will they actually buy more compute in 2025? And I think what we can put to bed based upon the conversations we've had, what people are hearing out there, the conversations at CES, there are going to, these investments are going to show up in 2025, right? And the reason they're showing up, I believe, is,
is that people are still seeing a lot of returns on training. We'll get into the scaling laws, these different scaling laws that they're building for. So whether that's pre-training, whether that's post-training, whether that's test time scaling, they're investing in making the models better. But the other one, which is coming on really strong, remember Jensen in our pod with him, 40% of your revenue today is inference. But inference is about ready...
because of chain of reasoning. - Yeah. - Right? - It's about to go up by a billion times. - Right, by a million X, by a billion X. - That's right. That's the part that most people haven't completely internalized. This is that industry we were talking about, Buck. This is the industrial revolution.
That's the production of intelligence. That's right. Right? Yeah. It's going to go up a billion times. He expected inference to go up by 100,000 times, a million times, maybe even a billion times. So inference is scaling very, very quickly. And that's all new compute that's got to get built out to support that inference. So looking at this chart that you put together, you've got –
It looks like Meta and Microsoft popping above 25% of revenue on CapEx. Yes. You've got...
Google and Amazon kind of in the middle in this 10% to 15% range. And then Apple, ironically, is falling below 5%. Which makes sense, right? Because they're not, you know, Apple is not investing in frontier models. And then the other thing is those same top two are bouncing up against 100% of free cash flow, right? Right.
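For readers following along without the chart, here is a minimal sketch of how the two ratios being described are computed. The dollar figures are hypothetical placeholders, and the chart may define the cash-flow denominator somewhat differently (for example, operating cash flow rather than free cash flow).

```python
# Hypothetical figures only, to show how the two chart ratios are computed.
def capex_ratios(capex, revenue, cash_flow):
    """Return CapEx as a share of revenue and as a share of cash flow."""
    return capex / revenue, capex / cash_flow

# e.g. a hypothetical hyperscaler: $80B CapEx, $280B revenue, $85B cash flow
pct_revenue, pct_cash_flow = capex_ratios(80, 280, 85)
print(f"CapEx / revenue:   {pct_revenue:.0%}")    # ~29% -> "above 25% of revenue"
print(f"CapEx / cash flow: {pct_cash_flow:.0%}")  # ~94% -> "bouncing up against 100%"
```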
You know, it was certainly a moment of pause, right? So how do you frame either of those? How do you think about whether percent of revenue should matter? Is there a limit that's too high? And then we'll do free cash flow. I would tell you,
this is a very, very robust debate. I was out in Omaha in December, and I'll tell you, they're pretty skeptical about the amount of dollars. They define who they are. Yeah, you know, they being the biggest investor in Omaha. I think that they're worried, like a lot of other investors who've kind of seen these moments before,
that this is like the telecom build-out, right? I will tell you, though, I have a lot of respect for Elon, for Satya, for Sundar, for the people who are making these investment decisions. And the numbers are there. As Satya told us on the pod, they have $10 billion in inference revenue. And I think he expects that to grow significantly. And that is the ROI, right?
right, on the dollars that he's putting into the ground. But they are making a bet, Bill, right? And you can make a bet for one of two reasons. You can make an offensive bet that this will drive future revenue growth or profit growth in my business, or it could be a defensive bet, right, the prisoner's dilemma. Like, I can't not do this because my competitors are doing it. And I think that's where some of these concerns emanate from.
But it's very clear to me that part of the reason the multiples on these businesses have not, you know, kind of gotten even higher is this wall of worry. They're spending a lot of money and they may not see the return. And so are there any rules of thumb? Like does 25 or 30 or like, does it matter to you as an investor with that percentage of revenue? Yeah.
Of course, of course. I mean, like, listen, you and I both know when we're investing in a startup, we want, you know, here's the irony, right? Over the last 10 years, people celebrated raising these bigger and bigger and bigger rounds. And you and I would look at each other quizzically and we'd say, you know, people lost the script. The goal is the least amount of money in for the most amount of money out. Yeah.
Not the most in for the most out. And I would say today, the thing that's very clear to me is that over the last decade, we've had super high returns. Yeah. Super high returns on the incremental capital that got invested. And there's no doubt in the short run here that those returns are compressing.
So you have to believe as an investor, you have to have an imagination to believe, right? Just like when Google was investing gobs in early 2000s, or when Amazon was investing gobs in 08, 09, 2010 in AWS, you have to believe that there is a pot of gold at the end of this rainbow. I have
to believe it, right? I see the revenue growth inside some of these businesses, the inference revenue growth inside these businesses. And when you replace human labor, you know, Elon said on this recent interview, I think it was with Bill Miller at CES, he was, it was a virtual interview. I mean, AI really within the next few years will be able to do any cognitive task. Like it obviously begs the question, what are we all going to do? You know, I'd say max three or four years maximum.
And Elon said he thinks that every cognitive task that can be done by a human will be able to be done by an AI within three to four years. Yeah. If you believe that to be true, the value of all that human labor that you're replacing is measured in trillions. Yeah.
And I think these are things that most of these CEOs have determined, even though that makes them a little bit uncomfortable. You said it makes the CFOs talk in a little bit higher pitch. And all that's true. But I think they've all determined they can't not make this level of investment in something that has the very high potential to be this big. When I look at the free cash flow, it certainly causes you to step back.
The past decade has seen unprecedented free cash flow from the MAG7, right? And the amount of cash that's been put on their balance sheet was so unprecedented, and everyone would talk about it. And like, what are they going to do with all this money? And yeah, to move from that to a place where 100% of incremental free cash flow is now spoken for is certainly
a change, right? And my friend, Michael Mauboussin, would get mad at me for worrying about it because he would say that all that matters is not that they're using up their free cash flow, but what's the return on the incremental investment. - Of course, of course. - But that's super hard to understand. - That's the great puzzle. And this is the thing where I think that in these early parts of a phase shift, in the first three to four years of a phase shift,
It's really hard for the traditional Wall Street analysts to model this. Remember, I gave this quote: in 2023, you had 26 analysts on Wall Street covering NVIDIA, and the consensus forecast of all 26 of them missed by 80%. Okay. I mean, it's just like, that's how wrong you can be if you're thinking linearly at a moment of a phase shift.
And so I think we're still in that moment. And that's why I think it's an advantage to be in Silicon Valley because you're really spending all your time with the people who are actually doing these things. To get more exposed, yeah. Right. And seeing what they're actually building. You know, once it's well known, think about...
By 2016, the cloud was well known. And so everybody could model it reasonably well. But in 2012, 2013, right, when you're earlier on in that phase shift, there was a real opportunity early in those companies, Snowflake, Mongo, Okta, et cetera, to do things that people thought were not possible at that point in time. Let's spend a second on this, what you call the prisoner's dilemma argument.
And I guess you might include just the hyperscalers at this point. And there's, by the way, let me ask a quick aside. There has been talk that Meta has been hiring an enterprise group
And is structuring new deals around higher end versions of Llama or unlimited versions of Llama? Do you have a perspective? Are they entering the enterprise hyperscaler business? You know, I have no evidence that they are. I take Mark at his word last year in one of his podcasts.
where he said they already charge license fees to the large hyperscalers to use Llama and to provide it to them in certain ways, etc. I think it's de minimis revenue in the scheme of Meta, but he's smart enough. I've heard rumors in the billions and people have found that there are searches online
you know, for executives to come in that have certain backgrounds. I don't know. No, no, no. That is in fact occurring. But again, you could have revenues of a couple billion dollars, but in the scheme of Meta, that's still not all that material, but they're smart enough to maintain the optionality, Bill.
I mean, you saw NVIDIA release some models, which is the packaging of Llama models to make them easier for some customers of NVIDIA. So it would only make sense to me that they're thinking about that. Let's include them for the sake of this question and argument. So let's say there's four hyperscalers, and this is the prisoner's dilemma thing.
Every one of them is kind of being forced to announce their CapEx. And if any of them... Not being forced. They do it as a matter of their earnings calls. So they're all going to give us their CapEx. Sure. And so they... But my point is it's become this point of focus. Everyone's paying attention to it. Oh, for sure. And so, you know...
If Satya were to say 60 next year instead of 80, does he have a concern that that's a perception that they don't believe as much or that they're losing ground to other people that are taking share? And therefore, you get into this kind of reflexive argument. Like, if we want to be perceived as leaders in AI with confidence that we're going to take our fair share, don't we have to be –
Yeah. Announcing that we're, you know, betting via CapEx. Yeah, I mean, it would be fairly conspiratorial to think that, you know, that they're doing this just so that the optics are that they are our leaders. What I will tell you is Satya has said that they have been compute constrained.
for all of last year and their inference revenues are going to accelerate in the first half of 25 because they will have less constraint. OpenAI has said publicly, Sam has said many times, that the reason that they can't release models widely, they can't release some of the voice stuff that you wanted, the reason they couldn't widely release Sora is because they were compute constrained. They didn't have the compute they needed to support these models.
Part of the reason they charge higher prices for pro models is because they have to artificially reduce the demand because they don't have the compute. And so I think across the board, there's compute constraint. Now, that occurs for two reasons, Bill. Number one, you have over 300 million people a week who've decided that I want to use ChatGPT instead of search or something else to answer my questions. So the usage is bigger than people expected.
On the other hand, these new models, inference time compute is a compute hog, right? These things require huge amounts of compute. And so those two things combined, we just don't have a compute infrastructure in 2024 that kept up with it. And so I think the investments we're seeing in 2025, frankly, I don't even think is going to get us to the point where we have compute surplus. I think we're going to end this year compute constrained. And I think a lot of the frontier labs feel the same.
By the way, one quick aside on that. You've seen Google release a
high-end pay-for consumer, you know, I think by consumer, I mean just an individual user because it could be a consumer at a company as well, but one with a price point on it. And when Elon was talking about their release of Grok 2 and potentially Grok 3, they said Grok 2 is always going to be free. I infer from that Grok 3 might be pay-for. And this to me is a
positive outcome for OpenAI because I think when people frame the fight as a competition with search, they assume that the frontier is going to be free and not pay for. If multiple people fall in line behind them with subscription pricing, that could start to get sticky.
Well, it's going to be very, very difficult for any frontier lab to invest at the level that's going to be required and not have a robust business model behind it. What I would tell you, without...
you know, saying anything other than what's been said publicly about OpenAI's revenues, right? You know, they ended the year, I think it was rumored, four to five billion run rate growing very robustly. You know, you would expect a company at this stage to be growing at least triple digits
And so you start thinking about $10 billion plus in revenue for this company. Now, compare that to some of the other companies, Bill, right? Mistral, a lot of the other model companies have fully pivoted. Like, they've already raised their hand, white flag, we don't have the revenues that can support the spend to keep up, right? Right.
It's going to be interesting to see what Anthropic does, right? Their revenues are reported to be less than a billion dollars. So can they afford- And mostly on the enterprise API side, not on the consumer subscription side. So can they afford to keep up? And I would tell you that as large as Google is and as large as X is, I don't think that you can go out and spend and not have a business model to support the spend. So what you're hearing would make sense to me. One other thing I just want to say about this, Bill-
One of the things we've spent a lot of time on internally is trying to really model out what we think of that compute demand relative to the compute supply. Like, are we really constrained? And I want to keep coming back to this point that Jensen made, and maybe we'll insert the clip here on the pod, where he talks about...
His inference revenue is already 50% of his revenue, so already upwards of 50% of his revenue. And I asked him, is the mix going to go up? And he said, well, Brad, of course, because I think inference is going to go up by a million X or a billion X. Now, what drives that, Bill? I think you've heard a lot of people say that 2025 is going to be the year of agents, right?
And so you have these O-series models, and now you have deep reasoning out of Google that launch this whole different vector of scaling intelligence and reasoning. But the thing about those, Bill, as you well know, is they are compute hogs.
The number of tokens that you have to produce, that you have to repopulate back into the prompt, the number of branches of inference that you go down, it dwarfs single shot kind of chat GPT. And now you add in personalization, right?
long-term memory about each individual user, you know, and actions, book my hotel, book my, you know, do things for me. Again, those things are all compute consumptive. So that's why I think the frontier labs like OpenAI are modeling out that expected compute demand and then looking at what they have and saying, we're not even close, right? We're not even close.
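To make the compute-hog point concrete, here is a purely illustrative sketch of why chain-of-reasoning inference multiplies token counts relative to a single-shot answer. Every number in it is a made-up assumption, not a measurement from any model.

```python
# Purely illustrative: why reasoning-style inference is a compute hog
# relative to a single-shot answer. All numbers are made-up assumptions.
single_shot_tokens = 1_000       # one prompt plus one answer

reasoning_steps    = 50          # chain-of-thought steps fed back into context
tokens_per_step    = 400         # thinking tokens generated per step
branches_explored  = 8           # parallel branches / samples considered

reasoning_tokens = reasoning_steps * tokens_per_step * branches_explored
print(f"Tokens per reasoning query: {reasoning_tokens:,}")                          # 160,000
print(f"Multiple of single-shot:    {reasoning_tokens / single_shot_tokens:.0f}x")  # ~160x
# Personalization context, long-term memory, and agentic tool calls all
# layer further multipliers on top of this.
```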
I do want to come back to that, but I want to finish the CapEx thing real quick. Obviously, one way to look at it, we're looking at the slide, Google, Meta, Amazon, Microsoft, Apple, and they're all spending this money. And you can say, what does it mean for those stocks? But I'm sure as an investor, the easier thing to consider is these people are telling us, forecasting, and
that they're going to spend this amount of money. And the processes for spending that amount of money are sticky and slow. They're not fast. Like you can't back out. Like you pre-commit and you go build. Who are the recipients of this stuff? Like that's an easy win.
It's a great question. So we started the conversation, like, how do we think about the framework for the year? You know, what I didn't say at the start is, you know, we started 2023 with 95% net long. So think of that, Bill, as all your chips on the table.
Right. So that's our longs minus our shorts. We started last year, I think around 80% net long. Now that can go up and go down, but it's less chips on the table. And you might say, well, Brad, if you think it's a golden age, and if you think all this money's being spent, then why the hell do you have fewer chips on the table? And I just told you, valuations are a lot higher, right? So let's start there.
Warren Buffett and Charlie Munger have told us the most important thing in investing is the price of entry. So the price of entry to play is higher. The variant perception is lower. When I look at Q1, when it comes to the MAG5 or MAG6, what I say is there's going to be a lot of FX headwind.
OK, so the strength of the dollar has gone up a lot. A lot of these companies have revenues denominated in currencies other than the U.S. dollar. So their revenues will have headwinds from that FX move. I'm not sure that's totally appreciated. And then secondly, their CapEx is going up a lot. I think Dylan said on our podcast, and I agree with him, that the CapEx is higher than people think.
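A minimal sketch of the FX headwind mechanism being described, with hypothetical revenue and exchange-rate numbers: the same local-currency revenue translates into fewer reported dollars when the dollar strengthens.

```python
# Hypothetical illustration of the FX headwind: same euro revenue,
# stronger dollar, lower reported USD revenue. Rates are made up.
eur_revenue = 50.0              # billions of euros earned abroad
usd_per_eur_last_year = 1.10
usd_per_eur_this_year = 1.03    # dollar has strengthened

reported_last_year = eur_revenue * usd_per_eur_last_year   # $55.0B
reported_this_year = eur_revenue * usd_per_eur_this_year   # $51.5B
fx_headwind = reported_this_year / reported_last_year - 1
print(f"Reported USD revenue change from FX alone: {fx_headwind:.1%}")  # about -6%
```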
Okay? So both of those things aren't particularly good for the biggest companies trading at high valuations. Now, I don't think it's going to cause some cataclysmic event, but, you know, it's a headwind. Okay? But who are the recipients? Obviously, NVIDIA is the biggest one, but who else is a recipient here? Yeah, well, first I want to show a slide on NVIDIA because, you know, I hear a lot and I get a lot. You know, we owned it since it was $120 a share. Today, split adjusted, it's $1,500 a share. So this, you know...
or 1450. It's gone up a lot. But people say, oh my God, I remember when it went up 2x and Jim Cramer was telling everybody, sell it, it's up 2x, and then 3x, sell it, it's up...
Every time you have to recalibrate based upon the facts on the field, right? It went up a lot because its earnings went up a lot. It went up a lot because its revenues went up a lot. So here's a slide, Bill, that just shows the consensus data center revenues, right, for NVIDIA. And the key here is...
to remember that NVIDIA prior data center revenues are mostly these chips that are driving training and inference. And this is a business that's got very high margins on it. And in 2023, it was $61 billion. In 2025, it's estimated to be close to $200 billion. $200 billion.
And so if you notice the jump between 24 and 25, again, this is consensus, that is about a $60 billion increase, 24 to 25. Well, if you just add up the hyperscalers that we just went through, that's 60 billion, right? And where else are they spending the money? Yes, they're spending some on custom ASICs, but it's tiny on a relative basis. Most of this stuff is NVIDIA wall-to-wall in 2025.
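Here is that back-of-envelope laid out, using the episode's round numbers; the 2024 data center figure and the hyperscaler CapEx increment are treated as rough assumptions, not reported data.

```python
# Back-of-envelope on the round numbers quoted above. The 2024 figure and
# the hyperscaler CapEx increment are rough assumptions.
dc_rev_2023 = 61.0     # NVIDIA data center revenue, $B (quoted)
dc_rev_2024 = 140.0    # assumed consensus for 2024, $B
dc_rev_2025 = 200.0    # "close to $200 billion" consensus for 2025, $B

nvda_increment = dc_rev_2025 - dc_rev_2024        # ~$60B jump, '24 -> '25
hyperscaler_capex_increment = 60.0                # assumed incremental hyperscaler CapEx, $B

print(f"Consensus NVIDIA DC increment: ${nvda_increment:.0f}B")
print(f"Hyperscaler CapEx increment:   ${hyperscaler_capex_increment:.0f}B")
# If the hyperscaler increment alone covers the whole consensus jump, then spend
# from other buyers (frontier labs, enterprises, sovereigns) would imply that
# consensus is assuming NVIDIA's share of incremental spend goes down.
```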
Jensen told you at CES, he's got 40 AI factories. So he doesn't have six people in the world building this stuff. He's got lots of people building this stuff. So I actually think if you look at the consensus numbers, they probably would imply that NVIDIA's market share is going down 24 to 25. And I don't think that to be true. But so NVIDIA is one of- Just comparing expectations for NVIDIA with this pre-committed CapEx spend.
Correct. And your expectation of what percentage of that. Correct. And remember, NVIDIA stock has not gone up from June of last year until now. It's basically because this debate's been going on. We had a lot of debates about, have we hit a wall on pre-training scaling? Can the models continue to scale? We had the DeepSeek model come out. It was a smaller model, very capable. So people say, maybe you don't need all these chips. So there is real tension in the world about this name.
Well, I think there's also, I mean, another one that comes up is if we're moving from more –
spend on inference than pre-training, do you need this big, large holistic cluster? Can you use a more distributed approach? Would you use products that aren't GPUs, you know, TPUs from Google, some of the startups as well? Of course, they can't ramp to anywhere near this scale anytime soon, right? And Google, I think, has said, you know, I'm pretty sure it's said publicly that
They're spending a lot on GPUs, not just TPUs. But you asked a question, who are the recipients? Well, NVIDIA is clearly high on that list. Now, remember, NVIDIA was up, I don't know, 140%, 150% last year, but Intel was down over 20%. AMD, I think, was
down on the year. And so, you know, this was not a situation where everybody in the semiconductor complex won. But another group, you know, I know I share this feeling as well with my friend Gavin Baker, you know, we're also investors in the memory space. So SK Hynix, which is providing high bandwidth memory, this becomes very important, particularly in a world of inference time compute. And we think that we have a memory shortage, which is a key part
of the NVIDIA supercomputer ecosystem as far as we can see. So Micron and SK Hynix would be in that memory space. So what, pause on SK, for example. So when you look at
Companies like NVIDIA or others that are doing well in this environment, they have multiples that would suggest everyone understands that and buys into it. If I'm reading the numbers correctly, SK trades at like six times forward earnings, which is
insanely low for most companies. What's going on here? Do people not believe the number or is this the commodity argument that happens with hard drives where you buy it with the highest multiple and sell it with the lowest multiple? Yeah, no, I think...
The reason it has that multiple is because people do think it's hard drives, it's commodity memory, that it's a boom and a bust, and that they don't add a lot of kind of sticky value that you can just easily put on a lot more supply. That supply will ultimately drive down the cost and the margins, and that demand is cyclical.
We believe those things are not true. We think this has become a way, way more sophisticated product, lots of software baked into it, that this is secular, not cyclical. And so there are two ways to win there. Number one is just you get the earnings growth of the business. But the second one is you could have a re-rating higher in terms of the multiple people are willing to pay, if people come to believe that, in fact, SK gets treated more as a non-commodity company over time.
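As a hypothetical illustration of the "two ways to win" framing, total return compounds earnings growth with any re-rating of the multiple. The numbers below are made up; only the roughly 6x entry multiple comes from the discussion.

```python
# Hypothetical "two ways to win" math: earnings growth plus a multiple re-rating.
# Only the ~6x entry multiple comes from the discussion; the rest is made up.
eps_now, eps_future = 10.0, 13.0   # illustrative earnings per share (+30%)
pe_now, pe_future = 6.0, 9.0       # ~6x entry; assumed re-rating if viewed as secular

price_now = eps_now * pe_now             # 60
price_future = eps_future * pe_future    # 117
print(f"Return from earnings growth alone: {eps_future / eps_now - 1:.0%}")      # 30%
print(f"Return with the re-rating as well: {price_future / price_now - 1:.0%}")  # 95%
```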
And then, of course, you and I talked a lot around the Diablo episode last year. We have power issues in this country, right? And so how are we going to power up this 5, 10, 15 gigawatts of data center capacity that we need to bring online? Because remember, if they're spending this money, Bill, then they're telling you.
They are. Like, you have to have power, you have to have a shell, and you have to fill it with chips. And so, you know, one of the things that I continue to have people tell me, like I know they'll say, like, I know this is hard to believe, but we are power limited. Like, we are power limited. We have CapEx that we would put online that we're not putting online because we can't power it.
I will tell you this. I had a call this week with the COO of PG&E and with the head of Diablo Canyon. And I'm very focused in talking to our friend David Sacks and others in this administration. We have to give regulatory relief to
Otherwise, we're going to have huge hurdles placed in the front of our AI industry. We know China's building 100x as much as we are in terms of power, and power is the single most important primitive to AI. So I'll just give you one example.
Gavin Newsom still has not signed the extension for Diablo Canyon beyond 2029. It's insane. We have to make that happen this year. It's 10% of California's green, clean power. We have to extend that so that they can begin doing the appropriate planning. Hell, you and I think they should probably be expanding it. But at a minimum, the idea that you would take 10% out of the grid at a time that we are already capacity constrained is totally insane.
Let me ask you this. So Altimeter's historically been tech-focused. When your...
information leads you in this direction, do you start moving in and out of energy companies? Well, I'll tell you, a lot of my tech peers have. And if you look at some of the highest returning companies in what I would call the semiconductor-related complex last year, they were, in fact, companies in the energy space. And we looked at all of those. Ultimately, I think one of the overarching
Altimeter North Stars is essentialism, right? Like just don't make anything any more complex than it has to be. And so I would always ask my analysts when they would bring me a power company, I would say, why is that better than NVIDIA? Right? It's a derivative of NVIDIA. It's the exact same bet. So why not just buy more NVIDIA? So we tend to just take bigger
bets on our best ideas rather than diversifying out into these other things that we know less about. Because the truth of the matter is, I do know that we need a lot more power, Bill, but I don't necessarily know exactly how the regulatory is going to play out, exactly how one of these small nuclear reactors, all this other stuff. This falls into the Buffett comment, there's a fool in every market, and if you don't know who it is, it might be you. Yeah.
You know, one of the things we ought to come back to, though, Bill, is because you said, what could the boogeyman be? And I said interest rates. So maybe just spend a few minutes double-clicking on rates. What's going on? Why are rates going up? And how do I think that may or may not change?
But if you look at this chart, the black line on here shows the 10-year just over the course of the last couple months has been trending up and is now around 4.7%.
This despite the fact that inflation has largely, you know, like remember just a couple years ago, we had a nine handle on headline inflation. And I remember when I said a couple years from now, we'll be back to a two handle, people laughed. They said, no way. And in fact, that's where we were. And if you look at the Morgan Stanley consensus forecast for this year, they expect it to finish the year at 2.2, okay? But you have to ask the question, why then are people concerned about
Right. What's going on? Why are rates going up? And I think there are a few things at play here. Number one, the Trump election got people really excited about more growth in the economy. And you're going to have $400 billion of stimulus on an annual basis.
If the Trump tax cuts are passed, that's a lot of stimulus into the economy. Regulatory relief will stimulate the economy. And so what's that stimulus going to do? A lot of people are concerned it could reignite inflation.
And they're just saying there's a chance that inflation goes higher because we have all this stimulus. And if it does, rates are not going to be able to go lower. So if you look at the next slide, Bill, we show the expectation in the world now has gone from a lot of rate cuts to only one rate cut in the first half of next year. Right? So basically, the market's saying that there's a lot of stimulus. We're not going to get rate cuts.
So I think in the first part of this year, there's going to be a lot of tension back to that core PCE. Is inflation continuing to roll over? We think that a lot of the components like rent equivalent and shelter, et cetera, will cause it to continue to go down. But the other thing going on here, and this is something I know you and I are close to and care a lot about with Doge,
The market is saying we expect this stimulus to come in, but I think it's also saying we do not expect Congress to have the courage to cut an equivalent amount out of the deficit. Right. So if you think about this, if we have $400 billion of tailwind from this tax stimulus, then I think what the market would like to see is three or four hundred billion dollars of cutting come out of the budget.
Right. And when you look at what Congress has done over the last, I don't know, five, 10 years, there's no evidence. We've never done it. So, you know, the market's just saying, I'll believe it when I see it. Trump has talked about interest rates being too high. He talked about at the press conference the other day. He wants them to go lower. If he talks to his, you know, chief economists in the White House, I think they'll say these are the issues that are at play. So we really have to make these cuts. Right.
Well, we have a reconciliation package. So all of this is going to get determined, they've told us, in a single reconciliation package by April or May of this year. So this is what I'm telling you. This is the backdrop that is going to impact valuations that we have to grapple with. We have to try to understand. So here's my forecast.
My forecast is that Doge is actually, and this administration, are actually going to cut a lot out of the budget. And, you know, I think that you have the leadership in the House and the Senate. And so if they cut hundreds of billions of dollars out of this budget,
Then I think the market will say, great fiscal discipline. We don't have as much stimulus. And I think you'll see the 10-year come in. And if the 10-year comes in, then I think it could be off to the races. But if it doesn't, if we do the opposite, if we don't make the cuts, if the 10-year goes to 5.25 or 5.5, Bill, it's going to be an anchor on the stock market. It's going to be an anchor on the economy. So I actually did a little analysis I shared with you on this.
that I want to review with folks on the pod, because given how important I just said I think this is, it's important to try to get our arms around understanding: is it possible? Is it even possible to cut three or four hundred billion dollars annually out of spending? And so here's this federal spending chart. And what we did here, Bill,
We went back to 2019, and you can see the total spending by the federal government in 2019 was $4.4 trillion.
And you can see the subcomponents. Social Security, we spent a trillion dollars. Medicare, $644 billion. Medicaid, $409 billion. You can see the different categories under that. Defense, $676 billion, et cetera. And you can see what our net interest expense was at the time because interest rates were relatively low, right? And our debt was a lot lower. The next column is what we actually spent in 2024.
So this is our best estimate. The CBO has given us an estimate. You can see in the footnotes here. So total spending was $6.7 trillion. Okay. Well, the question is, is that what we would have expected? Not what we had expected. The next column, what we did, Bill, is what we call a fiscal year 24 baseline. How did we determine that baseline? We went back to 2019 and we grew every category roughly at 2.5%.
We said the GDP is growing at 2.5%, inflation is growing roughly that rate, the population is growing at that rate. So that's what government spending should roughly grow at. And if you see there, the baseline 2019 budget adjusted 2.5% CAGR, we should have spent $5 trillion, but instead we spent $6.7 trillion. That's a $1.7 trillion differential.
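The baseline arithmetic described above is easy to reproduce from the quoted totals: grow 2019 spending at roughly 2.5% a year for five years and compare with the 2024 estimate.

```python
# Reproducing the baseline math described above from the quoted totals.
spending_2019 = 4.4     # total federal spending in 2019, $T
growth_rate = 0.025     # ~2.5% annual growth (GDP / inflation / population proxy)
years = 5               # 2019 -> 2024

baseline_2024 = spending_2019 * (1 + growth_rate) ** years
actual_2024 = 6.7       # CBO-based estimate quoted above, $T

print(f"2024 baseline: ${baseline_2024:.2f}T")                # ~$4.98T
print(f"2024 actual:   ${actual_2024:.1f}T")
print(f"Differential:  ${actual_2024 - baseline_2024:.1f}T")  # ~$1.7T
```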
So when you hear Doge talk about the ability to find $2 trillion in savings, like this is, I think, really what has people optimistic that there's an opportunity to make cuts. So you go through there, and I think there are some interesting ways in which we can go after those savings.
You know, there are some proposals, and I'll post this proposal, which goes through and says here are $700 billion of easy deficit reduction that both Democrats and Republicans agree on over 10 years. So that doesn't get us there, but it is certainly a start. But I think this is going to be critically important.
that we get our arms around federal spending this year, because that to me is the biggest potential boogeyman out there. And there are ways that Trump can potentially do some of this unilaterally through rescission, challenging the Impoundment Control Act and, you know, just refusing to spend what Congress allocates. But given that the Republicans control both houses of Congress, I think that, you know, I'm expecting and I hope that
that we see cuts that at least offset the tax cuts. Yeah.
I mean, one of the challenges, obviously, is some of these spend categories. You know, if you take health care, for example, those categories on a per-population basis have been growing way above your two and a half percent baseline assumption and expanding as a percentage of the overall spend for the government. And so it may require you to attack that specific problem within that industry, not just the
you know, say we're going to spend less. Listen, no doubt about it. There is a global demographic problem that you and I, you can look at any population chart of any country on the planet. The number of people over the age of 60 is increasing at a very fast rate. And the number of new people coming into the workforce is slowing down.
as a total percentage. That means that your working population that is paying taxes to pay for all of these things is a much smaller percentage. So demographically, that is going to be challenged. But, you know, so how do you deal with it, right? Well, you can either...
Just give less things to the people than what you've promised in the past. Or you can come up with more efficient delivery mechanisms. Obviously, we know all of the easy and crazy shit the government spends money on that Doge has been talking about that we can cut. My assumption is we're also going to have to harness technology, harness AI, do
Defense spending has got to get a lot smarter. You really have to go through every category. And if you zero base this, I tend to be where I think there are huge opportunities here. But listen, the track record of Congress, both Republicans and Democrats have not been good at cutting these costs. And what I would say is if we don't do it,
Then there's a real risk that the bond vigilantes come into the bond market. You end up with higher rates because they don't believe that we're on a stable fiscal path. Look, I mean, I think it's an assumption in American life that the government's inefficient. Like, I don't think there's anyone that anywhere that would stand up and say, we think our government is great at execution and spending dollars. There's no one that defends it.
They might argue that it has to exist regardless, but they don't argue that it's efficient. So there's plenty of opportunity. Elon said it's like shooting fish in a barrel if I'm being asked to look at inefficiency in the government. And so there will be opportunity. I think there's an even bigger opportunity, which he also hinted at, which is if you find
policy that's restrictive and unnecessary, you could unlock GDP growth, which helps in this equation as well. And I totally agree with you that the inhibitor here won't be the identification or knowing what to do. It would be whether the system can actually reform itself.
And we'll have to find out. - I mean, like, you know, we'll put in here the charts that we showed last year on how you get to a balanced budget in 2029, but it is just not that hard to get to a balanced budget at $6 trillion, right? That's 1 trillion above what we should be spending on the baseline today, right? But it just gets you back to that 2019 baseline, right? And it assumes that we have some acceleration, but not a lot.
in terms of growth, and we are going to get acceleration, right? Cutting taxes $300 billion to $400 billion a year is going to accelerate it. We are going to get efficiency and productivity gains from AI. Like these things are happening. So we've got a lot of goodness.
And by the way, relative to Europe, which is a basket case and a disaster, like the U.S. is in such an incredible position. We just have to have the courage of our conviction and get rid of some of this wastefulness. That's the only boogeyman I see out there. But Bill, there's one other thing, you know, like we've talked a lot about spending. Let's talk for a second about regulation, because I know there's an AI bill brewing down in Texas akin to California's 1047 that we got killed. So maybe just...
Again, we got a lot of changing politics. I suspect that Trump is going to rescind all or at least part, the major part, of the Biden executive order on AI. But what's going on in Texas around AI? Well, you and I have talked about this in the past, and there had been a huge movement underway recently.
And I would call it an unusual movement when there's a new market evolving where people are literally begging for regulation. And that got a bunch of different parties up on both sides. And, you know, part of those people believe part of those efforts led to the Biden EO. And they also led to this big push in California, which we talked about. What was it? 1047.
everyone got on one side or the other. Everyone in our community had a point of view. You had Scott Wiener as the person inside the administration in California pushing for that. And it ended up with this huge argument and then Gavin Newsom vetoing it. And I suspect that
that most people see the administration change, our friend David Sacks coming in as the AI czar, and they would expect that this is kind of, at least for now, put to bed. What's happening, and people are probably not aware of this:
The people pushing for this regulation have moved underground to a certain extent, and they're pushing it in a bunch of different states. I've heard there's as many as 25 now state by state initiatives. And I guess this is how policy works in this country. But there's there's a bunch of reasons to be really worried about this.
I can't stand that the one that's got kind of the most heat right now is in Texas. I think Governor Abbott and obviously Elon and all his companies have had this massive impact on the Texas economy by making it the state that has the least red tape that's kind of the most...
pro-innovation and pro-business. And for this thing to pop up there is just so ironic in my mind and would be literally brain dead. When 1092 was being proposed, 1047, sorry, people, even including myself, said, why don't you just write a sign that says move your AI company to Texas?
Well, now those words sound stupid because Texas is – and the first thing that people should realize is there is no reason to do this on a state-by-state basis. Exactly. At the end of Obama's term, he had this initiative that he didn't get around to. I wish he had, where he had identified –
like a hundred different industries where there's state-by-state regulation, including like hair care and stuff. And it's just wasteful red tape. And for us, when so many people in Washington are worried about our competitive position versus China, to build a gauntlet that our companies would have to move through to adhere to state-by-state interpretations and rules
is just mind-numbingly... I want to say the word stupid out loud. The only reason I fear saying it is that someone trying to write that legislation would use it against me. But I can't imagine. We have a changing administration. We have people looking at this. If we feel we just have to do something...
God, please. I'd prefer it at a national level. Yeah, no, federal preemption. I mean, listen, this is the Interstate Commerce Clause. There's nothing – our great advantage over Europe is they got this crazy patchwork of regulation across all these different countries. And here, if you're operating an internet company, you have one rule of the road.
right? And we need to have one rule of the road around AI. And I think that it should be promulgated and it will be promulgated in the federal level. And just so people... I've heard this boogeyman out there that people are like, oh, everybody in Silicon Valley, they don't want any regulation. They're just a bunch of crazy libertarians. No. What I think you hear is pushback to...
uninformed regulation that would slow us down and cause us to lose an important race to China. If we care about the race with China, then the first thing we need to do is to reduce the impediments to us running the fastest race we can run, right? While still caring about AI safety, while still caring about national security, while still caring about all of these things. But if you have this patchwork,
where these states are onerously regulating all of these companies out of the gate, all you're doing is handing the gold medal to China.
And so I think the unintended consequences are really bad. Let me tell you about some of the details and I'll include a link in here. Dean Ball wrote a really solid analysis of everything that's wrong with the Texas proposal. But one of the things that's in there is that you have to do a risk assessment. So this is similar to like creating an audit system.
And, you know, for those of you who've run companies and gone through audits, you know how difficult it is. But now you'd not only have to do a risk assessment, you'd have to publish it. So whatever product I'm working on, I've now stepped through these hurdles and written these reports. And then there's liability associated with it looking backwards. And so if I
do something bad, I'm liable for it. But I also then get a second look back: did I run the analysis where I should have known about the risk that's there? And it applies, you know, very broadly. I give the people that are looking for regulatory capture credit in that I think they saw they were losing at the national level and realized that they could create
messy anxiety at a state level, which, you know, may actually promote trying to get to a national level.
But man, this would be horrific. I think they're going to get a lot of pushback. I would be shocked if this happened in Texas. You know, I think you have a lot of coordination going on in Washington. I think it's quite smart. I think it's on both sides of the aisle. You had a great report published by Congressman Jay Obernolte in the House yesterday,
who was co-chair of the AI committee in the House. The Senate's done some work. And so hopefully Sacks will get there and provide a really clear coordinating function, and that will have federal preemption, and that will get great direction out of Congress. You know, one other thing, Bill, is I'm just thinking about 2025 predictions.
Because, when we're talking about regulatory capture, I think there was a lot of fearfulness, and a lot of it directed at OpenAI, that these companies with a closed model who had the lead were trying to lock everything down before people could catch up with them, trying to squash what was happening in open source. I think you're going to see in 2025 a lot more companies open source more of their models. Wow.
Well, I think we just saw Microsoft put out some open source models last week, which is like really surprising. And so my own sense on this is that there's a lot more commonality among
all of the major players around AI than there is division. And, you know, we'll see. But that's a... What I'm hearing out of a bunch of them is that we're going to see a bunch more open source. Yeah, but there's two dimensions to this. So you could be right that the bigger companies could all agree on a regulatory framework. There's also the issue of small versus large. And, you know, there's...
There's a lot to pay attention to there. Before we wrap up, I had like four or five things that I...
I'd like to bounce off of you and get your opinion on as I look forward into the year. The one that I've been thinking about a lot is Google. And we talked a lot about how, you know, search is at risk. And clearly, for those of us that are doing so many searches first on ChatGPT or another product like that,
there is a real argument as, you know, what's the point of Google in the long run and does it still hold? On the other side, you know, when we've looked at them in the past, we've said, man, they have an incredible number of assets. And when you look at
You know, whether it's their email, Gmail products, and then their Docs product, and then their Slack-like competitive products, and their Zoom-like competitive products. And then the fact that they own Android and this mobile operating system. I was...
you know, surprised to hear Jason on All-In get super excited about Google in this way and then announce that he had switched to a Pixel phone. And I'm thinking to myself, wow,
this could be kind of the perfect AI basket of assets. And you look at a company like Glean that I know you're in and is doing well. You could create Glean for the rest of us if a company commits
to being on the Google stack and has all these things. And it becomes a combination that no one else has. Apple doesn't have it. Microsoft doesn't have it. It requires insanely great execution to get it all right. But I find myself, you know, if a whole bunch of people said, man, I want this so bad that I'm going to switch from my iPhone and get onto Android, that would be a real tell. That'd be something special.
Yeah, well, you know, I'll tell you, I don't know what it was, late November, maybe early December, before the recent run-up, you know, I started seeing a bunch of breadcrumbs around Gemini, around deep reasoning, around NotebookLM, just the cycle time
you know, improving there. I talked to a lot of companies, a lot of companies were still seeing growth in their search volumes, you know, their lead generation, so online travel companies, et cetera, from Google. And I started to see a lot more evidence of them just, you know, getting fit and getting more efficient. And frankly, I think a lot of credit goes to Sundar. I think he's starting to get feisty and see the things that, you know, that need to get done here. I would say, you know, let's keep a few big pieces in perspective.
They are facing the largest innovator's dilemma, certainly from my perspective, in the history of Silicon Valley. And I think it's almost impossible to replace a 99% incremental margin business, i.e. a monopoly search business, with whatever comes next. Because whatever comes next, they may be important in it, but they're not going to be 99% monopolies, right? Like OpenAI is going to be there, Meta is going to be there, et cetera. And I don't think their margins are going to be the same. So that's the first thing.
Number two is
Like, how could I be wrong? Well, you could envision a world where the pie grows so damn much, Bill, that Gemini, and if they were able to displace Apple on the phone, all these things could really replace that. I think you have plenty of time to wait and see, to pick up the breadcrumbs on whether those things are occurring. But I will tell you this, I suspect Apple, for the first time, looks at Google's assets and says, okay, this is no longer orthogonal to us.
They were never worried about Pixel over the course of the last eight years, but I imagine, to your point, this is the first time in a long time that they've worried that Google could appropriately embed Gemini and deep reasoning and an assistant on the phone that can book my hotel and just make it a 10x better product. I'll tell you another area where they have an advantage, which is hopefully useful to everyone out there. Gemini is really good at local.
because they have all the local reviews. So I find myself now, if I'm going to a restaurant, you can say, what are the three dishes people have loved the most? What are the two to stay away from? And you get down to a level of detail that you wouldn't have had before. And, you know, it's very valuable.
So that's another asset. What are they like at Tony Black's? Well, you have to go to this. The second one that I think is super interesting, that we're going to see play out in the first half of this year: xAI and Elon's commitment to the largest cluster. So the concerns about running out of data, the concerns about maybe there's a parameter limit,
at least against the text dataset, draw into question: do you need the largest data center?
Grok 3 has been promised, I guess, in the first half of the year. It would be super interesting to see if something comes out of that product that is competitive with the cutting edge at OpenAI or maybe above it. I don't know, but I'm very curious about how that plays out. I think there's been a lot of attention focused on that. I'm not sure that I would measure...
you know, xAI's success solely on the dimension of whether the Grok 3 model is better than whatever OpenAI's latest is, or Gemini. Because remember, what they're really doing, Bill, is building a larger cluster for pre-training, so they can do more on pre-training. But remember, these other models are infused with post-training and test-time compute, et cetera. So I suspect that what we should look- I think it's a test of that pre-training argument.
Let's say they do achieve something and come out with something that's better than everybody else. That would be a new data point relative to where everybody's thinking. Here's the way I think about X and Elon. He's got three vectors on which he is leveraging AI.
Number one is robotics. And he recently said, and I've heard Google and others say this, that he expects there to be millions of humanoids live and in the wild by 2028. Think about that. That's three years away.
It's really, it's incredible. And he just posted this video of this Chinese humanoid that, you know, somebody said passed the visual Turing test because humans were confused whether or not it was real. It was so damn good. And so he's got that vector. The second vector he has is autonomy.
Right. And I picked up a new Tesla S at the end of last year with Hardware 4, just so I could be on the latest FSD. It drove me home from San Francisco last night. So those are two real-world things, not language models. Now we're talking in the world of bits and, you know, atoms.
And then the third one is really around... By that, you're implying, just because those are in separate companies, that xAI could become the hyperscaler for Tesla. 100%. I think there's no doubt about it. It already is and it will continue to be. He has a keiretsu, as they say in Japan, of companies that can leverage each other's insights, and I fully expect that they will. And xAI, so his need
to build these large clusters and place these bets is because he's got to support huge potential businesses, not just competing, not just building Grok to compete with ChatGPT, although I think they'll try to do that too as part of X, et cetera. But he has lots of ways that he can leverage it. And remember-
If you build a large coherent cluster for pre-training and you decide, okay, we're hitting up against some scaling limits of pre-training, you can use it for training reasoning models. You can use it for post-training. You can use it for inference. You can, you know, so it's not like you're wasting this just by building these larger clusters. I think what I'm most convicted in
is that these guys are going to spend a lot of money on GPUs because, as Jensen says, more of the world is moving to workloads that demand machine learning and accelerated compute.
Okay, that was number two, Bill. What's number three? One last question on number two. It seems to me that xAI wants to make the argument that having their own hardware is a competitive advantage versus OpenAI. Do you buy into that argument? How do you think that plays out?
Well, I think it's not necessarily owning it or having it, but it's being more competent. I do think that the ability to rapidly stand up compute that works, that works really well, that doesn't have as many hiccups in production, et cetera, like that's a superpower. I mean, Jensen's gone on, you know, to talk about this crazy timeline that Elon hit. I think he's unique in that regard.
I think he has a balance sheet and an ability to raise global capital, Bill, such that his cost of capital is really low for doing this. So listen, I'm happy. I think it's so great for the US ecosystem that Elon is charging full speed ahead in AI and keeping Anthropic and Google and OpenAI and everybody else honest. It's part of the reason I'm so excited and
bullish on Team America when it comes to AI, because we've got the smartest people in the world investing more money than we've ever seen invested in anything other than perhaps the Apollo project on an inflation-adjusted basis to make sure that we're in the lead. But at the end of the day, it's a combination of things, right? You've got to have the best research in the world. You can have the best computers and chips and the biggest cluster in the world, but if you don't get the architecture right around your reasoning model and the other guy does, they're probably going to win.
All right. Number three, we'll do four and then we'll wrap it up. Number three, I don't recall in any of the previous waves, you know, PC wave, client-server wave, mobile wave, the amount of talk
about coopetition that I'm seeing here. And people took, out of our video with Satya, where he said, oh, I already have my own models, and then they released a few, that they're in some kind of coopetition with OpenAI. You've heard all of these hyperscalers talk about doing their own
processors of some sort, you know, both Trainium at Amazon and then the TPU at Google, which compete with NVIDIA, but they're buying NVIDIA's product. NVIDIA put out models this week, which compete with some of those customers. And then they announced this
AI-driven PC unit, which would, you know, presumably compete with Dell, even though Dell's on stage helping them with the Elon build. But I don't recall, at least when I think back to the Wintel,
you know, world, which I was covering on Wall Street, it just felt more like people stayed in their lane and were thankful to be a part of this broad ecosystem that was growing. And I just see so much tension. It's super interesting to me as just someone that's watching it, but I'm curious what your thoughts are. Yeah. I don't know that I have any strong
perspectives there, maybe a couple of things. Number one, the amount of capital that it takes to do these things is not the domain of venture capitalists. So if you're a new entrant, if you're Anthropic or you're OpenAI, frankly, they were forced into this because there was no other choice.
When they realized the amount of capital was going to be a nuclear level of capital, you just had to turn to these big companies. So these big companies ended up owning pieces of them, but they couldn't be totally dependent upon them. So you had Google, who hedged their bets a little bit with Anthropic, and you had Amazon that made a bet on Anthropic, and Microsoft on OpenAI. And now as these companies become big and successful – like, listen, if they don't become big and successful –
they just get subsumed by Microsoft. Character gets subsumed by Google. But if they get some scale on their own, like OpenAI has, now it probably couldn't be subsumed by Microsoft from an antitrust perspective, and it has standing on its own. And so I think you have that dimension. I think the competition is as aggressive as it's ever been. I think sometimes we make more out of this coopetition than there is. Like, I don't think Microsoft...
You know, NVIDIA launching a few models, I don't think they intend to be in the, you know, frontier model game where they're competing head-to-head with these other folks. At the same time, there's a long tail of their customers who are more than satisfied to use a tightly coupled model rather than maybe trying to kludge it together themselves by going to Llama or using a lot of these other plumbing services. And so, you know, we'll have to wait and see. Here's one thing that I would just point out in that regard.
We just looked at that CapEx chart. Almost $300 billion of non-governmental R&D being spent by the biggest companies in the world
to advance causes that are aligned with America, right? Like, we hearken back to the age of Bell Labs and Silicon Valley, you know, where you had national government spending. And I just think this is so damn bullish for us that we have printing presses, that we have companies generating enough cash flow to invest this much money to put us at the bleeding edge. And it creates
incredible national strategic advantage. And they were getting pushback for doing stock repurchases, because people said they don't have any use for this cash. Well, now they do. So I guess that's a positive. All right. The last thing that's on my mind, and that I'm really looking forward to better understanding in 2025, is
It does appear like, for the time being, the majority of the enthusiasm on the...
advancement in AI is around this chain-of-thought thing. And as you've talked about, I think it's somewhat ironic that it's less efficient, but now that's a good thing. I think if someone had built a model that was just hyper-efficient, where the margins expanded, that would be better. But okay, so now the argument is it's less efficient, but it's going to consume more compute, so that's a good thing. So seeing how well this applies to different
uses of AI will be super interesting. So, you know, what are the coding companies seeing, you know, when they use chain of thought precisely against that? Because when this model was first released, even OpenAI said it's not for every use case or they hadn't seen it be successful in every use case. Now, maybe they will in the future, you know, and I've
signed up and paid for all these pro versions in the past few weeks and been throwing problems at them. The thing that's interesting is it goes away, you know, both the Google and the OpenAI products, it'll go away for 10 minutes. And the question is, how much better is that output? I think that's going to be something that's really important to watch. Obviously, if the compute price keeps falling,
I think you fall into this why not argument. Like, why not run 20 passes on something if the marginal cost gets somewhat irrelevant? And with...
the fast followers, you know, and the way you're using synthetic data to create even smaller models, maybe you can get the smaller model to run the second pass, double-checking your work, if you will. But anyway, I think this will be the fun part to watch on the edge.
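To make that "why not run 20 passes" idea concrete, here is a minimal sketch of best-of-n sampling with a cheaper verifier, the pattern Bill is describing where a small model double-checks a bigger model's work. The `generate` and `verify` callables are hypothetical stand-ins, not any particular vendor's API; this is an illustration of the pattern, not how any of these labs actually implement it.

```python
import collections
import random
from typing import Callable

def best_of_n(
    prompt: str,
    generate: Callable[[str], str],       # expensive reasoning model (hypothetical stand-in)
    verify: Callable[[str, str], float],  # cheaper "double-checking" model (hypothetical stand-in)
    n: int = 20,
) -> str:
    # Run the expensive model n times; if inference prices keep falling,
    # the marginal cost of the extra passes becomes "somewhat irrelevant".
    candidates = [generate(prompt) for _ in range(n)]

    # Self-consistency: if a clear majority of passes agree, trust the vote.
    answer, votes = collections.Counter(candidates).most_common(1)[0]
    if votes > n // 2:
        return answer

    # Otherwise let the small verifier score each pass and keep the best one.
    return max(candidates, key=lambda c: verify(prompt, c))

# Toy usage with stand-in callables; a real setup would call model APIs here.
if __name__ == "__main__":
    toy_generate = lambda p: random.choice(["42", "42", "41"])
    toy_verify = lambda p, c: 1.0 if c == "42" else 0.0
    print(best_of_n("What is 6 * 7?", toy_generate, toy_verify, n=20))
```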
I can tell you a couple of things on that. Number one, I can tell you that the researchers inside these labs, they feel like they're looking into AGI. And they're surprised how skeptical the world is. Yeah. Like, there's a lot of dissonance that I think they have, because they're like, oh my God, we're getting close. The second thing I would tell you- Part of the problem on that is,
They're seeing things ahead of time. Correct. And they're unable to release them because they're not fully baked. And they don't have the compute. They don't have the compute, Bill. But I'll tell you this. They also, part of the reason they're so bullish on these reasoning models,
is that, from a scaling perspective, right? So remember, reasoning models can get better just because you throw more compute at them as well. So where are we on that scaling curve for reasoning? Most of them tell me that we're around a GPT-2
level from that logarithmic scaling. So they expect a lot of scaling advantages in 2025 and 2026 on the reasoning models. And then finally, when it comes to utility, so where are these things going to be put into practice? I think you're going to see some pretty dramatic breakthroughs this year on coding, right? So think about all the startups that we've seen in the coding space.
A lot of them have used prompt injection and other techniques in order to, you know, kind of get these agents to do things that they want. I think you're going to see real coding agents built by Google and by OpenAI, driven by their most sophisticated inference-time reasoning models, that are going to be meaningfully better than what's in the market today. That's one. And I would say, secondly,
So if at the enterprise level, you think of coding as kind of the tip of the spear as to what these agents can do. Remember, like you and I started last year talking a lot about the consumer, right? Memory and actions, right?
And I think that you're going to see ChatGPT imbued with these capabilities. And I imagine the same happens with Gemini, etc. And we've seen it. One of my partners did a demo using Booking.com and the Anthropic computer use, booking a hotel and doing other things.
It's still fairly embryonic, but I think that, again, that's not built on the back of o3. Now build that on the back of o3, and it becomes very, very capable. So
If you said to me, like, how wide is the use case going to be? I think now the use case is very narrow. It's researchers, okay? But I think you're going to see the aperture on the reasoning models expand pretty dramatically. And I think you're going to see increasingly a blending of the pre-trained and the O-series model, because ultimately it's one model...
you know, that's going to help you do all the things that you want to have done. Well, here's something, I didn't discuss this with you ahead of time, so we can edit it out if we don't use it, but if there are people that are doing particularly interesting things or leveraging this in a particularly interesting way, I would invite them to reach out to us. And I'd love to not only learn about it, but obviously be willing
to share with the audience what these things are as they uncover them. I'll close with this, and you didn't ask for my input on it, but, you know, I'm not a fan of macro analysis, in that I think it falls in the too-hard bucket because of all the different variables at play. Great seeing you. And I look forward to a fun year of doing these with some incredible guests. And, you know, I learn a lot kicking around with you. It's a lot of fun. All right. Take care. Take care.
As a reminder to everybody, just our opinions, not investment advice.