
NVIDIA’s New Roadmap, State of OpenAI, Apple Shuffles Siri Team

2025/3/21

Big Technology Podcast

People
Amir Efrati
Topics
Amir Efrati: I think when you talk about Nvidia, you're really talking about a handful of major cloud providers, plus a few large AI companies such as OpenAI, Anthropic, xAI, and Meta. The question is, who else is going to want these chips? The rollout of the Blackwell series is a big step forward, but for now, interest outside the big AI developers appears limited. Nvidia predicts that over time more enterprises will adopt its AI chips, but that will take time. At the moment, the big AI labs still face many technical challenges, such as model generalization and data security, which are holding back broader adoption of AI chips. Nvidia's chips are selling well, but that doesn't mean the AI build-out has reached maturity. On robotics, I think Nvidia sees it as the next major growth area, but its development still faces many challenges, and a breakthrough may take a long time. Elon Musk and Optimus have accelerated the process, but robotics involves complex components and technologies, and there are many difficulties to overcome. As for Nvidia's chip roadmap, I think new generations of AI chips will face delays, and the big cloud providers may also face upgrade challenges. The complexity of hardware makes rapid iteration and upgrades very difficult. On OpenAI, ChatGPT has been a huge success, but OpenAI also faces talent attrition. Some core employees have chosen to leave and start new companies, which poses a challenge for OpenAI. On Apple, I think Apple's challenges in AI stem from its corporate culture, such as a lack of transparency and collaboration. In addition, its leadership's insufficient understanding of large language models and underinvestment in talent are also reasons the Siri project has moved slowly.

Alex: I agree with Amir. Nvidia's success depends heavily on a handful of large customers, and broad adoption of its AI chips remains uncertain. Robotics is promising, but large-scale deployment still requires overcoming many technical hurdles. OpenAI's success is plain to see, but its talent attrition cannot be ignored. Apple's lag in AI is tied to its corporate culture and strategic decisions, and the future of its Siri project still faces many challenges.




Let's talk about what's on Nvidia's roadmap after its GTC bonanza and whether the company's right that the AI build-out is just beginning. Plus, we'll dive into the state of OpenAI and a reshuffling at the top of Apple's Siri project. That's coming up with The Information's executive editor Amir Efrati right after this.

Will AI improve our lives or exterminate the species? What would it take to abolish poverty? Are you eating enough fermented foods? These are some of the questions we've tackled recently on The Next Big Idea. I'm Rufus Griscom, and every week I sit down with the world's leading thinkers for in-depth conversations that will help you live, work, and play smarter. Follow The Next Big Idea wherever you get your podcasts.

From LinkedIn News, I'm Jessi Hempel, host of the Hello Monday podcast. Start your week with the Hello Monday podcast. We'll navigate career pivots. We'll learn where happiness fits in. Listen to Hello Monday with me, Jessi Hempel, on the LinkedIn Podcast Network or wherever you get your podcasts.

Welcome to Big Technology Podcast Friday edition, where we break down the news in our traditional cool-headed and nuanced format. Boy, are you in for a treat today, because Amir Efrati, the co-executive editor of The Information and the author of many stories that we read on the Friday show, is joining us to break down the news on our Friday show. Ranjan Roy is out today. He'll be back next week.

Meanwhile, we have a great show for you. We're going to talk about NVIDIA's new AI roadmap and the state of OpenAI, based off a lot of Amir's reporting. And then, once again this week, we'll talk about a shuffling atop the Siri organization at Apple and whether the company will ever right the ship there.

So great to have you on the show, Amir. Thanks, Alex. Great to be here. Great to have you. I think I've been in your inbox for, how long? A year, maybe, trying to get you on. So I'm really happy to have you here. Like I said, we recite your stories all the time, and I'm going to ask you so many OpenAI questions. Hopefully we'll get to the last story.

Cool. I think I was in your inbox for a while too, writing back. But yeah, very excited to get going. I will take that. It is true. All right. So let's talk a little bit about the GTC news from this week. We're going to get into the actual NVIDIA roadmap in a minute, but I just want to talk about the broader analysis here, because the main message from Jensen Huang.

Of course, he was gonna talk about the new chips they have, but basically his message was, we're just beginning to see the build out around AI and stay tuned because Nvidia has a lot to offer as we move into reasoning and as we get bigger and broader adoption

of AI. So I'm going to quickly read the take from Dylan Patel at SemiAnalysis. And Dylan, by the way, is going to be on the show in the coming weeks, so stay tuned for that. His take is this: AI model progress has accelerated tremendously, and in the last six months, models have improved more than in the previous six months. The trend will continue because three scaling laws are stacked together and working in tandem: pre-training scaling, post-training scaling, and inference-time scaling.

And GTC this year is all about addressing the new scaling paradigms.
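Dylan's "three stacked scaling laws" framing can be sketched as a toy calculation. Everything below is made up purely for illustration; the exponents and compute budgets are not real measurements:

```python
# Toy model of "stacked" scaling laws (all constants hypothetical).
# Quality improves as a power law in each compute budget, and the gains
# multiply when pre-training, post-training, and inference-time compute
# all scale together.

def quality(pretrain, posttrain, inference, alpha=0.1, beta=0.05, gamma=0.05):
    # Made-up power-law exponents; a higher return value means a better model.
    return (pretrain ** alpha) * (posttrain ** beta) * (inference ** gamma)

baseline = quality(1e24, 1e22, 1e20)
scaled = quality(1e25, 1e23, 1e21)  # 10x more compute on every axis
ratio = scaled / baseline           # 10**(0.1 + 0.05 + 0.05), about 1.58x
```

The only point of the sketch is that the three terms compound: improving any one axis helps, and improving all three multiplies together.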

Basically, he says that we're just seeing bigger and better models. And last year's mantra was the more you buy, the more you save. And this year's slogan is the more you save, the more you buy. I think what Dylan is saying there is basically that we are all now entering an era where AI chips are much more efficient. And if you save a lot, you'll be able to do much more. And then therefore, you will be able to buy more NVIDIA chips because adoption is going to be through the roof. Amir, I'm curious how you react to that.

Well, I think when you talk about NVIDIA, what you're really talking about is a handful of major cloud providers. And within that, and sometimes separate from that, big companies, like big users, I should say, of NVIDIA's chips, meaning OpenAI, Anthropic, xAI, and Meta. And that's mainly it. And so the question then becomes, okay,

Who else is going to want these chips? So we've got the Blackwell series. It's just starting to roll out. And it's a big improvement on the Hopper series that was in such high demand two years ago. You could not get enough two years ago. There was a huge shortage as everyone clamored for those chips.

And, you know, we're in somewhat of a similar situation where a lot of very big companies want these chips, the companies that are developing AI, but not that many others are similarly going to want it. And it's interesting, because a couple of weeks ago, on the last NVIDIA earnings call, Jensen talked about this shift.

Over time, he predicted that a lot of businesses, just regular businesses, not necessarily technology companies or software companies, are going to be adopting

Nvidia's AI chips over time. That's his sort of roadmap. And he said that that kind of follows other, you know, technology waves that came before. And that may very well be true. I think the question is how much time before we get to something like that. And, you know, at the moment, that's pretty unclear. And so the, you know, the cloud providers that we talked to at

NVIDIA's GTC event were basically saying that, yeah, there's not much interest in Blackwell chips on the part of companies that aren't really big, major AI developers themselves. And that just raises some questions. I don't think that will necessarily change NVIDIA's sales. They're sold out, and they're going to be sold out because, like I said, you've got this mad rush

and mad competition on the frontier model side. But yeah, I don't think it's as straightforward as Dylan is making it out to be in the sense that

There's still a lot of technical challenges that the major AI labs are trying to figure out. So I think there are definitely unanswered questions about how to make models that generalize and that are good at many different things at once. And so, yeah, I don't think we fully know the future, but NVIDIA is going to be fine for a while. That's for sure.

So can you give an example of a company that is not among the major cloud companies that would potentially buy a bunch of Blackwell chips? Would that be to set up their own data centers for inference? What exactly would this group of companies look like? And this is why the earnings call was so interesting. I think he brought up an example of an automaker that would want to have AI chips on premises.

And again, we're just not there. So there are definitely automakers that use Hopper chips for various reasons, as many big businesses have been trying out, either building their own internal apps or trying to launch AI features in their customer-facing apps.

But yeah, I think they're just still sorting through that, and that takes time. It especially takes time if you're talking about reducing hallucinations

And just protecting people's data and making sure things don't leak and all that. That just takes corporate America time. And so I don't think anyone is really clamoring for these Blackwell chips except for the really big AI developers themselves. And to some extent, NVIDIA itself. That's what's so interesting. And we keep learning more about...

Nvidia itself is actually one of the biggest customers of its own chips, both for internal purposes and also because they have their own cloud service that they may or may not want to supercharge. But they certainly spend a ton of money, billions of dollars, renting back their own chips, which is something people don't talk about a lot.

Yeah, and I guess we'll cover it a bit when we get to CoreWeave. But in the meantime, I want to continue to advance Dylan's argument because I think this is the argument that NVIDIA and the rest of the AI industry would make. It's basically that the chips are getting much more powerful and more efficient, and the models are getting more efficient. And as you get more efficient, you can unlock some new novel applications beyond maybe just putting chips in cars.

This is what Dylan writes: the inference efficiencies delivered in NVIDIA's roadmap, on the hardware and software side, unlock reasoning and agents and the cost-effective deployment of models and other transformational enterprise applications, allowing widespread proliferation and deployment. A classic example of Jevons paradox, or as Jensen would say, the more you buy, the more you make. So do you buy this?
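The Jevons paradox claim can be made concrete with some back-of-the-envelope arithmetic. The numbers below are hypothetical, just to show the shape of the argument:

```python
# Jevons paradox, toy version (all numbers hypothetical): if efficiency
# gains cut the cost per token, but cheaper tokens unlock proportionally
# more demand (reasoning, agents, new apps), total chip spend still rises.

def total_spend(cost_per_token, tokens_demanded):
    return cost_per_token * tokens_demanded

# Hypothetical generation-over-generation change: tokens get 4x cheaper,
# but demand grows 8x because new use cases become cost-effective.
gen1_spend = total_spend(cost_per_token=1.00, tokens_demanded=100)
gen2_spend = total_spend(cost_per_token=0.25, tokens_demanded=800)
# gen2_spend (200.0) exceeds gen1_spend (100.0): the more you save, the more you buy.
```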

Maybe in the long run. There could be a period of uncertainty in between. I don't know that it's a straight line. And I guess another way to look at this is: how many companies are generating revenue from generative AI? Not that many.

Definitely many companies are using it to cut some costs, and that's great, or automate customer service and so on. But in terms of companies that are actually selling generative AI, it's really very few. You could probably count on two hands who actually has a significant amount of application revenue or model revenue. So, yeah.

I think that just raises an interesting question about the trajectory of adoption and what most companies can or cannot do with models, even though they're getting more efficient. Now, these things can change quickly, and the DeepSeek revolution is still just beginning. So there's a lot of reasons to be excited. Like, DeepSeek is still proliferating. It really, like, opened

up a lot of people's eyes to your point and to Dylan's point. So we'll see in the coming months what it enables because people are still testing and experimenting and playing around with it. But

But yeah, I think there are still unanswered questions about the exact trajectory that we're on beyond OpenAI, Anthropic, xAI, Meta, these big kind of spenders on the chips, these big companies that do have the ability to make revenue from providing this technology. Amir, this is clarifying something for me, because...

Jensen did spend a good chunk of the keynote trying to make the case that the AI build-out is just beginning. This is from analyst Gene Munster. About a third of the two-hour-plus keynote Jensen spent making the case that we're still early in the AI build-out. Investors are still skeptical given that NVIDIA trades at 20 times earnings per share. I buy it. So,

I think what you're saying is going to be the big question around generative AI in the next couple of months, maybe years, which is: are we at the beginning of the build-out? And is this going to extend beyond the companies that you just named, the OpenAIs, the Anthropics, etc.? Or is this kind of where the AI revolution, so to speak, nets out, where it just kind of ends with a couple of chatbots and maybe some better enterprise efficiencies?

Yeah, no, I'm not ready to make a bear case on this. And certainly progress is occurring quite rapidly on a number of fronts, including on the coding side, as you've covered and you know very well. But yeah, I think we're still in this phase where we're waiting to see who else can generate meaningful revenue from products that are powered by generative AI. Yeah.

So I'm not saying it's not going to happen. I'm just saying it maybe takes a little bit of time.

And maybe that's why we're hearing so much about robotics, because if they're able to unlock robotics with, like, an LLM for the physical world, then the opportunity becomes even more massive. And it's funny, because I was on CNBC right as GTC was getting started, and they're like, what are you expecting? And I said, well, listen, I'm expecting them to make a big case for humanoid robotics, and that that's going to lead to artificial general intelligence.

There was a little bit of a mention of it at the beginning and at the end from Jensen, but it actually came after the keynote, on sort of the side stages, that we got more announcements from NVIDIA on robotics. A lot of people, like Yann LeCun, who was on the show on Wednesday, make the case that you basically need to understand the world as it is, and you're never going to get to human-level intelligence just with text. And it was interesting, as I read your recap of what was happening within the company,

what was happening within GTC, that you made robotics kind of a big part of your first sentence. This is from your story in The Information: NVIDIA on Tuesday unveiled a slew of new software for robotics, including a simulator it made with Disney and Google's help, during its annual conference for software and hardware developers. The announcements aim to catalyze the development of robots with software powered by NVIDIA's artificial intelligence chips.

So I'm curious if you think that's the way to read it, that this is what they see as the next big growth area and how real you think the robotics push is for, I would say, NVIDIA, but also the broader tech industry.

Yeah, it's definitely real. And I think what really catalyzed it was Elon Musk and Optimus, to be honest. He really is a force of gravity, if you will. And I think that definitely caught the attention of OpenAI, which restarted its own

robotics efforts as a result, and many others. So there is reason to take it seriously, but it's such a difficult thing to pull off, and it also involves all kinds of components and motors and all sorts of things that you have to get right. And I don't want to say that we're sort of like where self-driving cars were 10 years ago, but

I think 10 years ago, a lot of people were like, oh, self-driving cars are right around the corner. And actually, because it involves the physical world, because it is physical AI, because people's safety matters, and we have had at least one or two deaths associated with self-driving cars, which is really not that bad.

It just took a very long time to get to where we are, where Waymo is outside my office every day. So it's very exciting, the robotics space, but it will probably take a lot longer than you think. And whatever demos you're seeing, Figure AI, just take this with a huge grain of salt. One other thing that we just wrote in our summary on the robotics announcements from GTC is that these were mostly incremental announcements. And

Yeah, the little Star Wars robot that they trotted out. I forget what it's called. DBX or something. Very cute. That one was probably remote controlled, it seemed. But yeah, I wouldn't hold your breath. I think there's a lot of good reasons to be excited. There's a lot of reasons why investors are putting money into this, but it's probably a 10-year kind of bet.

Definitely. And you see, in Jensen's opening, or one of his opening slides, he has this, like, exponential curve.

And it starts from perception AI, moves to generative AI, then to agentic AI, and then physical AI. And Yann comments on Jensen at GTC: the future of AI is physical AI; I couldn't agree more, obviously. So I think this is just going to be a narrative that we're going to hear a lot more about, especially as that question that we asked in the beginning, what stage of the build-out we're at, remains unclear.

We're going to see more of this. It might be a look over here type of thing. But yeah, what do you think about it, Amir? Well, just to round out the physical AI part. So I remember at, I think it was CES in...

It must have been 2017 or 2018. Jensen was basically saying, we're making self-driving cars with Mercedes and we're making this and that. I think he even had a prototype self-driving car based on NVIDIA's own software. That was definitely way too far ahead of where things actually were.

That's part of his job. I'm not saying that that's a reason to doubt what he's saying, but I think we have some past precedent for this when it comes to physical AI. On the non-physical AI, on the digital AI front, look,

OpenAI and Meta and Google are definitely very serious about massive build-outs, xAI too. So you have at least four, five, six companies with a lot of resources that are still completely and totally committed to the idea that bigger is better. The funny thing about that is that

Similar to self-driving cars, it could be that the rate of improvement does not continue in some exponential way. And it could be more logarithmic or it could just be you put more into it and you get a little more efficient over time. But I don't know that it's obvious that it's going to be some exponential improvement.

I think a lot of these folks, especially the dyed-in-the-wool AGI kind of researchers, they

definitely envision a time of, like, self-improving AI, right? And again, they are a hundred percent religiously committed to the idea that they're very close on that. I just don't know that we have a ton of evidence for that yet. But everything that's happening is great, and I'm a huge fan of all these investments. So, you know,

Yeah, go for it, guys. Keep at it. Okay, before we move on, just very quickly about the chip roadmap itself. We don't have to spend a lot of time on this, but we should at least note it. This is, again, from your story. The company gave some glimpses of its upcoming AI chips: Blackwell Ultra, due out later this year; Vera Rubin chips, coming out next year. And the next generation is going to be named after another scientist, Richard Feynman.

So any high-level thoughts about what these generational improvements of chips enable companies to do? Is it just like a slightly more powerful chip to train your AI models on? How should we think about them? At the moment, yeah, that's pretty much it. I think, as we saw with Blackwell, there are inevitably going to be delays as NVIDIA continues to really try to accelerate

their rate of releases, it's just really hard to pull off, especially when it's like a different architecture or like a totally different die configuration, which they're going to be trying to do. So

It's very complex to pull off with all their partners, and with TSMC in particular. So there's definitely a lot of nervousness and hand-wringing about how long it's going to take to perfect these next versions. And then for the companies that ordered a ton of Blackwells, you know, the Microsofts of the world and Googles and so on.

They're pretty stressed just hearing about these new chips because they haven't even figured out the Blackwell chips and how they're going to make those work. And that's still like a TBD. We have not had the rollout of Blackwell, and that's going to be a very massive undertaking on the part of the data center companies or the cloud providers that are going to implement them.

Yeah, it was one of those things when I heard about this, the Rubin chips that are set to come out next year. I was like, wait, so many people haven't even gotten the Blackwell chips. What are they going to do? Are they going to just double up on orders, or skip the Blackwell? Hardware is tough. It's not like software. There can be delays, there can be messes, and you have to sort of adjust from there. It's not clean. I'm glad you brought that up, because adjustments happen all the time, and these customers do

put their orders off and say, "Actually, don't give me the B200s. I'll wait for the GB300s," or whatever. It's taking you too long to do this. We'll just wait for the next one. So there is a lot of that negotiation happening. And what's interesting about next year is that it's going to be a very, very important year in chips because that is the year when TSMC is going to launch

its next-generation chipmaking process. You've got all kinds of companies that have been making huge investments and planning for that moment, because they want to launch their own AI chip. OpenAI is definitely on that list. There are others. We know ByteDance has been trying pretty hard. We've got Meta. I don't know how big or serious their effort is, but it definitely exists.

And you've got efforts from Amazon, certainly, that are very much worth paying attention to, which we wrote about this week.

They're practically willing to lose money to get customers to use these things. These are the Trainium chips that directly compete with NVIDIA and are offered through Amazon Web Services. And so, yeah, you just have a lot of things converging on next year. And yeah, that's going to be pretty exciting to see.

Right. And so let's just take one of those and kind of talk a little bit about it. You mentioned that Amazon is willing to effectively lose money on its Trainium chips. This is from your story, "Amazon Undercuts Nvidia With Aggressive AI Chip Discounts."

One longtime AWS cloud customer said the company recently pitched them on renting servers powered by Trainium, which is Amazon's chip, that would give them the same computing power as NVIDIA's H100 chips at 25% of the price. Does that type of stuff, I mean, we know that all these companies are building their own chips, does that stuff shake up the market? I know you said we might have to wait until next year to figure that out, but what do you anticipate the impact of that being?
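The 25%-of-the-price pitch is easy to restate as price-performance arithmetic. The prices below are normalized, hypothetical units, not real dollar figures:

```python
# If Trainium really delivers the same computing power as an H100-class
# server at 25% of the price, the implied price-performance gain is 4x.
# Prices are normalized units for illustration only.
h100_price = 100.0
trainium_price = 0.25 * h100_price  # 25% of the price, per the reported pitch

price_performance_gain = h100_price / trainium_price  # 4x compute per dollar
```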

Yeah, it definitely could have some impact. I think it's a little too early to tell, but we just know how serious they are about it because...

With NVIDIA, nobody likes to deal with a monopoly, or a company that has dominant share, because they then ask their customers to buy other things, whether it's networking equipment or software,

or other parts that those customers may or may not want to buy. And that was a huge source of tension last year between NVIDIA and Microsoft in particular, and some others that we reported on. And it's actually part of the government investigation into NVIDIA that is underway. So I think everyone respects NVIDIA and Jensen, unbelievable job, kind of seeing into the future and building for it and hats off.

but they want to do what they can to be less reliant on them. And actually the one company that also was very thoughtful in getting ahead of things is Google, which we haven't talked about. And their tensor processing units are something that you, you know, it's fair to say is a bit of a competitive advantage for them because they have been less reliant on NVIDIA and this is how they develop their AI, you know,

Although they don't have an infinite amount of TPUs. And so one of the reasons they had to merge their two AI groups, DeepMind and Brain, in 2023 was that they just didn't have enough TPUs to build large language models. So it doesn't solve all their problems. They are a

major NVIDIA customer as well. But the TPU chips are a force to be reckoned with. I know that they have their own plans to try to get customers, their cloud customers, I should say, to use these TPUs so that it's not just Google using them. I don't think that there are a lot of customers clamoring for that other than maybe Apple, which has a lot of former Google engineers.

but they're going to give that a shot too. So yeah, these are going to be really, really interesting efforts because these alternative AI chips are supposed to end up being pretty good. Now, Dylan Patel might tell you

that NVIDIA is just so far ahead of everyone else and nobody is going to catch them. And it's an open question. And NVIDIA has all these advantages, of course. But I just wouldn't discount the seriousness of some of these other companies. And then it raises the question of exactly how much computing power are you going to need? And if OpenAI or Google or Amazon, if they just make

you know, a lot of less powerful chips, does that make up for the power shortfall? You know, if they are good, efficient chips, is that going to be enough? So that's another question worth thinking about and asking Dylan about. Okay, that's fascinating. Let's keep poking at the NVIDIA thesis, because it's fun to do it. So there's this company CoreWeave, which is pretty interesting. If I have it right, what it does is it stocks up on GPUs and rents them out.

And NVIDIA has a large investment in it. It's planning to IPO. So you have this story, which I think Corey Weinberg wrote this week in The Information, about the projections for CoreWeave. The early projections, as it shared with private investors last fall, were that its revenue would quadruple to $8 billion and the cash burn would shrink to $4 billion.

Instead, what's happening is revenue is going to be $4.6 billion, and cash burn is going to rise from $6 billion last year to $15 billion this year. So instead of quadrupling revenue and shrinking losses, it is going to grow its revenue, but its losses are going to be incredible.
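For concreteness, here is the gap between those private projections and the later figures, using the dollar amounts cited in the discussion (in billions):

```python
# CoreWeave: private projections vs. the figures discussed in the episode
# (all amounts in billions of dollars, as cited above).
projected_revenue = 8.0   # projection shared with private investors
actual_revenue = 4.6      # revenue now expected
burn_last_year = 6.0      # cash burn last year
burn_this_year = 15.0     # cash burn now expected this year

revenue_shortfall = 1 - actual_revenue / projected_revenue  # about 42.5% below plan
burn_growth = burn_this_year / burn_last_year               # burn grows 2.5x
```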

And I'm curious, some people have taken this as a sign that like basically it's all over for generative AI. So can you help us sort through the fact and the fiction of what's going on with CoreWeave? Well, at a high level, what's going on with CoreWeave is when you present financial projections as a private company,

there's a lot more leeway than when you present financial projections as a public company. So that's really the main point there. When it comes to CoreWeave itself, it is a success story. There's no doubt about that. And they were kind of right place, right time and moved very quickly. They were Bitcoin mining enablers. And so they were big NVIDIA customers to begin with.

And then NVIDIA saw an opportunity, very smartly, to, again, diversify its customer base. And it knows and doesn't like dealing with these cloud providers, because they're trying to develop their own chips that compete with NVIDIA's. So they're like, let's help CoreWeave and a bunch of other smaller data center and cloud providers,

and give them allocation of these really in-demand Hopper chips so that they can get revenue. And what ended up happening is, again, because most of NVIDIA's chip customers are huge users like OpenAI and Microsoft and a few others, CoreWeave's business ended up being mainly

renting out these chips to Microsoft and a few others. It's good work if you can get it. Yeah, instead of Microsoft getting these chips directly, they had to go to CoreWeave to get them. So I'm not being cynical about it, but CoreWeave is basically a pawn in NVIDIA's master plan. That being said, it really sounds like there is a place for them in the market because as a smaller company, as a startup,

that's really focused on AI computing, they're just able to move a lot faster than some of these big cloud providers in setting up data centers. And so speed is really their forte.

Just like physical AI, anything hardware related, as you pointed out, is really, really hard. You know, atoms are much harder than bits. And so a lot of the big cloud providers, even Microsoft, they really struggle. They really struggle to move quickly in the data center world.

CoreWeave doesn't have the same issues. They don't have a huge, broad base of customers that they have to cater to. They only have a few customers they have to cater to. They can move a lot faster for all sorts of reasons. So there is a place in the market for CoreWeave.

I think it's just a matter of what is the proper valuation and can they raise money to continue their build out over time? I think those are really the questions. It's more of a question of how much do you value this asset? As long as they continue to be strategic to NVIDIA, they'll be okay.

All right, Amir, but on that point, they still have less revenue than expected, right? They were expecting, what were they expecting? $8 billion. They're only getting $4.6 billion. So is that a sign that the demand for these services is lessening? Explain that. I don't think so. No, I don't think so. I think these are lumpy things. And they've had this issue before in 2023 when they were trying to expand very quickly and they couldn't.

So they were mainly kind of real estate constrained more than anything else. So I wouldn't read too much into that. One thing that... Oh, go ahead. I was just going to say, sometimes it's a supply issue. Sometimes it's a demand issue. And it seems like it's a supply issue here, not a demand issue. Yeah. In this case, definitely a supply issue. One interesting thing that

I don't know that they've disclosed this, but I remember in 2023 and it really stood out to us and it ended up being true. They were telling investors a couple of years ago that the NVIDIA chips that they were getting, these Hopper chips, they would be able to monetize those for six years.

That sounded like a really long time to us, especially because you'd be like, oh, there are going to be two more generations of chips after that. But it's interesting, that prediction, even though it's only been a couple of years, it seems to be right because even the Ampere chips, which is the prior generation before Hopper, they're still in use today.

OpenAI still uses them. So I think they definitely, if that continues to hold, then it's not as if they are buying these fast, fast depreciating assets. These are assets that can hold their value for some time anyway. So that's a little bit of good news for them.
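The six-year monetization claim matters mostly through depreciation. A minimal straight-line sketch, with a hypothetical per-GPU price:

```python
# Straight-line depreciation sketch (hypothetical $30,000 GPU price):
# spreading the purchase cost over a six-year useful life instead of
# three halves the annual depreciation expense per chip.

def annual_depreciation(cost, useful_life_years):
    return cost / useful_life_years

gpu_cost = 30_000.0
three_year = annual_depreciation(gpu_cost, 3)  # 10,000 per year
six_year = annual_depreciation(gpu_cost, 6)    # 5,000 per year
```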

Okay, Amir, so you mentioned OpenAI, and you are the co-executive editor of The Information, but you also do a lot of reporting. And I think that among reporters, there are few who know OpenAI better than you. So I think that as we go through this conversation about where gen AI is heading, what the state of NVIDIA is,

Now, we really need to talk about where OpenAI is going. So I'm just going to ask an open-ended question for you. I mean, what is the state of OpenAI today? Very curious to hear what you think. The state of OpenAI today, how to start, there are a few different vectors you could take. I guess the easiest one to talk about is ChatGPT. And that is a runaway hit. I mean, it is a strong brand. It's growing like a weed.

Dare I say, it's like Google 20 years ago. And that is worth a lot of money. And I don't want to say that it completely replaces Google, but it definitely could replace some of Google, but it also just enables a whole slew of queries and commercial opportunities. I mean, my wife

is actually the chatbot evangelist in my household and has nothing to do with me. She's already, she's using ChatGPT in more creative ways than I could ever think of. This is not necessarily a creative example, but it's an important one. She asked it to do a bunch of research on skincare products because she, I guess, wasn't happy with hers, wanted a new stock. And it came up with a bunch of different answers based on her parameters and

And she is loving it. She bought everything it recommended and she's loving it. And she's so happy. She's like, can you tell I'm glowing and all that? And I'm like, yes, yes, of course you are. So that just gives you an example of the kinds of things that it can do, even if it's not eating into commercial stuff right now. So I don't know. I don't know how you discount that. Like that is an amazing kind of rocket ship.

And I'm not seeing Google catching up to that. Certainly not their standalone Gemini chatbot. So the question is, can they get these same types of queries within this main search product? I don't know. They're trying a bunch of stuff. It's hard to do because they don't want to destroy their cash cow.

But that's an incredible thing. Meta is trying to shoehorn its chatbot into all these social apps that you have on your phone. I don't know exactly if that makes total sense or it only makes sense in some cases, but it's definitely not leading to the kinds of queries that I think you're seeing on ChatGPT, a lot of which are actually kind of coding related, right? It's like

the original really robust coding assistant that is a big reason why it's such a financial juggernaut in addition to being like a consumer juggernaut through its ability to charge subscriptions to a lot of professionals. So that's one side of OpenAI.

And there's a lot of reasons for them to be excited. Then there is the talent side, which is still pretty good despite a lot of their talent losses. I would say, though, that personnel moves and the impact they have sometimes take a little bit of time to play out. And so Mira Murati, who was the CTO and abruptly left

last fall. That's the departure that's been the most concerning to people at OpenAI and to Sam Altman, because she's been able to land some good folks from the research team. She recruited just about everybody from the livestreams, the top echelon, all now working at Thinking Machines. Yeah. And

One of the ways you know it's becoming a problem for OpenAI is when they send one of their biggest investors in to give a seminar to the employees telling them,

all the reasons they shouldn't go to a startup and how rare it is to have a great financial outcome that would be better than what they would get by staying at OpenAI. Yeah, so can you talk about that? You had that story. It's very interesting. Josh Kushner went in and was like, if you join a startup and you get 1% and it sells for a billion, don't think you're getting $10 million. It's like a hilarious... I've never heard of a meeting like this. Yeah, and what's interesting about it is

It may be generally true what Kushner was telling people, but these people are so rare and so unique that some of them are worth $10 million right now in terms of the job offers they would get.

And it's not unlike the self-driving car phenomenon from 10 years ago. It's very similar. There are just so few people who had any expertise in developing the core underlying software that they could get these kinds of offers and valuations and so on if they left these big companies to start their own thing or join a startup.

So we do have this interesting dynamic. Mira is definitely trying to make it worth people's while to join her by giving them equity that is very cheap and could be very valuable, at least on paper, very soon. So she's not an idiot. She knows money talks, and she knows what people need in order to be willing to leave OpenAI, given that it is a rocket ship.

So we don't need to go too far on this because there is still some great talent density at OpenAI. And I don't think it's easy to say that they're completely decimated or anything like that. Before we move on, can I ask you one Mira question? I mean, what is she building? Is she just going to try to build the same thing as OpenAI? I could not understand for the life of me what she was trying to do when I read the launch post. So maybe you have some thought on this. Yeah.

Well, there are some things that are just harder to talk about if we're trying to report on them or break news on them. But no, I don't think it's going to be the same. And I think in some cases, they're just trying to bring really smart people together to come up with a plan once they have those smart people, right? It's like... It's a good roadmap. Well, they call it a lab, right? It's a lab. So the purpose of a lab is to do experiments and then figure out what to do. Now...

So there are plenty of commercial opportunities if you want to like be a consultancy and say, hey, we just came out of OpenAI. We know what to do. You can go to an aerospace company or you could go to any business under the sun and offer your services and they'll probably take it. Or you can go to some of OpenAI's customers and try to say like, hey, we will do bespoke stuff for you as you help us learn what products are going to be great in the market, right? That's ruthless. Yeah.

I mean, we'll see. You mentioned DeepSeek earlier. And I think one thing that open models like that clearly enable is just a lot more experimentation, a lot more building on top of these things to get some approximation of the state-of-the-art OpenAI or Anthropic models without having to buy those OpenAI or Anthropic models. So yeah, they definitely have some options, but I don't know.

Okay, I want to ask you one more OpenAI question before we go to Apple and Apple Intelligence. There was a fascinating information story that you guys put together about the apps at the other end of the agentic AI that OpenAI is building and how they feel about the fact that they're effectively going to be disintermediated by AI. And I think this is like the first few examples we've seen of companies telling the AI companies, hey, wait a second, we don't just want to be like

an endpoint, effectively, in your master chatbot system. And you have some details of a meeting between DoorDash and OpenAI last fall, where I love the question that DoorDash asked OpenAI. They said, what happens if you take Operator to the logical limit? And what happens if only AI bots rather than people visit the DoorDash site?

One thing they talk about is how they have ads on their site, and now instead of people seeing the ads, it would be robots. But to me, I think the greater concern we're going to see from companies like DoorDash and others is that as this AI gets more advanced and can surf the web, why do you need DoorDash at all? For instance, maybe your bot can now just log on to a restaurant web page and order a delivery. I mean, of course, you still need the Dasher, I suppose.

But okay, that's a robot. Now what does DoorDash exist for? And so there's been this kind of discussion around this stuff about

Do we live in a world without apps as AI gets better? And I think in this story, you're starting to see that the apps are starting to think about that as well. Now, maybe that's extreme. Like if you take it to its logical limit, but in the meantime, there could be some serious consequences to companies that end up in this AI ecosystem, maybe the same way that app companies found themselves sort of, you know,

held on a leash in the Apple App Store. But this might be even worse, because they control everything. So what do you think, Amir? Yeah, we're still really, really early in this. You mentioned Operator, this web-browser-using AI that's out there, and OpenAI is trying to get other developers now to make similar products.

We don't know where that's going. I still think the opportunity with that kind of technology is more in the enterprise app world when you are trying to use different enterprise apps at the same time. And maybe you have this AI that can help you work with all these things together, transfer data from one place to another and so on. Yeah.

But the consumer uses of these web browsing agents are the things that the companies that have made these agents have talked about. So OpenAI and Anthropic and Google to some extent. And yeah, I think that retailers in particular are going, huh, well, I guess we need to see where this is going.

And to get ahead of it, and to see if we need to block these things because they'll eat our lunch, no pun intended in DoorDash's case. So yeah, I think OpenAI, like every technology company, wants to be the

front end of the world. It wants to be the conduit through which everything happens. That's sort of what these companies end up desiring at the end of the day. And already, as I mentioned, with the ChatGPT example and my wife,

OpenAI already has a product that's starting to impact commerce. So all the retailers and publishers, website publishers and other folks, they already have to sort of contend with that and think about that. Because just like when Google was coming up,

people are going to need and want this referral traffic. And they're going to want to, if not game the system, then at least make sure that they're getting as much traffic and as many customers as possible once consumers change their habits and start experiencing the web in a new way. Same thing happened with social, obviously. And these things ebb and flow over time. But that's the starting point here. These web-using agents,

I'm still not totally seeing it when it comes to consumer uses, but it's quite early. And I'll definitely take it seriously. But I think that you've already seen

Reddit block some of these things from traversing its site. So it's not a stretch to think that others will. And from the perspective of OpenAI and the other companies that are creating

these agents that basically can take actions on your behalf or do things that are a little bit more complex than just reading and regurgitating something. They can actually do a transaction. They think that this is the future. They think that you're just going to tell your AI...

I want to buy this kind of meal, or I'm hungry for this, so go do it for me and have it show up at my door. Or I want this kind of information, tell me what happened in the world, and you set it, let it go, and it'll do it. So they see it as part of this vision of so much more being automated. I think if you talk to the Walmarts and Amazons of the world, they'll say, yeah, for some things it makes sense to automate purchases, but

Not for a lot of other things. If you're going to plan a party, and if you have taste and you care about the details, you're not going to leave that to an AI, right? At least not right now. So these things are early. These things are buggy. But there's clearly going to be tension between retailers and AI companies, just like there has already been tension between

content publishers and AI firms. And that is going to continue. There's already a lot of bad blood there, and that will continue. Yeah, I just think this is going to be a huge story moving forward, and we're just seeing the beginnings of it. So it was nice to see that you guys had some coverage inside a meeting where this is being discussed. We're here with Amir Efrati. He's the co-executive editor of The Information. We have to talk about Apple drama, and we'll do that right after this.

Curious to know the fastest growing skills that professionals should be investing in to stay ahead of the curve? Check out the Skills on the Rise list only on LinkedIn. Skills on the Rise highlights the skills you should be on the lookout for, the skills that professionals are increasingly adding to their toolkit, and those that companies are increasingly hiring for. Skills on the Rise is sponsored by Workday.

Raise the rudders. Raise the sails. Raise the sails. Captain, an unidentified ship is approaching. Over. Roger. Wait, is that an enterprise sales solution? Reach sales professionals, not professional sailors. With LinkedIn ads, you can target the right people by industry, job title, and more. We'll even give you a $100 credit on your next campaign. Get started today at linkedin.com slash marketer. Terms and conditions apply.

And we're back here with Amir Efrati, the co-executive editor at The Information, talking all about the world of AI. And what would a discussion about artificial intelligence in March 2025 be without a discussion of what the heck is going on at Apple? More drama at Apple. We've done two straight episodes leading with the Apple story and figured this week we would take a break. So let's conclude with what's going on at Apple, because it does seem like the urgency is high, and

We haven't seen firings yet, but we are seeing a reshuffling. This is from Bloomberg: Apple Shuffles AI Executive Ranks in Bid to Turn Around Siri.

Chief Executive Officer Tim Cook has lost confidence in the ability of AI head John Giannandrea to execute on product development. So he's moving over another top executive to help: Vision Pro creator Mike Rockwell. Rockwell is going to report to software chief Craig Federighi, removing Siri completely from Giannandrea's command.

I think there was an MG Siegler tweet about this where he just kept repeating, resist the urge to make a joke about the Vision Pro head taking over Apple Intelligence, because we all know the Vision Pro hasn't done well. But at least they shipped that product in the way that they said they would, whereas Apple Intelligence has been a disaster. I don't personally think this leadership change is going to make a difference, because to me...

The inability to build something good with Apple Intelligence and Siri, like I've been saying on the show for the past couple of weeks, is a culture thing. And by putting Mike Rockwell in charge, you're not going to change the culture of Apple, which is something that really needs to change. Something that needs to be less secretive and more collaborative, like I wrote in

Big Technology this week, to be able to build AI. But maybe this will light a fire under their butts. It's hard to really think that this is going to make a big difference. But I'm curious what you think, Amir. Yeah, you made an excellent point. And I think a couple of years ago, as this was starting,

Apple was trying to be a little bit more transparent in its research arm, in terms of talking about the large-language-model-related research it was doing. But yeah, I don't think they've carried that much further. And to your point, it is not easy to attract AI talent everywhere. And I think it's been very strange

to see their struggles here, because they have had a long time to do this. Now, it does start with the person who's in charge. And one thing that is interesting about John Giannandrea is that he really wasn't a big fan or a big believer in large language models. And I think it took him a long time to get fully on board.

And they were sort of running into this situation where, at WWDC last year, they're actually promoting ChatGPT and OpenAI as being a leader in this kind of foundational technology. Now, I don't know that they've actually pushed that much traffic to OpenAI or provided their users with a lot of ChatGPT

action. And I think there's just this inherent problem with privacy and data.

I think for a long time, their whole mission was to put Siri all on device or as much on device as possible, right? And really have it not need to talk to any servers. And so then it was like, okay, are we entering a totally different era where you actually do need server access because that's where the AI chips are that need to process whatever I need to process? So honestly, it's been really strange to see this. It's been really strange to see...

them have struggles with some of the features they've been trying to launch, and having to delay a lot of these things. But I think it's a combination of leadership being very slow on LLMs and slow on ramping up investment in talent, and then some of the inherent organizational and structural constraints that you pointed out. Plus how

these LLM services tend to sit at odds with the idea of hyper-privacy, hyper data localization, and things like that. So yeah, it's a combination of factors, but very strange. The whole thing has been really odd. It's weird. I'll tell you one of the most interesting contradictions about the AI moment right now.

You need to be big to play. We know that, right? The companies that are in the lead are the ones that have the most money. But the advances have been made by the smaller, more nimble companies. OpenAI was an upstart and built ChatGPT. DeepSeek, a hedge fund with a bunch of GPUs, built DeepSeek-R1. You just can't have a big-company attitude if you're doing this stuff, because

You don't want to cut people off from new information. You need to be experimental. You need to be nimble and be ready to try the cutting edge and not have to go through processes. And it just seems like that's plaguing Apple in a big way. And Google's had that but has gotten over it, but Apple has not gotten out of it. And it's going to be hard for them to do. Can we just point out that, you know, when Siri launched,

hats off to them because yes, it had its limitations, but it was a cultural moment, definitely in the West and in the US. And it was an earthquake inside of Google for sure. It really freaked everyone out there because Apple had done something that just captured people's imagination. And I think

It's just incredible. It's been like 10, 12, 13 years since the launch of Siri and the improvements to Siri have just been so lackluster. It was interesting though that in all of this, it definitely took some time for...

for accountability to set in. And I think one big signal that a shakeup was coming was when Kim Vorrath, who's their software taskmaster, went into the Apple Intelligence section of the company and the Siri team; she was really brought in to figure out what to do on the product side and help people get on track.

And then Mark Gurman at Bloomberg, who's done a great job reporting on all this, reported on a kind of mea culpa by Robbie Walker, who joined quite a long time ago to work on Siri and sort of took the fall for some of the problems. This was an internal mea culpa. And then I have to say, yeah,

Gurman's report yesterday on this change with Rockwell and Giannandrea. We had heard about this earlier this week and were working on reporting on it as well. But to be able to print in a publication a

sentence like "Tim Cook has lost confidence in John Giannandrea," that is not something you write lightly. And you have to imagine, and I'll keep this vague, but you have to imagine that there is a reason why Gurman was able to write those words just the way that he wrote them. But yeah, it's really

quite a shift over there. But it's definitely logical given all the problems they have and how far behind they are. They still have an opportunity, though. Nobody has really knocked it out of the park on voice and voice AI. It's definitely happening. It's definitely coming. I know Elon obviously is trying with the Grok app, and I forget what he calls the voice assistant. I think it has a different name than Grok. But

But Google definitely has options here with their Android platform. So Apple can still do amazing things that are very, very useful if they kind of make Siri innovative again. So I wouldn't totally count them out in every way, but...

man, this has been such a weird, weird saga. Not something you expect from Apple. Okay, so we have a Discord chat for our paid Big Technology subscribers. It's real fun. And the Discord has been buzzing all week, with someone, I think it was MacRumors, calling this the Windows Vista moment for Apple. Do you think that's fair? You're asking me to tell you whether Apple is going to be languishing for a decade. Yeah.

That is difficult to do. Yeah, I don't want to stick my neck out on that. I really don't. And I don't have the Apple Intelligence features. I'm very happy with my iPhone, really one of the greatest purchases I've ever made. And so it's hard to feel too down on them. And

one thing that I've actually been thinking about more lately is the upcoming Google antitrust penalty trial and what it could mean for Apple. I think that's really the big deal. If for some reason the government is able to prevent Google from paying Apple $20 billion a year for search referral traffic through Safari, which preloads the Google search service, then

it's worse than the Windows Vista situation. It's like they get cut down in size. So I don't know whether the market has priced in that possibility, or exactly what the government is going to be able to do, and what Tim Cook has up his sleeve to deal with that. But

losing those profits is a very pressing short-term concern for them. Oh, yeah. All right, Amir, thank you for coming on. Can you share where people can find your and your team's reporting and how they can subscribe to The Information?

Sure. It's just theinformation.com. I'm on X @amir, and amir@theinformation.com is my email if you ever want to reach out. But yeah, we focus on stories that tell you things, hopefully, that you didn't know before. That's the bar we try to meet

every time. We try to get inside all the companies that people care about. And yeah, we take AI quite seriously. We have a newsletter that is insanely popular and insanely full of great reporting called AI Agenda, run by Steph Palazzolo. So we definitely encourage people to check that out too.

And I will just second all of this and say thank you for writing so many great stories that we're able to cite here on the Friday show and for coming on the show yourself. So thanks for coming on, Amir. Thank you very, very much. I'm so glad we could do it. Me too. All right, everybody. Thank you for listening. We'll be back on Wednesday with another flagship interview. And on Friday, Ranjan Roy will be back to break down the week's news. We'll see you next time on Big Technology Podcast.