Let's assess the fallout of the DeepSeek moment and how it changes things for OpenAI, Anthropic, NVIDIA, and others. Plus, Apple makes Siri dumber and iPhone sales are falling. We'll cover it all on a Big Technology Podcast Friday edition right after this.
Welcome to Big Technology Podcast Friday edition, where we break down the news in our traditional cool-headed and nuanced format. Well, last Friday, we spoke with you about DeepSeek. And over this past week,
It seems like it's really changed the entire conversation in AI. So we obviously covered it on Friday. We had M.G. Siegler come in for an emergency podcast on Monday. Today, we're going to pick up that conversation and talk a little bit about, now that we have a little bit of distance from the actual news itself, what it's going to mean for almost every tech company in the AI game today.
And then we'll also talk about Apple and how Siri has become even dumber with Apple intelligence. Joining us as always to do it is Ranjan Roy of Margins. Ranjan, what a week it's been. Welcome back to the show.
I can't believe the DeepSeek sell-off was only five days ago now. These weeks are getting longer, I think. That's crazy. And it's very interesting that NVIDIA was the one that was hit. I guess NVIDIA has been a symbol for artificial intelligence and maybe the AI bubble, if you think it's a bubble. But there are some others outside of NVIDIA that I think it's worse for. And we can talk about that a little bit today. So let me just kick it off because I...
am about to publish a story on Big Technology, talking a little bit about the four areas of margin in AI and where this week's news leaves OpenAI. So I was speaking with a source this week who's working in the industry who basically put it this way. You can get margin in four areas of the AI business. There's hardware, and NVIDIA is taking all of that. There's data centers, so you have Azure and Amazon's AWS and Google Cloud taking all of that. There's AI model building, where there's not really a margin, especially after what we saw this week, where DeepSeek has basically shown that we're going to be on a curve where these models will just get cheaper and cheaper to run. And if you're licensing them from an OpenAI, the price will get about as close to free, or to the cost of the energy you need to run them, as possible. And then there's applications. And that's the ChatGPTs and the Replikas of the world that use this technology to build.
And so basically the assessment, the biggest takeaway from this week, is that if OpenAI thought it was running an API business or a model-building business before this week, right now there's clarification: it's all about the applications. It's all about ChatGPT. Good news, it has the leading application in the AI application world, but it certainly shakes up the broader picture for the company. So Ranjan, I'm just going to put it to you. I'm curious how you react to all that and where you think OpenAI is after this past week.
Well, my first reaction is I'm very glad, because I've been ranting that LLMs will be commoditized for, I think, about a year and a half now. And now it seems to be the conventional wisdom. The fact that we're starting the conversation here saying that models will provide no margin, versus the idea that actually that's where the insane profits would be, that wisdom has completely flipped on its head. Now it's almost the assumption, after R1 and DeepSeek, that there will be no economic incentive on the model side. It's interesting how you put it, though, because remember, in the financial projections that leaked, I believe 80 or almost 85% of OpenAI's revenue was supposed to come from the application layer, the consumer subscription side. Whereas Anthropic is actually the one that gets even more concerning here, because they were betting on the API layer, the model layer, the fact that people would pay them lots of money for access to the models directly.
Even in this framework, it almost bodes worse for Anthropic than OpenAI here. Yeah, I think that this was really vindication week for us on a couple of fronts. You said the product is all that matters, and me whispering over the past couple of weeks that Anthropic might be in bigger trouble than a lot of people are anticipating. We'll cover it. Dario Amodei had a really interesting post about this DeepSeek moment, and we're going to cover that in a moment. But
You know, it's interesting because I'm actually leading my story off with the revenue numbers about how ChatGPT was supposed to always be the lead for OpenAI and not the API.
But there's one thing I can't square with that. You would think that this would be total vindication for their strategy and actually a good thing for them. And maybe they might even eventually use different models within ChatGPT if we end up in this world where everything is commoditized, which is something I hadn't considered before, but I'm going to mention in the story. But the surprising thing to me has been how flat-footed OpenAI has been in their response to DeepSeek.
Now, obviously, it's a moment that will probably shake you if you're a model builder because you've been not, let's say, surpassed, but effectively equaled by an open source model coming out of China that is so much cheaper to run. And we talked about it last week that the big deal
in this moment is not that it took less money to train, and that's obviously under question. It's the fact that you can use the API for much cheaper. And I was speaking with a developer this week who said basically it's $60 per million tokens, I think, to use the OpenAI o1 model, and it's $2.19 to use the DeepSeek R1 model, both comparable reasoning models. And it even goes up to, let's say, $10 if you're using an API provider that's going to make it a bit more stable.
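For reference, here's a minimal back-of-the-envelope sketch of what that per-token gap means in practice, assuming the per-million-output-token rates quoted above and a hypothetical workload size (not figures from the episode):

```python
# Rough cost comparison using the per-million-token rates quoted in the episode.
# The workload size below is hypothetical and only meant to illustrate the gap.

PRICES_PER_MILLION_TOKENS = {
    "openai-o1": 60.00,    # as quoted: ~$60 per million output tokens
    "deepseek-r1": 2.19,   # as quoted: ~$2.19 per million output tokens
    "r1-via-host": 10.00,  # as quoted: ~$10 via a sturdier third-party host
}

def monthly_cost(tokens_per_request: int, requests_per_month: int, price_per_million: float) -> float:
    """Return the monthly API bill for a given volume at a given rate."""
    total_tokens = tokens_per_request * requests_per_month
    return total_tokens / 1_000_000 * price_per_million

if __name__ == "__main__":
    # Hypothetical app: 2,000-token responses, 50,000 requests a month.
    for model, price in PRICES_PER_MILLION_TOKENS.items():
        bill = monthly_cost(2_000, 50_000, price)
        print(f"{model:>12}: ${bill:,.2f}/month")
    # Roughly $6,000/month for o1 versus about $219/month for R1 at list price.
```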
But it's so much cheaper. And then you look at what OpenAI has been doing these past couple of days in terms of its messaging, and it's been terrible. And they've been the ones that have been so good at the PR side of things. I'm just going to talk a little bit about what Sam Altman does. So first of all, he says the next phase of the Microsoft OpenAI partnership is going to be much better than anyone is ready for, with some awkward selfie of him and Satya Nadella.
Then he addresses DeepSeek R1 directly. He says, DeepSeek's R1 is an impressive model, particularly around what they're able to deliver for the price. We'll obviously deliver much better models, and it's also legitimately invigorating to have a new competitor. We will pull up some releases. And then he says, looking forward to bringing you all AGI and beyond.
I mean, that to me reads as sort of panic and shock from Altman. And I wouldn't think it would be the case if it is, as we're all agreeing, an application world at the end. What do you think?
Well, so separating it out from the application side, the most important part of his statement, which I think we should get into because there's been a lot of controversy around it, is he makes sure to say that he believes more compute is more important now than ever. I think that's the central question for the entire technology industry. It's underlying the entire NVIDIA story. I mean, I've seen so many sell-side analyst notes
where clearly there's a heavily vested interest in NVIDIA and sunk cost in NVIDIA, where people are just jumping on the bandwagon saying,
this actually means there's going to be more compute needed than ever. And Sam's saying it too, because this was the week, remember, Stargate was the middle of last week, the idea that investing $500 billion would be required. And then suddenly DeepSeek comes along and the initial chatter is it only cost $6 million. And then there's been plenty of reporting around the Chinese quant hedge fund behind it. I think I saw some number in the FT that it had already had around $1.2 billion invested in GPUs. We all know that the $6 million cost attributed to it covered only the final training run, not the full cost of actually creating the R1 model.
But overall, it showed us it can definitely be done for significantly cheaper than $500 billion, and that the amount of compute no longer means victory. And I think that is a fundamental shift, because Sam Altman and OpenAI have gone so far down the road of
those with the most compute will win. That's been Mark Zuckerberg's attitude. Even Satya Nadella has pushed this kind of thesis. So to me, the entire industry, at least out of Silicon Valley, is still trying to hold dear to this idea that the more compute you have, the more likely you are to win. And so do you think that's why OpenAI has been so shaken this past week? Because basically they were going to have that compute advantage?
And now, you know, maybe it doesn't matter as much. I mean, obviously we're going to get into Jevons paradox, as we're contractually required to. We're going to get into that in the NVIDIA section and the Microsoft section. But let's focus on OpenAI for a minute. I mean, again, going back to our question here.
If it's all about ChatGPT, and that is the leader, and they've shown that they've been able to innovate with things like Operator, which I think we're going to get your thoughts on in a moment, and ChatGPT voice mode, which is pretty cool. And they've gone from, let's say, 100 million to 300 million weekly users of ChatGPT in just a year,
which is massive growth and shows that there's real promise for the app side of AI, even though I'd like it to be bigger. But anyway, that's a story for another day. So what's the big deal? Again, if this compute advantage or this compute lead doesn't make that big of a difference, just as long as AI can surge forward? Well, that's exactly why their reaction is so fascinating to me, because
I have definitely argued that OpenAI's greatest asset has actually been their product chops: ChatGPT, the UI behind it, the overall experience, their voice mode. Incredible products.
Operator, which I have played with a lot, I spent probably 12 hours last weekend trying to use, is not a great product. It's an interesting product. It's not a great one yet. But they've had an advantage on the product side. So on one side, it's odd to me that they don't just kind of say...
"You know what? We're going to win on product and we're going to have great models behind it." But then it's kind of when you go back and remember at their core, which Sam Altman has said many times, they're a research house that kind of accidentally walked their way into a business.
And at that point, you can almost picture the ego side of it, the kind of more competitive side: they don't want to lose that battle. They don't want to lose the actual foundation model race, the question of who's going to get to AGI first. They care more about that
than hitting their $340 billion valuation, which there was reporting this week that SoftBank will potentially invest at, $25 billion at a $340 billion valuation, I think. So I really believe Sam Altman cares more about having the better models and getting to AGI faster than about the business side, because this doesn't fundamentally change at least what's been reported around their business plan.
Yeah, so let me see if I can do my best to sort of tie it up now that we're talking about it, I'm thinking it through. I think it's important for OpenAI to lead in models, because if you lead in models, you can dictate where the product goes. And honestly, that's why, you know, everyone wants to be able to do that. And when you have open source, in particular open source, come in and maybe not grab the lead, but basically show that it can play in your ballpark,
then you're in trouble, because open source is a sort of swarm of developers. It's DeepSeek that's going to build on Meta, and Meta that's going to build on DeepSeek, and all the other developers that are going to customize and push forward. And if open source is the one that leads over proprietary, you could end up seeing the product game potentially spin out of OpenAI's control. Just to throw out a crazy example, let's say open source gets to AGI before OpenAI does. Well, then the products that are going to be built on that will be, by definition, better than ChatGPT. So if you're OpenAI, even if your model business is not going to be the breadwinner, it really is incumbent upon you to be able to lead this race. And that's why
you know, we're going to talk about Meta in a moment, but you would think that Meta is devastated by this moment, because they've been sort of outflanked by DeepSeek. But it turns out, actually, the strategy might be proving true. Well, no, I think that
this idea that a bigger model or better model means a better product is the underlying thesis or thought process of a lot of Silicon Valley. But again, I disagree with it. And let me get into Operator. So Operator is OpenAI's new kind of browser-slash-computer takeover tool.
It has a built-in browser, but it can do things and take actions across the web for you. So I was doing a test. I'm working on a medical innovation fund as a nonprofit, and I've been having to kind of go through a lot of medical research papers.
And I'll manually pull all the authors' names into an Airtable and then look up, you know, LinkedIn profiles, Google Scholar, PubMed profiles. So this whole world of agentic AI that we've been promised, it seems like a pretty straightforward use case for it.
So I went to Operator and I paid the 200 bucks for ChatGPT Pro. And it is really cool. You give it a prompt and you say, here's a list of papers, go find the authors, put them into Airtable, then go to Google, search the author's name, find these links, and paste the URLs back into Airtable.
Seemingly not an overly complex thing. It worked two or three times really well. And it was mesmerizing, because you're literally watching it in a built-in browser in the Operator interface. You're watching it do this and just click around and actually copy and paste links. So you start to see the promise. Then the most fascinating part was
it got lost in Airtable. It started just clicking on random things. You could almost see it losing its place, losing its flow and rhythm, and then it completely broke. So then you take control, you can actually assume control back in the browser and try to redirect it, and I could not get it to work again. Or it started copying in completely incorrect
things. It just does not work even close to where the promise is, but you see where it could be going.
But again, this is the product they have released. Sora, very similar. Huge promise. We waited like a year and a half. Sora is available for, I think, all ChatGPT Plus subscribers right now. I have not seen anyone posting cool demos or all those kinds of things. So the last two products that they have launched have been pretty much flops. So I think that's why...
They are still working under the assumption that, for something like Operator, if the model development gets good enough, that's the only way this kind of browser takeover behavior will work. So they're still betting that the model will solve everything. This is why you need better AI, don't you think? Better models will make sure that Operator works better. And in fact, they were asking Kevin Weil, who's the head of product, what's coming next?
For OpenAI, what kind of moat does OpenAI have? He says, we're about to launch some models that are meaningfully ahead of the state of the art. And some people are speculating that's o3, or who knows, maybe we'll see GPT-5 finally. So don't you think these two work in concert?
Yes, no, it's tough because on one hand, the coolest part of the experience, and again, I'm going to give full credit, it was a really, really fascinating kind of like, holy shit, this is the future moment. So I'm still going to give them credit on that, just the way the entire overall UI and just experience of it. But still, as you're using it, you're like,
"Is this really the way I need to set this up?" In reality, I could have written a PDF parser that did the same thing using AI and cloud. I do this stuff regularly. Extracted authors' names, done, written some... It was basically web scraping.
The web is such a messy thing that how many people are really going to derive value from this kind of tool? So it actually made me question, is this the way? And I saw some posts as well around, could websites change the way they're structured so they're more open and easy for an agent to access? Maybe that starts to be an interesting thing. But the idea that we need a model that's so smart that it'll actually understand, the first time, how to use a pretty heavy tool like Airtable, I don't think we do need that. I think the way this whole battle will be won, for something like this one, will actually be around creating an Airtable connector that's already just pre-programmed and not LLM-based to actually do this kind of work. So I still am not convinced that we're going to see a model in the next few years so smart that it'll actually get the complexity of the entire web. I don't see that happening. It's one of the interesting things about looking at DeepSeek with its reasoning model: it will show you its chain of thought.
And you can actually see the model kind of struggle its way through trying to find exactly what you're intending with your prompt. And so there was this concept of, oh, the prompt engineer is going to be the new job. And it's already getting to the point where you don't need to write the best prompt, because the models will sort of get there eventually. And I think being able to watch that within DeepSeek's consumer app is the reason why it ended up going
so crazy and taking over the top app spot above ChatGPT and the rest. And in fact, I think that's probably the biggest risk for OpenAI: who cares if the model is a little better or a little cheaper, and we can argue about whether it was, but the fact that the DeepSeek app went to the top of the App Store, that to me might be why they had such serious concerns. And before we move on to what Dario from Anthropic said,
I just want to talk about a couple of things about the OpenAI business. First of all, no one paid attention to it this week, but there was a report that OpenAI has now made four times the amount of its nearest competitor on the ChatGPT app since it launched: $529 million. It's not a massive number, but this group called Appfigures now estimates that the AI app market is $2 billion this year. And as things like Operator get better,
That's going to be even bigger, an even bigger number. So we're starting to see this AI app market materialize even again, if it's not as big as I would want it to be. And then it sort of leads into this next part, which Ranjan, you've already brought up, which is that OpenAI is trying to raise again at a crazy number. They are in talks, this is according to the Wall Street Journal, to raise $40 billion in the funding round that would value them
as high as $300 billion. Yes, the $340 billion number was tossed out, but I guess they've come back to earth a little bit and now it's looking like $300 billion. I mean, late last year, OpenAI raised $6.6 billion at a $157 billion valuation. And what did we say when that came out? Oh, they're probably going to need to raise again by the end of the year. It's 2025 and they're already raising in January.
Oh my God. I loved this. You couldn't give me enough news this week, but then Masa-san had to come in. Instead of the world asking, is OpenAI finished?, he said, no, let's come in, add another $15 or $25 billion, bring in $40 billion, and pump this up to $300 billion. I mean, it's amazing.
There's not much you can rationally or analytically say about this. All you can do is just sit back and enjoy that Masa is once again right in the center of things.
I just found a great quote from Masa. I guess he said this last year. He was talking about Gates and Zuckerberg. He says, those are one-business guys. Bill Gates just started Microsoft and Mark Zuckerberg started Facebook. I am involved in 100 businesses and I control the entire tech ecosystem. These are not my peers. The right comparison for me is Napoleon or Genghis Khan or Emperor Qin, who built the Great Wall of China. "I am not a CEO. I am building an empire." What a guy. So Lionel Barber, who was the editor-in-chief of the FT when I was there, he just came out with a book. It's called Gambling Man: The Secret Story of the World's Greatest Disruptor, Masayoshi Son. Is that where this ... Because I saw this quote everywhere and I was trying to find ...
like where it was actually originally from. Yes, it's from that book. Okay, I'm going to buy that right now, Lionel. I'm reading it. Yeah. One last thing. Can we put to bed this idea of the subprime AI crisis, which we talked about last year? I'm very curious what you think about this. This idea that these companies had raised so many billions of dollars, that startups were going to put their technology in their products, and then...
They were going to depend on it. And then eventually those rates were going to have to go up and kill the startups and kill the entire AI world. It was a very interesting theory. But now that we've seen that rates will go down to just about zero to use this technology, do we still believe in subprime AI crisis?
So again, it was a really interesting theory from Ed Zitron, the idea that for startups that are building, the actual input costs of these AI models would only escalate. They're subsidized now; at some point, it'll catch up and it'll bring everyone down. That's why this is such an exciting moment. To me, that's the biggest news of the week, separate from the financial markets and the considerations of what this means for NVIDIA:
everyone out there now has access to very good models cheaply. And that means that the product side, the application side, this is like Andreessen's it's time to build. Let's get out there. Everyone start tinkering with everything, without having to worry about getting some inflated API bill. This is it. This is going to happen and this is going to keep happening. And now the entire industry is going to start thinking about not just funneling all the investment dollars, other than Masayoshi-san's, to the model builders. To me, you can imagine there are going to be hundreds of new startups that capital will flow to, that will actually be more on the application layer and actually solve problems and be interesting for
regular business people or just regular people and not just OpenAI making a few products and us all being stuck with them. So that's why, I mean, this to me was such a big week aside from all the actual, I don't know, market and global geopolitical implications, but...
It just means this is where the world is going in the next year or two. And that's far more interesting to me. Yeah. When we led off our show last week, I was like, Ranjan, what do you think about DeepSeek? And your immediate reaction, not thinking, was, I love it. I still love it. I still love it. And what we're saying right now sort of plays perfectly into this piece that Dario Amodei, the CEO of Anthropic,
put out, called On DeepSeek and Export Controls. He says, and this is sort of going to our point, the efficiency innovations DeepSeek developed will soon be applied by both U.S. and Chinese labs to train multi-billion dollar models. These will perform better than the multi-billion dollar models they were previously planning to train, but they'll still spend multi-billions. The number will continue going up until we reach AI that is smarter than almost all humans at almost all things, which I think he predicts is coming in a couple of years.
I think his post was excellent. And I kept reading posts by Dario Amodei and Andrew Ng and Yann LeCun, and I was just like, oh yeah, this is real. And it's going to spread into the other models, basically making models that are much, much cheaper to run, which should spur that spark of building. And the only thing is,
Like you just said, this might not be good for the proprietary builders. So I was interested to see it in Dario's post. What do you think is going on here? Reading the post...
And again, it was very good in terms of him talking about what the actual genuine innovation from DeepSeek's side around their models were and recognizing there was genuine innovation happening. But the push on export controls, I think, and I'm curious your thoughts on this, it's such a difficult question because on one side, is it going to be enough? The fact that DeepSeek
open sourced this. Literally, it's on AWS now. It's not some only-in-China thing. It's not that bifurcated world of only the West versus China. It's more integrated than that. And they did it. To me, one of the interesting parts is, and again, there's plenty of debate around
what the actual GPU count is for the quant hedge fund that spun this out. But overall, it does feel like, or there's been plenty of arguments for the idea that
because of export restrictions, it almost forced the additional innovation and efficiency. Because they still did not have unfettered access to NVIDIA's state-of-the-art chips, they had to innovate around it, which is really interesting already. But what does that mean about how we approach this? And especially, of all people, Anthropic, a model builder whose entire financial forecast
is predicated on selling models, leaning into this as a national security thing. And I'm very pro-banning TikTok, and in other areas really cognizant of the national security implications of a lot of this. But I am curious, because they've already shown us it's not effective anyway. So I don't know. I think it was a little self-serving. Yeah.
Oh, definitely. And this is going to go to the practical theory of how CEOs act, which is in their own self-interest. Yeah. Fair. You know, it was very interesting to see everybody on X reacting to what Dario's saying by saying, oh, this is an unprecedented level of cope, and he's just trying to hamstring the Chinese companies now that they've released a reasoning model and he doesn't have one and he's just upset about this.
As if you're supposed to do anything else as a CEO than try to look out for your own interests. I don't think it looks good for him, but I don't think it really matters.
Well, Dario, I can just beg you: now that we all have access to cheaper models, can you please make Claude's paid subscription tier not hit my rate limit when I'm halfway through a project? Because it's one of the most frustrating things. There's going to be some problems there. Maybe they're just going to try to sell themselves, now that I guess acquisitions are legal again in America.
Yes. That's actually, I mean, back on the political side, I think the whole changed M&A environment actually is really worth watching with a lot of these companies because there was kind of this assumption, I mean, over the last couple of years that, especially once you're hitting
these many billions of dollars in valuation that M&A is essentially off the table. So that'll be pretty interesting. But also, I mean, and we haven't even mentioned Google in all of this. Google has been making some pretty big strides, at least on the model building side, even on the application layer side. So then you start to get into which company really needs an Anthropic. Again, Claude's a great product, but at what price?
Yeah. I mean, maybe it'll be a capitulation sale to Amazon, but even Amazon is in good shape this week after it put DeepSeek in its product and Andy Jassy bragged about it. There's plenty to talk about there later, but it's worth talking about now. It took a week and it's there, and I think at the moment it is the best model in AWS. And that's going to lead to more building through AWS. So I think they're probably doing jumping jacks over at AWS HQ. Yeah.
Yeah, remember we had the theory that Amazon was going to Amazonify this whole space by bringing you into the AWS ecosystem, offering you the fancy expensive models, and then actually they were going to do the DeepSeek thing themselves with Amazon Nova, their new foundation model suite, which was supposed to be winning on cost.
That was kind of our thesis anyway, that they were going to just bring you in, let you use the high-priced stuff, and then give you the Amazon Basics version. And now they've got DeepSeek already. So I think they have actually played this pretty well then, having not invested and just kind of sat to the side. Definitely. And it's just as telling that it was Jassy who was out there tweeting about it yesterday.
It's clearly important to the company. All right, one last thing on Anthropic. This is from Hugging Face co-founder Thomas Wolf.
He says, if both DeepSeek and Anthropic's models had been closed source, yes, the arms race interpretation could have made sense. But having one of the models freely available for download and with a detailed scientific report renders the whole closed-source arms race competition argument in artificial intelligence unconvincing, in my opinion. Here's the thing: open source knows no border,
both in its usage and its creation. And to just put a point on the first half of this conversation, this has just been the week of open source. Good for open source, good for the companies that benefit from open source, let's say the Metas and the Amazons of the world, bad for the proprietary model builders. And it brings us right into the Meta thing, actually, which is,
a lot of this commotion over DeepSeek began when there was a screenshot passed around of a Blind post saying Meta engineers had been freaking out about the fact that DeepSeek had surpassed their open source models. And there's now reporting that top executives within Meta are worried that the next iteration of Llama won't perform as well as DeepSeek.
But ultimately, I think that as they digested it, people like Yann LeCun, who's going to be on the show in a couple of weeks, and Mark Zuckerberg were basically like, yep, it's going to be open source. We will take what they did and put it into our models. And we're ultimately going to be in the right place, which is that we're going to help develop AI that's available to all. It's going to be on our terms, as long as we can keep our lead. Right, that's the big question. And then we're going to benefit.
Well, this is one thing I've not been fully clear on, because if we're talking about whether the value accrues to the model layer or the application layer, Meta is in a position where they should probably be able to do some pretty amazing things at the application layer. One, you have distribution that absolutely no other company, or nothing in the history of mankind, has had
in terms of your billions of users that you can push products onto. And Meta AI, obviously already, I mean, image generation, it's fun to play with. In the Ray-Bans, it works incredibly well. Even the real-time language translation is
getting pretty good. So, getting good products in front of people, at that point I do wonder why they care so much about having the winning model. Because in reality, they can build it with DeepSeek if they want. I mean, they're building it with Llama right now, but I'm not sure what the massive incentive is. And originally, I remember thinking it was all done in an amazing way to undercut the OpenAIs and all the other paid players. But I'm still a little bit confused on that. I mean, I do think, like I said earlier, that everybody wants to dictate how this goes. So they kind of have an idea of where it's going and they can lead the product development and have people
that are developing on their tools and making them better, as opposed to somebody else's. You really want to be in the pole position there. But again, it's not the end of the world for Meta if DeepSeek just kicks their butt and they're like, all right, well, we'll just put this into Messenger. And maybe people will use our chatbots, which I don't think they use right now, even though they're talking about how they want the billion-user assistant.
I don't think anybody is, maybe some people, but top of mind, the fact that Meta has an assistant, it's still not really competing with ChatGPT as far as I can tell. Actually, I use it for quick, fun image generation with my son, just because it's right there in WhatsApp search. I'm convinced that probably a large percentage of
their AI assistant usage, because it's built into the search bar essentially, is people accidentally trying to search for something and ending up using Meta AI. But it's actually good. I've never spoken with anyone who's using it as a more dedicated chatbot or chat assistant, the way most of us use ChatGPT and Claude and others. But for quick one-off
questions, I feel it's, again, the distribution side of it. It's right there in the apps that people are in all day long. Anyway, in terms of pole position, I think it definitely has an advantage. Yeah, and it's just going to get better over time, as,
let's say, these open source models do achieve AGI, right? To just go with the crazy example. Well, you can then deliver that to a billion users, or billions of users for that matter, and you don't have to rely on OpenAI to do that.
There's still a lot of, you know, what-ifs on this front. And let's quickly talk about this Jevons Paradox thing. So this was Satya Nadella's reaction to it. He says, as AI becomes more efficient and accessible, its adoption will soar, transforming it into an indispensable commodity. I think we mostly share his opinion on that. I think you definitely do, Ranjan. But my only question is,
if AI gets exponentially cheaper, you know, then will we see an exponential rise in applications? Because on the application side, we have ChatGPT, we have these Meta bots, then we have some stuff in enterprise software, like Salesforce's agents. And then, of course, there's coding applications. But after that, for as powerful a technology as AI is, I don't know if we've seen the applications that I would anticipate.
Well, let's start with this: Jevons Paradox is not actually a paradox, in that it's not a logical contradiction. It was just a bit counterintuitive at the time. So that's been frustrating me, I think, more than anything, because it completely sounds like one of those things that people have been using because it sounds smart. But we're still going to have to talk about it, because everyone is. And again,
To me, it actually is intuitive. As any kind of resource becomes more efficient and accessible, it'll actually grow in overall aggregate demand and output because people will figure out more uses. And I mean, I think...
At the core, this is it. This is what we've been saying. And this is now the thousands of startups building interesting apps on top of it. And finally, the promises that we've been made for so long and have not been realized will be realized because people will actually pay attention to building cool experiences and products using AI. It's not just...
the Microsofts of the world and the OpenAIs of the world and only a few, and then others trying to build stuff but being really limited based on the actual input cost side of it. So I think it is real. The idea that AI is going to become much more deeply integrated into everything we do, in a good way, I think is going to be real. And this makes it a lot more real.
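To put rough numbers on the Jevons idea, here's a quick sketch with purely illustrative figures (not from the episode): if usage expands faster than per-unit cost falls, total spend on compute still grows.

```python
# Illustrative Jevons-style arithmetic: hypothetical numbers, not forecasts.
# If inference gets 20x cheaper but usage grows 50x, aggregate spend still rises.

old_cost_per_query = 0.020   # dollars per query (hypothetical)
new_cost_per_query = 0.001   # 20x cheaper (hypothetical)

old_queries = 1_000_000      # monthly queries before the price drop (hypothetical)
new_queries = 50_000_000     # usage expands as new applications become viable

old_spend = old_cost_per_query * old_queries   # $20,000
new_spend = new_cost_per_query * new_queries   # $50,000

print(f"Old spend: ${old_spend:,.0f}  ->  New spend: ${new_spend:,.0f}")
# Total consumption of the resource (compute) grows even as each unit gets cheaper.
```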
Do you think that this is unfairly punishing NVIDIA, or do you think it makes sense because there's been more uncertainty added in? Like, the AI industry was moving on, I don't know if a linear path is the right way to put it, but on a path, and now the path sort of changed directions.
Yeah, I definitely think it's fair. The biggest change is for the last two years, the Nvidia story has been ironclad. And every earnings report, every growth... I mean, so the valuation just becomes richer and richer and the overall market cap grows and grows. But
No one could question it. There was absolutely no doubt. I do think, again, the NVIDIA story has fundamentally changed. And I had mentioned this before, but it's almost fascinating to me when you think about the $3 trillion market cap, how much money is tied into this company. Because every sell-side analyst report, just emails left and right, like,
Everyone is just like, "This means nothing. The story is still great." Because the amount of just vested interest in this company right now means that people are going to fight for it. People are going to fight to make sure the story doesn't go away. But it was always a story and they were realizing it pretty well, but now there's at least a seed of doubt that was not there in the last few years that at a certain point, do we need
better and better chips? Is that where the battle's going to be won? Because then there's no competition. Do we need more and more compute? And even if the world needs more compute, do they need the latest NVIDIA chips to realize actual utility and applications? Maybe not. There's at least a little bit of doubt. So I think the recovery here over the last few days is warranted, that it was a pretty sharp sell-off for a company that size. But
But the invincibility of NVIDIA, for me, is gone. There's still so many unknowns, right? A lot of this is based off of things that should happen, right? Because DeepSeek was able to develop with lesser chips, then if you actually take those innovations and use the better chips and use more of them, then they should deliver better innovation. But this is all based off of hypotheses, right? It's not a rule, just a hypothesis. It's not a law. And so, yeah, I totally understand why people would
be a little wary about NVIDIA, and when you trade a company like that, you trade probabilities. So did the uncertainty sort of change the probability that they would dominate? Yeah, maybe a little bit. By the way, breaking news. This is from Techmeme, quoting Wired. Sources say OpenAI plans to release o3-mini today, with o1-level reasoning and 4o-level speed, as the company's staff is galvanized by DeepSeek's success. So it's game on.
I guess it's game on, but just make Operator a little better before you do that, guys. I mean, it still baffles me. And I say this again because I'm not an AI researcher. I am just a technology person that's using all these different apps and technologies, building some stuff. But like...
Why are they so focused? They released Sora with so much fanfare. It's basically useless for the vast majority of the world in its current incarnation. Operator, huge release, not really usable. Enough people show some demos and there's endless Twitter threads of like...
10 amazing ways you can use Operator. But in reality, no one is using Operator today in an actual day-to-day way that helps their work. So I just, rather than focus on making that stuff good and being so caught up in the model battle, it's...
It feels like to me that if that mentality never changes, that $300 billion valuation looks even richer. Well, you're underestimating pride here. You know, there is a level of pride that's involved and that's probably what's happening. So, okay, let's quickly hit some of the misconceptions about DeepSeek. We can just go through this real quick.
We talked about it last week. It was a misconception. We talked about it last week that they said that they trained just for $5 million. I think we can both agree now that they trained for a lot more and $5 million was just their last training run. Even though it's impressive for a training run, it doesn't fully incorporate the cost. So we should definitely note that. Yeah, no, no. I think it seems to be like everyone has accepted it's not $5 or $6 million. It's something more, but it's still not...
$500 billion or $100 billion, it's not at that scale. And I think that still means the story is very important. And then also, even amidst the export controls, you get into: did they circumvent them? When did they circumvent them? What chips exactly did they get and build on? But whatever it is, it was not some super cluster of the most advanced, up-to-date chips.
And so it still reminds us that necessity breeds innovation, or whatever the quote is, is still definitely part of this story. Yeah. And I said this on CNBC: it's not the process as much as it is the output. And the fact that they were able to run reasoning at such a cheaper cost
is the thing that matters most. And people can complain about the way they did it. They can complain about the fact that they're talking about a different number of chips. But ultimately, the output has galvanized Silicon Valley. You're hearing it from Altman and Amodei, and that's the bottom line. The other thing that's been interesting has been
the anger, it seems like, from Microsoft and OpenAI, and from others, where it seems to OpenAI that DeepSeek had effectively copied some of the stuff that they were doing, or maybe taken their data. This is from Bloomberg: Microsoft probing if DeepSeek-linked group improperly obtained OpenAI data. Microsoft and OpenAI are investigating whether data output from OpenAI's technology was obtained in an unauthorized manner.
Microsoft security researchers in the fall observed individuals they believed may be linked to DeepSeek exfiltrating a large amount of data using the OpenAI application programming interface or API. This led us to a, I think, quite funny headline at 404 Media. OpenAI furious DeepSeek might have stolen all the data that OpenAI stole from us.
Ranjan, you're pretty good at sort of assessing when it's fair for tech companies to take content and when, you know, it's all right to rip off other people's products. Is this a sort of nefarious move from DeepSeek, or is this...
somebody in a glass house throwing rocks? I think glass house and rocks, especially for OpenAI. So you have, on one side, the certainly larger question, which I've still predicted will be answered in some capacity, around individual artists' work, or creating in the style of specific artists, or certainly the New York Times lawsuit. But then even at scale, if we remember,
OpenAI apparently had been scraping YouTube at a large scale for their initial data sets. And the entire world seems to have reacted in a pretty similar way: obviously it's a bit rich for, of all companies, OpenAI to be
going down this route. I don't believe they had released any kind of official statement, so at least to their credit, these were just either leaks or genuine reporting that they're looking into this or exploring it. But yeah, I don't think anyone anywhere is going to have any sympathy for OpenAI in this matter. Yeah, I'm on the same page there. And it wasn't done through hacking, it was done through the API. So...
I guess, cry me a river. Yeah, I mean, it's the YouTube example perfectly. And it's a big platform against a big, well, actually a small platform against a big platform in this case, but just using the actually available technology, probably breaking the terms of service a little bit, and then using that to get started. Okay, so I want to take a break, but before we do, I should say that we have a Discord now. So Ranjan and I had been talking about starting a Discord as these stories had continued to break and there was more to talk about, and we were kind of curious what the audience was thinking
throughout the week. So that Discord is now open. It is open to Big Technology paid subscribers. So if you go to bigtechnology.com, you can see there's a post that says Let's Talk DeepSeek, AI, Etc. on Big Technology's New Discord Server.
If you're a paid subscriber, you just scroll to the bottom. There's the invite link. If you're not, you can just sign up and then scroll to the bottom. And there's the invite link. It's been kind of fun. We're just about wrapping day one and there's been some really good conversation there already. Alex Stamos, the security researcher,
is in there telling us right now about how serious DeepSeek's security issues really are compared to all the hype. Like, what happens if you download DeepSeek? Are you in trouble? So thank you for the idea, Ranjan. It's been fun getting it off the ground and we hope listeners will join.
I actually learned, I think, more from just a few comments from Alex Stamos around what the actual security concerns or considerations on DeepSeek are than from pretty much everything else I read this week. And at a very technical level, even having to look up myself what safetensors are and other really deeply technical terminology. So it's off to a good start today.
Yeah, it's been cool getting it off the ground. And again, you could go if you want to join, just go to bigtechnology.com, click the Let's Talk DeepSeek AI, et cetera, on Big Technology's new Discord server link, and then join us. We'll see you over there. All right, take a break. And when we come back, we're going to talk about how Siri's still terrible, or maybe it's even worse, and then briefly touch on Apple earnings. And then we'll get out of here. So we'll see you right after the break.
From the minds of visionaries to the desks of disruptors. I'm Lars Schmidt, host of the Redefining Work podcast. Join me each week as we explore the new world of work through the lens of those shaping it. CEOs, HR leaders, investors, and more. Be a part of the conversation that changes everything. Subscribe to Redefining Work today.
And we're back here on Big Technology Podcast Friday edition. So we've talked about DeepSeek the entire episode so far, which I think is merited, but we're not going to let the week go by without talking about this great post by John Gruber on Daring Fireball titled Siri Is Super Dumb and Getting Dumber. I mean, the...
long and short of it is that basically he had a friend who asked Siri who won each Super Bowl, and Siri got 34% of those right, which is truly disastrous. And basically Gruber's point is Apple got this gift of generative AI,
it put it into Siri, and somehow it made Siri worse. Definitely, the summary notifications have been getting worse and worse, or just realizing how useless those have been is becoming even more and more acute in my day-to-day.
Genmoji were fun for a minute. And I've ranted a bit about this this week, so I'm glad that even Gruber, who is about as long-time an Apple fanatic as it gets, is recognizing it and ranting himself about how bad it is. But it really calls into question, where are they in all of this? The only thing, though, I don't know, I'm curious: do you think this is good for Apple this week, or
is it reminding us that the entire direction of the AI industry is in question? So maybe them screwing up the first phase of this battle so badly actually means they'll be okay and they can start fresh. It's definitely the galaxy-brain take here. It was kind of interesting watching. So the Apple earnings came in yesterday. They sold fewer iPhones this Q4 than they did in the year-ago quarter.
So iPhone sales are going down, Apple Intelligence is garbage, and yet, you look at what happened: DeepSeek comes out, actual AI innovation, and there's panic and a sell-off.
Apple made Siri worse and is selling fewer iPhones, and the stock is up 6% on the week. It really is amazing. I don't know. To me, I'm a little bit puzzled at the buying activity on Apple. I just don't see what the news is. Maybe services. They beat on services revenue. That's pretty impressive. But we talked about it, that when Apple Intelligence came out, it wasn't going to lead to a supercycle. It hasn't led to a supercycle.
And I think Mark Gurman put it pretty well today. He said, Apple Intelligence is a half-baked marketing strategy that was rushed in response to OpenAI and Google Gemini. Yes, Apple had no choice. It is exactly the right thing. But they shouldn't have been in that position in the first place.
So, I guess like what the market is saying, you know, why it sent it up this week, of course, there's the services beat. But even after DeepSeek came out, and I just think it's a pretty simple thing, which is just that you can build more with DeepSeek, and Apple has got to find a way to build consumer products, and it can maybe use open source to do it. But
I don't know. Do we have faith Apple's going to do it? I don't. Well, I was thinking, because even that idea that the marketing drove the product side of it with Apple, when usually it's the other way around. To me, the most fascinating part of this, and I think a lot of companies have this problem, and Apple is probably the one most troubled by it, is kind of the dichotomy between normal everyday users and,
call it the industry. And by the industry, I mean the actual, the technology-focused people working in the companies, the investors focused on the companies. There's such a distance between what any normal person wants or expects from AI and what these companies are pushing. And to me, that distance has actually been
kind of like the core part of Apple's errors in this, that they're more focused on the market and probably their own competitors and even their own employees and what they're thinking about rather than what the everyday consumer is thinking about. Because again, I can promise you...
no, most, certainly non-tech-focused, iPhone users were not clamoring for Apple Intelligence. And I think even most technologically advanced Apple users could easily install ChatGPT and whatever else on their iPhone and still be fully locked into the Apple ecosystem. So the idea that
they needed to do this, I actually disagree with Mark Gurman that they had to do this. I think they could have still just sat back. The only reason they would have had to do it is for the market. And I mean, apparently it's still kind of working. Apparently the average Apple investor has not actually tried to do a basic task with Siri, but
it's still working, I guess. I just want to end with this. I love the how-it-started, how-it's-going meme. And I saw one come across my X timeline this week that just really sort of captured it all. How it started is the Think Different campaign, with these beautiful black-and-white images that connote, like, if you're an iPhone user, you're just kind of more artistic, iconoclastic, you have more taste. Okay, and then the second image is how it's going.
And it's, imagine it, Genmoji. And it is a sunny-side-up scrambled egg with hands and feet. This is what people want. This is what they've been clamoring for, and why they will spend, why it will launch an entire new iPhone supercycle: the sunny-side-up egg with hands and feet. I mean, I've been in Miami for most of this past week, and
I don't know if they're in New York yet, but these Genmoji billboards are everywhere. And they're terrible. They really don't say anything. And if that's your AI play, Lord help you.
I saw some posts around, like everything they've offered feels like something that Tim Cook is the only user of. Even notification summaries, like again, they're relatively useless and almost counterproductive, but I can see if you're Tim Cook, maybe they're kind of useful. I could see Tim Cook really being into Genmoji. He's just churning out, tossing out Genmoji left and right.
I mean, someone's got to be using it. All right. Well, I will make like a Genmoji, as a big scrambled egg with one hand up, and say, have a great weekend. Have a good one. All right, everybody. Thank you, Ranjan. And thank you all for listening. We'll be back on Wednesday. Actually, I'm speaking with an NVIDIA executive, so that'll be fun to air on Wednesday. And then Ranjan and I will be back on Friday to break down the week's news. Thanks again for listening, and we'll see you next time on Big Technology Podcast.