Was this the week the tides finally turned against Apple's disastrous Siri project? Microsoft is working through its options with OpenAI. It's all about those AI wrappers, baby. And let's talk about whether AI will take our jobs now. That's coming up right after this.
Hi, I'm Jonathan Fields. Tune into my podcast for conversations about the sweet spot between work, meaning, and joy. And also listen to other people's questions about how to get the most out of that thing we call work. Check out Spark wherever you enjoy podcasts.
Welcome to Big Technology Podcast Friday edition where we break down the news in our traditional cool-headed and nuanced format. A lot of stuff is moving in the world of big tech and AI this week. It's, I think, a watershed week in the age of Apple Intelligence, and not in a good way. We also have some news about Microsoft
and OpenAI's relationship and how that's progressing. And finally, the AI field seems to be progressing beyond the model and towards the product, or the wrappers. So that's going to come as welcome news to Ranjan. We'll also talk about my latest story about how I'm starting to feel like AI can do a lot of the work that I do. That's coming up in our show. Joining us as always on Friday is Ranjan Roy of Margins. Ranjan, great to see you.
We might have to update our It's the Product, Not the Model t-shirts to It's the Wrapper, Not the Model, but we'll get to that in just a bit. But I'm excited to talk about Siri this week. I'm giddy. I'm giddy. I'm sorry.
I shouldn't feel this much joy, but my God. There's gotta be some AI rapper puns, like hip hop puns, East Coast, West Coast. I don't know exactly what it is, but we'll have our merch guy work on it. We don't have a merch guy, but if we did, that's what they'd be working on. And it's a good thing we don't, so we don't end up doing this. Exactly. It might be a very successful fashion business. You never know, Ranjan. You gotta dream, just like Apple is dreaming with Siri. And the dreams have turned to nightmare.
This week to me seemed like the week where the tides finally turned on Siri. And it begins with this Gurman story that comes out at the beginning of the week, talking about how Apple's artificial intelligence efforts reach a make or break point. Now, I think that's a very nice headline from an editor, because when you start reading the story, it's clear that Gurman doesn't think it's reaching a make or break point. It's clear that Gurman is telling us, and I think we can all see with our own eyes, we've been talking about it on the show,
that Siri is an absolute embarrassment. And look at the first paragraph of the story. Apple, the company behind the Mac, iPhone, iPad, and other groundbreaking products, has typically beaten rivals by following the hockey approach: skate to where the puck is going to be, rather than where it is right now. But we're currently in the middle of the biggest technological revolution since the debut of the internet.
And Apple is barely even on the ice. I mean, this is a week of terrible cliches that are beaten to death. But each one of them has Apple not on the ice. Of course, we'll read them all. Now, Gurman says the unveiling of Amazon's Alexa Plus has made Apple's shortcomings more apparent. When Apple unveiled an AI-infused version of Siri in June, the system looked great in computer-generated video.
The reality, though, is the company barely had a functional prototype and Apple engineers are going to need to move mountains to get it finished as planned. We already know that it's not finished as planned.
This type of strong, forceful wording from someone like Gurman coming out about the new Siri, I think just is a moment where the narrative shifts. And I think as journalists, we sort of have to wait a while before we say, yeah, this isn't working, right? The company might always be a little bit behind. You have to reserve judgment. You can't really say in week three or four or month three or four that it isn't working. But now we are months after...
Last year's WWDC event, and we're coming up towards the next one. And as we take that leap around the sun, it seems like the promises of Siri have gone from something that generated true excitement among Siri watchers like you, Ranjan, to a deep disappointment and a deep embarrassment for Apple. And that's why you're seeing this moment here, because it's finally been enough time that the commentators can say with credibility,
This is a disaster. And this is a disaster. This is a disaster. It is nice to see the things that we've seen with our very own eyes for months and months now be able to be reported, as you said, with full credibility, with full balance, and still just outright say,
It is bad. It is really bad. It's shockingly bad. Listening to the Alexa episode from Wednesday and just listening, it was almost enjoyable to hear about, oh, here's how we're approaching this problem with the stochastic approach as opposed to an LLM approach. And like people excited and actually building things that actually work, you could hear it in their voice. Yeah.
None of that exists from Apple. Like Apple Intelligence, the product. I think the biggest difference I've seen is the summary notifications now are italicized and before they were not. You know what that takes, Ranjan? That takes courage. It takes courage.
I mean, it is – this was the first time I think in years I really had the thought listening to the Alexa announcement, listening to the interview episode –
I think I'm going to switch from HomePods. Like, I can't not have that quality of voice-led generative AI. And voice-led generative AI is that good right now. We've seen it on all different types of formats, on Gemini, on ChatGPT. So just the baseline is actually good enough.
So then you start thinking, okay, if I'm going to start using this more on my phone, should I get a Pixel? And then suddenly, I was telling this to one of my friends and they made the years-old joke: do you want to be the person with the green bubble instead of the blue bubble? That is the only lock-in right now. And it's crazy to me. And again, I say this,
fully locked into the ecosystem, but this is the first time I'm starting to really crack and not just joke about it, but actually think about moving out.
And by the way, you're hitting on something that Panos and Daniel broke down in the Amazon episode. So I asked them basically, like, is there a switcher that goes from this deterministic type of if-then statement to the stochastic or probabilistic LLM approach? By the way, LLMs, they are stochastic. And they basically explain, like, you're thinking of this as just like, you know, one model, but there's a number of models to it. It's far more advanced than that.
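To picture the kind of orchestration being described here, a deterministic, if-then path for simple commands sitting alongside a stochastic LLM path, with several models behind the scenes, here is a minimal sketch in Python. It is purely illustrative; the handler names and routing logic are our own invention, not Amazon's or Apple's actual design.

```python
# Hypothetical sketch of a voice-assistant router: deterministic intents
# (timers, alarms) are handled by if-then style code, everything else
# falls through to a stochastic LLM. Not Amazon's or Apple's real design.
import re

def handle_timer(utterance: str) -> str:
    # Deterministic path: parse the duration and "set" a timer.
    match = re.search(r"(\d+)\s*(second|minute|hour)", utterance)
    if match:
        return f"Timer set for {match.group(1)} {match.group(2)}(s)."
    return "How long should the timer be?"

def call_llm(utterance: str) -> str:
    # Placeholder for a call to whatever large language model backs the
    # assistant; in a real system this would hit a model API.
    return f"[LLM answer to: {utterance!r}]"

DETERMINISTIC_INTENTS = {
    "timer": handle_timer,  # extend with alarms, music playback, etc.
}

def route(utterance: str) -> str:
    # Crude keyword intent detection; production systems use trained
    # classifiers, and often several specialized models, not one.
    for keyword, handler in DETERMINISTIC_INTENTS.items():
        if keyword in utterance.lower():
            return handler(utterance)
    return call_llm(utterance)

if __name__ == "__main__":
    print(route("Set a timer for 10 minutes"))
    print(route("What's a good pasta recipe for four people?"))
```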
And then as you read Gurman's story, you do see that this elementary architecture that I was envisioning is built within the latest Alexa. It actually exists within Siri. This is from Gurman's story.
Now,
And this is for the next operating system. For iOS 19, Apple's plan is to merge both systems together and roll out a new Siri architecture. I expect this to be introduced as early as Apple's Worldwide Developer Conference in June of this year, with the launch by spring 2026 or as part of iOS 19.4. The new system, dubbed LLM Siri internally, was supposed to also introduce
a more conversational approach in the same release, but that is now running behind schedule as well and won't be unveiled in June. Something is rotten within Apple. And I'm not saying that this means that they're bad people. I'm not saying that this means that, you know, there's ethical issues. I am going to say there must be a cultural problem because if you're Apple and you have the ability to legitimately attract the best and the brightest in Silicon Valley in the tech world,
And you fall behind, when you are behind Alexa to the point where they are getting ready to release their update this month. And you legitimately can't get into shape after announcing the vision at WWDC.
Your organization is messed up. There's no potential explanation other than that. No, I really wonder how it could be that bad. And we've joked about how there sometimes needs to be, for the commercials, the advertising side of it, like the one normal guy who just sits there and says, okay, that's weird, that's not weird. Because let's not forget,
the advertisements that kicked off this whole debacle were the ones where celebrities are ignoring people that aren't as important to them and summarizing their emails quickly in real time to try to, you know, avoid having to pay attention to them. Like everything has just felt off from the beginning. But when you look at the product, it's shockingly off from everything else that's out in the market. It would have been one thing if
ChatGPT voice was really good and then Gemini and no one else was. But when Gemini voice became as good as ChatGPT's advanced voice mode, you saw that this was table stakes. Now, this was par for the hyperscaler course. And Alexa, I'm confident, will be just as leading, if not more leading, in terms of that whole space. I don't know, something is up. I think...
Something is rotten at Apple will be a nice little cute headline that we'll see more and more. But there's something else, something going on.
That was, yeah, and I'm just going to talk about this a little bit more because it is so important. The longer you do this, the more you sort of learn to read between the lines of some stories. And there was an incredible set of clauses in Gurman's story that I don't think anybody has picked out yet, or at least not in the way that I expected it to be, that I think we should go over. So he says there have been problems with rivals poaching talent and what they deem to be ineffective leadership.
Within the AI department, employees have raised serious questions about whether chief executive officer Tim Cook or even the company's board of directors needs to make bigger changes. The crisis could ultimately place the job of Apple AI head John Giannandrea or others at risk.
Now, he says, but an imminent departure would only be an acknowledgement of Apple's AI shortcomings, which the company isn't ready to admit. So clearly there's a leadership issue. To have John Giannandrea's name as someone who might be forced out, maybe not imminently, but even in a story, is a big deal. You could tell that Apple would definitely push back on that. But to me, the most stunning thing in this report is who might have to do the pushing.
Now, he says it could be Tim Cook, but it could also be the company's board of directors. You never in a business or a tech story hear of the company's board of directors intervening to either force the CEO to make changes in an org that isn't working or something else. And Ranjan, you are familiar with how these big corporations work.
And I'm curious if you read into this the same way that I am. I 100% agree that if that is the case, it's very, very odd for a technology company, for the board to be getting into the kind of product side of the conversation. Yes, if the focus is solely on the leadership, that's one thing. And yet it makes sense that if a CEO is afraid to fire a very longtime lieutenant or ally,
then that makes sense that that would be the role of the board to try to step in. And maybe we've gotten so removed from having independent-thinking boards in technology companies that you just don't hear or see this stuff anymore. Hashtag Tesla. But overall, I think... Hashtag Facebook. Hashtag Alphabet. Snapchat, all of the above. I mean, actually, it's true. So that's why...
You don't really think about that. But so as of January 2025, you have the former CFO of Boeing, the former CEO of Johnson & Johnson, the former CEO of Northrop Grumman.
A founding partner and director at BlackRock. So that actually does make it even more odd and comical: are these people going to help guide the company towards landing the future of generative AI? It's hard to figure that out. But I think I agree with you that we just haven't seen that kind of behavior out of any large technology company in a long, long time.
I mean, what's your best guess? So for me, I'm saying culture is probably the problem here. I've said it on the show a thousand times. I'm going to say it again. If you have access to the talent that Apple has and you can't build this, it's the way that you operate. And by the way, I mean, engineers have told me this: you cannot build AI in silos. If you've got somebody working on computer vision for, let's say, Face ID and somebody working on computer vision for the now ill-fated car project, they need to be able to speak with each other.
This technology is evolving fast. You can't have people in silos. Even look at what's happening with open source. And so the Apple silo approach and the Apple let's-ship-on-certain-predetermined-intervals approach, that's not working. Even Amazon, I think, had to sort of do away with that. They used to ship, of course, on their two planning sessions in the year. So I think it's culture. What do you think it is? So I have a slightly different theory here.
And I'm not going to just again be complaining as a locked-in Apple customer. But I think their financial performance has been slowing but not dramatically suffering yet because of these disparities. Like earlier I was talking about this, that I now am thinking, okay, I'm going to get rid of my HomePods and go all Alexa for my smart home. Maybe I want a Pixel so I can have Gemini as my core assistant as opposed to Apple.
God help us Siri on my iPhone. And if I do that, do I get rid of my AirPods? And is there a world where, you know, like the downstream effects of losing that lock-in are huge, but they haven't seen it yet. And then on the other side, subscriptions and services, which is this, I don't want to say the scammiest, but just like the,
The least innovative, let's say, portion of their overall revenue has been one of the fastest growing business segments. So they're becoming more dependent. It's now 22% of revenue.
And they're now becoming more dependent on people like myself. Like, my wife and I have a family iCloud account and somehow we just hit another limit. And now, between insurance on the phone and iCloud storage, I think I'm probably paying Apple like 80, 90 bucks a month, or I don't even know what it is. I'm ignoring it, but yeah.
Overall, I think they have this financial cushion that allows them to look the other way kind of minute to minute because they're OK as of today. And then it's that kind of inertia really can affect a company this big.
So it's a natural resource curse, pretty much, is what you're saying. That they have basically so much money that they don't have the pressure to change as others might. And their stock is doing as well as it is. And this was, it's a great segue into...
into this other part of this discussion, which is the market side of things. So A, they have all the cash. And then B, you think about how their stock performance is. And remember, their iPhone sales went down in the last quarter. They had five of six quarters where they had revenue decreases. They have not been able to ship the Vision Pro in the way that they hoped. Apple Intelligence is a dud.
I was at Hightower Investment Advisors. They had a conference this week for some of their New York crew. It was really fun. It was at 30 Rock. And I was on a panel with Dan Ives, who's like the most bullish of all Apple bulls. And I looked at him and I was like, Dan, you know, I just read the Gurman story. And I'm like, Dan, what's going to happen with Apple, man? And he goes, I don't know.
They're holding up pretty well, which is like Dan's famous line, like, you know, not as bad as feared. And I was like, come on, Dan. And then I was like, all right, let me just quickly check the market caps. Apple's market cap is $3.6 trillion. Over the last year, the stock's up 40%. NVIDIA's market cap, $2.74 trillion. Microsoft's market cap, $2.93 trillion. So these two other giants fell below $3 trillion. Apple is closer to $4 trillion than it is to $3 trillion.
I mean, that is insane. There's no comeback to that. - It's because the numbers are still astonishingly good as of today. And this is where I like that. I think let's start
saying Apple has the Dutch disease, the natural resource curse, because I think we might start hearing that a lot more. And again, maybe we can call it a natural resource curse, or we can call it slightly monopolistic behavior that ties a lot of different services together. Or you could call it just a shit ton of money, cry harder Siri boys. Yeah, exactly. Tim, we're looking out for you here, Tim. This is not...
These are your fans trying to tell you, Tim, listen to us. Siri is worse than ever. Apple Intelligence is worse than ever. But they're doing okay financially. So they're not going to have that internal, like, all hands on deck. But it's also – I do wonder from a cultural standpoint, you hear about at Meta, at Google slash Alphabet –
All these companies, even at Amazon, that kind of existential all-hands-on-deck moment that generative AI brought into those companies, you don't really hear that at Apple. And I don't say this in a disparaging way, but Tim Cook is not a pure technologist. He's a production operation supply chain person at his core, and that's what he's been incredible at. Could that be the reason why
It hasn't been like, he wasn't just sitting there like, oh my God, this is going to change everything. We have to,
cut a lot of people, merge a lot of groups, and we have to win at this. That urgency is clearly not there. Do you think that could be it? I think, so when Gruber was on the show, he basically said that he expected them to marshal all of the forces that they had. Like when Apple, Apple can hit snags, it can, but when it does, it tends to marshal all the forces into big pushes and eventually succeeds. So that's what he said he expected here. I imagine that it happened. They did take people off the car project and,
and put them on Apple Intelligence. But I just think, and this is sort of the thing that I think a lot of folks who are watching on the outside sometimes overlook: when you think about the nature of organizations, when there is a way of doing something inside a company, it is very, very hard to change those practices. Google cannot name anything. Google knows it can't name anything. Google still calls its models, like, you know,
Google, Gemini, Bard, Flash, Thinking, 35.8, you know, 9.2, Beta. Okay? It is very hard for these companies to change. I just think the culture of Apple is a culture where it is just not set up to develop software outside of operating systems. None of Apple's owned and operated software properties are really good. I'd much rather use Gmail than Apple Mail. I'd much rather use Google Calendar than Apple Calendar.
I could go on, right? Safari is okay. I'm on Numbers all day long. Is that what it's called? Numbers, I think. Is that their Excel? Yeah, that's the Excel. I think it's good. When that shit boots up, I get mad. I know. Shut yourself down, Numbers. Notes is good. Notes is good. Notes is good. But in terms of big software development, I don't think Apple has the culture and place to do it. That could change, but we haven't seen it.
That's a good question, though: is generative AI more of an operating system, infrastructure-layer type of development, or is it more of an app, software type of development? Because you're right, Apple has
some hits, but for the amount of default behavior they're allowed to push onto people, they don't actually develop great consumer-facing software, but they do great operating systems. So that's actually an interesting question. Where does generative AI fit into that stack? I would say generative AI is about as software as it gets. Now, can it function as an operating system? Sure. We think that it might. I mean, we've seen some failures with Rabbit and Humane trying to have AI as an operating system, but
But it can. But the thing is, with hardware and with an operating system, right, you have predictability. This is the system. The parts fit in a very predetermined way within the system, and they only go within a range here. You have the icons, you tap in, you go into an app, the app has to be built according to these exact specifications. In software, you sort of have to live in a world of uncertainty and a world where users can sort of push the boundaries of what you do.
LLMs are that without a doubt. So to me, it just seems like that's going to be kind of rough for them to build. And it sort of leads us to this question of, do they want to continue to stick with Siri or potentially give the space that Apple Intelligence inhabits over prominently to somebody else?
MG Siegler writes in Spyglass, Apple should swap out Siri with ChatGPT. This is, again, this week. It's part of the wave. He's responding, by the way, to what Gurman says about when this is going to be good. Gurman wrote, people within Apple's AI division now believe that a true modernized conversational version of Siri won't reach consumers until iOS 20 at best in 2027. This is basically akin to what we saw with Alexa Plus that's coming this week in 2025.
MG writes, if this is true, it's not just a joke. It's downright worrying. And if it's not true, Apple should probably do something to refute it because it's quite damaging on a number of fronts. And it couldn't come at a worse time, with Amazon having just finally unveiled Alexa Plus. If Amazon was sort of a tortoise in this race to the hares of Google and OpenAI, as always, there's hope that despite being slow to start, the tortoise can steadily win in the end.
Apple is more like a dead duck now in the same race, a corpse. 2027, how about never? Is never good for you? MG writes, citing an older story, why not fully outsource Siri to ChatGPT? You'd still want to keep the task-oriented elements like setting timers with her,
but everything that requires anything resembling a search query should be outsourced. Right now, you can fully force this by asking Siri to ask ChatGPT something, but it's cumbersome. I'm suggesting this be made the default action. He writes that Apple has long had an illustrious history of teaming up with partners to get a service out the door while they work on their own solution behind the scenes.
So basically, they could potentially work with OpenAI for a couple of years as they get their solution in order. But in the meantime, maybe they just want to give it over to OpenAI, which has a great voice assistant with GPT-4o and can handle a lot of the stuff that Siri is struggling to do today. What do you think? Maybe the bullish case for Apple here, and I'm stretching, but in this vein, I'm
Most people still aren't totally integrating generative AI into all the stuff they do all day, like probably us and many of our listeners. So even though we kind of are like laughing at the timeline of 2027 or whatever,
They still have money. They still have plenty of money. So maybe it really is still just let it build. I think – I don't believe this is actually the strategy in any way, given how much they invested in the Apple Intelligence marketing to start with,
in just really overpromising things. But the idea that maybe at a certain point they buy Mistral, which we've talked about in the past, or, I mean, there are more and more very capable chatbot experiences and kind of infrastructure-level LLMs that can handle a lot of different types of work
that they could use. So to just stop trying to do Apple Intelligence, maybe there is a world where that just happens in a year.
We start the board meeting. We start pushing it. Activists. Just show up there at the shareholder meeting. Mr. Cook, as people who really want Siri to work well, what is going on? What is your current stake in the company? I have like 20 shares, but I really want Siri to work.
I do have an iPhone. Shouldn't that matter? Yeah. Look, I think that you make a really good point here where every time we get really down on Apple, we just realize that the company has a massive install base. It is true that they can still recruit the best and brightest. They have access to all the open source. You mentioned Mistral. Maybe they don't even have to acquire it. Maybe they could just download the weights and then run an LLM that way. I mean, there's definitely technical expertise there that's required. Yeah.
But Apple is still operating in a world where so much of this innovation is not proprietary, not contained within a single company. And that is a huge bullish sign for Apple. And maybe that's why they're still trading at a $3.6 trillion market cap.
It's still a generous case for them, I think. I think the stock price is just much more reflective that financially, I mean, obviously, like performance, iPhone sales in China, iPhone upgrade cycles, there's
a lot of genuine, more kind of standard business challenges that are facing the company. But I feel that they almost more than make up for that in the growth of their subscription and services segment, and that's what, again, has allowed them to paper over all these concerns. But I do think this is a core issue. I do think, because they told everyone to upgrade because of Apple Intelligence, they said it.
And that would be the – to upgrade for Apple Intelligence would be the most ridiculous consumer decision. If anyone told me they did that, I would be horrified. For them. For them. For them. And you get a higher multiple on subscriptions and services. Like you get valued as a software company, not as a hardware company. So your market cap goes up. Yeah. Yeah.
But you're right. The liability is real. I mean, basically, this is what Gurman writes, just to close out the segment: there's little reason for anyone to buy a new iPhone or other product just to get the software, no matter how hard Apple pushes its marketing. I mean, you remember, I was talking about how I had an interaction in the Apple Store where, of course, like I'm going to buy more Apple stuff, which is so funny in the context of this whole conversation. Yeah.
But as I'm about to walk out the store, they're like, oh, and by the way, we have Apple Intelligence. They're like, do you know about Apple Intelligence? And I'm like, unfortunately, I know too much. I think I want to go to an Apple Store now and get the pitch. Like, as of March 2025, hear the store associate's pitch trying to sell this. Because I'm curious what they're being trained on right now. Yeah.
Is it still? You can find your flight info and book tickets directly. None of that. I mean, yeah. They very earnestly smile and they're like, Apple Intelligence. And you're like, this is a labor rights abuse to make you pitch this. I'm sorry. That's the move. That's the move. We start a union backlash.
I do think, yeah, if we frame this through the unions, potentially we could get a better Siri or iJohn. It's not right to make people try to sell and pitch Apple Intelligence. It's inhumane and I do not stand for it. And instead of making them stop pitching, you should make them continue to pitch, but just fix the software. I think the way to get what we need in the world is, yes, through the struggle. Marx would love this. Marx would be totally into this.
Workers, you have nothing to lose but your shitty voice assistants. Pretty sure that's what he said. I think that's exactly right. He inspired a movement. Das Kapital, yep. Marx was trying to set an alarm with Siri and it didn't work. He just got super pissed off. Screw this, millions should die. Yep. One last thing about this: there's another Siri delay. Gruber reported this, so did Gurman, I guess.
The Apple spokesperson said Siri helps our users find what they need and get things done quickly. And in just the past six months, we've made Siri more conversational, introduced new features like type to Siri and product knowledge and added an integration with ChatGPT. We've also been working on more personalized Siri, giving it more awareness of your context as well as the ability to take action for you within Siri.
and across your apps. It's going to take us longer than we thought to deliver on these features, and we anticipate rolling them out in the coming years. So the stuff that hasn't shown up, there's going to be, I don't understand, another delay. I guess we were waiting for this. But again, what a statement. Very cheery. Look how great Siri's gotten, according to the official line. According to most of us using the product, that is not the case.
Yeah, actually, maybe that's what's more concerning to me. If they just came out and said, you know what? We err too much on the side of caution and safety and responsibility, and we still believe in those. So that's why things are slow and not good. Just say that. That's okay.
That's, I mean, you guys are the responsible, secure, private, safety people. And that's a good story. And it's probably maybe true to an extent, but they still try to pretend, much like that poor Apple Store associate, that everything is rosy and okay.
It ain't. I'm here to tell you. It is not. It ain't. All right. We got to go to a break. I just want to remind folks, we'd love to hear your feedback as always. So we have an email address, bigtechnologypodcast@gmail.com. If you have constructive criticism or anything you'd like to share, please do so.
You can use that email address. I read every single one of them and forward to Ranjan when applicable. Ratings and comments on places like Apple Podcasts and Spotify are kind of like the front door of the show, so new listeners come and see those comments. And you know, we definitely, when there's good constructive stuff that comes in there, we'd like to post them. We don't really have a choice on Apple. But if you have a good experience with the show and you want to post it in
a comment on Spotify or a five-star rating on Apple Podcasts, that would be much appreciated. We always want to listen to you, and we definitely have the right forums available, so hope to hear from you there. Okay, let's take a break and come back and talk about what's going on with Microsoft and OpenAI, and then we'll close talking about our jobs, because maybe AI will take them, and then you can listen to AI every week. All right, anyway, we'll be back after this.
Hey, you. I'm Andrew Seaman. Do you want a new job? Or do you want to move forward in your career? Well, you should listen to my weekly show called Get Hired with Andrew Seaman. We talk about it all. And it's waiting for you, yes you, wherever you get your podcasts.
Raise the rudders. Raise the sails. Raise the sails. Captain, an unidentified ship is approaching. Over. Roger. Wait, is that an enterprise sales solution? Reach sales professionals, not professional sailors. With LinkedIn ads, you can target the right people by industry, job title, and more. We'll even give you a $100 credit on your next campaign. Get started today at linkedin.com slash marketer. Terms and conditions apply.
We're back here on Big Technology Podcast, Friday edition. All right, let's talk about this very interesting Microsoft and OpenAI story. So Ranjan, you and I have talked about how there's been tension between Microsoft and OpenAI.
in the past few weeks, or in the past months, really. And The Information really breaks it all down. The story breaks down what Mustafa Suleiman, who's the CEO of Microsoft AI, has been up to within the company. It is very interesting as Microsoft starts to effectively insulate itself from OpenAI. And I do wonder what happens with the relationship there. Let me just read the first couple of paragraphs of the story.
Last fall, during a video call with senior leaders at OpenAI and Microsoft, Suleiman, who leads Microsoft's in-house artificial intelligence unit, wanted OpenAI staffers to explain how its latest model, o1, worked, according to someone present for the conversation. He was peeved that OpenAI wasn't providing Microsoft with documentation about how it programmed o1 to think about users' queries before answering them. The process, known as chain of thought, is a key ingredient in the secret recipe of any AI model.
Raising his voice, Suleiman told Mira Murati, then OpenAI's chief technology officer, that the AI startup wasn't holding up its end of the deal, the bargain. It wasn't holding up its end of the deal with Microsoft, with which OpenAI has a wide-ranging alliance, and Suleiman cut the call short. So that's who he yelled at. He yelled at Mira Murati. He's simultaneously tasked with carrying on the OpenAI partnership.
At the same time, he's also under orders to put Microsoft on a path to self-sufficiency in AI so it won't have to rely on OpenAI's technology for the majority of Microsoft's AI products. Interesting situation here for Suleiman. We finally have confirmation that Microsoft is going to try to be self-sufficient in AI.
And there's been some serious yelling, shall we say, between the two parties. This is a big story. It's in The Information. I just want to get your perspective, Ranjan, on this. Is this natural? Is this surprising? Do we already know this? Or is this potentially a breaking point between the two?
I mean, we've definitely heard about tension between Suleiman and the larger OpenAI management infrastructure, but I do think it's getting worse. But it also makes sense, because on one side, the entire Copilot suite of products, like the add-ons to Excel and Word, these are the tools that have gotten the worst types of feedback from users. Those are the ones where
there have been articles about how bad the uptake has been or the kind of egregious errors that are being made. So if those are the ones still being powered by OpenAI models, clearly there's some kind of issue. I do think this is an important moment because when I was reading through this,
this is going to end very soon. Like, I just – it has to. The Microsoft-OpenAI – I think six months from now, we're no longer going to be talking about their relationship because it is making less and less and less sense every day. And I think it's clear that there's a lot of infighting and politics, but –
Like, what would be the reason to try to continue investing in that relationship when you also have OpenAI going all in on SoftBank and Masayoshi Son anyway? Like, what's the purpose of this other than if Satya wants to just have a little bit of a stranglehold and a leash on Sam just for fun? Yeah.
Well, you've already invested 13 billion if you're Microsoft. So the question is, what happens with that money? I mean, part of that money was supposed to be that you get OpenAI effectively as an outsourced research house for Microsoft. And that hasn't happened in the way that they've hoped. Now, of course, they're going to get a percentage of the future profits and they're going to get a stake if OpenAI goes for-profit. But that 13 billion, part of that was supposed to buy this collaboration.
Yeah, that's fair. But also if you think about, like, what's the risk to that $13 billion, it's not going to zero necessarily, right? Like, you as a steward of capital, I don't think the risk is that great in terms of still having this much integration and interaction. And I think the other main thing too is –
I mean, maybe I'm wrong on this, but it does feel overall like when you see the advances coming out of other research houses,
the DeepSeek moment more than anything proved this, that the idea that the frontier model research houses are so far and away ahead – are they that much better than Microsoft and Suleiman's team? Probably not. Like, to such an extent that at this point they're realizing, we don't actually need – that's not going to make or break our own business.
You're right. This is a consequence of the commoditization of models, right? That we're seeing open source be able to handle basically the queries that a lot of these frontier models can.
And that parity is quite important in a company like Microsoft's ability to be able to compete. And in fact, there is reporting from The Information that Suleiman's team recently completed training of a family of Microsoft models, internally referred to as MAI, that perform nearly as well as leading models from OpenAI and Anthropic on commonly accepted benchmarks.
The team is also training reasoning models, which use that chain-of-thought technique and could compete directly with OpenAI's. Suleiman's staff is already experimenting with swapping in the MAI models for the OpenAI models in Microsoft's Copilot. The company is considering releasing the MAI models later this year as an application programming interface, or API, a software hook that would allow outside developers to weave the Microsoft models into their own apps.
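To make that "software hook" idea concrete: if Microsoft did ship MAI behind an API, a developer would presumably call it the way most hosted models are called today. The endpoint URL, model name, and response shape below are entirely made up for illustration; no such public API has been documented.

```python
# Hypothetical sketch of what calling an "MAI" model over an API might
# look like; the URL, model name, and payload shape are invented here.
import os
import requests

API_URL = "https://example.invalid/mai/v1/chat/completions"  # placeholder
API_KEY = os.environ.get("MAI_API_KEY", "demo-key")

def ask_mai(prompt: str) -> str:
    payload = {
        "model": "mai-1",  # hypothetical model identifier
        "messages": [{"role": "user", "content": prompt}],
    }
    headers = {"Authorization": f"Bearer {API_KEY}"}
    resp = requests.post(API_URL, json=payload, headers=headers, timeout=30)
    resp.raise_for_status()
    # Assumes an OpenAI-style response shape, purely for illustration.
    return resp.json()["choices"][0]["message"]["content"]

if __name__ == "__main__":
    print(ask_mai("Summarize this quarter's sales notes in three bullets."))
```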
So, I think what you said, it seems like it is indeed playing out. MAI? MAI? 1.0? 2.0? It's a good name, too. I think it's got some good staying power. MAI works. Definitely. I like it. Good job, Suleiman. But yeah, that's it. Build your own models,
release them, integrate them more directly so you have more control over them and you don't need to, like... I mean, think about how ridiculous that must feel for someone like him, where he's like, just explain to me how o1 is doing its reasoning. Like, we are a massive investor in you. Can you... Even like...
academic to academic, brilliant mind to brilliant mind, let's just talk about how o1 achieves this kind of reasoning. And they're hiding it from you. That's got to be frustrating. Yeah. And by the way, for OpenAI also, there's got to be benefit from breaking free with this, because OpenAI was restricted to using just Microsoft infrastructure. Remember, OpenAI models are not available on Amazon's AWS because of that agreement. So this could be an opportunity for them to expand as well.
Let everyone free, Satya. It's time. Let Sam run free. Let Suleiman run free. Let everyone run free, Satya. It's your call. I think it's happening. Here's more from the story. At Suleiman's direction, Microsoft has been hedging its bets further by trying out models from OpenAI's competitors to power Copilot. Those include Anthropic, Musk's xAI, along with open source models from DeepSeek and Meta Platforms. So this once-tight partnership no longer seems so tight.
Maybe both got exactly what they needed out of it. OpenAI got the resources to train frontier models. Microsoft got the positioning and the head start on Copilot. And now they've evolved to realize that they're better off free. Everyone's grown up and it's time to move on to the next phase of the battle, I think. There's got to be, I'm not like a huge war history buff, but I'm sure there's some case in the past of like some historical examples of this and like great wars and battles where...
I don't know, someone, an alliance like that, once neither side has that need anymore,
Just nicely breaks it off amicably and goes off. I believe it's the great voice assistant-based communist revolutions where the Siri acolytes and the Marxists combined for better functionality in the iPhone. And then once that alliance of convenience was no longer appropriate for both sides, they broke off and went on their own way. I'm going to go through all great tragedies in history and try to figure out how Siri is at fault here.
You've heard of Das Kapital. This was Das AI Assistant.
God. All right. If you haven't turned off the program yet, I appreciate you. All right. So one more little bit here from the story, just to look at the amount of money that Microsoft has made on this. No bet is more important at Microsoft than the one it's making on AI. Last month, the company told shareholders it was generating more than $13 billion in annualized AI revenue across all of its businesses, up from $10 billion just three months earlier. This is accelerating, Ranjan.
I mean, we're seeing those numbers across Microsoft from the Azure side of things, from Google Cloud. I think we're starting to see – I still take those numbers with a bit of a grain of salt though because –
what exactly are people paying for? I know like these companies are a bit cagey in terms of representing what AI revenue means. I mean, even like the large consulting firms, I think I saw numbers from like Accenture and KPMG even that they're making gobs of money on AI. So I think I'm still, yes, I'm guessing a lot of Azure clients are using a lot of
API calls and I think stuff's happening, but I don't know. I don't know. I still take it with a bit of a small grain of salt. I know what it must be, Ranjan. It must be the wrappers. It's all about the wrappers. It's the wrappers. It's all about the wrappers. So the wrappers are rising and the products are rising, as Ranjan has long hoped for.
This is from Bloomberg. The hottest AI companies right now are apps. Today's so-called AI wrappers are all the rage. Step into any venture capital office in Silicon Valley and you'll hear investors buzzing about startups that offer AI chatbots, research tools, and other software applications for coding, clinicians, and customer service, all built at least in part on the backs of large language models created by other leading AI developers. These startups are seeing revenue and valuations grow at a fast clip,
often while spending a fraction of the amount the top AI model developers do on chips, data centers, and talent. Harvey, a legal AI startup founded in 2022, surpassed $50 million in annual recurring revenue in December. Anysphere, the startup behind the popular code editing tool Cursor,
has hit $100 million in annual recurring revenue. Investors are eager to put their money into these services. Harvey raised $300 million. Anysphere also raised a lot of money; we just can't find the exact number now. And the OpenAI and Anthropic rounds have sort of left these as what's still open for VCs to invest in. So it's all about the wrapper. The product wins, the model's commoditized, and it's Ranjan Roy's future. We're all just living in it. Except Siri is terrible, and that doesn't make up for any of the other stuff. And somewhere Mustafa Suleiman is yelling at someone from OpenAI, because now he does not have to just
smile and listen to what they're saying, because the models are commoditized and it's all about the product. But this is good, right? Oh, go ahead. This idea that we have products being built and working. We have Michael Mignano in this story. He's a VC at Lightspeed. He's been on here. Just like after the iPhone launched, there were millions of new mobile apps. Now with AIs and LLMs, there will be millions of new AI products. Maybe that's too optimistic, but it's an interesting take.
Yeah. I would like to submit for the record that we do not use the term wrapper. I'm asking you out there, industry, information reporters, whoever had written this one. This was Bloomberg. This was, I think, Kate Clark at Bloomberg. I think because wrapper still kind of denotes – there's like a negative connotation to it versus –
I mean, most products are built on some kind of infrastructure and that's good. No one says like the greatest iPhone apps are just wrappers of iOS or I don't know. Like it still – it takes away from how much work goes into creating a good generative AI product.
And I've spoken with people at large law firms who use Harvey, and they're like, it's incredible. And maybe someone could use OpenAI's API and recreate the entire thing, but they're not going to. And a large law firm will happily pay, uh,
tens of thousands, if not hundreds of thousands, of dollars to have someone else do that work for them because they're not a software developer. And I think it's just a reminder that this is where people finally start using it. Because, I mean, big law would probably be the most resistant to using generative AI if they were still having to go to ChatGPT. So it's good that Harvey's out there and Cursor's there for all the coders and
More, more like this, please. Well, for context, like they call it wrapper because the capabilities can get better and eventually sort of subsume the apps, whereas you can't do that with cloud.
Like, you can't have Amazon Web Services all of a sudden get better, and next thing you know, it's a SaaS platform. You can't have that happen with an LLM. Now, there are going to be specialized use cases where you're going to want to use specialized software. But something like coding, for instance, you would imagine that these services will get better and maybe eventually compete with the wrappers built on top of them. I'll use the word wrappers.
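For a sense of why "wrapper" undersells the work, here is a toy sketch of what a Harvey-style legal product reduces to at its barest: a domain-specific prompt, some chunking, and structured output around a swappable model call. The prompt and function names are hypothetical and the model call is stubbed out; this is not anyone's actual product.

```python
# Toy illustration of the "wrapper" idea: a thin, domain-specific layer
# (system prompt, chunking, structured output) around a generic LLM call.
# The model call is stubbed out; nothing here is Harvey's actual product.

LEGAL_SYSTEM_PROMPT = (
    "You are a contracts analyst. List unusual indemnification, "
    "termination, and liability clauses, citing section numbers."
)

def call_model(system_prompt: str, user_text: str) -> str:
    # Stand-in for a call to any foundation-model API.
    return f"[model output for {len(user_text)} chars of contract text]"

def review_contract(contract_text: str) -> dict:
    # The "wrapper": chunking, the domain prompt, and the workflow are
    # the product; the underlying model is interchangeable.
    chunks = [contract_text[i:i + 4000] for i in range(0, len(contract_text), 4000)]
    findings = [call_model(LEGAL_SYSTEM_PROMPT, chunk) for chunk in chunks]
    return {"chunks_reviewed": len(chunks), "findings": findings}

if __name__ == "__main__":
    print(review_contract("This Agreement is made ... " * 500))
```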
But you make a good point that it's customized. It's built for a certain user. And over time, they're just going to like kind of branch off and be their own thing. All right. That's fair. And I think maybe what happened is we saw –
Early on, like two years ago, with the big jumps from GPT-3 to 4, adding DALL-E directly into ChatGPT, we started to see certain successful apps, like image generation apps, get basically destroyed because the capability became native and integrated into the larger chatbots. So I guess we've seen that, but I just don't see that happening in the same way
for large verticalized industry use cases, enterprise use cases. I think, like, yes, editing a photo to make my face look like – I remember, like two years ago, there was, I feel, early-stage generative AI image stuff. There were a bunch of apps like, look at yourself old, just these kinds of things. You'd pay some Chinese company, like, 99 cents. Old.
Gray hair. I did it all. It's some Chinese company just hoovering up your facial recognition data. And we were all paying because it was fun for like 99 cents. And then ChatGPT, yes, subsumes that layer of apps. But yeah, it'll be interesting. I met a guy this week who's like, I'm going to add you on LinkedIn. I said, okay. He's like, I have a pretty cool picture on LinkedIn. I said, okay. He goes, it's AI generated. My mom framed it.
I said, okay. Can we get him on? We should. I want to know so much more about whatever is going on. This wasn't just the bourbon speaking because I was like, all right, opening my phone, looking at this LinkedIn request. And I was like, holy shit, that's a great picture. It's a great photo. He really has a great AI picture. It's very cool. I immediately wanted one. But then-
If my mom, if I go home, I actually am in Boston right now and I'm going to be going to my parents' house where I grew up very soon. And if there is a large blown up photo of my LinkedIn profile picture. You'll know you're loved. I don't, there's so many layers to how weird that would be. Yeah, but it's not, is it AI? Okay. It's gotta be cool and AI. Yeah.
Oh, maybe that's why my mom's not free. She's like, if it was AI, then we'd be framing it. Remember, you were sitting around, you know, at the Super Bowl watching dot plot ads for OpenAI. All your mom needed was a lovely picture of you with AI on LinkedIn. Make it. She would know what to do with it, print it out and frame it.
I'm not even trying, I'm not even ragging on this guy or his mother. No, I'm not at all. It is an amazing photo. It's great. I would frame it. I thought about framing it. I don't even know. I don't know. Even his mom. That, that actually is quite a move. I think that would add so much. You're on CNBC podcast tapings.
What's going on behind you there? Who's that guy? Oh, it's the LinkedIn guy. It's the LinkedIn guy with a cool picture. He's got a good picture. Cool picture. It's art. Learn to appreciate it. People don't. People don't enough nowadays. I know. It's a problem. Our society, it's gotten coarse. So let's round out. You're in Boston this weekend. I hope you pick up a copy of the Boston Globe on Sunday because you will see my op-ed on there.
And I've syndicated this week's Big Technology story, Okay, I'm Starting to Think AI Can Do My Job After All, with the Boston Globe Ideas section. So thank you, Boston Globe Ideas, for running it. It is a follow-up to my last Boston Globe piece, which is,
Wait, ChatGPT didn't take my job? And I basically come and say, listen, I'm sorry for taunting ChatGPT. I'm starting to see that a lot of what I do can start to be done with AI. And it really goes back to this, like, what can voice AI do? And I include the anecdote of Evan Ratliff, who came on to talk about AI clones and the clone that he made of himself.
He actually built an AI voice clone, prompted it with a bunch of questions to ask a voice tech CEO, sent it out to do the interview, and it did a better job than he would have. So, Ranjan, I'm just kind of curious what you think about my thesis here: that AI may not be taking our jobs, but it's starting to be able to really do a lot of the work that we do, work that we thought would never be in the path of the machines. I agree with that. And I think it's not...
It's not bad. I mean, I think there's a lot like even the idea of sending a voice AI clone to go do an interview. If the interview is essentially just kind of like, here's a bunch of pre-written questions and I'm just trying to get information, then
that actually is a good idea. And it's cooler because there could be some back and forth and interaction, but it's not going to be, you know, like really deep and go in lots of new directions, but it'll get the right information out. So if you can...
interview more people or get more information and write more stories because of that, especially as, like, an individual creator, I think that's great. I think this kind of stuff for smaller media outfits like us is good. Yeah. So this is how I ended, uh,
I said, as AI extends beyond the chatbot and towards something that can research, take calls, and even pontificate, it'll likely become a force multiplier used to scale up individuals' effort and help them cover more ground. That might lead to less hiring, smaller companies, or potentially fewer overall jobs. And now I'm less confident in our broader ability to weather this change without pain. So I think we could have smaller companies doing what bigger companies do,
but with fewer people. Then again, the other side of the coin, which you just brought up, is if you're me, or if you're basically working on something small, this can be something that can really increase your productivity. So there are two sides here. Yeah, I think, I mean, it gets into the deeper abundance debate that, like,
could you, if you have smaller companies, be creating a lot more with that, and then does it create new behaviors? Again, when people thought physical bank locations would go out of business, that physical malls would go out of business, we keep seeing over and over that that doesn't come to fruition. So I think we have no idea how this turns out. I do agree,
people who know how to leverage this technology, and companies that know how to, can do a lot more with a lot less than their competitors. We've all seen how clear that is. But what that looks like across society and at scale, it's a tough one.
Yeah, it's crazy. I mean, this is I'm starting to see this stuff be able to do things that I just never dreamed possible. I don't know if you've been like experimenting with the Claude coding capabilities.
But they have gone from just being very rudimentary and somewhat disappointing, if kind of cool, to like being downright insane. I'm going to write about this in a future Big Technology story, but I uploaded a fake bank statement to Claude and had it write me a financial plan. It took just an image and added up the numbers appropriately. Okay, that's crazy.
Then I told it to like plot out my expenses on a bar and a line graph. It did that. Or my balance on a line graph, my expenses on a bar graph. It did that. And then I said, yeah, build me this financial plan. And then after I built the financial plan, I said, build me a retirement calculator. It builds a bespoke retirement calculator with the numbers pre-populated that works. And you could change the variables and kind of see where you're going.
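For anyone curious what that bespoke retirement calculator boils down to underneath, here is a minimal version of the math, compound growth plus monthly contributions, with placeholder numbers rather than anything from my actual statement.

```python
# Minimal sketch of the math behind a retirement calculator like the one
# described: compound growth plus recurring contributions. Numbers are
# placeholders, not anyone's actual finances.

def future_value(balance: float, monthly_contribution: float,
                 annual_return: float, years: int) -> float:
    monthly_rate = annual_return / 12
    for _ in range(years * 12):
        balance = balance * (1 + monthly_rate) + monthly_contribution
    return balance

if __name__ == "__main__":
    # e.g. $50,000 saved, $1,000/month, 6% nominal return, 30 years
    print(f"${future_value(50_000, 1_000, 0.06, 30):,.0f}")
```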
This thing is crazy. I also prompted it to build me a video game, and it did it. It's very rudimentary. It did it in one prompt and two further design refinements, and it's like an actual playable game. I mean, it's very, very basic, but this stuff is getting crazy, the capabilities. And the general public, I don't think, is fully aware of how good it's gotten even over the past three months. Well, actually, so for my son, who's in kindergarten now,
they'll be like, here's 25 words to memorize, here's the next 25. So I made a game in Claude, and it shows animals that he likes when he gets it right, and added those kinds of little flair elements to it.
And it works perfectly. And I run it directly as a Claude artifact. At first I was trying to, like, I was like, should I upload this to the App Store and stuff like that? But it runs well as an artifact. And then realizing, again, from an education standpoint, having hyper-tailored learning tools for kids everywhere in the world is amazing, is incredible to think about. But it's actually funny, because that one, my mom saw us using it,
And she had no, like, she was just kind of like, oh, that's nice. Like the idea that I had programmed it myself, like I didn't even, she's like, oh, that's nice. You're practicing math with him. That's a good game. But like, I could have just been like, this is it. This is the AI mom. This is it. Yeah. Put this in a frame. Put this in a frame. No, put that other guy's LinkedIn photo.
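And for the sight-word game Ranjan describes, the core of something like that is genuinely small. A toy console version, not his actual Claude artifact, might look like this:

```python
# Toy console version of the sight-word practice game described above:
# show a word, check the answer, reward correct answers with an animal.
# Not the actual Claude artifact, just an illustration of the idea.
import random

WORDS = ["the", "and", "said", "look", "play"]   # sample sight words
ANIMALS = ["lion", "elephant", "giraffe", "penguin", "fox"]

def play_round(word: str) -> bool:
    answer = input(f"Type this word: {word}\n> ").strip().lower()
    if answer == word:
        print(f"Great job! You earned a {random.choice(ANIMALS)}!")
        return True
    print(f"Almost! It was '{word}'. Let's try another.")
    return False

def main() -> None:
    score = sum(play_round(word) for word in random.sample(WORDS, k=3))
    print(f"You got {score} out of 3!")

if __name__ == "__main__":
    main()
```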
Yeah. So listen, the only thing we need to do at this point is just write a prompt for a contextually aware voice assistant that works on your iPhone. I mean, you should be good. I'm sorry. I had to. The only way to end it this week.
Yes. Folks, I just want to let you know Ranjan is fresh off a plane. I think he's kind of under the weather. Still showed up, dedicated to the craft. We're lucky we got a trooper with us here. So thank you, Ranjan. Well, we were talking Siri. The jet lag from a quick London trip will not stop me.
from that topic. Nothing will stop this man. All right, everyone, please check out the newsletter. Give us five stars on Apple Podcasts and Spotify. Send the show over to your friend if you like. Join the Discord as a paid member of Big Technology Podcast and have yourself a great weekend.
We are going to have a great show next week. It's the CEO of Roblox coming on to talk about building video games with AI and a whole bunch of other stuff. So we hope you tune in then. Otherwise, Ranjan and I will be back on Friday as usual. Thank you so much for listening and we'll see you next time on Big Technology Podcast.