
How the OpenAI-Microsoft Frenemy-Ship is Shaping AI Development

2025/3/12

The AI Daily Brief (Formerly The AI Breakdown): Artificial Intelligence News and Analysis

People
Mustafa Suleyman
Topics
Host: The relationship between Microsoft and OpenAI has profoundly shaped the AI industry's development, but it is also a complicated one. Microsoft's investment gave OpenAI funding beyond what traditional venture capital could provide, and put the latest frontier models in front of enterprises through Azure integration. Sam Altman's firing and reinstatement was a turning point that pushed Microsoft to seek greater independence. The Microsoft-OpenAI agreement contains a vague clause under which Microsoft loses preferential access once AGI is achieved, adding to Microsoft's uncertainty about OpenAI. Microsoft hired Mustafa Suleyman to lead a new division tasked with building Microsoft's own AI models and managing the OpenAI relationship. Microsoft's MAI models perform nearly as well as the leading models from OpenAI and Anthropic, and may compete directly with OpenAI's O1 and O3. Microsoft is testing swapping MAI models in for the OpenAI models currently used in Microsoft Copilot. Microsoft's public statements about the relationship are ambiguous, but its intent to develop AI models independently is clear. Pursuing AI independence is Microsoft's responsibility, though its internal models still trail OpenAI's. The divergence may stem from shifting priorities on both sides rather than simple distrust: OpenAI may focus on building specific agent products for consumers and enterprises, while Microsoft is more focused on building the voice agent interface. Their independent development will bring more opportunity and competition to the whole AI industry. Amit Zavery: ServiceNow's acquisition of Moveworks will advance enterprise AI adoption and deliver transformative outcomes for employees and customers. Chris Cox: Llama 4 will support voice natively, without converting speech to text for processing, which will greatly improve the user experience. Mustafa Suleyman: The future is conversational; users will talk with websites rather than scroll or search them.

Shownotes Transcript


Today on the AI Daily Brief, how the relationship between Microsoft and OpenAI impacts the rest of the industry. And before that, in the headlines, one of the biggest AI acquisitions in years. The AI Daily Brief is a daily podcast and video about the most important news and discussions in AI. To join the conversation, follow the Discord link in our show notes. ♪

Recently, rumors started popping up that ServiceNow was in talks to buy Moveworks. Moveworks was founded all the way back in 2016, makes about $100 million a year, and at the terms that were leaking, it looked like this was potentially the biggest AI acquisition for some time. Well, that deal has now been confirmed: ServiceNow has bought Moveworks for $2.85 billion. If you are unfamiliar, ServiceNow is a SaaS company that provides business automation tools.

Moveworks will bring their experience in offering AI assistants that service employee requests, such as IT support and HR.

In a press release, ServiceNow COO Amit Zavery said, With the acquisition of Moveworks, ServiceNow will take another giant leap forward in agentic AI-powered business transformation. Moveworks' talented team and elegant AI-first experience, combined with ServiceNow's powerful AI-driven workflow automation, will supercharge enterprise-wide AI adoption and deliver game-changing outcomes for employees and their customers. According to Zavery, the acquisition will allow ServiceNow to build a platform that, quote,

combines ServiceNow's agentic AI and automation strengths with Moveworks' AI assistant and enterprise search technology. ServiceNow has definitely been pushing aggressively into AI and has been building out a big part of that strategy through acquisition. For example, in January, they acquired Cuein, a conversation data analysis platform. And at the end of last year, ServiceNow reported 1,000 AI customers, representing around $200 million in annual contract value.

What's interesting is that Moveworks was pretty early in this space. They actually raised at a $2.1 billion valuation all the way back in the ancient times of 2021, and they've successfully navigated from the automation era to the agentic era. But what this ultimately says about the space, I think, just comes down to the fact that enterprise AI is a very big business and is going to do nothing but get bigger.

Next up, some news on what Meta is building out. Apparently, their next AI model will come with a voice mode. According to the Financial Times, Meta is planning to introduce voice features in their next open-source LLM, Llama 4. Sources say that the explicit bet is that users will want to interact with advanced AI agents using conversation rather than text. Reporting suggested a big push to ensure voice interactions are natural and free-flowing, rather than robotic and requiring a rigid question-and-answer format. Last year, Mark Zuckerberg teased Llama agents...

declaring a desire to build an agentic coding assistant that could replicate a mid-level engineer. The FT also reports that Meta is considering premium subscriptions for Meta AI that enable a range of agentic tasks, and included the examples of booking reservations and video editing. Speaking at a recent Morgan Stanley event, Meta's chief product officer Chris Cox dropped a few additional hints. He described Llama 4 as an omni-model where speech would, quote, be native rather than translating voice into text, sending text to the LLM, getting text out, and turning that back into speech. Cox continued,

I believe it's a huge deal for the interface product, the idea that you can still talk to the internet and just ask it anything. I think we're still wrapping our heads around how powerful that is. Now, as you'll see in our main episode later today about OpenAI and Microsoft, it seems like Microsoft's AI leader, Mustafa Suleyman, has a pretty similar feeling about the importance of voice going forward.
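To make Cox's contrast concrete, here is a minimal sketch of the cascaded voice pipeline he describes, the one a native omni-model would replace. Every function here is a hypothetical stand-in for illustration only, not a real Meta or Llama API:

```python
# Toy sketch of the "cascaded" voice pipeline: speech -> text -> LLM ->
# text -> speech. The three stand-in functions below just tag their input
# so the data flow through the hops is visible.

def transcribe(audio: str) -> str:
    # Stand-in for a speech-to-text (ASR) component.
    return f"text({audio})"

def run_llm(prompt: str) -> str:
    # Stand-in for a text-only LLM call.
    return f"reply({prompt})"

def synthesize(text: str) -> str:
    # Stand-in for a text-to-speech component.
    return f"speech({text})"

def cascaded_voice_turn(audio_in: str) -> str:
    # Three separate hops; latency accumulates, and vocal nuance like tone
    # and emphasis is lost at each text boundary.
    return synthesize(run_llm(transcribe(audio_in)))

print(cascaded_voice_turn("hello"))  # speech(reply(text(hello)))
```

A native omni-model, as Cox describes it, would collapse all three hops into a single model call that consumes and emits audio directly, which is why nothing gets lost at the intermediate text boundaries.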

Part of what makes Meta's plans more interesting than just yet another company getting into the voice AI game is that Llama 4 is presumably going to be open source, and having a state-of-the-art voice mode in an open-source model would be a big deal.

Also, frankly, given what we've heard about Meta having the wind taken out of their sails by the release of the latest DeepSeek models, integrating voice natively could be a way to differentiate in the off chance that Llama 4 isn't actually as advanced as some of the things DeepSeek has been putting out. On the infrastructure front, XAI has acquired a million-square-foot property in South Memphis to expand their data center footprint in the city.

The Chamber of Commerce announced the acquisition in a press release, with XAI Senior Site Manager Brett Mayo saying, XAI's acquisition of this property ensures we'll remain at the forefront of AI innovation right here in Memphis. Now, the land is right outside the existing Colossus data center, so it was presumably part of the expansion plans slated to be completed by 2026. Judging from satellite imagery, the land seems large enough to roughly triple the size of the facility. Colossus currently hosts 100,000 GPUs and was constructed with plans to double that size. However, XAI has said they're planning to upgrade the cluster to operate a million GPUs.

Finally today, an interesting one at the intersection of hardware and software. Taiwanese electronics manufacturer Foxconn has built their own AI model for business optimization. Called Foxbrain, the model has reasoning abilities and was trained in just four weeks on a tiny cluster of 120 NVIDIA H100s. Designed for internal use, the model can aid in data analysis, mathematics, reasoning, and code generation. The company intends to open source the model for industry partners and envisions it helping drive advancements in manufacturing and supply chain management.

The model was based on Meta's Llama 3.1, but Foxconn is still claiming that this is the first reasoning model developed in Taiwan and optimized for Traditional Chinese rather than Mandarin. The company said the model was slightly behind DeepSeek's R1, but was approaching state of the art in benchmarking. I think the big takeaway here is that the barrier to rolling your own model, tailored specifically for your purposes and needs, gets lower and lower every day, and that could have pretty big impacts on how the business side of the industry plays out.

For now, though, that is going to do it for today's AI Daily Brief Headlines Edition. Next up, the main episode. Today's episode is brought to you by Vanta. Trust isn't just earned, it's demanded.

Whether you're a startup founder navigating your first audit or a seasoned security professional scaling your GRC program, proving your commitment to security has never been more critical or more complex. That's where Vanta comes in. Businesses use Vanta to establish trust by automating compliance needs across over 35 frameworks like SOC 2 and ISO 27001. Centralize security workflows, complete questionnaires up to 5x faster, and proactively manage vendor risk.

Vanta can help you start or scale up your security program by connecting you with auditors and experts to conduct your audit and set up your security program quickly. Plus, with automation and AI throughout the platform, Vanta gives you time back so you can focus on building your company. Join over 9,000 global companies like Atlassian, Quora, and Factory who use Vanta to manage risk and prove security in real time.

For a limited time, this audience gets $1,000 off Vanta at vanta.com slash nlw. That's v-a-n-t-a dot com slash nlw for $1,000 off. There is a massive shift taking place right now from using AI to help you do your work

to deploying AI agents to just do your work for you. Of course, in that shift, there is a ton of complication. First of all, of the seemingly thousands of agents out there, which are actually ready for primetime? Which can do what they promise? And beyond even that, which of these agents will actually fit in my workflows? What can integrate with the way that we do business right now? These are the questions at the heart of the Superintelligent Agent Readiness Audit.

We've built a voice agent that can scale across your entire team, mapping your processes, better understanding your business, figuring out where you are with AI and agents right now in order to provide recommendations that actually fit you and your company.

Our proprietary agent consulting engine and agent capabilities knowledge base will leave you with action plans, recommendations, and specific follow-ups that will help you make your next steps into the world of a new agentic workforce. To learn more about Super's agent readiness audit, email agent at bsuper.ai or just email me directly, nlw at bsuper.ai, and let's get you set up with the most disruptive technology of our lifetimes.

Hey, listeners. Want to supercharge your business with AI? In our fast-paced world, having a solid AI plan can make all the difference. Enabling organizations to create new value, grow, and stay ahead of the competition is what it's all about. KPMG is here to help you create an AI strategy that really works. Don't wait. Now's the time to get ahead.

Check out real stories from KPMG of how AI is driving success with its clients at kpmg.us slash AI. Again, that's www.kpmg.us slash AI. Now, back to the show.

One of the most foundational relationships that has shaped AI for the last couple of years is, of course, the relationship between Microsoft and OpenAI. Microsoft invested a huge amount of capital in OpenAI both before and after the launch of ChatGPT in a move that ended up inspiring a lot of non-traditional relationships between these frontier labs and their deep-pocketed big tech peers.

This sort of relationship was really important as it created a source of capital that was beyond the ability of the traditional venture capital establishment to provide. What's more, it put the latest frontier models right in the direct line of sight of enterprises through integration with Azure. And yet at the same time, it's a complicated relationship.

The Information recently published a long piece about the latest updates in that frenemy relationship, which seems to be getting potentially even a little bit more sour. Now, what makes this update all the more interesting is that it also suggests that Microsoft is making some progress on its goal for independence. What I want to do today is both, one, look at these latest updates, but two, try to understand and game out what it might mean for the rest of the AI space more broadly.

By way of background, it's important to recognize how the key inflection point moment of Sam Altman's firing and then rehiring has shaped everything subsequently. At the time, you might remember, one of the ways that Sam Altman was able to get power back was that Microsoft was going to create a new division where he could effectively recreate OpenAI from within Microsoft. The preponderance of OpenAI staffers were planning on going with him, and that ultimately forced the board to reverse their decision.

Throughout the whole proceeding, Satya Nadella said that Microsoft was agnostic about how it played out, that they'd be happy to continue working with OpenAI as it was before, or as this new reconstituted thing internally. However, it was clearly a wake-up call for Microsoft around the potential volatility of having such a reliance on OpenAI as a partner.

Now at the time, part of the challenge was that the terms of their agreement had this very strange, pretty loose provision in there that the deal would be broken and Microsoft would no longer have preferential access to OpenAI models once AGI was achieved. Of course, it wasn't clear what AGI being achieved meant. That was a decision that was left up to OpenAI's board. However, after this period where OpenAI's board seemed so capricious, it's very clear that Microsoft felt like they had to start asserting their independence more quickly.

That led next to the hiring of Mustafa Suleyman and a big part of the team from Inflection. Suleyman, who had co-founded DeepMind before leaving Google and founding Inflection, was basically tasked with doing exactly what Altman would have done had he actually come over: build a new division inside Microsoft that not only managed the relationship with OpenAI, although that was part of it, but also built their own models.

Much has been cataloged about the challenge Mustafa Suleyman has had in reorganizing Microsoft's internal teams, moving people around from different divisions in the company. There have been some high-profile departures and public sniping, although it's not clear that there's particularly more of this going on than at any of these big foundation labs, with their effectively revolving door of talent. The new part of this report from The Information is, first, that the relationship got more tense after OpenAI showed Microsoft its first reasoning model, O1.

Suleyman apparently wanted more information around how OpenAI's models actually worked, but OpenAI didn't want to give it to them. The Information writes: raising his voice, Suleyman told OpenAI employees, including Mira Murati, then OpenAI's chief technology officer, that the AI startup wasn't holding up its end of the wide-ranging deal it has with Microsoft. The call then ended abruptly.

This is a lot more tense than we've seen previously reported. It's one thing to see that each side is making moves to express some independence from one another. It's another to hear about a fight internally where OpenAI was actively refusing to tell Microsoft the key technical details of its latest innovations.

The other part of The Information's update, however, is that AI researchers in that Microsoft division have now completed the training of a family of Microsoft models that they claim perform, quote, nearly as well as leading models from OpenAI and Anthropic. The team is also apparently training reasoning models, which could compete directly with O1 and O3.

The new family of models is called MAI, probably short for Microsoft AI. And according to this reporting, the team at Microsoft is already experimenting with swapping their MAI models in for the OpenAI models that are currently in Microsoft Copilot. This would be a huge shift, of course, for Microsoft. So far, the only models they've released are the Phi series, much smaller models designed for on-device use that weren't directly competing with OpenAI.

After The Information started reporting this, Bloomberg picked it up as well, leading a Microsoft spokesperson to give a non-answer saying, as we've said previously, we're using a mix of models, which includes continuing our deep partnership with OpenAI, along with models from Microsoft AI and open-source models. So what is actually going on here?

First of all, there is clearly a push for independence, and more specifically a push for self-reliance. As I've said ever since Altman's firing, this just makes sense from Microsoft's standpoint. It would be a breach of fiduciary responsibility, in my opinion, for them not to be thinking about how to get more independence. That's why I've never thought Microsoft is actually speaking out of both sides of its mouth when this sort of stuff is happening privately while publicly they're saying the relationship is still important. I think both of those things can be true at the same time. The challenge for Microsoft, of course,

is that while it's great that they finally have some internal models that are starting to get up into the range of OpenAI's publicly available models, that still puts them fundamentally behind. Realistically, that doesn't make it viable for them to swap out OpenAI's models just yet. So to the extent that Microsoft is trying to carve a path of self-reliance, they still have a lot of work ahead of them.

One interesting additional note, however, is that I wonder a little bit about whether that divergence isn't about either party not trusting the other, or there being anything fundamentally wrong with the actual business relationship as it's constituted now, but whether there is, to some extent, also a separating of priorities. On this note, last week Suleyman posted on Twitter, "...the future is conversational. You'll talk with sites, not scroll or search them. Two-way communication, not one-way consumption."

He then shared a blog post all about this called Transforming the Future of Audience Engagement. The piece makes it seem like Suleyman is interested in building the voice agent interface underpinning a new set of products in the future, whereas it feels to me like OpenAI is moving more toward building specific consumer products that actually keep them owning the relationship with the customer.

For example, it seems to me like all the hints we're getting are that OpenAI is going to move fairly aggressively into the agent space, not just by having models that underpin and power agents, but by specifically delivering experiences for consumers and for the enterprise around specific agents, like sales agents. In other words, take something like deep research, but make it for a very specific use case, a very specific function inside the enterprise, and actually own the customer that way.

Now, there's a whole separate conversation to be had around why that might be an interesting strategy for OpenAI, given the commoditization that's happening at the foundation model layer. But in any case, one thing that I think is worth keeping an eye on is whether this isn't just about two companies that don't trust each other anymore, but two companies who had totally aligned objectives for some time, but whose prioritization is changing in the way that prioritization sometimes changes. In any case, I do think that there are implications for the industry.

In short, these two companies being wrapped up in each other means that they have more incentive and more latitude to get involved with others. And given how big these two companies are, them having more space to get involved with other companies can have some pretty dramatic impacts. Take, for example, this news that OpenAI is doing a deal with CoreWeave. CoreWeave is, of course, a much-hyped AI cloud provider that is preparing to go public, and Reuters is reporting that OpenAI has signed a $12 billion five-year cloud services deal that

will involve not only OpenAI buying compute from them, but also OpenAI receiving $350 million worth of equity in a private placement around the company's IPO.

Now for CoreWeave, this is a great deal because to the extent that they have an Achilles heel during this IPO, it's that their customer base is extremely concentrated. In 2024, Microsoft accounted for 62% of CoreWeave's revenue. TechCrunch seemed to view this as 5D chess, pointing out that OpenAI will now own a decent chunk of one of Microsoft's major suppliers of AI compute. I'm not totally sure that that's the correct read on the situation. Microsoft has signaled a slowdown in data center leases, which

coincided with OpenAI breaking their exclusive cloud deal with Microsoft. It strikes me that this deal could represent OpenAI stepping into Microsoft's shoes as the anchor customer for CoreWeave, cutting out the now increasingly uninterested middleman.

Another set of relationships that have opened up for OpenAI outside of Microsoft is their partnership with Oracle and SoftBank. The first deals are emerging for the build-out of Project Stargate, with the three partners' first new data center project already underway at their first site in Texas. Bloomberg reports that the site is expected to house 64,000 NVIDIA GB200s, the top-of-the-line version of their next-generation Blackwell chip.

Installation will occur in phases, with 16,000 chips expected to be ready to power up by the summer. Sources say that the full build-out should be completed by the end of next year.

Frankly, some are saying that these numbers seem a little low for what was billed as a civilization-defining project, with Sam Altman talking about building monuments in the desert. Then again, no data centers are running Blackwell chips at scale at this stage, so we don't know how much more powerful they'll be. Still, XAI's Colossus data center is running a 100,000-count network of NVIDIA's previous generation H100 chips, and their plans are to build that site to host a million chips in a gigantic supercluster.

Meta has similarly large ambitions. In January, Mark Zuckerberg announced plans to build a two-gigawatt data center large enough to, quote, cover a significant part of Manhattan. The company intends to end the year with 1.3 million GPUs in their data center fleet. Now, of course, Project Stargate is planned across several additional sites, so I think it's important not to read too much into this right now. Ultimately, the point is this: the base case is that the Microsoft and OpenAI partnership continues well into the foreseeable future.

It might evolve and change. It's already less exclusive than it once was. But I think that these companies are going to leverage each other, even if there's some tension in meetings behind closed doors.

At the same time, it's pretty clear that at this point, both companies are charting independent paths. The opportunity is simply too big for them to be constrained to each other, and that always would have been the case ultimately. Net-net, I think it's very positive for the wider AI world that these two companies are operating independently. It creates more space for opportunity and partnership, and competition tends to benefit everyone else. I wouldn't go so far as to say that that should make us cheer for the drama, but it also definitely means I'm not stressing about it.

Anyways, guys, that is going to do it for today's AI Daily Brief. Until next time, be safe and take care of each other. Peace.