
Why MCP Won the Agent Tooling Wars (And How It Will Speed Up Agents)

2025/3/28

The AI Daily Brief (Formerly The AI Breakdown): Artificial Intelligence News and Analysis

People
Host
A podcast host and content creator focused on electric vehicles and the energy space.
Topics
Host: The rapid adoption of the MCP protocol reflects how fast AI agents are moving; companies care more about moving quickly than about fighting over ownership of infrastructure, which will speed up time to market for agentic applications. Within the next year, many non-technical people will interact meaningfully with MCP through a variety of tools. MCP is an open standard that lets developers build secure, two-way connections between data sources and AI tools, simplifying development; like a universal adapter, it simplifies how AI agents access APIs. AI agents need access to tools and resources to do anything meaningful, and MCP provides that. OpenAI's initial strategy raised fears of a new standards war, but OpenAI ultimately chose to support MCP, avoiding one. MCP's success is not due to absolute technical superiority but to network effects, i.e., broad adoption. It succeeded because it had a major backer and an open standard, whereas OpenAI's approach, while functional, was held back by being closed. A standard from a big lab is simply more likely to succeed; there is nothing fair about that. Anthropic's influence among AI engineers also helped MCP spread. What matters is reaching consensus, not the standard itself. Support from both OpenAI and Anthropic sends a strong signal for MCP adoption. Converging on the MCP standard will significantly speed up the development and deployment of new agentic features. OpenAI chose to support MCP because it judged this to be worth more than owning the standard itself, which will significantly accelerate AI agents.


Chapters
OpenAI's remarkable financial growth, with projected revenue tripling this year to $12.7 billion and potentially doubling again next year, is highlighted. Their upcoming $40 billion funding round, led by SoftBank, would make it the largest venture round in history, further solidifying OpenAI's position in the AI market.
  • OpenAI's revenue projected to reach $12.7 billion this year, tripling from the previous year.
  • Upcoming $40 billion funding round valuing OpenAI at $300 billion.
  • SoftBank leads the funding round with significant investment.

Shownotes Transcript


Today on the AI Daily Brief, why MCP won the agent tooling wars and how it's going to speed up agents. And before that in the headlines, OpenAI's revenue looks to be up to $12.7 billion this year. The AI Daily Brief is a daily podcast and video about the most important news and discussions in AI. To join the conversation, follow the Discord link in our show notes.

Welcome back to the AI Daily Brief Headlines Edition, all the daily AI news you need in around five minutes. We have very OpenAI-centric headlines today, with the big news being that the company expects to triple their revenue this year on the back of strong growth in paying customers. Last year, OpenAI had revenues of $3.7 billion, and according to sources speaking with Bloomberg, the company expects that number to more than triple this year to $12.7 billion. Next year's adjusted projections have revenue more than doubling again to hit $29.4 billion.

OpenAI's growth trajectory was a big topic of discussion towards the end of last year as the startup prepared to raise a record-breaking late-stage venture round. That round closed at $6.6 billion and valued the company at $157 billion. Pitch decks circulating at the time showed a projection of $11.6 billion in revenue for 2025, a figure that at the time some found bafflingly high. Those forecasts have now been marked up by 10%, and it's pretty difficult to find anyone questioning the company's ability to grow.

Separately, Bloomberg also reports that OpenAI is close to finalizing their next tranche of fundraising. The company will reportedly raise $40 billion in a round that would value them at $300 billion. For those doing the quick math in their heads, that's close to double the valuation and six times as much money raised as that previous round.

According to PitchBook data, this would be the largest venture round in history, and it's not particularly close. Sources say that SoftBank is leading the round with participation from Magnetar Capital, Founders Fund, Altimeter, and a number of others. Bloomberg reports that the deal is being staged across two tranches. In the initial stage, SoftBank will contribute $7.5 billion, while an investor syndicate will provide an additional $2.5 billion. The remaining $30 billion will be provided later in the year, with SoftBank in for $22.5 billion and a syndicate contributing the balance.

Between Project Stargate, their Japanese agent deployment initiative, and regular venture investing, it's kind of difficult to track how many AI chips SoftBank has on the table at this point. What's clear is that this is by far the biggest bet Masa-san has ever made. SoftBank was already kind of all in on OpenAI's success, and they continue to double down at every opportunity.

Now, following up from our story yesterday, the Studio Ghiblification of everything that we talked about in the wake of OpenAI releasing their new image generation model has done nothing but continue. In fact, it has completely overwhelmed the timeline in a way that almost nothing I've ever seen has. There are even meta memes making jokes about the memes.

OpenAI, in short, has birthed a bona fide internet phenomenon, but it's causing some logistical issues. A Ghiblified Sam Altman posted: "Images in ChatGPT are way more popular than we expected, and we had pretty high expectations. Rollout to our free tier is unfortunately going to be delayed for a while."

Now, basically every OpenAI release over the past year has smashed up against the company's compute limits, but this one might be a little bit different. The company rolled out the feature to all paid tiers, including the $20 per month plus tier. Not only does that put a lot of extra pressure on the service compared to gating the viral phenomenon behind the ultra-premium $200 per month tier, but it also means that it's much cheaper to buy into the latest internet phenomenon.

What's interesting to me is what it says about human psychology. Peter Yang wrote, Another person pointed out that people are loving the output so much that no one's complaining about how long it takes, which is way out of sync with other image generation models.

Speaking of other image generation models, Ideogram has released Ideogram 3.0. You might have heard me say this before, but Ideogram had basically entirely taken over image generation in my business workflows because of its fidelity to instruction and its ability to handle text. We're likely, over here, to continue to test both, so we'll see if this new Ideogram model can actually hang. For now, though, and at least for probably the next day or two, everything on the internet is Studio Ghibli. And I've got to say, it's not the worst thing ever.

That, however, is going to do it for today's AI Daily Brief Headlines Edition. Next up, the main episode. Today's episode is brought to you by Vanta. Trust isn't just earned, it's demanded.

Whether you're a startup founder navigating your first audit or a seasoned security professional scaling your GRC program, proving your commitment to security has never been more critical or more complex. That's where Vanta comes in. Businesses use Vanta to establish trust by automating compliance needs across over 35 frameworks like SOC 2 and ISO 27001. Centralize security workflows, complete questionnaires up to 5x faster, and proactively manage vendor risk.

Vanta can help you start or scale up your security program by connecting you with auditors and experts to conduct your audit and set up your security program quickly. Plus, with automation and AI throughout the platform, Vanta gives you time back so you can focus on building your company. Join over 9,000 global companies like Atlassian, Quora, and Factory who use Vanta to manage risk and prove security in real time.

For a limited time, this audience gets $1,000 off Vanta at vanta.com slash NLW. That's V-A-N-T-A dot com slash NLW for $1,000 off.

Today's episode is brought to you by Super Intelligent and more specifically, Super's Agent Readiness Audits. If you've been listening for a while, you have probably heard me talk about this, but basically the idea of the Agent Readiness Audit is that this is a system that we've created to help you benchmark and map opportunities in your organizations where agents could succeed, and specifically help you solve your problems and create new opportunities in a way that, again, is completely customized to you.

When you do one of these audits, what you're going to do is a voice-based agent interview where we work with some number of your leadership and employees to map what's going on inside the organization and figure out where you are in your agent journey.

That's going to produce an agent readiness score that comes with a deep set of explanations, strengths, weaknesses, key findings, and of course, a set of very specific recommendations that we can then help you find the right partners to actually fulfill. So if you are looking for a way to jumpstart your agent strategy, send us an email at agent at besuper.ai, and let's get you plugged into the agentic era.

Welcome back to the AI Daily Brief. Today's episode is a lot more technically complex than our normal episodes. I will, of course, try to make it totally accessible and understandable, even for people who are not developers and who are not particularly technical. But before we dive in, I wanted to give you just a little bit of context for why to care. Today, we're talking about an open protocol that has very quickly become the standard for how AI systems and agents in particular are being built. This matters to you as a non-developer for at least two reasons.

The first is that the fact that there has been such quick consolidation around this protocol, which is called MCP or Model Context Protocol, is an indicator of just how quickly agents are moving. And more specifically, it's an indicator that companies in this space would rather move at the greatest possible speed than duke it out in a battle for ownership of key infrastructure. That means in general that you're going to see more agentic applications come to market faster.

The second reason to care is that I think a fair number of you who are sitting there now as non-technical people or non-developers will find yourself at some point in the next year, and yes, I mean the next 12 calendar months, using a tool like Lovable or Bolt or an IDE like Cursor or Windsurf, actually interacting in a meaningful way with the Model Context Protocol.

Part of the great transformation that is happening with AI and agents right now is that the breadth of people who can create with code is radically expanding. So MCP may be even more directly relevant than you think. So let's go back to MCP, what it is, what this announcement was, and why the agent tooling wars are over before they began. First of all, what is the Model Context Protocol?

Back in November, Anthropic announced the Model Context Protocol, or MCP. They called it a new standard for connecting AI assistants to the systems where data lives, including content repositories, business tools, and developer environments. MCP was trying to solve a very clear need, as Anthropic laid out in their announcement.

There's a great graphic that Matt Pocock shared about this back at the beginning of March. Apologies to those of you who are listening; I'll try to describe it. Basically, he shows two charts together, both of which show how a coding application or IDE like Cursor or Windsurf gets access to the information it needs to build whatever it is it's trying to build.

The first schematic, which doesn't involve the Model Context Protocol, shows Cursor having to interact with GitHub, Slack, and a local file system, each through its own unique API. With the Model Context Protocol, on the other hand, MCP handles the interaction with each of those unique APIs, and the end user using Cursor only has to interact with MCP through a unified API. In short, it makes it a lot simpler to build.
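To make that unified interface concrete, here is a rough client-side sketch using the official MCP Python SDK. The server command and the tool name are hypothetical placeholders, and the same pattern applies whether the server behind them wraps GitHub, Slack, or a local file system.

```python
# A rough client-side sketch using the MCP Python SDK (the `mcp` package).
# The server command and the tool name below are hypothetical placeholders.
import asyncio

from mcp import ClientSession, StdioServerParameters
from mcp.client.stdio import stdio_client

# Launch some MCP server over stdio. Which server it is doesn't matter to the
# client: the protocol (and therefore this code) is the same for all of them.
server = StdioServerParameters(command="python", args=["some_mcp_server.py"])

async def main() -> None:
    async with stdio_client(server) as (read, write):
        async with ClientSession(read, write) as session:
            await session.initialize()
            tools = await session.list_tools()  # discover what the server offers
            print([tool.name for tool in tools.tools])
            result = await session.call_tool(   # invoke a tool by name
                "search_issues", arguments={"query": "open bugs"}
            )
            print(result)

asyncio.run(main())
```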

Anthropic wrote, MCP is an open standard that enables developers to build secure two-way connections between their data sources and AI-powered tools. The architecture is straightforward. Developers can either expose their data through MCP servers or build AI applications called MCP clients that connect to these servers. So trying to simplify this even more, programmers can spin up MCP servers for specific tools, knowing they won't have to duplicate that work when the next new agent comes along.
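For a sense of what exposing data and tools through an MCP server looks like in practice, here is a minimal sketch using the FastMCP helper from the official MCP Python SDK; the server name, the tool, and the resource are illustrative rather than a real integration.

```python
# A minimal MCP server sketch using the official Python SDK's FastMCP helper.
# The server name, tool, and resource below are illustrative only.
from mcp.server.fastmcp import FastMCP

mcp = FastMCP("demo-tools")

@mcp.tool()
def add(a: int, b: int) -> int:
    """A tool the agent can call: add two numbers."""
    return a + b

@mcp.resource("note://{name}")
def read_note(name: str) -> str:
    """A resource the agent can read as context."""
    return f"Contents of note '{name}'"

if __name__ == "__main__":
    # Serve over stdio so any MCP client (Claude Desktop, Cursor, etc.) can connect.
    mcp.run()
```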

You can kind of think of MCP like a universal adapter for agentic API access that's open for everyone to build on. An MCP server effectively converts an agent's request for data into whatever format the API is looking for. Then, once the data is delivered, it converts it into a standardized format that's readable to the agent.
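As a sketch of that adapter role, the tool below translates an agent's request into a call against a hypothetical REST API and hands the response back in a plain, model-readable shape; the endpoint, parameters, and field names are made up for illustration.

```python
# Sketch of the "universal adapter" idea: one MCP tool wrapping a third-party API.
# The endpoint URL, query parameters, and response fields are hypothetical.
import httpx
from mcp.server.fastmcp import FastMCP

mcp = FastMCP("issue-tracker")

@mcp.tool()
def search_issues(query: str, limit: int = 10) -> list[dict]:
    """Turn the agent's request into the vendor's API call, then return
    the response in a simple shape the agent can read."""
    resp = httpx.get(
        "https://api.example-tracker.com/v1/issues",  # hypothetical endpoint
        params={"q": query, "per_page": limit},
        timeout=10.0,
    )
    resp.raise_for_status()
    return [
        {"id": item["id"], "title": item["title"], "status": item["status"]}
        for item in resp.json().get("items", [])
    ]

if __name__ == "__main__":
    mcp.run()
```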

And this sort of tooling is available for anything an AI agent might want to have access to. That could include API calls to get software to do something, querying a database for certain data, or reading or writing to external memory to assist the LLM. Remember that without having access to tools, LLMs can't really do anything meaningful in the world other than predict the next token.

To move from that to agentic capabilities, they need access to tools and resources. Now, MCP had started to pick up steam in the first few months after it was released, and adoption kept increasing through the end of February and into March. Two weeks ago, however, on March 11th, OpenAI released their big agentic tooling update, and many thought we were seeing the beginning of a new agent tooling war. OpenAI's release included a software development kit called the Agents SDK,

as well as a standardized tool use access point called the Responses API. These new features allowed agent builders to tap into OpenAI's web search and computer use capabilities. Many people thought this was OpenAI trying to build in their own direction. Basically, instead of just adding MCP support to their models, they had built a set of proprietary tool integrations. To give one example, one of the most popular MCP servers is Brave Search, which allows agents to surf the web.

Instead of just letting developers access that through MCP, OpenAI appeared to be locking them into using OpenAI's proprietary web search feature instead. And so many thought that we were headed into the beginning of a new standards war: USB versus Apple's Lightning connector, DVD versus Blu-ray. But of course, in this case, there's no hardware involved. The standards war is entirely about software integrations, meaning that it's much lower cost to support both. And that's exactly what OpenAI has decided to do.

Yesterday, Sam Altman tweeted: "People love MCP, and we are excited to add support across our products. Available today in the Agents SDK, with support for the ChatGPT desktop app and the Responses API coming soon." Now, it's important to note that this was not predetermined. Like I said, MCP was pretty well received right from the beginning, but there still was a lot of debate. The LangChain blog published a piece called "MCP: Flash in the Pan or Future Standard?"

At the end of February, however, a lengthy tutorial session from the AI Engineer Summit, the one that I emceed in New York, featuring the Anthropic staff member who designed the protocol, started to go viral, or at least as viral as a dense 100-minute video aimed at AI engineers can go. It explained exactly how MCP works, how to integrate it, and how to get the best results for agent building.

Following this, we started to see an uptick in MCP servers getting built. And like any network, the more of these servers that came online supporting a broader range of tools, the more it made sense for developers to just stay inside that ecosystem. MCP, in other words, started to prove the old truism that which standard gets adopted ultimately isn't necessarily about which standard is best, but about the network effect of how widely adopted it is. There are now thousands of MCP servers that allow easy access to basically every major app or tool.

And so even before Altman and OpenAI decided to make this announcement, the team at Latent Space, which overlaps via Swyx with the team that runs the AI Engineer Summit, wrote a blog post called Why MCP Won. They affirm this same idea that we were just talking about, writing,

And it's fair to say that MCP has captured enough critical mass and momentum right now that it is already the presumptive winner of the 2023-25 agent open standard wars.

The article dug deep into the technical reasons why MCP was an improvement over the way OpenAI was doing things, with one very simple reason being that it just made more sense with the way that AI works. OpenAI was using distinct API calls for different tasks. For example, the call and response to a tool that an agent wants to use would look very different from querying and receiving data from a database, whereas MCP abstracted all of that away, adding a universal interpretation layer in the middle so everything is interoperable on the same standard. But that wasn't the only reason they argued MCP had won.

Latent Space noted that MCP had a combination of not only a big backer, but also an open standard. OpenAI's solution was functional but was locked down within the company. To get access, tooling and data companies needed to work with OpenAI on integration. It also meant that additions were relatively slow. With MCP, Anthropic just proposed the open standard and let everyone add themselves.

Touching on why this didn't come from one of the smaller companies that were first to market, like Composio, Latent Space wrote: "...this one is perhaps the most depressing for idealists who want the best idea to win. A standard from a big lab is very simply more likely to succeed than a standard from anyone else. There's nothing fair about this. If the financial future of your startup incentivizes you to lock me into your standard, I'm not adopting it. If the standard backer seems too big to really care about locking you into the standard, then I will adopt it."

Another reason they argued that MCP was a breakout is the degree to which Anthropic has become the de facto model and the de facto brand for AI engineers. We've talked extensively about how much the developer use case has really become the core of Anthropic's success, and that certainly feels like it's at play here as well.

If you're interested, the Latent Space post goes into a number of other reasons why MCP won, all of which are really interesting. It is behind a paywall, but I highly recommend the 80 bucks or whatever it is for the year for Latent Space. Even as someone who is not a developer, and while it's not technically written for me, I find it's easily the best place for me to pop into the world of AI engineering and try to wrap my head around what's going on on that side of the market.

In any case, at the end of the day, the technical and sociological reasons that MCP won aren't all that important, at least when it comes to this audience of listeners and watchers right now. The main point is that there is now a de facto standard for agentic tooling access, and adoption can continue ramping up.

One of the big lessons from earlier format wars in tech is that ultimately it doesn't really matter what the standard is. The important thing is that there's a consensus. We now have both OpenAI and Anthropic sending a big signal to every software company in the world. It's time to build an MCP server and let the agents in.

You might not notice it happening if you're not in the weeds coding and deploying infrastructure for agents, but looking back in three months' time, we are likely to see an astonishing proliferation of new agentic features that become available thanks to added MCP support. With everyone now building on the same standard, everyone can optimize in a single direction. Pretty soon, agent builders won't need to build tool integrations at all. They'll just plug into the MCP servers they need and move on.

This means that those developers will be starting on first base and can just focus on making their agents work. And that is exactly why I'm so excited about this and why I think it's relevant for you, even if you are not yourself a developer.

TL;DR: OpenAI has decided that anointing a standard, and how much faster that allows everyone to move toward really performant agents, is worth more than owning that standard. In just a couple of months' time, I guarantee you are going to see the benefits of this by being able to deploy agents that would not have been possible otherwise. Anyways, friends, that is going to do it for today. Appreciate you listening or watching as always. And until next time, peace.