Today on the AI Daily Brief, Anthropic and NVIDIA trade barbs when it comes to the question of export controls. Before that, in the headlines, Anthropic makes MCP accessible to everyone. The AI Daily Brief is a daily podcast and video about the most important news and discussions in AI. Thanks to today's sponsors, KPMG, Super Intelligent, and Blitzy. To get an ad-free version, go to patreon.com slash ai daily brief.
We kick off today in the world of Anthropic and Agents, where the company's new integrations feature brings the power of the model context protocol to regular users. So what's going on here and why does it matter?
Well, Anthropic has launched a new way to connect apps and tools to their Claude chatbot. The feature, called Integrations, allows users to tap into data from 10 popular services, including Jira, Zapier, Cloudflare, Sentry, and Plaid. Stripe and GitHub are coming soon, and it looks like the plan is to add as many integrations as possible. Developers can also create their own integrations, which Anthropic claims can be achieved in as little as 30 minutes.
Now, this new feature did cause some amount of confusion, with Vibecode app co-founder Riley Brown asking, Hi, can someone explain to me the difference between what was announced here and MCP? Developer Dink Dobos writes, These are new remote MCPs, aka they don't run on your machine and have auth. Also, they are first-class integrations and are easily connectable in the app.
Chongyu writes, it's like a more user-friendly wrapper over MCP. Now this is worth going into a little bit more detail around. As a quick refresher, MCP or the Model Context Protocol is Anthropic's open standard for communications between AI models and tooling. It allows agents to easily access data from other services without developers needing to build a custom integration. Communications are routed through MCP servers which then connect to the individual tools being accessed.
What Anthropic has done with the integrations feature is remove the burden of configuring and maintaining MCP servers. Instead of running an MCP server locally, developers can now use the integrations feature to hit a selection of the most commonly used tools. Essentially, it's a cloud-based MCP server operated by Anthropic, with much of the technical complexity abstracted away so that you don't have to think about it. Now, if this leaves you feeling like, "Hey, isn't this just a minor infrastructure change?" consider that adding this abstraction layer dramatically expands access to powerful MCP integrations.
Anthropic's simple demo gives a very clear example. It shows a user asking Claude, what's on my calendar? The app then recognizes that Jira is the right tool for the job, prompts the user to connect, does a little agentic magic, and spits out their calendar.
This functionality worked before, but it used to require configuring a local MCP server first. Now it just works out of the box. This is obviously a huge boon for non-technical users, and frankly anyone that doesn't want their productivity tool to require a bunch of setup on the front end. It essentially makes MCP plug and play and something you won't need to think about unless you're doing some very custom or obscure tooling.
Because this is a remote-hosted service, it also means that MCP integrations now work with the browser version of Claude, which wasn't previously possible. Anthropic can also start to formalize security standards to make MCP far more usable in an enterprise setting. Integrations can be verified, and thanks to services from Cloudflare, can have built-in OAuth authentication, transport handling, and integrated deployment.
Now, there was a ton of commentary about how this is no big deal and just a wrapper over what MCP can already do. But one of the biggest barriers to getting people to engage with agents is the complexity of setup. Anything that makes agents more user-friendly is a big improvement. And the trajectory from this is pretty clear. They're trying to make agents no more difficult to use than chatbots. It also could unlock a lot of big things down the road. Ward of UXAI Agency wrote, this might be the start of an app store for agents.
The big takeaway is that Anthropic is, one by one, removing all of the blockers to agentic AI and quickly approaching the point where things just work. Technologist Robert Scoble wrote, "Really interesting stuff." Honestly, it could be a whole episode. But for now, let's jump over to Google, where the company is expanding access to AI Mode as another step towards disrupting its own search model.
AI Mode is an experimental feature that was launched back in March. It functions similarly to Perplexity or ChatGPT Search, allowing users to conduct an AI search, ask follow-up questions, or use complex multi-part queries.
Until now, the feature was locked away in Google Labs, the company's experimental platform, but Google will now begin testing AI Mode as a regular feature of Google Search. The rollout will begin slowly, with the feature only visible to a small percentage of users in the US. But if the deployment follows the same path as AI Overviews, the feature will be rolled out much more widely as soon as possible.
The feature itself has nothing particularly novel to it, but native integration with Google's backend could mean a more powerful and natural-feeling experience. Presenting some of the new features, Google highlighted that AI Mode can now present elements about products and places, which users can click through for more information. These integrations have been added to competitors' platforms already, but Google's decades of design experience could give them an edge as the AI search wars escalate.
Lastly, never short on ambition, Meta's Mark Zuckerberg has a decade-long plan to make trillions from AI. Court documents unsealed in Meta's AI copyright lawsuit show the company is predicting between $460 billion and $1.4 trillion in AI revenue by 2035. In the shorter term, Meta predicted between $2 billion and $3 billion this year. It wasn't clear exactly what Meta's definition of a generative AI product is or how they expect these products to generate revenue. In fact, this has been a big knock on the company during earnings reports over the past year.
Meta's advertising business currently represents 98% of their roughly $120 billion in annual revenue. In previous quarters, Zuckerberg attempted to justify the company's massive AI spend with the nebulous concept that AI tooling drives higher margins than spend on advertising. Whatever that side of the balance sheet is, it's definitely the case that the company is spending a pretty penny keeping their Gen AI division running. They showed a $900 million budget for 2024 and projections that it would hit $1 billion this year.
That, of course, is a drop in the bucket compared to the $64 to $72 billion the company plans to spend on infrastructure this year. Anyways, as this case goes on, we'll probably get even more information, but it is an interesting peek behind the curtain for how one of the biggest companies in the world is thinking about Gen AI. For now, that is going to do it for today's AI Daily Brief Headlines Edition. Next up, the main episode. Today's episode is brought to you by Blitzy, the enterprise autonomous software development platform with infinite code context.
which, if you don't know exactly what that means yet, do not worry, we're going to explain, and it's awesome. So Blitzy is used alongside your favorite coding copilot as your batch software development platform for the enterprise, and it's meant for those who are seeking dramatic development acceleration on large-scale codebases. Traditional copilots help developers with line-by-line completions and snippets,
But Blitzy works ahead of the IDE, first documenting your entire codebase, then deploying more than 3,000 coordinated AI agents working in parallel to batch build millions of lines of high-quality code for large-scale software projects. So then whether it's codebase refactors, modernizations, or bulk development of your product roadmap, the whole idea of Blitzy is to provide enterprises with dramatic velocity improvements.
To put it in simpler terms, for every line of code eventually provided to the human engineering team, Blitzy will have written it hundreds of times, validating the output with different agents to get the highest quality code to the enterprise in batch. Projects then that would normally require dozens of developers working for months can now be completed with a fraction of the team in weeks, empowering organizations to dramatically shorten development cycles and bring products to market faster than ever.
If your enterprise is looking to accelerate software development, whether it's large-scale modernization, refactoring, or just increasing the rate of your SDLC, contact Blitzy at blitzy.com, that's B-L-I-T-Z-Y dot com, to book a custom demo, or just press get started and start using the product right away.
Today's episode is brought to you by KPMG. In today's fiercely competitive market, unlocking AI's potential could help give you a competitive edge, foster growth, and drive new value. But here's the key. You don't need an AI strategy. You need to embed AI into your overall business strategy to truly power it up.
KPMG can show you how to integrate AI and AI agents into your business strategy in a way that truly works and is built on trusted AI principles and platforms. Check out real stories from KPMG to hear how AI is driving success with its clients at www.kpmg.us slash AI. Again, that's www.kpmg.us slash AI.
Today's episode is brought to you by Superintelligent, and I am very excited today to tell you about our consultant partner program. The new Superintelligent is a platform that helps enterprises figure out which agents to adopt, and then with our marketplace, go and find the partners that can help them actually build, buy, customize, and deploy those agents.
At the core of that experience is what we call our agent readiness audits. We deploy a set of voice agents which can interview people across your team to uncover where agents are going to be most effective in driving real business value. From there, we make a set of recommendations which can turn into RFPs on the marketplace or other sorts of change management activities that help get you ready for the new agent-powered economy.
We are finding a ton of success right now with consultants bringing the agent readiness audits to their clients as a way to help them move down the funnel towards agent deployments, with the consultant playing the role of helping their client home in on the right opportunities based on what we've recommended and helping manage the partner selection process. Basically, the audits are dramatically reducing the time to discovery for our consulting partners, and that's something we're really excited to see. If you run a firm and have clients who might be a good fit for the agent readiness audit,
reach out to agent at bsuper.ai with consultant in the title, and we'll get right back to you with more on the consultant partner program. Again, that's agent at bsuper.ai, and put the word consultant in the subject line.
Welcome back to the AI Daily Brief. There are subplots upon subplots in the AI space. And one of the vectors of battle, of course, is around AI policy. Now, there are two big buckets of policy that all of the big companies in the space are thinking about. One of those is, of course, domestic policies and regulations and guardrails,
the type of stuff that was dealt with in the Biden executive order, but that kind of has a vacuum right now in American policy. And then, of course, there's the more active discussion, at least at the moment, which has to do with export controls. Interestingly enough, Anthropic and NVIDIA have sort of gone to war over changes in those controls. Earlier in the week, Reuters reported that the Trump administration was considering making changes to the current state of AI chip export rules.
The state of play at the moment is that we are less than two weeks out from the implementation of the AI diffusion rule, an enhanced set of controls set in motion in the final days of the Biden presidency. The new system divided the world into three tiers with different restrictions on chip exports to each tier. Close allies in tier one, including the UK, France, and Canada, have no restrictions and can order as many chips as they like. Tier two countries, which controversially included India and Israel along with most of the world,
are limited to 50,000 advanced AI chip imports over the next three years across all companies and infrastructure. Country-specific authorizations can raise this cap, but only enough to support a handful of AI superclusters for training. Individual orders of less than 1,700 chips are allowed without authorization and without counting toward the overall country cap.
Presumably, the logic is that this amount of chips can enable research but not commercial AI deployment. Tier 3, meanwhile, is reserved for adversaries of the US, including China, and prohibits all advanced AI chip exports. Now, as an aside here, given how much scrutiny there's been around the Trump administration's application of tariffs to both friend and foe alike, it's interesting to note that this policy was crafted under Biden.
and already was showing the US being more willing to prohibit access even to its allies than people might've expected. Now, interestingly, the diffusion rule also considers AI models themselves as restricted exports. Models deemed advanced enough to cause concern were prohibited from being developed in tier two and three countries
and the weights of open source models in this class were also prohibited from export and deployment. When the rule was introduced, it had a strange tension in its intent. The primary goal appeared to be to stop China from developing advanced AI, and was especially focused on shutting down the passage of advanced chips through third-party countries.
There was also a stated goal of ensuring that US-developed AI was diffused throughout the world, rather than allowing Chinese AI to become the standard for the globe. But those two things are running at really cross-purposes here. To put a fine point on it, by viewing the rest of the world's access to chips as a proxy and as a threat to China getting those chips, it inherently runs up against the goal to diffuse AI throughout the rest of the world.
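To make the tiered structure concrete, the rule as described can be caricatured as a small decision function. This is a toy sketch only: the country groupings are illustrative samples, not the rule's full lists, and real export licensing involves case-by-case review far beyond anything like this.

```python
# Toy model of the AI diffusion rule's tier logic as described above.
# Country sets are illustrative samples, not the rule's actual groupings.

TIER_1 = {"UK", "France", "Canada"}   # close allies: no restrictions
TIER_3 = {"China"}                    # adversaries: advanced chips prohibited
NO_AUTH_ORDER_LIMIT = 1_700           # per-order exemption threshold
TIER_2_COUNTRY_CAP = 50_000           # advanced chips over three years

def order_status(country: str, chips: int, used_cap: int = 0) -> str:
    """Classify a chip order under the tiered rule (heavily simplified)."""
    if country in TIER_1:
        return "allowed"                          # no cap applies
    if country in TIER_3:
        return "prohibited"                       # all advanced chips banned
    # Tier 2: small orders are exempt and don't count toward the cap.
    if chips < NO_AUTH_ORDER_LIMIT:
        return "allowed without authorization"
    if used_cap + chips <= TIER_2_COUNTRY_CAP:
        return "requires authorization"
    return "exceeds country cap"

print(order_status("France", 100_000))                 # allowed
print(order_status("India", 1_500))                    # small-order exemption
print(order_status("India", 30_000, used_cap=40_000))  # over the 50,000 cap
print(order_status("China", 10))                       # prohibited
```

Even this caricature makes the tension visible: the same small-order exemption that lets research proceed in Tier 2 countries is exactly the surface Anthropic would later argue smugglers could exploit.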
Now back to the Trump administration: Reuters wrote on Tuesday that they are considering doing away with the splitting of the world into tiers. The reporting noted a schism in the White House, with some recognizing that granting exemptions to the rule was a powerful bargaining chip in trade talks. Three anonymous sources were cited to ground the story. Wilbur Ross, who served as Commerce Secretary in the first Trump administration, offered commentary, stating, "There are some voices pushing for elimination of the tiers. I think it's still a work in progress."
He said that government-to-government agreements were one alternative. The report also suggested that a massive tightening was on the table, presumably to create even more leverage in trade talks out of the diffusion rule. Reuters suggested that just 500 chips could be the no-authorization order limit for Tier 2 countries, a level so low it might as well be a total ban. Now, heading into this week, the only real discussion had been that the administration wants to make the rule, quote, "stronger but simpler."
Indeed, since its introduction, there has been a lot of discussion around the impossibility of enforcing a rule with so many moving pieces. Among the complaints was the point that the Commerce Department and customs officials are not really set up to provide order-by-order approvals across an entire global industry. Not to mention, Oracle and NVIDIA, among others, provided strong pushback when the rule was introduced. NVIDIA called the rule misguided and said it would put, quote, global progress in jeopardy. They added, "America wins through innovation, competition, and by sharing our technologies with the world,"
not by retreating behind a wall of government overreach. So that was the state of play at the beginning of this week. But then on Wednesday, Anthropic published their extremely hawkish proposal to strengthen the rule. The company said that they strongly support the rule as it was written and would like to see a few tweaks to close gaps. It argued that, quote, maintaining America's compute advantage through export controls is essential for national security and economic prosperity.
Their main suggestion was to lower the number of chips that Tier 2 countries can obtain in small orders without authorization. They wrote that the current 1,700-chip limit "creates a potential loophole for smuggling, as people can make multiple purchases just under this limit to avoid scrutiny. We recommend lowering this threshold so that more transactions would require review, making it harder for smugglers to exploit this gap." The other key suggestion was to expand the ability for Tier 2 countries to obtain chips at the scale of large data centers,
but only through government-to-government agreements, essentially moving the supply of AI chips into the realm of national security and foreign relations rather than trade.
Their final suggestion was that funding for export controls needed to expand in order to make them effective. This seemed like a tacit acknowledgement that the controls are very resource-intensive and would require major expansion of the relevant government departments to enforce the rule. Overall, Anthropic urged no pause on implementation, writing that Chinese firms have engaged in aggressive stockpiling ahead of the May 15, 2025 implementation date.
Any pause would invite further stockpiling and weaken the effectiveness of the rule at this critical moment. They concluded by stating, "The strategic window for strengthening American export controls is now. By strengthening the diffusion framework, America can ensure transformative AI technologies are developed domestically in alignment with American values and interests."
Now, whether you agree with this or not, it does not come as a surprise from Anthropic. Indeed, the company has grown increasingly concerned about chip controls this year and increasingly vocal about it. In January, CEO Dario Amodei wrote an op-ed in the Wall Street Journal arguing for tougher controls. We read this on Long Read Sunday back then. He stated,
Mr. Trump has likened AI to a superpower and has underscored the importance of the U.S. staying right at the forefront of its race against China. His administration's actions will help determine whether democracies or autocracies lead the next technological era. Our shared security, prosperity, and freedoms hang in the balance.
So Anthropic's post was one big event in this conversation this week, but the plot thickened on Wednesday with NVIDIA CEO Jensen Huang arriving in Washington to meet with lawmakers and the president. Much of the press coverage focused on a $500 billion pledge to onshore manufacturing and the need to create massive data centers, or AI factories, across the nation. But when it came to the diffusion rule, NVIDIA didn't mince words. In a statement to CNBC, a company spokesperson said,
American firms should focus on innovation and rise to the challenge, rather than tell tall tales that large, heavy, and sensitive electronics are somehow smuggled in baby bumps or alongside live lobsters. This was, of course, a direct reference to the two examples that Anthropic had used in their blog post to argue that chip smuggling is a major threat. These two incidents occurred in 2022 and 2023, which was, of course, a much earlier time in AI.
NVIDIA's point is that the hundreds of thousands of chips required to power a cutting-edge AI training cluster are not coming into China on a fishing boat. The company's statement continued, China, with half of the world's AI researchers, has highly capable AI experts at every layer of the AI stack. America cannot manipulate regulators to capture victory in AI.
Essentially, this is a call for the American AI industry to get serious and recognize that China has caught up largely due to their own hard work rather than access to chips. NVIDIA has consistently argued that their chips, while advanced, are not unique and control of them is no basis for a long-term geopolitical strategy.
Jensen Huang reinforced this point in comments in the halls of Congress on Wednesday. We played the clip on yesterday's show, but it bears repeating. Jensen told the assembled reporters, "China's not behind. China's right behind us. We're very, very close. This is a country with great technical capabilities. 50% of the world's AI researchers are Chinese. This is an industry that we will have to compete for." The comments came out of reporting from earlier in the week that Huawei had developed a chip that could compete with NVIDIA's H100.
Now, it's not entirely clear that performance will be on par and the chip is still in early testing. But Huang's comments reinforced that even if Chinese hardware hasn't caught up yet, it's only a matter of time. He called Huawei one of the most formidable technology companies in the world, stating that they have, quote, all of the essential capabilities to advance AI. The NVIDIA CEO reportedly reinforced these views in a closed-door meeting with administration officials on Thursday.
A senior staff member said, if DeepSeek R1 had been trained on Huawei chips, or a future open source Chinese model had been trained to be highly optimized for Huawei chips, that would risk creating a global market demand for Huawei chips. The full hallway interview also included some comments directly on the diffusion rule, with Jensen saying, "I'm not sure what the new diffusion rule is going to be, but whatever it happens to be, it really has to recognize that the world has fundamentally changed since the previous diffusion rule was released. We need to accelerate the diffusion of American AI technology around the world."
So what to make of all this?
On the one hand, I think it is tempting to view NVIDIA's take on this as self-serving. China, of course, represents a huge market for them, and they want to be able to access it. In fact, they have a fiduciary responsibility to their shareholders to try to access it. At the same time, there is a more fundamental disagreement here. The people who are arguing that the export controls aren't working or won't work, even those who don't have a big financial interest in China, are basically saying that China has already caught up. In other words, that export controls have failed.
The stated intent was to slow China down rather than stop them. And in the four months since the diffusion rule was first introduced, we've seen the release of DeepSeek, multiple extremely powerful open source models from Alibaba's Qwen team, and the first rumors of advanced chip development. Whether or not you think these controls should be in place,
I think Jensen's point, that America will win or lose this based on its own innovation rather than regulatory control, is almost certainly true. And I do think there is a core strategic question here, the one inherent in the tension in the stated goals of the diffusion rule. Is the rule actually about the negative restriction of our technology to China, or is it about the positive distribution of our technology to the rest of the world? Right now, we don't have a clear answer to that, and it shows in the fundamental incoherence of the policy.
Anyways, interesting and important things happening in the world of AI policy. For now though, that is going to do it for today's AI Daily Brief. Appreciate you listening as always. And until next time, peace.