
Does AI Search Threaten the Business Model of the Internet?

2025/5/9

The AI Daily Brief (Formerly The AI Breakdown): Artificial Intelligence News and Analysis

AI Deep Dive
People
Host
Podcast host and content creator covering AI news and analysis.
Topics
Host: AI is changing the way we interact with the internet, especially our search habits. For decades, the starting point of the internet was searching for information on Google, but AI search engines now provide instant summaries, which changes the economics of search and could threaten Google's advertising model. Apple is actively looking at revamping Safari around AI-powered search engines, a sign that AI search has the potential to replace traditional Google search. Apple believes the AI search market is more open than ever and that the shift is inevitable. AI will fundamentally change the business model of the web, because search drives everything that happens online. Today, 75% of queries entered into Google are answered on Google's own page, which reduces visits to other sites and cuts into content creators' revenue. Apple's comments on Google search confirm investors' biggest fear: the AI shift is here, and Google hasn't moved fast enough. The market's reaction to AI may be driven by headlines and sentiment rather than fundamentals; Apple's actual AI moves have mostly been rebranding and wrapping third-party models. Apple's move may also be aimed at preserving its search deal with Google, since markets often begin shifting before regulators act.
Eddy Cue: I believe AI search providers like OpenAI, Perplexity, and Anthropic will eventually replace traditional Google search. We intend to add all of these options to Safari in the future, but they probably won't be the default.
Matthew Prince: AI will fundamentally change the business model of the web. For the past 15 years, the web's business model has been search. Search drives everything that happens online. Today, 75% of queries entered into Google are answered on Google's own page, which reduces visits to other sites and hurts content creators' revenue. The web's business model has to change in order to survive.
Deirdre Bosa: Apple's comments on Google search confirm investors' biggest fear: the AI shift is here, and Google hasn't moved fast enough.
Talon Sharpedge: The market isn't reacting to fundamentals; it's reacting to headlines and sentiment. Apple's actual AI moves have mostly been rebranding and wrapping third-party models.
Mark Gurman: This may be Apple trying to preserve its search deal with Google by convincing the court that Google is already outdated.
David Barnard: By the time regulators take action, the market has often already started to shift.
Austin Allred: OpenAI's 20% revenue-share agreement with Microsoft running until 2030 is crazy.
Megan Gray: The idea that OpenAI has leverage, and that Microsoft is willing to take on significantly more antitrust risk, is a pipe dream.
Jensen Huang: China's AI development is not behind; they have strong technical capabilities and a huge number of AI researchers. The U.S. needs to rethink how it competes in AI and acknowledge the breakthroughs China has made.
John Hornby: AI cannot come up with truly great ideas; brand building requires human intelligence.
Mark Zuckerberg: I plan to build a fully automated AI ad platform that any business can use to achieve its advertising goals.


Shownotes Transcript


Welcome back to the AI Daily Brief Headlines Edition, all the daily AI news you need in around five minutes.

There's no doubt that AI is changing the way that we interact with the internet. And one of the clearest examples of this is our habits around search.

For a couple of decades, the starting point experience for the internet was to go to Google, ask a question, search for something you were looking for, and get a whole bunch of links that you had to progressively go through to find the right answers. Now, that supported the business model of the internet because that process drove clicks to all these different sites. Search economics were at the core of how the internet functioned. And all of that is, of course, changing.

A great example of this came in recent testimony in the Google antitrust case, where it was revealed that Apple is, quote, actively looking at revamping Safari to focus on AI-powered search engines. Senior VP of Services Eddy Cue said that he believes that AI search providers like OpenAI, Perplexity, and Anthropic will eventually replace traditional Google search. He intends to bring all of those options to Safari in the future, stating, "...we will add them to the list, they probably won't be the default."

Cue mentioned that Apple has already had some discussions with Perplexity about adding them to the platform. The discussion comes as part of the "remedies" portion of the Google trial, which is nominally about how to force the company to relinquish their monopoly on search. However, testimony has progressively turned to new monopoly issues regarding AI. Cue mentioned that the search segment is more wide open than it's ever been, stating: "Prior to AI, my feeling around this was none of the others were valid choices.

I think today there is a much greater potential because there are new entrants attacking the problem in a different way." He feels that the switch to AI search is inevitable, adding, "There's enough money now, enough large players, that I don't see how it doesn't happen." Now, Cue also did mention that AI search isn't quite good enough in its current form, but he doesn't expect that to last long. Indeed, he says that they expect to have AI search options in Safari by the end of the year. It was also noted that search volume on Safari declined for the first time ever last month,

a shift that Cue attributes to the increased use of AI. And this shift also highlights the changing monetization of the internet. Cue, for his part, said he's losing sleep over the possibility of losing their revenue sharing agreement with Google on search. And Cloudflare CEO Matthew Prince recently spoke at the Council on Foreign Relations to talk about exactly this: "AI is going to fundamentally change the business model of the web. The business model of the web for the last 15 years has been search. One way or another, search drives everything that happens online."

Prince went on to explain that in the early days, for every two pages of data that Google scraped, websites would receive on average one visitor. That rate is now down to one visitor per six pages of data. Prince commented that the thing that's changed is that, quote, 75% of the queries that get put into Google get answered without you leaving Google. They get answered on that page.

10 years ago, you might get sent to a Wikipedia page. Today, the answer comes right up on the page. The consequence of that is that content creators, if they were deriving value through subscriptions or putting up ads, or even just the ego that someone is reading your stuff, that's gone. Prince concluded, "...the business model of the web can't survive unless there's some change."
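To make Prince's ratios concrete, here is a minimal back-of-the-envelope sketch in Python. The two-pages and six-pages-per-visitor figures are the ones quoted above; the publisher's crawl volume is a hypothetical number used purely for illustration.

```python
# Illustrative arithmetic only. The 2-pages and 6-pages-per-visitor ratios are
# the figures Prince cites above; the crawl volume below is hypothetical.

pages_per_visitor_then = 2   # early days: roughly 1 visitor per 2 pages crawled
pages_per_visitor_now = 6    # today: roughly 1 visitor per 6 pages crawled

def implied_visitors(pages_crawled: float, pages_per_visitor: float) -> float:
    """Referral visitors implied by a given volume of crawled pages."""
    return pages_crawled / pages_per_visitor

crawled = 1_200  # hypothetical publisher: pages crawled per day
then = implied_visitors(crawled, pages_per_visitor_then)  # 600 visitors/day
now = implied_visitors(crawled, pages_per_visitor_now)    # 200 visitors/day

print(f"Implied visitors then: {then:.0f}, now: {now:.0f}")
print(f"Share of referral traffic retained: {now / then:.0%}")  # ~33%
```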

Now, the market took all of this very harshly. Google shares were way down following this testimony. And CNBC's Deirdre Bosa said that even though Google is obviously involved in disrupting itself with this, Apple's comments on Google search confirm investors' worst fear: the AI shift is here and Google didn't move fast enough. The innovator's dilemma is real. Google shares down greater than 8%.

Others are more skeptical. Talon Sharpedge writes, "...the market isn't reacting to fundamentals, it's reacting to headlines and vibes. One whiff of Apple might do AI search and Alphabet drops like it got margin called by Skynet. Meanwhile, Apple hasn't shipped a decent piece of software innovation since Steve Jobs' hoodie era. Siri still thinks 'call mom' means opening your calendar. Their actual AI moves are mostly rebranding and wrapping third-party models, likely Google or OpenAI. And yet Wall Street thinks this is the death knell for Google search?"

Bloomberg's Mark Gurman had a different take that I think is pretty interesting. He basically said that this is Apple trying to save its search deal by selling the court on the idea that Google is outdated and the iPhone is dying. As he puts it, in other words, the deal doesn't matter anyways, so no one needs to break it up. I think there definitely could be some truth to this. And David Barnard continued, "That's the thing about regulation. By the time regulators take action, the market has often already started shifting by natural forces."

Next up, a follow-on to a story from earlier this week. More details are emerging as OpenAI tries to restructure their deal with Microsoft. As you'll well know, OpenAI recently announced that they were changing their plans around a conversion to a for-profit company, but one of the sticking points had been Microsoft, who are owed a revenue share based on their early-stage investment.

Some suggested the deal could be worth over $130 billion. Bloomberg reported on Monday that Microsoft was the key holdout among investors as OpenAI attempted to complete their new restructuring plan. New reporting from The Information gave some insights into the state of the negotiations. Citing financial documents, they report that OpenAI has told investors that they expect the revenue share to be cut in half by the end of the decade. Reportedly, the existing terms were that OpenAI would share 20% of its top-line revenue with Microsoft,

alongside a 49% share of profits capped at $92 billion. The company said that they expect the revenue share to drop to 10% by 2030. This might not end up being that much of a discount. After negotiations around the AGI clause earlier this year, Microsoft reported that their contract will run until 2030 rather than until OpenAI achieves AGI. The quid pro quo is that Microsoft wants to extend their access to OpenAI's technology past 2030, according to documents cited by The Information.

Now, Microsoft hasn't signed off on anything at this stage, so it could just represent wishful thinking on OpenAI's part. The report states that some OpenAI leaders want Microsoft to exempt future profits from the existing revenue sharing agreement. This is also one of the first times we've seen the figures laid out clearly. If OpenAI hits their 2030 revenue projections, Microsoft would cash in $97 billion just on the revenue share. Austin Allred wrote, "20% of top-line revenue until 2030 is crazy, something you'd see on Shark Tank."
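For a rough sense of what those percentages mean in dollars, here is a toy calculation. The 20% current rate, the reported step-down to roughly 10% by 2030, and the roughly $97 billion cumulative figure come from the reporting above; the yearly revenue path and the intermediate rates are hypothetical placeholders, not OpenAI's actual projections.

```python
# Toy arithmetic only. The 20% rate and the reported drop to 10% by 2030 come
# from the story above; the revenue path and intermediate rates are hypothetical
# (chosen so the flat-20% case lands near the reported ~$97B cumulative figure).

def cumulative_share(revenue_by_year: dict, rate_by_year: dict) -> float:
    """Microsoft's cumulative take: sum over years of revenue * applicable rate."""
    return sum(rev * rate_by_year[year] for year, rev in revenue_by_year.items())

revenue = {2026: 25, 2027: 50, 2028: 90, 2029: 140, 2030: 180}  # $B, hypothetical

current_terms = {year: 0.20 for year in revenue}                                # flat 20% through 2030
proposed_terms = {2026: 0.20, 2027: 0.18, 2028: 0.15, 2029: 0.12, 2030: 0.10}   # assumed glide path

print(f"Current terms:  ${cumulative_share(revenue, current_terms):.1f}B to Microsoft")   # ~$97B
print(f"Proposed terms: ${cumulative_share(revenue, proposed_terms):.1f}B to Microsoft")  # ~$62B
```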

Lawyer and former FTC litigator Megan Gray thinks the entire thing is unrealistic, posting, "This is one of the funniest things I've seen in a while. OpenAI thinks that A, it has leverage, and B, Microsoft is willing to take on significantly more antitrust risk by converting RevShare to an equity deal. Dream on, Sammy."

Now, another Microsoft collaboration story, although not with OpenAI this time: Microsoft has signaled that they'll adopt Google's agentic standard. Last month, Google unveiled Agent2Agent, an interoperability protocol that allows agents to communicate with each other. Microsoft has now announced that they'll support A2A on their platforms Azure AI Foundry and Copilot Studio.

They've also joined the consortium of companies working to develop the protocol. In a blog post, Microsoft reinforced how important these types of standards are, writing, "...by supporting A2A and building on our open orchestration platform, we're laying the foundation for the next generation of software, collaborative, observable, and adaptive by design.

The best agents won't live in one app or cloud. They'll operate in the flow of work, spanning models, domains, and ecosystems. We're building that future with openness at the center, because agents shouldn't be islands and intelligence should work across boundaries, just like the world it serves."

Now, we've previously covered the importance of AI companies aligning on a single or small handful of standards in regards to Anthropic's MCP, which now enjoys support from OpenAI, Google, Microsoft, and many others. Rather than fight out a bitter format war, AI companies seem to have decided to just agree on standards and move on. This means that models should be pretty interchangeable with little vendor lock-in in this part of the infrastructure stack. A2A does not have the same level of buy-in as MCP at this stage, but Microsoft adopting the standard certainly helps.

Now, in launching A2A, Google mentioned that the standard is meant as a complement to MCP rather than a competitor, with each doing slightly different things. MCP is about accessing data from external tooling, while A2A is about allowing agents to share data between themselves.
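To make that division of labor a little more tangible, here is a heavily simplified sketch in Python. These classes are plain stand-ins, not the real MCP or A2A SDKs or wire formats; the names, methods, and data are invented for illustration only.

```python
# Conceptual sketch only: plain-Python stand-ins, not the actual MCP or A2A
# SDKs or wire formats. The point is the split described above: an MCP-style
# call fetches data/tools for one agent, while an A2A-style message hands a
# task from one agent to another agent.

from dataclasses import dataclass

@dataclass
class ToolServer:
    """MCP-style: an external tool an agent queries for data."""
    name: str

    def call_tool(self, tool: str, arguments: dict) -> dict:
        # A real MCP server exposes typed tools over a transport; this fakes a lookup.
        return {"tool": tool, "result": f"records matching {arguments}"}

@dataclass
class RemoteAgent:
    """A2A-style: a peer agent that advertises skills and accepts tasks."""
    name: str
    skills: list

    def send_task(self, task: str, payload: dict) -> dict:
        # A real A2A peer negotiates and streams task state; this returns a canned result.
        return {"agent": self.name, "task": task, "status": "completed", "payload": payload}

crm = ToolServer(name="crm-data")
billing_agent = RemoteAgent(name="billing-agent", skills=["refunds", "invoices"])

# The same "support" agent uses both: a tool call for context, a peer agent for the work.
context = crm.call_tool("lookup_customer", {"email": "user@example.com"})
outcome = billing_agent.send_task("issue_refund", {"customer": context["result"], "amount": 42})

print(context)
print(outcome)
```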

Explaining how A2A will power advanced agent building, Microsoft wrote, "Customers can build complex multi-agent workflows that span internal agents, partner tools, and production infrastructure while maintaining governance and service level agreements. We're aligning with the broader industry push for shared agent protocols." Lastly today, Mark Zuckerberg wants to build a fully automated AI ad platform.

Speaking at Stripe's annual sessions conference on Tuesday, Zuck laid out his plans to create an end-to-end AI tool to upend the advertising industry. He said, "...the basic end goal here is any business can come to us, say what their objective is, tell us how much they're willing to pay to achieve those results, connect their bank account, and then we just deliver as many results as we can. In a way, it's kind of like the ultimate business results machine. I think it'd be one of the most important and valuable AI systems that gets built."

Now, the idea is basically to create thousands of AI-generated variations of ads for each customer, cough, Dr. Strange theory, cough, test them on Meta's social networks, and lean into the ones that get results. Zuckerberg first laid out this idea on a podcast appearance last week, proposing a system that's completely full service, so advertisers wouldn't need to have input on creative or even choose a demographic to target.
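That test-and-lean-in loop is essentially a bandit problem. Here is a minimal, generic sketch of the idea; this is not Meta's actual system, and the variants, click-through rates, and epsilon value are all made up for illustration.

```python
# Generic illustration of "generate many variants, test them, lean into the
# winners" as an epsilon-greedy bandit. Not Meta's actual system; the CTRs and
# traffic are simulated.

import random

def run_campaign(true_ctrs, impressions=10_000, epsilon=0.1):
    """Allocate impressions across ad variants, favoring the ones that convert."""
    shows = [0] * len(true_ctrs)
    clicks = [0] * len(true_ctrs)
    for _ in range(impressions):
        if random.random() < epsilon:
            i = random.randrange(len(true_ctrs))  # explore: try a random variant
        else:
            i = max(range(len(true_ctrs)),        # exploit: pick the best observed CTR
                    key=lambda j: clicks[j] / shows[j] if shows[j] else 0.0)
        shows[i] += 1
        clicks[i] += int(random.random() < true_ctrs[i])  # simulated user response
    return shows

# Pretend the AI generated four creative variants with unknown true CTRs.
allocation = run_campaign([0.010, 0.014, 0.009, 0.022])
print("Impressions per variant:", allocation)  # most traffic flows to the strongest variant
```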

Not everyone's a fan of this. In a recent op-ed, John Hornby, the founder of ad agency The Ant Partnership, argued that Zuckerberg doesn't understand how to build brands. He wrote, "Give AI another decade and it still won't come up with the truly big ideas that brands have been built on over the past 30 years. Generative AI doesn't do leaps of imagination. It's trained on vast datasets, finding the patterns and relationships that allow it to create ostensibly new content, but these are just riffs on what's gone before. Artificial intelligence needs human intelligence to make it sing." Now, on the one hand,

John's not wrong here. But on the other hand, boy, are these two talking about fundamentally different things. Zuckerberg isn't trying to cut humans out of the Super Bowl ad process. He's not arguing that they can build brand better than an agency can. Zuckerberg is focused on making the treadmill of social media advertising, in other words direct response advertising rather than brand advertising, as cheap and efficient as possible.

While TechCrunch calls this a social media nightmare, this is pretty much just an indication of TechCrunch's barely contained loathing of the industry they cover now. And also, it doesn't really matter because this is completely inevitable. Social media advertising is already essentially a number crunching exercise to figure out what works. Adding AI-generated ads and automating the testing seems like the obvious next step.

Look, man, I've made Super Bowl ads. This is the part of the ad process that only the calculator brains who figured out how to game the system truly love. The ad folks who love their creative and go to Cannes and do all that stuff have nothing to worry about here. But for the vast majority of small businesses and people who are just trying to sell stuff on the internet, this is just likely to work a lot better.

Anyways, friends, that obviously could be a whole episode, but that is going to do it for this very extended edition of the headlines. I had pre-recorded the main episode and saw that it was a little shorter than normal today, so I decided it was okay to go a little long. But in any case, with that, let's move over to that main part of the episode. Today's episode is brought to you by Blitzy, the enterprise autonomous software development platform with infinite code context.

Which, if you don't know exactly what that means yet, do not worry, we're going to explain, and it's awesome. So Blitzy is used alongside your favorite coding copilot as your batch software development platform for the enterprise, and it's meant for those who are seeking dramatic development acceleration on large-scale codebases. Traditional copilots help developers with line-by-line completions and snippets.

But Blitzy works ahead of the IDE, first documenting your entire codebase, then deploying more than 3,000 coordinated AI agents working in parallel to batch build millions of lines of high-quality code for large-scale software projects. So then whether it's codebase refactors, modernizations, or bulk development of your product roadmap, the whole idea of Blitzy is to provide enterprises dramatic velocity improvement.

To put it in simpler terms, for every line of code eventually provided to the human engineering team, Blitzy will have written it hundreds of times, validating the output with different agents to get the highest quality code to the enterprise in batch. Projects that would normally require dozens of developers working for months can then be completed with a fraction of the team in weeks, empowering organizations to dramatically shorten development cycles and bring products to market faster than ever.
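As a rough mental model of that write-it-many-times-and-validate claim, here is a toy best-of-N sketch. It is not Blitzy's actual pipeline; the generator and validator functions below are trivial stand-ins for what would be LLM agents and test or review suites.

```python
# Toy illustration of the generic "generate many candidates, validate, keep the
# best" pattern described in the ad read. Not Blitzy's actual pipeline; the
# generator and scorer below are placeholders for LLM agents and test suites.

import random

def generate_candidates(spec: str, n: int = 100) -> list:
    """Stand-in for N agents drafting implementations of the same spec in parallel."""
    return [f"// candidate {i} for: {spec}" for i in range(n)]

def validate(candidate: str) -> float:
    """Stand-in for validator agents scoring a candidate (tests, lint, review)."""
    return random.random()

def best_of_n(spec: str, n: int = 100) -> str:
    """Score every candidate and ship only the highest-scoring one."""
    scored = [(validate(candidate), candidate) for candidate in generate_candidates(spec, n)]
    return max(scored)[1]

print(best_of_n("parse invoices into the new billing schema"))
```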

If your enterprise is looking to accelerate software development, whether it's large-scale modernization, refactoring, or just increasing the rate of your SDLC, contact Blitzy at blitzy.com, that's B-L-I-T-Z-Y dot com, to book a custom demo, or just press get started and start using the product right away. Today's episode is brought to you by Superintelligent.

Now, you have heard me talk about agent readiness audits probably numerous times at this point. This is our system that uses voice agents and a hybrid human AI analysis process to benchmark your agent readiness and map your agent opportunities

and give you some really pointed, actionable next steps to move further down the path in your agentic journey. But we are coming up on the slow time of the year, and if you want to use this time to get out ahead of peers and competitors, we're excited to announce something we're calling Agent Summer. The idea here isn't that complicated. It's basically just an accelerated program to get you agentified, and fast.

First of all, it's going to include an agent readiness audit, figuring out where your biggest agent opportunities are. Next, we're going to support both your internal change management process, helping you figure out AI policy, data readiness, things like that, as well as doing action planning around the agent opportunities that are most relevant for you. And finally, we're going to connect you to the right vendors to actually go and deliver this.

Now for this, we want to work with a very small handful of companies that really want to move. We're going to be bundling more than $50,000 of services for something that starts closer to $30,000. And so if you want to use this summer to jump ahead on your company's agent journey, email agent at besuper.ai with summer in the subject line, claim one of these limited spots, and let's go have an agent summer.

Today's episode is brought to you by Vertice Labs, the AI-native digital consulting firm specializing in product development and AI agents for small to medium-sized businesses. Now, guys, this is a market that we have seen so much interest for, so much demand for, and many times, great AI dev shops and builders out there

just have so much business from the high end of the mid-market and big enterprises that this is a group of buyers that gets neglected. Now for Vertice, AI native means that they don't just build AI, they use it in every step of their process. They embed agents in their workflows so that they better know how to help you embed agents in your workflows. And indeed, what they specialize in is building AI agents and agentic workflows that augment knowledge work,

from customer support to internal ops, so that your team can focus on higher value work. Vertice wants to ensure that this is not just another co-pilot, but something that works end-to-end, translating business problems into working software in weeks, not quarters.

They have found that their clients typically see a 60% reduction in time and cost, with significantly higher output than traditional technology partners. So if you are a founder, a CTO, a business leader, or you've just got a product idea to launch, check out verticelabs.io. That's V-E-R-T-I-C-E labs dot I-O.

Welcome back to the AI Daily Brief. Today we are getting a little bit geopolitical as OpenAI announces a new program for countries and the Trump administration has announced a potentially big change to the way we're handling AI chip controls.

Let's start there and work our way backwards. The TLDR is that Bloomberg is reporting that the administration is planning to repeal the so-called AI diffusion rule, which was, of course, a midnight rule from the outgoing Biden administration, just one week before it was supposed to go into effect.

The rule, you might remember, separated the world into three tiers with varying levels of export restrictions. One of the most notable parts of this was that the second tier countries were tightly limited in the number of advanced AI chips they could import, and this tier included most of the world, including allied nations like India, Israel, and South Korea.

The stated logic was that placing broad export controls would stem the tide of advanced chips being passed into China via other countries. Now, one of the points that I've made is that the diffusion rule was kind of actually working at cross-purposes. The diffusion word in the rule referred both to stemming the diffusion of AI chips to China and to increasing the diffusion of American AI to other countries,

But the way that they seemed to be going about it sort of had those things at cross purposes. In any case, it was very controversial from the moment it was introduced. NVIDIA and Oracle released open letters opposing the rule right away, with NVIDIA stating that it would, quote, put global progress in jeopardy.

Another common complaint was that the rule was overly complex and difficult to enforce. The Trump team appears to have picked up a little from each of those arguments in their reasoning. In a statement, the Commerce Department said, "The Biden AI rule is overly complex, overly bureaucratic, and would stymie American innovation. We will be replacing it with a much simpler rule that unleashes American innovation and ensures American AI dominance."

They also said that they plan to continue to strictly enforce previous chip controls on China. And Bloomberg also reports that officials are planning on cracking down on countries like Malaysia and Thailand that are suspected of diverting chips into China.

Ultimately, the policy shift seems to be more about the question of whether preventing exports to third countries with no history of smuggling is actually an effective way to curb the development of AI in China. NVIDIA, for their part, have been championing the view that export controls on allied nations are a wrongheaded way to go about the AI race. They held this view from the start, but ramped up their rhetoric significantly over recent weeks.

CEO Jensen Huang was in Washington last week and pushed officials to recognize that China had caught up with the U.S., not because of chip smuggling, but because of dedicated scientific work. The big statement that resonated with the press that I mentioned numerous times last week was when Jensen said, China's not behind. China's right behind us. We're very close. This is a country with great will. They have great technical capacities. 50% of the world's AI researchers are Chinese. This is an industry that we'll have to compete for.

Overall, the tenor of Huang's visit to Washington was that the U.S. needs to fundamentally rethink the way it's competing in AI to acknowledge recent Chinese breakthroughs. And while he was meeting with administration officials to try to change the diffusion rule, which is obviously in his company's interests, he appeared at least to be genuine in his concern that the U.S. was about to cut itself out of the AI race. He said, "...I'm not sure what the new diffusion rule is going to be, but whatever it turns out to be, it really has to recognize that the world has fundamentally changed since the previous diffusion rule was released."

NVIDIA lauded the changes yesterday in a public statement, and one key point in that statement was the idea of U.S.-supplied infrastructure.

When the outgoing Biden administration established the diffusion rule, there was an explicit assumption that the U.S. would be the monopoly supplier of advanced AI chips. Tier 2 nations could import sufficient chips to build large-scale data centers, but only with approval from the Commerce Department.

The same approach was applied to the most advanced AI models, which potentially limited global developments via cloud services. Of course, what blew up these assumptions was the release of DeepSeek R1. Before that, the belief was widespread that Chinese chip foundries were still several years behind NVIDIA's current products, but that appears pretty clearly to be no longer the case. Chinese open-source models from DeepSeek and Alibaba's Qwen team are catching up quickly to the state-of-the-art.

And more to the point, they're now at a place where they represent a viable alternative to models like OpenAI's GPT-4o. These models are easily good enough to power basic AI functionality across the global south, and China has expressed a desire to export them far and wide.

Chips are a similar story. Even if Huawei isn't on par with Nvidia and doesn't have production ramped up, it's only a matter of time. And it's now more likely to be measured in months rather than years. What we don't know at this stage is what the replacement for the diffusion rule will look like. There's been discussion of the administration folding chip policy into their broader trade negotiations, using access as a bargaining chip. One region of particular interest is the Middle East. States like Saudi Arabia and the UAE have been racing to develop advanced AI capabilities but were stymied by the Biden administration.

There were concerns that chips destined for the Middle East would be passed on to China. On Wednesday, when queried about loosening restrictions on the Gulf states, Trump responded, "We might be doing that, yeah. It will be announced soon." Now, Trump himself is currently preparing for his first major diplomatic trip, which includes a three-country Middle East leg beginning in Saudi Arabia. The Gulf states, of course, sit between China and the U.S. both geographically and diplomatically. And if Trump strikes a deal, we could be about to witness the first example of chip diplomacy in the AI era.

Now, the other big question is how the U.S. will flood the world with its AI to get in ahead of China. Right, again, there were these two parts of the diffusion rule. One was preventing the diffusion of AI chips to China, but the other was diffusing American AI technology to friendly countries around the world. Well, to that end, OpenAI just announced a new initiative called OpenAI for Countries.

The company wrote that following the announcement of their Stargate project, quote, we've heard from many countries asking for help in building out similar AI infrastructure, that they want their own Stargates and similar projects. It's clear to everyone now that this kind of infrastructure is going to be the backbone of future economic growth and national development. Under the initiative, OpenAI will partner with governments to build out data centers in a series of co-funded projects.

They said that the goal is to pursue 10 international projects to start with, but they didn't say anything about where they'll be located. At the same time, the announcement blog post did carry extremely heavy overtones that this initiative is squarely aimed at out-competing Chinese AI deployment. OpenAI wrote, We want to help these countries and in the process spread democratic AI, which means the development, use, and deployment of AI that protects and incorporates longstanding democratic principles. We believe that partnering closely with the U.S. government is the best way to advance democratic AI.

So there are lots of interesting elements of this story. How much of this is just NVIDIA putting its thumb on the scale of the US administration and having them respond? How much of this is about larger trade negotiations? Whatever the answer, AI's place in geopolitics continues to do nothing but increase. For now though, that is going to do it for today's AI Daily Brief. Appreciate you listening or watching as always. And until next time, peace.