People
Alfredo Lopez
Ben Buchanan
Daniel Burke
Deedy Das
Elon Musk
An entrepreneur and innovator guided by long-termism, driving revolutions in space exploration, electric vehicles, and renewable energy.
Ethan Mollick
Feng Chucheng
Jared Dunmon
Malte Ubl
Michael O'Hanlon
Sergey Brin
Xie Feng
Xiaomeng Liu
Topics
Austin Allred: I think Apple's strategic positioning in AI is badly behind; they have missed the window. Their plans for improving Siri are far too slow, at least two years behind the other tech giants. That reflects a lack of internal urgency and innovative drive, and it could leave the company at a disadvantage in the AI race ahead.
Bloomberg: Apple is moving too slowly in AI and risks being overtaken by competitors. It failed to develop competitive AI technology in time and missed the enormous opportunity AI represents. Apple probably still has a chance to turn things around, but the window is closing fast.
Ethan Mollick: Judging by the AI labs' public statements, AGI may arrive faster than an improved Siri. That suggests Apple's AI strategy may be off course and needs reassessment.
Daniel Burke: Apple has performed terribly in the AI race, squandering a once-in-a-lifetime opportunity. Siri's missing and inadequate features leave Apple uncompetitive in the AI market. Had Apple improved Siri in time, it could have been a standalone product with a trillion-dollar market cap.
Scoble: I think Apple still holds many advantages and deep technologies that will pay off in AR and VR. For now, though, Apple is clearly behind its competitors in AI.
Sergey Brin: Google needs to redouble its efforts to win the AI race. We need to change how we design products and raise our intensity. Competition has become fierce, and we must go all out to win.
Malte Ubl: Google's key weakness in the past was a lack of intensity. To get from weak to strong, you have to oversteer dramatically. I'm glad to see the company at least trying to change.
Alfredo Lopez: Longer hours alone are not enough; Google also needs to change its goals and strategy to succeed in the AI race.
Elon Musk: SoftBank is carrying too much debt, and its massive AI investments are risky.
Xiaomeng Liu: The Chinese government is worried about brain drain and is therefore restricting AI talent from traveling to the United States. That shows Beijing treats AI as a national security priority and is enacting policy to protect state interests.
Palmer Luckey: Visa policies designed to attract talent can also be designed to hurt the countries that talent leaves.
Feng Chucheng: The Chinese government needs to rally its tech companies and prevent capital flight. It must send a clear signal to the market, and to hesitant local officials, that it stands behind these firms.
Deedy Das: DeepSeek's profitability is remarkable; its revenue and margins far exceed expectations. That points to huge headroom for cutting costs and raising efficiency in AI.
Ben Buchanan: DeepSeek has made striking progress in driving down AI costs, a marker of how fast the technology is advancing.
Jared Dunmon: The United States needs to support open-source AI models to counter China's influence. The U.S. government must do more to back open models while ensuring American companies can still build the most capable ones.
Xie Feng: The U.S. and China should deepen cooperation on AI risks rather than pursue technological blockades.
Michael O'Hanlon: Unchecked AI could trigger a nuclear war.

Deep Dive

Chapters
Apple's Siri lags behind competitors like Amazon's Alexa and Google's Gemini, raising concerns about the company's AI strategy and its ability to compete in the rapidly evolving AI market. Internal issues and slow development are highlighted, leaving Apple potentially at a make-or-break point.
  • Apple's fully agentic Siri is not expected until 2027.
  • Current Siri version is described as having 'two brains', leading to an error-prone experience.
  • Competitors Amazon and Google are significantly ahead in AI assistant technology.

Shownotes Transcript


Welcome back to the AI Daily Brief Headlines Edition, all the daily AI news you need in around five minutes.

Unless they make some major changes soon, Apple is headed straight to the history books for their unparalleled bungling of the AI opportunity. The latest in this insane saga is that according to Bloomberg sources, a fully agentic and conversational version of Siri isn't expected until 2027 and the release of iOS 20. Austin Allred sums up all of our feelings when he tweets, "2027 is like 25 years away in AI years. They literally built Grok from scratch in a shorter timeframe."

In the interim, before that full version, Apple will release an updated version of Siri in May, focused on delivering features that were first previewed last June, like basic interactions with apps. The update will still be based on the current architecture which Bloomberg describes as having, quote, two brains. One for processing simple assistant tasks like setting reminders, and another for answering in-depth queries using generative AI. This split structure is one of the reasons the Siri experience remains error-prone and disjointed.
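To make that failure mode concrete, here is a minimal sketch of what a split architecture like the one Bloomberg describes could look like. Everything here, the routing heuristic, the function names, the keyword list, is hypothetical and illustrative, not anything Apple has shipped; the point is that a hard handoff between a command engine and a generative model creates a seam where mixed queries get misrouted.

```python
# Hypothetical sketch of a "two brains" assistant: a rule-based command
# engine for simple tasks and a generative model for in-depth queries.
# Illustrative only; this does not reflect Apple's actual implementation.

COMMAND_KEYWORDS = {"remind", "timer", "alarm", "call"}

def handle_command(query: str) -> str:
    # Brain 1: deterministic handler for simple assistant tasks.
    return f"[command engine] executing: {query}"

def handle_generative(query: str) -> str:
    # Brain 2: stand-in for a generative AI answer (a real system
    # would call an LLM here).
    return f"[generative model] answering: {query}"

def route(query: str) -> str:
    # The hard handoff: a keyword heuristic decides which brain runs.
    # Queries that mix both modes land entirely on one side, which is
    # exactly where the disjointed, error-prone behavior creeps in.
    if any(word in query.lower() for word in COMMAND_KEYWORDS):
        return handle_command(query)
    return handle_generative(query)

print(route("Set a timer for 10 minutes"))      # correctly hits brain 1
print(route("Why is the sky blue?"))            # correctly hits brain 2
print(route("Remind me why the sky is blue"))   # misrouted to brain 1
```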

Following this year's release, Apple will attempt to bring the two halves of Siri together in a single infrastructure before working on a more conversational version. With Amazon releasing AI Alexa later this month and Google shipping Gemini Assistant late last year, Apple will be a full two years behind these companies, which frankly weren't all that far ahead of the curve themselves. Bloomberg suggests there will be no major new consumer-facing features for Apple Intelligence with iOS later this year, with the company still working to deliver and refine features that were promised in 2024.

Bloomberg writes, This has left Apple at a make-or-break point. Clearly, the company isn't moving fast enough internally to create the underlying AI technology it needs to keep up with the competition. And that suggests a change is required. That is putting it mildly. Bloomberg concludes, AI is a once-in-a-generation technology. Apple probably still has time to turn things around, but that window is closing fast. I'm not that optimistic, frankly.

Ethan Mollick contextualizes it, saying, So according to the AI Lab's public statements, we get AGI before an improved Siri? Daniel Burke writes, Apple fumbled the bag so bad on the AI race, just atrociously decimated the opportunity of a lifetime. Siri is such a useless waste of space, but could have been a standalone product with a trillion-dollar market cap.

Now, some, like Scoble, are still saying that Apple has all these advantages and deep tech that will come to fruition around AR and VR. But I don't know, man. From where I'm sitting, they are just staggeringly behind. What's more, it's not clear that they feel a lot of urgency around this. At least Google seems to be recognizing that they are in an existential fight. According to an internal memo leaked to The New York Times, co-founder Sergey Brin is back in the building and is urging the company to knuckle down to win the AI race.

In the memo, Brin told AI staff, I recommend being in the office at least every weekday. 60 hours a week is the sweet spot of productivity. Brin wrote, competition has accelerated immensely and the final race to AGI is afoot. I think we have all the ingredients to win this race, but we're going to have to turbocharge our efforts.

More fundamentally than just a commitment to working hard, Brin urged a change in the way Google thinks about product design.

Coincidentally demonstrating the issue, Y Combinator partner Tom Blomfield posted an example of Gemini refusing to polish a slide deck, something you might think would be a core function for a document assistant. To his credit, Google AI Studio product lead Logan Kilpatrick jumped in, saying that they'd be fixing the problem, but others pointed out that this just seems to be a standard response.

Vercel CTO Malte Ubl writes, Google's key weakness when I was there was a lack of intensity. It's hard to escape from that as a large organization. In fact, to get from weak to normal, you have to massively oversteer. Glad to see the company is at least trying. Alfredo Lopez, however, responded, It's disappointing that the general vibe of the article and from others is that somehow working longer hours together is enough. It has to go along with a renewed sense of quickly changing goals and strategy from leadership. "Do the same but harder" won't cut it.

Lastly today, SoftBank is betting the farm on AI and they are levering up to do it. The Information reports that SoftBank is in talks to borrow $16 billion to finance the Project Stargate data centers and might borrow another $8 billion next year.

SoftBank has over $300 billion in assets, so they're not exactly stretching to take on this kind of debt. But levering up to bet big on the latest tech trend is one of SoftBank's signature moves that has unraveled in the past. The company is already carrying $29 billion in debt and had three consecutive down years leading to 2024. SoftBank sold or wrote off $29 billion in losses during that stretch as markets turned negative. Can CEO Masa-san actually execute on this?
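As a rough check on that "not exactly stretching" claim, here is the back-of-the-envelope math using only the figures cited in this episode; SoftBank's real balance sheet is considerably more complicated than this sketch.

```python
# Back-of-the-envelope leverage check using only the figures cited
# above; SoftBank's actual balance sheet is more complex than this.

assets = 300e9                 # "over $300 billion in assets"
existing_debt = 29e9           # debt already carried
new_borrowing = 16e9 + 8e9     # $16B for Stargate, possibly $8B more next year

debt_to_assets = (existing_debt + new_borrowing) / assets
print(f"debt-to-assets after new borrowing: {debt_to_assets:.0%}")  # ~18%
```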

Masa hater Elon Musk obviously chimed in from the peanut gallery saying that he's overleveraged, but it's pretty clear at this point that Masa considers AI his legacy play, so it stands to reason that he's going to push as hard as he can.

From our seats over here, it'll be interesting to watch, but that is going to do it for today's AI Daily Brief Headlines Edition. Next up, the main episode. Today's episode is brought to you by Vanta. Trust isn't just earned, it's demanded. Whether you're a startup founder navigating your first audit or a seasoned security professional scaling your GRC program, proving your commitment to security has never been more critical or more complex. That's where Vanta comes in.

Businesses use Vanta to establish trust by automating compliance needs across over 35 frameworks like SOC 2 and ISO 27001, centralize security workflows, complete questionnaires up to 5x faster, and proactively manage vendor risk. Vanta can help you start or scale up your security program by connecting you with auditors and experts to conduct your audit and set up your security program quickly. Plus, with automation and AI throughout the platform, Vanta gives you time back so you can focus on building your company.

Join over 9,000 global companies like Atlassian, Quora, and Factory who use Vanta to manage risk and prove security in real time. For a limited time, this audience gets $1,000 off Vanta at vanta.com slash nlw. That's v-a-n-t-a dot com slash nlw for $1,000 off.

There is a massive shift taking place right now from using AI to help you do your work to deploying AI agents to just do your work for you. Of course, in that shift, there is a ton of complication. First of all, of the seemingly thousands of agents out there, which are actually ready for primetime? Which can do what they promise? And beyond even that, which of these agents will actually fit in my workflows? What can integrate with the way that we do business right now? These are the questions at the heart of the Superintelligent agent readiness audit.

We've built a voice agent that can scale across your entire team, mapping your processes, better understanding your business, figuring out where you are with AI and agents right now in order to provide recommendations that actually fit you and your company.

Our proprietary agent consulting engine and agent capabilities knowledge base will leave you with action plans, recommendations, and specific follow-ups that will help you make your next steps into the world of a new agentic workforce. To learn more about Super's agent readiness audit, email agent at bsuper.ai or just email me directly, nlw at bsuper.ai, and let's get you set up with the most disruptive technology of our lifetimes.

Welcome back to the AI Daily Brief. In general, this show focuses more on the technology advancements of artificial intelligence as well as their practical applications, how they impact business, how they're changing the work we do. But for anyone paying close attention, there is unavoidably a geopolitical undercurrent of artificial intelligence as well.

AI has become one of the key fronts in the jostling for power between the United States and China, and is impacting not only the policy between these two countries relative to one another, but also a huge amount about how their allies interact as well. One only has to look at, for example, our policy in the Middle East and the Gulf states to understand that AI and AI supremacy are shaping a huge amount of policy even beyond just chip exports. Over the weekend, we got a bit of an escalation in this battle, as China is blocking its AI leaders from visiting the U.S.

The Wall Street Journal is reporting that Beijing has instructed top AI researchers and entrepreneurs to avoid traveling to the United States. There's a concern that Chinese scientists could hand over confidential information about the nation's progress on AI, or that we could even see a repeat of the 2018 incident where a Huawei executive was detained in Canada at Washington's request.

Importantly, this isn't a strict travel ban, but in China, a stern warning to stick close to state interests might as well be. It certainly suggests that Beijing is increasingly viewing AI as an economic and national security priority and is willing to put policy into place to match.

Now, the effects of the policy are already showing up. DeepSeek's founder Liang Wenfeng was a notable absentee at the AI Action Summit in Paris last month. Xiaomeng Liu, a technology analyst at Eurasia Group, believes that Beijing is worried about losing their best and brightest, commenting, For the tech sector, brain drain can have a devastating effect on a country. The initial signal is, stay here, don't run away.

Interestingly, if any of you listened to Palmer Luckey on the Shawn Ryan Show recently, the Anduril founder has been loudly advocating the return of defector visas. Basically, visas that are not only about attracting talented labor from other countries, but which by design also hurt the countries of origin by taking those skills out.

In any case, if you connect the dots with China policy, you can definitely see the start of a new bargain being built between the political and tech class. President Xi Jinping attended a symposium with tech leaders in February, which included formerly blacklisted Alibaba founder Jack Ma.

This is significant because a few years ago when Ant Financial was set to go public in what would have likely been the biggest IPO in history, China stepped in, killed the IPO, and Jack Ma was barely heard from for the next couple of years other than a few token appearances to let people know that he wasn't dead. Now at this February event, President Xi shook hands with Ma, signaling certainly a thawing of relations with the assembled CEOs.

Feng Chucheng, founding partner of Beijing advisory firm Hutong Research, said that this was a, quote, strong gesture to tell the market and hesitant local officials that these are our champions and we need to unwaveringly support them in light of all the risks. Feng added, with many of these entrepreneurs having significant stakes in the U.S., Beijing needs a united front also to prevent major capital flight. The news also comes as Chinese investment picks up in the wake of DeepSeek.

Smartphone maker Honor has announced a $10 billion R&D budget over the next five years. The former Huawei division is also planning to go public in the near future. Reuters reports that AI companies like Honor are seeing interest from local governments in a way that wouldn't have been possible as recently as last year.

Last week, Alibaba announced plans to spend $53 billion on AI data centers over the next three years. This would be a record spend for the Chinese AI sector, significantly outpacing analysts' forecasts. Both Tencent and Alibaba now have models that they claim outperform DeepSeek's R1, while DeepSeek themselves are gearing up to launch their R2 model in May. Quick detour into DeepSeek land for a minute. The company also made a ton of news this weekend when they claimed a 545% profit margin while serving some of the cheapest inference in the industry.

The Chinese lab released the code for their inference system so other labs can replicate their results, with Deedy Das of Menlo Ventures writing, DeepSeek just let the world know they make $200 million per year at a 500-plus percent profit margin. Revenue per day, $562,000. Cost per day, $87,000. Revenue per year, around $205 million. This is all while charging $2.19 per million tokens on R1, around 25x less than OpenAI o1. If this was in the US, this would be a $10 billion company.
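The arithmetic behind those headline figures does hold together internally. Here is a quick sanity check; note that the $60 per million output tokens used for the o1 comparison is an assumption based on OpenAI's published pricing at the time, not a number from Das's post.

```python
# Sanity-check the figures quoted from DeepSeek's disclosure. As noted
# below, DeepSeek assumes every token is billed at full R1 pricing,
# so these are theoretical maximums.

revenue_per_day = 562_000      # USD, claimed
cost_per_day = 87_000          # USD, claimed

margin = (revenue_per_day - cost_per_day) / cost_per_day
print(f"cost-profit margin: {margin:.0%}")                    # ~546%, the "545%" claim

revenue_per_year = revenue_per_day * 365
print(f"annualized revenue: ${revenue_per_year / 1e6:.0f}M")  # ~$205M

# R1 vs. o1 pricing; $60/M output tokens for o1 is our assumption.
r1_price, o1_price = 2.19, 60.00  # USD per million tokens
print(f"price ratio: {o1_price / r1_price:.1f}x")             # ~27x, i.e. "around 25x"
```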

Now, lest we get totally lost in the hype here, there is a lot of fudging being done with the numbers. DeepSeek was assuming that all tokens were charged at their full R1 pricing, rather than the various discounts currently applied. Indeed, they even admitted, quote, our actual revenue is substantially lower. Still, these figures do suggest that there's a lot of room for cheaper AI.

Ben Buchanan wrote, "In case anyone is wondering, this represents a 255x improvement in cost per token since the launch of the original ChatGPT. Yes, this is exactly what a fast takeoff looks like." DeepSeek seemed to largely be achieving this through optimizing their GPU use. The team wrote code to access their inference cluster at a lower level, bypassing CUDA and unlocking more efficiency. DeepSeek's claimed optimization of underpowered H800 chips was even a close match to Nvidia's optimization for the Blackwell B200 chip.

Developer Gnar weighed in as well. In any case, the DeepSeek model is clearly having an influence. The Financial Times this weekend wrote about how companies are racing to use distillation processes in the wake of DeepSeek's results.

Some are arguing that the implication is that the future of frontier AI in the U.S. needs to be open source. Jared Dunmon, a former AI director at the Pentagon, wrote in Foreign Affairs that, quote, clearly the United States can no longer rely solely on closed AI systems from big companies to compete with China, and the U.S. government must do more to support open source models even as it strives to limit Chinese access to cutting-edge chip technologies and training data. To continue its dominance, the United States should mount a comprehensive program to develop and deploy the best open source LLMs, while also ensuring that U.S. firms are still the ones building the most capable AI models, those that are still likely to reside within highly capitalized private companies. Dunmon commented that, "...an unfortunate side effect of DeepSeek's massive growth is that it could give China the power to embed widely used generative AI models with the values of the Chinese Communist Party." He suggested that the potential influence of Chinese AI could be even more powerful than TikTok.

This, of course, has been one of the central ideas coming from the Trump administration, that the U.S. needs to present a viable option to the world so Chinese AI isn't seen as the global default.

Still, while the Chinese and US AI industries grow apart, some diplomats are urging collaboration on risk. China's ambassador to the US, Xie Feng, called for closer cooperation on AI. He said, "...as the new round of scientific and technological revolution and industrial transformation is unfolding, what we need is not a technological blockade, but, quote-unquote, deep-seeking for human progress."

Feng urged the two global superpowers to jointly promote global AI governance, warning, emerging high technology like AI could open Pandora's box. If left unchecked, it could bring gray rhinos. Gray rhinos here refers to easily foreseeable risks that people ignore until they become a crisis.

And the rhetoric around concern over those gray rhinos is ratcheting up as well. Writing for Brookings last week, Director of Research Michael O'Hanlon even went so far as to suggest that unchecked AI could trigger a nuclear war. He asserted, "...by examining several cases from the U.S.-Soviet rivalry during the Cold War, one can see what might have happened if AI had existed back in that period and had been trusted with the job of deciding to launch nuclear weapons or to preempt an anticipated nuclear attack, and had been wrong in its decision-making."

Now, one note is that when it comes to the Overton window right now, I have seen literally zero suggestion from anyone that AI ever have access to the nuclear codes. In fact, this is maybe the one thing that everyone can agree on is absolutely not a thing that should ever happen.

And so what we have here is this incredibly complex melange of political and economic issues. Remember, last week we talked about how Microsoft was advocating for the Trump administration to take down export controls, saying that they weren't working. And even if that isn't the play, others like RAND are also arguing that DeepSeek conclusively shows that chip controls have failed to slow China down all that much, and that at the very least they need to be recalibrated for an inference-centric world.

What all of this adds up to is that the closer that these models get in terms of capabilities, the more the focus on the soft power battle that AI represents comes into focus. The speed at which AI is developing is stretching all of our capabilities across business and technology. So I guess, why should geopolitics be any different? Anyways, friends, that is going to do it for today's AI Daily Brief. Appreciate you listening, as always. And until next time, peace.