Welcome back to the AI Daily Brief Headlines Edition, all the daily AI news you need in around five minutes.
Unless they make some major changes soon, Apple is headed straight to the history books for their unparalleled bungling of the AI opportunity. The latest in this insane saga is that according to Bloomberg sources, a fully agentic and conversational version of Siri isn't expected until 2027 and the release of iOS 20. Austin Allred sums up all of our feelings when he tweets, "2027 is like 25 years away in AI years. They literally built Grok from scratch in a shorter timeframe."
In the interim, before that full version, Apple will release an updated version of Siri in May, focused on delivering features that were first previewed last June, like basic interactions with apps. The update will still be based on the current architecture which Bloomberg describes as having, quote, two brains. One for processing simple assistant tasks like setting reminders, and another for answering in-depth queries using generative AI. This split structure is one of the reasons the Siri experience remains error-prone and disjointed.
Following this year's release, Apple will attempt to bring the two halves of Siri together in a single infrastructure before working on a more conversational version. With Amazon releasing AI Alexa later this month and Google shipping Gemini Assistant late last year, Apple will be a full two years behind these companies, which weren't frankly all that up-to-date themselves. Bloomberg suggests there will be no major new consumer-facing features for Apple Intelligence with iOS later this year, with the company still working to deliver and refine features that were promised in 2024.
Bloomberg writes, This has left Apple at a make-or-break point. Clearly, the company isn't moving fast enough internally to create the underlying AI technology it needs to keep up with the competition. And that suggests a change is required. That is putting it mildly. Bloomberg concludes, AI is a once-in-a-generation technology. Apple probably still has time to turn things around, but that window is closing fast. I'm not that optimistic, frankly.
Ethan Mollick contextualizes it, saying, So according to the AI labs' public statements, we get AGI before an improved Siri? Daniel Burke writes, Apple fumbled the bag so bad on the AI race, just atrociously decimated the opportunity of a lifetime. Siri is such a useless waste of space, but could have been a standalone product with a trillion-dollar market cap.
Now, some, like Scoble, are still saying that Apple has all these advantages and deep tech that will come to fruition around AR and VR. But I don't know, man. From where I'm sitting, they are just staggeringly behind. What's more, it's not clear that they feel a lot of urgency around this. At least Google seems to be recognizing that they are in an existential fight. According to an internal memo leaked to The New York Times, co-founder Sergey Brin is back in the building and is urging the company to knuckle down to win the AI race.
In the memo, Brin told AI staff, I recommend being in the office at least every weekday. 60 hours a week is the sweet spot of productivity. Brin wrote, competition has accelerated immensely and the final race to AGI is afoot. I think we have all the ingredients to win this race, but we're going to have to turbocharge our efforts.
More fundamentally than just a commitment to working hard, Brin urged a change in the way Google thinks about product design.
Coincidentally demonstrating the issue, Y Combinator partner Tom Blomfield posted an example of Gemini refusing to polish a slide deck, something you might think would be a core function for a document assistant. To his credit, Google AI Studio product lead Logan Kilpatrick jumped in, saying that they'd be fixing the problem, but others pointed out that this just seems to be a standard response.
Vercel CTO Malte Ubl writes, Google's key weakness when I was there was a lack of intensity. It's hard to escape from that as a large organization. In fact, to get from weak to normal, you have to massively oversteer. Glad to see the company is at least trying. Alfredo Lopez, however, responded, It's disappointing that the general vibe of the article and from others is that somehow working longer hours together is enough. It has to go along with a renewed sense of quickly changing goals and strategy from leadership. Do the same but harder won't cut it.
Lastly today, SoftBank is betting the farm on AI and they are levering up to do it. The Information reports that SoftBank is in talks to borrow $16 billion to finance the Project Stargate data centers and might borrow another $8 billion next year.
SoftBank has over $300 billion in assets, so they're not exactly stretching to take on this kind of debt. But levering up to bet big on the latest tech trend is one of SoftBank's signature moves that has unraveled in the past. The company is already carrying $29 billion in debt and had three consecutive down years leading to 2024. SoftBank sold or wrote off $29 billion in losses during that stretch as markets turned negative. Can CEO Masa-san actually execute on this?
Masa hater Elon Musk obviously chimed in from the peanut gallery saying that he's overleveraged, but it's pretty clear at this point that Masa considers AI his legacy play, so it stands to reason that he's going to push as hard as he can.
From our seats over here, it'll be interesting to watch, but that is going to do it for today's AI Daily Brief Headlines Edition. Next up, the main episode. Today's episode is brought to you by Vanta. Trust isn't just earned, it's demanded. Whether you're a startup founder navigating your first audit or a seasoned security professional scaling your GRC program, proving your commitment to security has never been more critical or more complex. That's where Vanta comes in.
Businesses use Vanta to establish trust by automating compliance needs across over 35 frameworks like SOC 2 and ISO 27001. Centralized security workflows complete questionnaires up to 5x faster and proactively manage vendor risk. Vanta can help you start or scale up your security program by connecting you with auditors and experts to conduct your audit and set up your security program quickly. Plus, with automation and AI throughout the platform, Vanta gives you time back so you can focus on building your company.
Join over 9,000 global companies like Atlassian, Quora, and Factory who use Vanta to manage risk and prove security in real time. For a limited time, this audience gets $1,000 off Vanta at vanta.com slash nlw. That's v-a-n-t-a dot com slash nlw for $1,000 off. There is a massive shift taking place right now from using AI to help you do your work
to deploying AI agents to just do your work for you. Of course, in that shift, there is a ton of complication. First of all, of the seemingly thousands of agents out there, which are actually ready for primetime? Which can do what they promise? And beyond even that, which of these agents will actually fit in my workflows? What can integrate with the way that we do business right now? These are the questions at the heart of the Superintelligent agent readiness audit.
We've built a voice agent that can scale across your entire team, mapping your processes, better understanding your business, figuring out where you are with AI and agents right now in order to provide recommendations that actually fit you and your company.
Our proprietary agent consulting engine and agent capabilities knowledge base will leave you with action plans, recommendations, and specific follow-ups that will help you make your next steps into the world of a new agentic workforce. To learn more about Super's agent readiness audit, email agent at bsuper.ai or just email me directly, nlw at bsuper.ai, and let's get you set up with the most disruptive technology of our lifetimes.
Welcome back to the AI Daily Brief. In general, this show focuses more on the technology advancements of artificial intelligence as well as their practical applications, how they impact business, how they're changing the work we do. But for anyone paying close attention, there is unavoidably a geopolitical undercurrent of artificial intelligence as well.
AI has become one of the key fronts in the jostling for power between the United States and China, and is impacting not only the policy between these two countries relative to one another, but also a huge amount about how their allies interact as well. One only has to look at, for example, our policy in the Middle East and the Gulf states to understand that AI and AI supremacy are shaping a huge amount of policy even beyond just chip exports. Over the weekend, we got a bit of an escalation in this battle, as China is blocking its AI leaders from visiting the U.S.
The Wall Street Journal is reporting that Beijing has instructed top AI researchers and entrepreneurs to avoid traveling to the United States. There's a concern that Chinese scientists could hand over confidential information about the nation's progress on AI, or that we could even see a repeat of the 2019 incident where a Huawei executive was detained in Canada at Washington's request.
Importantly, this isn't a strict travel ban, but in China, a stern warning to stick close to state interests might as well be. It certainly suggests that Beijing is increasingly viewing AI as an economic and national security priority and is willing to put policy into place to match.
Now, the effects of the policy are already showing up. DeepSeek's founder Liang Wenfeng was a notable absentee at the AI Action Summit in Paris last month. Xiaomeng Liu, a technology analyst at Eurasia Group, believes that Beijing is worried about losing their best and brightest, commenting, For the tech sector, brain drain can have a devastating effect on a country. The initial signal is, stay here, don't run away.
Interestingly, if any of you listened to Palmer Luckey on the Shawn Ryan Show recently, the Anduril founder has been loudly advocating the return of defector visas. Basically, visas that are not only about attracting talented labor from other countries, but which are also designed to hurt the countries of origin by taking that talent away.
In any case, if you connect the dots with China policy, you can definitely see the start of a new bargain being built between the political and tech class. President Xi Jinping attended a symposium with tech leaders in February, which included formerly blacklisted Alibaba founder Jack Ma.
This is significant because a few years ago when Ant Financial was set to go public in what would have likely been the biggest IPO in history, China stepped in, killed the IPO, and Jack Ma was barely heard from for the next couple of years other than a few token appearances to let people know that he wasn't dead. Now at this February event, President Xi shook hands with Ma, signaling certainly a thawing of relations with the assembled CEOs.
Feng Chucheng, founding partner of Beijing advisory firm Hutong Research, said that this was a, quote, strong gesture to tell the market and hesitant local officials that these are our champions and we need to unwaveringly support in light of all the risks. Feng added, with many of these entrepreneurs having significant stakes in the U.S., Beijing needs a united front also to prevent major capital flight. The news also comes as Chinese investment picks up in the wake of DeepSeek.
Smartphone maker Honor has announced a $10 billion R&D budget over the next five years. The former Huawei division is also planning to go public in the near future. Reuters reports that AI companies like Honor are seeing interest from local governments in a way that wouldn't have been possible as recently as last year.
Last week, Alibaba announced plans to spend $53 billion on AI data centers over the next three years. This would be a record spend for the Chinese AI sector and would significantly outpace analysts' forecasts. Both Tencent and Alibaba now have models that they claim outperform DeepSeek's R1, while DeepSeek themselves are gearing up to launch their R2 model in May. Quick detour into DeepSeek land for a minute. The company also made a ton of news this weekend when they claimed a 545% profit margin while serving some of the cheapest inference in the industry.
The Chinese lab released the code for their inference system so other labs can replicate their results, with Deedy Das of Menlo Ventures writing, DeepSeek just let the world know they make $200 million per year at a 500-plus percent profit margin. Revenue per day: $562,000. Cost per day: $87,000. Revenue per year: around $205 million. This is all while charging $2.19 per million tokens on R1, around 25x less than OpenAI's o1. If this was in the US, this would be a $10 billion company.
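The arithmetic behind those figures can be sanity-checked with a quick sketch. Note the margin here is computed on a cost basis from the daily numbers quoted in the episode; the specific dollar inputs are DeepSeek's own theoretical claims, not independently verified.

```python
# Sanity check of DeepSeek's reported inference economics.
# Inputs are the daily figures quoted in the episode (DeepSeek's own claims).
revenue_per_day = 562_000  # USD
cost_per_day = 87_000      # USD

# Annualize the daily revenue figure.
annual_revenue = revenue_per_day * 365

# Cost-based profit margin: profit divided by cost.
profit_margin = (revenue_per_day - cost_per_day) / cost_per_day

print(f"Annualized revenue: ${annual_revenue / 1e6:.0f}M")  # roughly $205M
print(f"Cost-based margin: {profit_margin:.0%}")            # roughly 546%, in line with the ~545% claim
```

So the headline numbers are internally consistent: $562,000 a day annualizes to about $205 million, and the spread between daily revenue and daily cost works out to a margin in the mid-500-percent range.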
Now, lest we get totally lost in the hype here, there is a lot of fudging being done with the numbers. DeepSeek was assuming that all tokens were charged at their full R1 pricing, rather than the various discounts currently applied. Indeed, they even admitted, quote, our actual revenue is substantially lower. Still, these figures do suggest that there's a lot of room for cheaper AI.
Ben Buchanan wrote, "In case anyone is wondering, this represents a 255x improvement in cost per token since the launch of the original ChatGPT. Yes, this is exactly what a fast takeoff looks like." DeepSeek seemed to largely be achieving this through optimizing their GPU use. The team wrote code to access their inference cluster at a lower level, bypassing CUDA and unlocking more efficiency. DeepSeek's claimed optimization of underpowered H800 chips was even a close match to Nvidia's optimization for the Blackwell B200 chip.
In any case, the DeepSeek model is clearly having an influence. The Financial Times this weekend wrote about how companies are racing to use distillation processes in the wake of DeepSeek's results.
Some are arguing that the implication is that the future of frontier AI in the U.S. needs to be open source. Jared Dunmon, a former AI director at the Pentagon, wrote in Foreign Affairs that, quote, clearly the United States can no longer rely solely on closed AI systems from big companies to compete with China, and the U.S. government must do more to support open source models even as it strives to limit Chinese access to cutting-edge chip technologies and training data. To continue its dominance, the United States should mount a comprehensive program to develop and deploy the best open source LLMs,
while also ensuring that U.S. firms are still the ones building the most capable AI models, those that are still likely to reside within highly capitalized private companies. Dunmon commented that, "...an unfortunate side effect of DeepSeek's massive growth is that it could give China the power to embed widely used generative AI models with the values of the Chinese Communist Party." He suggested that the potential influence of Chinese AI could be even more powerful than TikTok.
This, of course, has been one of the central ideas coming from the Trump administration, that the U.S. needs to present a viable option to the world so Chinese AI isn't seen as the global default.
Still, while the Chinese and US AI industries grow apart, some diplomats are urging collaboration on risk. China's ambassador to the US, Xie Feng, called for closer cooperation on AI. He said, "...as the new round of scientific and technological revolution and industrial transformation is unfolding, what we need is not a technological blockade, but deep-seeking, quote-unquote, for human progress."
Xie urged the two global superpowers to jointly promote global AI governance, warning, emerging high technology like AI could open Pandora's box. If left unchecked, it could bring gray rhinos. Gray rhinos here refers to easily foreseeable risks that people ignore until they become a crisis.
And the rhetoric around concern over those gray rhinos is ratcheting up as well. Writing for Brookings last week, Director of Research Michael O'Hanlon even went so far as to suggest that unchecked AI could trigger a nuclear war. He asserted, "...by examining several cases from the U.S.-Soviet rivalry during the Cold War, one can see what might have happened if AI had existed back in that period and had been trusted with the job of deciding to launch nuclear weapons or to preempt an anticipated nuclear attack, and had been wrong in its decision-making."
Now, one note is that when it comes to the Overton window right now, I have seen literally zero suggestion from anyone that AI ever have access to the nuclear codes. In fact, this is maybe the one thing that everyone can agree on is absolutely not a thing that should ever happen.
And so what we have here is this incredibly complex melange of political issues, economic issues. Remember last week we talked about how Microsoft was advocating for the Trump administration to take down export controls, saying that they weren't working. And even if that isn't the play, others like RAND are also arguing that DeepSeek conclusively shows that chip controls have failed to slow China down enough, and that at the very least they need to be recalibrated for an inference-centric world.
What all of this adds up to is that the closer these models get in terms of capabilities, the more the soft-power battle that AI represents comes into focus. The speed at which AI is developing is stretching all of our capabilities across business and technology. So I guess, why should geopolitics be any different? Anyways, friends, that is going to do it for today's AI Daily Brief. Appreciate you listening, as always. And until next time, peace.