Today on the AI Daily Brief, AI agent deployments tripled over the last quarter. Before that in the headlines, why the markets were wrong on DeepSeek. The AI Daily Brief is a daily podcast and video about the most important news and discussions in AI.
Hello friends, quick announcements today. First of all, thank you to today's sponsors, Superintelligent, Blitzy, Agency.org, and Vanta. And to get an ad-free version of the show, go to patreon.com slash ai daily brief. If you are interested in sponsoring the show, we're selling Fall and Into Winter right now, and you can email me at nlw at breakdown.network. But with that, let's get into a very different story on DeepSeek.
Welcome back to the AI Daily Brief Headlines Edition, all the daily AI news you need in around five minutes. We kick off today with a pretty interesting interview with Goldman Sachs' co-head of public tech investing, Sung Cho. The TLDR is that Goldman believes AI CapEx is still accelerating. Cho said that just a few months ago, when AI stocks were at their lows, there was a perception that AI CapEx was in the later innings; now the market believes AI CapEx is in the middle innings.
He attributed the shift in perception to three big factors. First, Meta's hiring blitz, reinforcing that the competition to train new frontier models is far from over. Second, reasoning models causing a huge wave of demand for inference compute. And third, the administration's policy changes that have opened up foreign demand for NVIDIA chips. When asked to comment on the DeepSeek scare in January, he noted that the market got it completely wrong.
Specifically, the savings from cheaper model training were negligible compared to the huge inference needs of reasoning models. Now, this is exactly what we talked about when people were freaking out back in January. And even if none of this comes as a surprise to you, it is noteworthy that the Wall Street consensus is shifting. Current projections from Goldman have hyperscaler CapEx ending the year at $330 billion, up almost 50% from 2024. They expect to see $391 billion in CapEx spend in 2026, and $427 billion in 2027. Looking at the hockey stick chart, RevCapital commented, "the biggest capital cycle since railroads." We are one year out from Goldman Sachs questioning whether AI CapEx would see a return, and spending has done nothing but skyrocket since then. Boyette Street Capital posted: "Crazy numbers here. The Street killed Meta for spending $31 billion in CapEx; now they are estimating nearly twice that. Year of AI over the year of efficiency."
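For a quick sense of what those projections imply, here's a back-of-the-envelope sketch. Note that the 2024 figure is not stated directly in the segment; it's inferred from the "up almost 50%" claim, so treat it as an approximation rather than a reported number.

```python
# Back-of-the-envelope check on the Goldman hyperscaler CapEx figures cited above.
# The 2024 base is inferred from "$330B, up almost 50% from 2024" (an assumption),
# while 2025-2027 are the projections quoted in the segment, in $ billions.

capex = {
    2024: 330 / 1.5,  # implied base, roughly $220B, assuming "almost 50%" means ~1.5x
    2025: 330,
    2026: 391,
    2027: 427,
}

years = sorted(capex)
for prev, curr in zip(years, years[1:]):
    growth = (capex[curr] - capex[prev]) / capex[prev] * 100
    print(f"{prev} -> {curr}: ${capex[prev]:.0f}B -> ${capex[curr]:.0f}B ({growth:+.0f}%)")
```

The interesting part is the shape: the implied growth rate decelerates in Goldman's own projections, even as the absolute dollars keep climbing.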
Speaking of DeepSeek, that company has blamed its lack of progress on export controls. The Information reports that the Chinese startup's R2 reasoning model hasn't shipped because its CEO isn't satisfied with it. R2, the follow-up to the R1 model that went viral in January, was originally slated for release in May. The goal was to improve coding ability and introduce reasoning in languages other than English. DeepSeek engineers have reportedly been working over the past several months to refine the model, but the release has yet to get the green light.
There are also concerns, however, that Chinese AI infrastructure can't handle the release of the more powerful model. The report said that a surge in demand for R2 would quickly overwhelm the inference capacity of Chinese cloud providers. The model runs best on NVIDIA H20 chips, which were banned from export in April, and the report stated that DeepSeek engineers have been providing cloud companies with tech specs to help guide their deployment of the new model, all of which suggests that export controls have been more successful than many believed.
R2 was supposed to be the showcase model for Huawei's competing Ascend chips, but it sounds as though supply or performance issues are crimping the rollout. VrasserX writes: "DeepSeek's R2 hitting a wall isn't just a supply chain footnote. It's a small win for global AI safety. U.S. export controls have starved Chinese labs of the latest NVIDIA silicon, forcing DeepSeek to shelve its R2 rollout for now." They then go on to explain why they're cheering the slowdown. But in any case, it's definitely a different narrative than I think most people had in their minds, and something worth keeping an eye on.
Speaking of keeping an eye on things, we continue to watch whether Zuckerberg's aggressive poaching spree will yield results, and it appears that another leading researcher has jumped ship from OpenAI. Trapit Bansal has joined Meta after helping OpenAI get reasoning off the ground, working directly with Ilya Sutskever. Bansal is listed as a foundational contributor to o1, the company's first big reasoning model.
Separately, Bloomberg reports that Meta is in talks to buy AI voice startup Play.ai, alongside hiring some of its employees. At this stage, Meta's superintelligence team is starting to take shape. Reporting had stated that Zuckerberg was trying to hire around 50 AI researchers, and so far we have about a dozen names. The big question is whether Meta is simply looking to catch up with the competition with Llama 5, or whether they really are taking a direct shot at superintelligence.
One indicator on that front: one of the OpenAI researchers who jumped ship, Lucas Beyer, tweeted to deny the $100 million rumors. He said: "Hey all. One, yes, we will be joining Meta. Two, no, we did not get a $100 million sign-on. That's fake news. Excited about what's ahead, though; we'll share more in due time." My take is that this has to be about something more interesting than just trying to get Llama 5 to be good, or else it would be very hard, even with huge bonuses, to attract this caliber of researcher.
Still, the tone on Twitter is definitely shifting to basically wondering whether Zuckerberg really can buy his way to supremacy here. Alex on X writes: "At this point, I'm quite convinced Zuck will just keep spending until there's parity between OpenAI and Meta. Meta is making $60 billion in profit a year, and OpenAI has just raised $60 billion so far. At some point, you can't just keep raising." Frank New writes: "My hot take: Zuck and Meta are going to beat all the other companies in the Mag 7 and achieve AI dominance."
Lastly today, although people turning to AI for companionship makes for a splashy headline, Anthropic argues that it's not as widespread as you might think. Recently, there's been a wave of reporting on people falling in love with chatbots that makes the phenomenon seem widespread. Harvard Business Review found that therapy and companionship is now the number one use case for AI. A Forbes study suggested that 80% of Gen Zers would marry an AI.
And The Wall Street Journal's Joanna Stern even referenced the trend in a recent commencement speech, warning college grads not to fall in love with a robot or chatbot. She said, seriously, it's happening more than you think.
But how much is it really happening? A new report from Anthropic suggests: not that much. The AI lab analyzed a sample of anonymized Claude data and found that very few conversations are about therapy or companionship. They wrote: "Affective conversations are relatively rare, and AI-human companionship is rarer still. Only 2.9% of Claude.ai interactions are affective conversations, which aligns with findings from previous research by OpenAI. Companionship and roleplay combined comprise less than 0.5% of conversations."
Still, some people aren't buying it. Justine Moore from a16z says: "In my opinion, it's dumb to conclude people aren't using AI for companionship based on Claude data. People use different models for different things. Most Claude use cases, as Anthropic's report highlights, are work-related and coding. People use other LLMs for emotional support." She continued: "Go on TikTok or Instagram and search 'me and ChatGPT.' Your feed will be full of these types of videos, which really resonate with people." And she showed people using ChatGPT in exactly that sort of therapist or companion way.
I think from my perspective, we are just still figuring out how people are going to use these tools. And there may be some major generational differences here, although it is always worth being skeptical of headlines, which obviously have a very different set of goals than just keeping you informed.
For now, that's going to do it for today's headlines. Next up, the main episode. Today's episode is brought to you by Superintelligent, specifically agent readiness audits. Everyone is trying to figure out which agent use cases are going to be most impactful for their business, and the agent readiness audit is the fastest and best way to do that.
We use voice agents to interview your leadership and team and process all of that information to provide an agent readiness score, a set of insights around that score, and a set of highly actionable recommendations on both organizational gaps and high-value agent use cases that you should pursue. Once you've figured out the right use cases, you can use our marketplace to find the right vendors and partners. And what it all adds up to is a faster, better agent strategy.
Check it out at bsuper.ai or email agents at bsuper.ai to learn more. This episode is brought to you by Blitzy, the enterprise autonomous software development platform with infinite code context.
Blitzy is used alongside your favorite coding copilot as your batch software development platform for enterprises seeking dramatic development acceleration on large-scale codebases. While traditional copilots help with line-by-line completions, Blitzy works ahead of the IDE by first documenting your entire codebase, then deploying over 3,000 coordinated AI agents in parallel
to batch build millions of lines of high-quality code. The scale difference is staggering. Copilots might give you a few hundred lines of code in seconds, but Blitzy can generate up to three million lines of thoroughly vetted code. If your enterprise is looking to accelerate software development, contact us at blitzy.com to book a custom demo or press get started to begin using the product right away. Today's episode is brought to you by Agency, an open-source collective for interagent collaboration.
Agents are, of course, the most important theme of the moment right now, not only on this show, but I think for businesses everywhere. And part of that is the expanded scope of what agents are starting to be able to do. While single agents can handle specific tasks, the real power comes when specialized agents collaborate to solve complex problems. However...
Right now, there is no standardized infrastructure for these agents to discover, communicate with, and work alongside one another. That's where Agency, spelled A-G-N-T-C-Y, comes in. Agency is an open-source collective building the Internet of Agents, a global collaboration layer where AI agents can work together. It will connect systems across vendors and frameworks, solving the biggest problems of discovery, interoperability, and scalability for enterprises.
With contributors like Cisco, CrewAI, LangChain, and MongoDB, Agency is breaking down silos and building the future of interoperable AI. Shape the future of enterprise innovation. Visit agntcy.org to explore use cases now. That's A-G-N-T-C-Y dot org.
Today's episode is brought to you by Vanta. In today's business landscape, businesses can't just claim security; they have to prove it. Achieving compliance with frameworks like SOC 2, ISO 27001, HIPAA, GDPR, and more is how businesses can demonstrate strong security practices.
The problem is that navigating security and compliance is time-consuming and complicated. It can take months of work and use up valuable time and resources. Vanta makes it easy and faster by automating compliance across 35-plus frameworks. It gets you audit-ready in weeks instead of months and saves you up to 85% of associated costs. In fact, a recent IDC white paper found that Vanta customers achieve $535,000 per year in benefits, and the platform pays for itself in just three months.
Welcome back to the AI Daily Brief.
Today, we return to the realm of enterprise AI deployments, specifically how agents are finding their way beyond pilots, beyond experimentation, and into production. KPMG has just released their latest quarterly pulse survey, and these surveys are a very useful longitudinal way to track how attitudes and execution around AI and agents have evolved among big companies.
The survey captures the perspective of over 130 C-suite and business leaders at US companies with over a billion dollars in revenue. In other words, this is a study of some of the largest companies. Now, we have been tracking this survey for over a year, and going back to the beginning of 2024, the story was very different than it is now. In those early surveys, there was a recognition that AI investment was necessary, but companies were hung up on things like ROI, or more specifically, on not even knowing how to measure ROI.
Between Q4 24 and Q1 25, though, the story became very different.
There was a continual increase in the anticipated spend on Gen AI. But more than that, the last pulse survey showed that the tools we were paying attention to in early 2024 had become totally commonplace by the beginning of 2025. Between Q4 24 and Q1 25, the percentage of workers using knowledge assistants on a daily basis, think ChatGPT, jumped from 22% to 58%. In other words, the assistant era of AI had become table stakes.
The big story then was enterprises were very clearly moving into agent land. The percentage of organizations that were piloting agents almost doubled between Q4 and Q1, from 37% all the way up to 65%. In other words, fully two-thirds of organizations were piloting AI agents in the beginning of this year.
What's more, basically everyone was planning on deploying AI agents, even if they weren't piloting them yet. 99% said that they were planning to deploy AI agents. And as I've joked previously, I think that means 1% weren't reading the survey correctly. So what is the story this time around? And in short, it's that agents are moving out of the pilot stage and into production.
Now, at first glance, you might notice that the percentage of organizations piloting agents was down from 65% to just 57%, and the percentage merely exploring the possibility of using agents was down from 25% to 10%. But the story is not a decrease in agentic interest; it's that organizations are moving to actual deployment. Agent deployments tripled between Q1 and Q2 among these big enterprises, from about a tenth of organizations to a full third. Another way KPMG put it: 90% of organizations are now past the experimentation stage. What's more, there were some interesting findings around what people are using agents for. When KPMG asked how focused enterprises were on efficiency and productivity versus revenue growth in their AI agent strategies, 36%, a little over a third, said they were mostly focused on efficiency with some exploration of new revenue opportunities. 18% said they were mostly focused on new revenue opportunities with some efficiency prioritization. Literally no one said they were focused entirely on operational efficiency and cost reduction, or, at the other end of the spectrum, entirely on creating new markets and revenue streams. And in fact, the biggest slice of these enterprises, nearly half at 46%, said they were equally focused on efficiency and revenue growth. Now, this is, of course, super interesting to me as someone who talks a lot about the difference between efficiency AI and opportunity AI. At least from an intent perspective, it seems like most organizations are paying at least a little bit of attention to both.
A couple of other interesting notes on statements that leaders agreed with about agents over the next 12 months. 87% agreed that agents would prompt organizations to upskill employees in roles that will be displaced. In other words, they anticipate that agents will take over some big chunks of work, which will require employees to be upskilled to do other things. 87% agree that agents will redefine performance metrics. 86% believe that agents will enhance job satisfaction by helping manage workloads. And it's worth noting that every survey that comes out, and one can reasonably be skeptical of this, but it is very consistent, shows employers viewing agents as something that makes the work experience better for their people, not just a tool to ruthlessly cut headcount.
Maybe the most interesting one: 82% of leaders agreed that in the next 12 months, AI agents will become valued teammates and contributors. This really puts a fine point on the idea that we are moving out of the exclusively assistant stage of AI, and even the agent pilot stage, into something where leaders anticipate full digital workers collaborating with their people. Commensurately, as agents become more ubiquitous, data has risen as a concern.
Both data privacy concerns and concerns about the quality of organizational data have increased over the last couple of quarters. When it comes to barriers to agent deployment, a lot of them have to do with employees. 39% viewed systems complexity as one of the major challenges, but 47% of those surveyed thought that their workforces were resistant to change, and 59% said that they had technical skills gaps. And given that I've lamented in the past how out of sync most educational resources are with agents, it was interesting to see what strategies they were deploying for exactly that purpose. 69% are still teaching prompt skills to maximize agent effectiveness, but you also see some more interesting and creative approaches. 49% said that they're creating agent-specific sandbox environments where employees can practice. 41% said that they're implementing AI agent shadowing programs where employees observe experts working with agents. And 39% said that they're developing role-specific guidelines for effective agent collaboration.
Overall, one of the most interesting big banner headlines was that 82% of those surveyed agreed that, because of AI, their industry's competitive landscape will look different in the next 24 months. I think that means everything from the potential for new business models and new vectors of competition to new winners and losers in their sector.
The story is very clearly things changing, things changing fast, and agents being at the very heart of it. Steve Chase, vice chair of AI and digital innovation at KPMG, and Todd Lohr, the head of ecosystems at KPMG, both weighed in. Lohr commented: "Our clients are no longer asking if AI will transform their business; they're asking how fast it can be deployed. This isn't just about technology adoption. It's about fundamental business transformation that requires reimagining how work gets done and how it is measured." Now, on that front, one example of an organization going through an AI transformation came from Salesforce CEO Marc Benioff this week.
In a recent interview, he claimed that AI is doing 30 to 50% of the work at Salesforce now, referring to roles including software engineering and customer service. Now, of course, we've heard a number of different tech companies claim that AI is now writing 30% or more of their code, but this is the first time that a Fortune 500 company has framed it as AI doing a substantial portion of the work overall. Benioff says that his company's customer service agents have reached 93% accuracy, giving them the ability to take over entire roles.
Now, some discussion on X suggested that this was an indication of how many people are likely to be displaced. But I think AI creator Matthew Berman had the right of it, and I think that this is exactly the case for optimism.
Yes, some companies will be rewarded in the short term for cutting costs for the same amount of output, but eventually they're going to be out-competed by the organizations that reinvest those cost savings and efficiency gains in new models, new opportunities, and just better products and services. Which is not to say that there aren't some serious challenges that remain as we move out of the pilot stage and into the full deployment stage of agents.
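To make that tradeoff concrete, here's a toy calculation with entirely hypothetical numbers, contrasting a company that banks AI-driven efficiency gains as cost cuts with one that reinvests the freed capacity into new output. None of these figures come from Salesforce or the KPMG survey; they're only there to show the shape of the argument.

```python
# Toy comparison of two responses to AI handling a share of existing work.
# All numbers are hypothetical; this is only meant to illustrate the tradeoff
# discussed above, not to model any real company.

ai_share_of_work = 0.40         # AI handles 40% of current workload (hypothetical)
headcount = 1_000               # hypothetical
cost_per_employee = 150_000     # fully loaded annual cost, hypothetical
revenue_per_employee = 300_000  # hypothetical

freed_capacity = headcount * ai_share_of_work  # person-equivalents freed up

# Scenario A: cut costs, keep output flat.
cost_savings = freed_capacity * cost_per_employee

# Scenario B: keep the people, redirect freed capacity to new products and markets.
new_revenue_potential = freed_capacity * revenue_per_employee

print(f"Freed capacity: {freed_capacity:.0f} person-equivalents")
print(f"Scenario A (cut): ~${cost_savings / 1e6:.0f}M in annual savings")
print(f"Scenario B (reinvest): ~${new_revenue_potential / 1e6:.0f}M in potential new revenue")
```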
Speaking at the VentureBeat Transform conference this week, Writer CEO May Habib noted some of the challenges that come as agents start to hit scale. She said: "Agents don't reliably follow rules. They're outcome-driven. They interpret. They adapt. And the behavior really only emerges in real-world environments." She said that companies are having issues adjusting to non-deterministic agents, and this is something we have absolutely seen in our own experience at Superintelligent.
For one type of our core agent readiness audit, we really have to constrain what the agent does. Because we are scoring the answers, we want the questions asked in a particular way and in a particular sequence, and so we basically have to heavily constrain the voice agent and how it interacts. You can tell that what it wants to do is respond to what's being said, jump around between the different questions, ask more sub-questions, and generally have more freedom to explore its way to the same result.
Now, with some of our other types of audits that don't have that scoring system, we allow it more freedom, and you can tell it's a more natural pattern for the agent. Of course, enterprises are going to face similar tradeoffs: there are some things they just need agents to do in a very prescriptive, rote sort of way, and yet it remains likely that we will discover even more opportunities when we can put agents in contexts where they have a little more freedom to flex.
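For what it's worth, here's a minimal sketch of the two interview shapes I'm describing: a constrained, fixed-sequence flow that keeps answers scoreable, versus a freer, exploratory flow. The ask_model function is a hypothetical stand-in for whatever LLM or voice backend you'd actually use; this isn't our real implementation, just the shape of the constraint.

```python
# A minimal sketch of the two interview styles described above: a constrained,
# fixed-sequence flow (so answers can be scored consistently) versus a freer,
# exploratory flow. `ask_model` is a hypothetical callable standing in for any
# LLM or voice-agent backend; nothing here reflects a real production system.

from typing import Callable, Dict, List

QUESTIONS = [
    "Which workflows consume the most manual effort today?",
    "Where do handoffs between teams break down?",
    "What data would an agent need to act on those workflows?",
]

def constrained_interview(ask_model: Callable[[str], str]) -> Dict[str, str]:
    """Ask every question in a fixed order and record one answer per question,
    so a downstream scoring rubric always sees the same structure."""
    answers: Dict[str, str] = {}
    for question in QUESTIONS:
        # The agent is not allowed to reorder, skip, or invent follow-ups here.
        answers[question] = ask_model(question)
    return answers

def exploratory_interview(ask_model: Callable[[str], str], max_turns: int = 6) -> List[str]:
    """Let the agent follow the conversation wherever it leads, up to a turn cap.
    Useful when there is no rigid rubric and richer context is the goal."""
    transcript: List[str] = []
    prompt = "Start by asking about the team's biggest operational bottleneck."
    for _ in range(max_turns):
        reply = ask_model(prompt)
        transcript.append(reply)
        # Feed the reply back so the agent can choose its own follow-up question.
        prompt = f"Given this answer, ask the most useful follow-up: {reply}"
    return transcript
```

The point is simply that the scored flow pins the question order down, while the exploratory flow hands the agent the conversational steering wheel.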
In any case, the story is exactly as the KPMG press release sums it up: AI agents are moving beyond experimentation, and leaders are preparing for competitive transformation. For now, that's going to do it for today's AI Daily Brief. Appreciate you listening or watching, as always. And until next time, peace.