Today on the AI Daily Brief, how people are actually using AI today. The AI Daily Brief is a daily podcast and video about the most important news and discussions in AI. Thanks to today's sponsors, Plumb, Vanta, and Superintelligent. And to get an ad-free version of the AI Daily Brief, go to patreon.com slash AI Daily Brief.
Hello, friends. Quick notes before we dive into today's show. The main episode got long and it's a fun exploratory topic, so no headlines today. We will be back with them tomorrow. As I've been mentioning a couple times, for those of you who are looking for an ad-free version of the AI Daily Brief, you can now head on over to patreon.com slash AI Daily Brief to find that. Lastly, something I want to gauge people's perspective on.
The AI Daily Brief community has been hugely supportive of and important in the superintelligence story. We're considering reserving part of our current round for investors from this community. However, I'm trying to gauge interest. If this is something you think we should explore, send me a note at nlw at besuper.ai with super in the title. Thanks in advance for your perspective. And with that, let's get into today's show.
Welcome back to the AI Daily Brief. One of the questions that has lurked around AI ever since ChatGPT launched is, all right, but what are people actually using this stuff for? Whenever I do episodes on subjects like how I actually use AI, or what my stack is, there's always interest there. Also, if you know Lenny's podcast, they just announced a new podcast from their network called How I AI that's explicitly about that. Point is that people are really interested in how others are using and getting value out of AI, which makes sense.
This is a totally new space, and as much as people experiment first with the sort of obvious things, there's a sense I think among everyone who uses these tools that we are barely scratching the surface and that the best is yet to come.
And so it's always valuable, even if it's just for a very short window and a very short snapshot in time, when someone does some study or looks at data around how people are actually using AI. Today, we're looking at a couple examples of that that have recently come out, and we're kicking it off with a new report from Anthropic, specifically from their economic index, that they called AI's impact on software development. Now, previous editions of this research had looked at AI usage across different occupations and educational fields,
But this one zeroes in on coding, which, as you know, is one of the major, if not the major, use cases for Claude right now. Indeed, they wrote that there were many more conversations with Claude about computer-related tasks than one would predict from the number of people working in relevant jobs.
And so they wanted to dig a little bit deeper. And what they did was take 500,000, so a half million coding-related interactions across Claude.ai, as well as Claude Code, which is their specialist coding agent, to see how people were interacting with code in this way. After all that, they found three patterns. The first was that the coding agent is used for more automation. And this is really important and will justify for some the use of that word agent. And
Anthropic writes, 79% of conversations on Claude Code were identified as automation, in other words, where AI directly performs tasks, rather than augmentation, where AI collaborates with and enhances human capabilities. That compares to 49% of Claude.ai conversations.
meaning that the people who were using Claude Code were using it as an agent, not just as a coding assistant. The next pattern that they found was that AI coders are generally building user-facing apps. The most common programming languages used in their dataset were JavaScript and HTML, and within coding uses, user interface and user experience tasks were among the top. They note that, quote, this suggests that jobs that center on making simple applications and user interfaces may face disruption from AI systems sooner than those focused purely on backend work.
Now, I think there is potentially a bit of a leap there. I understand why that would be one of the conclusions, but I think another one could be that simple applications and interfaces are a lot of where vibe coders are intersecting with these tools. The natural tendency, for example, for tinkerers and new vibe coders and people who aren't engineers who are just starting to get into building things
is to prototype front ends that bring their ideas to life. To use one example, if you go check out the projects that I've created on Lovable or SoftGen, which is another application like that, that is building out a real community-centric vibe coding platform, probably something like 90% of what I've built would be in that front end or user experience part. Not because these tools can't build the full stack, they can, but because a lot of it is just me experimenting and that's what I experiment with first.
In any case, the third key pattern that Anthropic found was that startups are definitely the early adopters when it comes to Claude Code. While startups represent a much smaller percentage of overall coding work than enterprises, they represented 33% of conversations on Claude Code, as compared to only 13% of conversations on Claude Code that were identified as enterprise-relevant. Honestly, no big surprise there. And in fact, one of the things that we have found most notable at Superintelligent is how much resistance enterprise engineering departments often have to coding tools. I think there are a number of reasons why that could be: some of which you might consider good reasons, i.e. the tools are designed for individual use rather than big enterprise collaboration, and some less legitimate ones, like people just basically not wanting to change their existing habits.
Now, there were some other interesting findings in here that get a little bit more granular. In an attempt to dig in on this vibe coding use case, Anthropic categorized the coding work into different use cases. These include everything from software architecture and code design to UI/UX component development to debugging and performance optimization to web and mobile app development,
and so on and so forth. Anthropic noted that two of the top five coding tasks were UI development and web and mobile app development, representing 12% and 8% of conversations respectively. They wrote, "...such tasks increasingly lend themselves to a phenomenon known as vibe coding, where developers of varying levels of experience describe their desired outcomes in natural language and let AI take the wheel on implementation details."
The company's conclusion was that these lighter-weight programming tasks of making simple apps and interfaces could be the first to be disrupted by AI. They wrote, as AI increasingly handles component creation and styling tasks, these developers might shift towards higher-level design and user experience work. Now, looking at the populations using AI coding tools,
Anthropic found that around 30% of programming conversations were related to personal projects, as compared to around 25%, which were related to enterprise work. Now remember, they're breaking this down between Claude Code conversations and Claude.ai conversations, and startups had a much wider gulf between these. 13% of Claude.ai usage related to programming came from the startup field, while those users represented 33% of sessions with Claude Code.
Aside from recognizing that startups are the early adopters of coding agents, no surprises there, Anthropic added that, quote, uses involving students, academics, personal project builders, and tutorial and learning users collectively represent half of the interactions across both platforms. In other words, individuals, not just businesses, are significant adopters of coding assistance tools. These adoption patterns mirror past technology shifts where startups use new tools for competitive advantage,
while established organizations move more cautiously and often have detailed security checks in place before adopting new tools company-wide. AI's general purpose nature could accelerate this dynamic.
If AI agents provide significant productivity gains, the gap between early and late adopters could translate into substantial competitive advantages. Now, as I mentioned, I don't think this is just a security question. I also think this is design limitations on the current crop of coding assistants and vibe coding tools where they're just not fully set up for the enterprise yet. That said, I've seen numerous startups pop up who are trying to specifically bring vibe coding to the enterprise, and I think that that will happen sooner rather than later.
One more interesting finding was that humans are far more likely to remain in the loop for coding tasks than they are for non-coding tasks. In Anthropic's earlier analysis of non-coding tasks, they found that just 3% of conversations involved automation with human feedback loops.
For coding, the number was 21% of all coding conversations. There was also a corresponding drop in what Anthropic called directive automation, in other words, telling the AI to complete a task and coming back when it's finished. This could reflect the iterative nature of coding, with the need to come back and refine code to make everything work properly, or it could be a reflection of AI's current limitations in being able to one-shot complex software with no further feedback or additional steps.
Again, I will say in my use of tools like SoftGen AI or Lovable, certainly one of the things that makes me more or less inclined towards a particular platform is how good it is at interpreting my prompts and, more specifically, fixing things after I realized that I hadn't prompted it that well in the first place and had to go back and explain myself better. Now, when it comes to the community response to Anthropic's findings...
Broadly speaking, this passes the sniff test for people. University of Delaware professor Harry Wang writes, My personal experience aligns closely with the three patterns they identified. He also added, although data from Cursor, Windsurf, and Cline were not included, I think incorporating them would further reinforce these findings. All About AI wrote, Turns out AI is doing a lot of heavy lifting in software development, but devs are still in the loop for the big stuff. Today's episode is brought to you by Vanta.
Vanta is a trust management platform that helps businesses automate security and compliance, enabling them to demonstrate strong security practices and scale. In today's business landscape, businesses can't just claim security, they have to prove it. Achieving compliance with a framework like SOC 2, ISO 27001, HIPAA, GDPR, and more is how businesses can demonstrate strong security practices.
And we see how much this matters every time we connect enterprises with agent services providers at Superintelligent. Many of these compliance frameworks are simply not negotiable for enterprises.
The problem is that navigating security and compliance is time-consuming and complicated. It can take months of work and use up valuable time and resources. Vanta makes it easy and faster by automating compliance across 35-plus frameworks. It gets you audit-ready in weeks instead of months and saves you up to 85% of associated costs. In fact, a recent IDC white paper found that Vanta customers achieve $535,000 per year in benefits, and the platform pays for itself in just three months.
The proof is in the numbers. More than 10,000 global companies trust Vanta, including Atlassian, Quora, and more. For a limited time, listeners get $1,000 off at vanta.com slash nlw. That's v-a-n-t-a dot com slash nlw for $1,000 off.
Today's episode is brought to you by Superintelligent, and I am very excited today to tell you about our consultant partner program. The new Superintelligent is a platform that helps enterprises figure out which agents to adopt, and then with our marketplace, go and find the partners that can help them actually build, buy, customize, and deploy those agents.
At the core of that experience is what we call our agent readiness audits. We deploy a set of voice agents which can interview people across your team to uncover where agents are going to be most effective in driving real business value. From there, we make a set of recommendations which can turn into RFPs on the marketplace or other sorts of change management activities that help get you ready for the new agent-powered economy.
We are finding a ton of success right now with consultants bringing the agent readiness audits to their client as a way to help them move down the funnel towards agent deployments, with the consultant playing the role of helping their client hone in on the right opportunities based on what we've recommended and helping manage the partner selection process. Basically, the audits are dramatically reducing the time to discovery for our consulting partners, and that's something we're really excited to see. If you run a firm and have clients who might be a good fit for the agent readiness audit,
reach out to agent at bsuper.ai with consultant in the title, and we'll get right back to you with more on the consultant partner program. Again, that's agent at bsuper.ai, and put the word consultant in the subject line. Today's episode is brought to you by Plumb. If you're building agentic workflows for clients or colleagues, it's time to take another look at Plumb. Plumb is where AI experts create, deploy, manage, and monetize complex automations.
With features like one-click updates that reach all your subscribers, user-level variables for personalization, and the ability to protect your prompts and workflow IP, it's the best place to grow your AI automation practice. Serve twice the clients in half the time with Plumb. Sign up today at useplumb.com. That's U-S-E-P-L-U-M-B dot com forward slash N-L-W.
So that is all about the coding use case, but what other use cases are people actually using AI for right now? Well, next up, we have a study from the Harvard Business Review called How People Are Really Using Gen AI in 2025. Author Mark Zao-Sanders writes, A year ago, I wrote a piece about how people were really using Gen AI. The use cases split almost equally between personal and business needs, with roughly half spanning both.
The HBR editors and I felt a need to update the research. Much has happened over the past 12 months. And then he goes into all of the things that have been basically the bread and butter for this show during that time. Now, the methodology is important here, because I don't know that this will feel as definitive or pass the sniff test as much as the Anthropic study.
Mark writes, I adopted the same methodology as last year, but scoured more data (there was much more to scour) and limited the results to the past 12 months. I looked at online forums like Reddit and Quora, as well as articles that included explicit, specific applications of the technology. So basically, Mark went out and tried to look at how people were talking about using Gen AI. And he found a lot of shifts in top use cases. In 2024, his top use cases in order were generating ideas, therapy and companionship,
specific search, editing text, exploring topics of interest, fun and nonsense, troubleshooting, enhanced learning, personalized learning, and general advice. This time, number one was therapy and companionship. Number two was organizing my life. Number three was finding purpose. Number four was enhanced learning. Number five was generating code. Number six was generating ideas. Remember, that was down from number one in 2024. Number seven was fun and nonsense. Number eight was improving code. Number nine was creativity. And number 10 was healthier living.
Now, I will say right up front that the author of this is not trying to say that this is a scientific study. It's an interesting approach to looking at what people are communicating about their AI usage as a way to understand trends and patterns. There are going to be inherent limitations, and so you should take this as one piece of evidence, not the gospel truth. However, I do think that some of these are surprising.
The incredible concentration, for example, of highly personalized use cases up at the top, therapy and companionship, organizing my life, and finding purpose, all before anything having to do with business. My question would be, of course, how much this says those are the top use cases
versus these are the top use cases that people have an interest in talking about for whatever reason. Now still, even if that is the case and there's some inherent almost scaling or weighting that we need to do based on how interesting a thing is to talk about versus just do, there are still a couple of areas where the trend lines all add up. I'm thinking, of course, speaking of those anthropic results of what Mark found at number five and number eight, generating code and improving code.
Improving code was up from number 19 the year before, and generating code was up from number 47. Basically suggesting that both using coding agents and tools as an augmenter, i.e. improving code, as well as an agent, i.e. generating code, were way up, but using it as an agent and actually generating the code was up even more.
Now, zooming out from just the specific use cases, Mark also tried to group all of the top 100 use cases by themes. So the six themes he found were content creation and editing, technical assistance and troubleshooting, personal and professional support, learning and education, creativity and recreation, and research analysis and decision making. The biggest mover, as I mentioned, was that personal and professional support almost doubled over the year, jumping from 17% to 31% of top 100 use cases.
Now, it's a little bit outside the scope of this show, which is obviously more interested in general in the business use case for AI, but it's still worth sharing what Mark wrote about the therapy use case. He says, "...many posters talked about how therapy with an AI model was helping them process grief or trauma. Three advantages to AI-based therapy came across clearly. It's available 24-7, it's relatively inexpensive, even free to use in some cases, and it comes without the prospect of judgment from another human being. The AI as therapy phenomenon has also been noticed in China."
And although the debate about the full potential of computerized therapy is ongoing, recent research offers a reassuring perspective, that AI-driven therapeutic interventions have reached a level of sophistication such that they're indistinguishable from human-written therapeutic responses. Now, interestingly, Mark connects the dots between that and a broader trend, which is just guidance and advice moving from human to AI. He continues,
A growing number of professional services are now being partially delivered by GenAI, from therapy and medical advice to legal counsel, tax guidance, and software development. He pointed to EY as an example of an organization where this trend is underway, including the deployment of 150 AI agents specifically being used for tax-related tasks. Now, this mirrors a recent long-form piece in Business Insider called Inside the AI Boom That's Transforming How Consultants Work at McKinsey, BCG, and Deloitte.
The thrust of the piece is that consulting firms are not only selling AI consulting services, but in some ways are the fastest enterprise adopters of these tools as well. The article gives a bunch of very specific examples. McKinsey, for example, has an in-house generative AI chatbot called Lilly. Lilly is connected to McKinsey's entire body of intellectual property with over 100,000 documents and interviews, and usage is significant.
McKinsey partners told BI that over 70% of the firm's 45,000 employees use Lilly now, and those who do use it about 17 times a week. They're using it for research, summarizing documents, analyzing data, and brainstorming. In a case study, they found that workers saved 30% of their time using the tool. This mirrors what KPMG found in their recent quarterly pulse survey as well, where they saw a major jump in daily usage of AI productivity tools,
from 22% in Q4 of last year to 58% in Q1 of this year. KPMG's pulse survey interviews over 100 senior leaders at companies with a billion dollars or more of revenue. Now, these types of tools aren't the only thing that consultants have access to. There are also more discrete tools, like tools for deck and PowerPoint presentation creation. And I think overall, all of this tells the story of just a maturation of usage.
Now, of course, we have a front row seat to a lot of this at Superintelligent as well. Day in, day out, we're doing audits that map opportunities and see how companies are using AI. And that gives us a pretty good perspective on where companies are and aren't using these tools. A lot of the places that they are using these tools will not surprise you.
SDR agents are very popular, as are content creation agents and customer service agents. These are the areas of external-facing work where people are most confident in agents right now. Another area where agents are being used a lot is internal work, so internal knowledge management, helping employees get answers to their questions. And one of the common threads here is that these are important areas of work, but comparatively low stakes compared to, for example, mission-critical product or operational work. Where companies aren't using AI, and specifically agents, yet is in the parts of their business where it's absolutely mission-critical to get it right every time. So for example, a lot of finance use cases are being held back.
And even some of those advisory use cases are limited because you just can't deal with 97% success. It's got to be closer to 100. We talked about this recently in the context of Anthropic CEO Dario Amodei's essay about the urgency of interpretability, where he was making the point that interpretability is not just about societal alignment problems, but also about business use cases. Overall, when you take a step back, what's clear is that AI usage is increasing. It's increasing in breadth, and it's increasing in depth.
The people who are using it are using it more than ever, and there are more of those people than there were before. Honestly, if you want to really look at what the Harvard research suggests, it's not really about one use case over another. It's about the fact that there are use cases for everything. These uses will change, but I think that the trend lines are pretty clear. That is going to do it for today's AI Daily Brief. Appreciate you listening or watching, as always. And until next time, peace.