Hello, friends. Three quick notes before we dive into today's show. First up, as I've mentioned a couple of times, for those of you who are looking for an ad-free version of the AI Daily Brief, you can now head on over to patreon.com slash ai daily brief to find that. Second, next week is spring break, so you will still have shows, but they will be a little bit different than normal. Lastly, something I want to gauge people's perspective on.
The AI Daily Brief community has been hugely supportive of and important in the Superintelligent story. We're considering reserving part of our current round for investors from this community. However, I'm trying to gauge interest. If this is something you think we should explore, send me a note at nlw at besuper.ai with super in the title. Thanks in advance for your perspective. And with that, let's get into today's show.
Welcome back to the AI Daily Brief Headlines Edition, all the daily AI news you need in around five minutes. We kick off today with a story that seems small but might be an even bigger deal than it feels at first. The TLDR is that ChatGPT's memory just got a big upgrade. The chatbot will now remember every conversation automatically. Announcing the feature, OpenAI wrote, starting today, memory in ChatGPT can now reference all of your past chats to provide more personalized responses, drawing on your preferences and interests to make it even more helpful for writing, getting advice, learning, and beyond.
Now, up until now, ChatGPT could remember some of your preferences in memory, but the function was pretty limited. Users would need to prompt the chatbot to commit personal information to memory. Kind of useful, but mostly for storing basic facts like whether you're a vegetarian or have allergies, or what your favorite outdoor activities are, or whatever relevant context applies to your common types of requests and prompts. OpenAI said that this new feature would allow ChatGPT to pick up where you left off when you open a new chat window and allow interactions to slowly become tailored to the user.
Those who have already opted out of the memory function will have that setting carried over to the new feature, and the two versions of memory can be toggled independently. AI investor and educator Allie K. Miller believes that this feature will be extremely important moving forward. She wrote, The best feature in ChatGPT is memory. As models and features get commoditized in AI, it's going to come down to personalization, collaboration, and network effects. Imagine what happens when your memories can be combined.
Now, for Allie, this is all about how an otherwise commoditized technology finds a new moat in a very different type of environment. And I think that for investors in ChatGPT, for example, that's absolutely true. For those who are spending any time around agent conversations, memory is something that is taking on a bigger and more important place. OpenAI researcher Noam Brown wrote, Memory isn't just another product feature. It signals a shift from episodic interactions, think a call center, to evolving ones, more like a colleague or friend. Still a lot of research to do, but it's a step towards fundamentally changing how we interact with LLMs.
Think about it. We are trying, with agents, to build the equivalent of digital employees. And memory and context and understanding is one of those areas where there is still a huge chasm between a human coworker and an agentic coworker. An investment in memory is directionally something that's really important for that evolution. Swix from Latent Space and the AI Engineering Summit writes, now seems like a particularly opportune time to ask, what are the top memory research papers? Are evals for memory different from long context? Do we want superhuman memory that never forgets, or human-like memory? Now on that last front, Professor Ethan Mollick had an interesting point here as well. He wrote, Boundaries are good.
And I actually do think that it'll be interesting from a user interface perspective if it becomes a challenge to have to basically prompt ChatGPT not to remember certain things. Point being that not everything I ask ChatGPT benefits from the context of previous conversations. Now, again, that's all solvable with user experience. And ultimately, I think the trajectory is extremely important, but it's still fascinating to see how these things will play out.
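For the technically curious, here is a rough sketch of what cross-chat memory with a user-facing opt-out could look like mechanically. To be clear, OpenAI hasn't published how its memory feature is actually implemented, so this is just an illustrative toy in Python: every name in it is hypothetical, and it uses naive keyword retrieval where a production system would presumably use embeddings and semantic search.

```python
# Illustrative toy only: OpenAI has not published its memory implementation,
# and all names here are hypothetical.
from dataclasses import dataclass, field

@dataclass
class MemoryStore:
    enabled: bool = True                       # the user-facing memory toggle
    chats: list[list[str]] = field(default_factory=list)

    def save_chat(self, messages: list[str]) -> None:
        # Opted-out users accumulate no history at all.
        if self.enabled:
            self.chats.append(messages)

    def relevant_context(self, prompt: str, limit: int = 3) -> list[str]:
        """Score past chats by keyword overlap with the new prompt and
        return the best matches; a real system would likely use embeddings."""
        if not self.enabled:
            return []
        words = set(prompt.lower().split())
        scored = [
            (len(words & set(" ".join(chat).lower().split())), chat)
            for chat in self.chats
        ]
        scored.sort(key=lambda pair: pair[0], reverse=True)
        return [" ".join(chat) for score, chat in scored[:limit] if score > 0]

# Remembered snippets get prepended to the model's context for the new chat.
store = MemoryStore()
store.save_chat(["user: I'm a vegetarian", "assistant: Noted!"])
print(store.relevant_context("suggest a vegetarian dinner recipe"))
```

The point of the sketch is that remembered material gets selectively pulled back into the model's context window, which is also why, per Swix's question, evals for memory aren't necessarily the same thing as evals for long context.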
Next up, Amazon CEO Andy Jassy believes that AI is on the cusp of reinventing every consumer experience, and in his annual letter to shareholders he said that AI is critical to the company's next phase of growth. He writes, if your customer experiences aren't planning to leverage these intelligent models, their ability to query giant corpuses of data and quickly find your needle in a haystack, their ability to keep getting smarter with more feedback and data, and their future agentic capabilities, you will not be competitive.
The letter also touched on the scale of investment required to take advantage of the AI wave, commenting, We continue to believe that AI is a once-in-a-lifetime reinvention of everything we know. The demand is unlike anything we've seen before, and our customers, shareholders, and businesses will be well-served by our investing aggressively now. During his fourth-quarter earnings call in February, Jassy announced plans to spend $100 billion in capital expenditures this year, with the vast majority going to building out AI capacity at AWS.
This is the largest spend of the four big tech giants building AI infrastructure at scale this year, even though Amazon is technically the smallest of the four.
Looking at the growth numbers, it's easy to see why. Jassy said that Amazon's AI revenue is currently growing at triple-digit year-on-year percentages at a multi-billion dollar run rate. Basically, Jassy's key point was that AI capex should be seen as a leading indicator for long-term profit potential rather than as a red flag for investors. He wrote, In AWS, the faster demand grows, the more data centers, chips, and hardware we need to procure, and AI chips are much more expensive than CPU chips.
We spend this capital up front, even though these assets are useful for many years. He also noted that the advanced AI capabilities being discussed, quote, won't all happen in a year or two, but won't take 10 either. It's moving faster than almost anything technology has ever seen.
Finally today, former OpenAI CTO Mira Murati's new company is getting close to closing their seed funding round, and it could be one of the largest in history. Business Insider reports that Thinking Machines Lab is raising upwards of $2 billion in venture funding. That's twice as much as reported in February, when BI said the company would seek $1 billion at a $9 billion valuation.
It's not clear how much the valuation has increased during the competitive round, but sources say the new number is at least $10 billion. Although, for those of you doing math out there, if it was $1 billion raised at a $9 billion post-money valuation, then $2 billion raised would inherently be a $10 billion post. In any case, for a comparison point, fellow OpenAI alum Ilya Sutskever raised $1 billion at a $5 billion valuation late last year for his company Safe Superintelligence.
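For clarity, here's that arithmetic spelled out, on the assumption, which the reporting doesn't confirm, that both valuation figures are post-money:

$$\text{pre-money} = \text{post-money} - \text{raise} = \$9\text{B} - \$1\text{B} = \$8\text{B}$$

$$\text{new post-money} = \text{pre-money} + \text{new raise} = \$8\text{B} + \$2\text{B} = \$10\text{B}$$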
Murati's company emerged from stealth two months ago with very few details. Thinking Machines Lab said they would be working to make, quote, AI systems more widely understood, customizable, and generally capable. And while the company doesn't have a product or a publicly known roadmap, they do have an extremely capable staff. Their employees include top researchers who previously worked at Meta, DeepMind, and of course, OpenAI. The company recently added two more ex-OpenAI staff to the mix, hiring former chief research officer Bob McGrew and researcher Alec Radford as advisors.
At least one Twitter user, Y Squander, assumed that Mira's pitch was pretty simple: OpenAI, but drop the Altman. That's going to do it for today's AI Daily Brief Headlines edition. Next up, the main episode.
Sign up today at useplum.com. That's plum with a B forward slash NLW.
Hey, listeners, are you tasked with the safe deployment and use of trustworthy AI? KPMG has a first-of-its-kind AI Risk and Controls Guide, which provides a structured approach for organizations to begin identifying AI risks and design controls to mitigate threats.
What makes KPMG's AI Risk and Controls Guide different is that it outlines practical control considerations to help businesses manage risks and accelerate value. To learn more, go to www.kpmg.us slash AI Guide. That's www.kpmg.us slash AI Guide. Today's episode is brought to you by Vanta.
Vanta is a trust management platform that helps businesses automate security and compliance, enabling them to demonstrate strong security practices and scale. In today's business landscape, businesses can't just claim security, they have to prove it. Achieving compliance with frameworks like SOC 2, ISO 27001, HIPAA, GDPR, and more is how businesses can demonstrate strong security practices.
And we see how much this matters every time we connect enterprises with agent services providers at Superintelligent. Many of these compliance frameworks are simply not negotiable for enterprises.
The problem is that navigating security and compliance is time-consuming and complicated. It can take months of work and use up valuable time and resources. Vanta makes it easier and faster by automating compliance across 35-plus frameworks. It gets you audit-ready in weeks instead of months and saves you up to 85% of associated costs. In fact, a recent IDC white paper found that Vanta customers achieve $535,000 per year in benefits, and the platform pays for itself in just three months.
The proof is in the numbers. More than 10,000 global companies trust Vanta, including Atlassian, Quora, and more. For a limited time, listeners get $1,000 off at vanta.com slash nlw. That's v-a-n-t-a dot com slash nlw for $1,000 off.
Welcome back to the AI Daily Brief. Today we are talking about Stanford's annual AI Index. This is a report that comes from Stanford's Institute for Human-Centered Artificial Intelligence, and it has been coming out ever since 2017. Now, this year's report clocks in at a whopping 456 pages, making it one of the most comprehensive reports on how AI is progressing. And given that I have such frequent gripes around the lag time on studies that we end up having to share on the show, you may wonder why, all the way into April of 2025, it's worth taking the time to actually understand a report that's about 2024, which is, of course, ancient history in AI time. Well, I think the thing is, unlike point-in-time statistical explorations that are basically a snapshot of what people were thinking in June of last year or something like that, this is a much more comprehensive look at the patterns and trends that changed over the course of the entire year.
There's a lot more longitudinal data, in other words, that gives us trend lines that we can extrapolate into the future. And so I think it is useful in understanding, reinforcing, or perhaps challenging things that we think about how the industry has evolved and where things are headed. Now, as I said, this monster is 456 pages long, so we're not going to get into everything. Instead, we're going to look through some of the team's major conclusions. And I'll, of course, add my own personal slant on how I see them interacting with what's happening right now.
First up, in terms of their own framing, there is a very clear inflection point sort of feeling to this all, where we're moving from the realm of possibility to the realm of what's real. Yolanda Gill and Raymond Perrault, the co-directors of this index report, conclude their introduction, AI is no longer just a story of what's possible. It's a story of what's happening now and how we are collectively shaping the future of humanity. So what then were their top takeaways? First, they write, AI performance on demanding benchmarks continues to improve.
This, I will say, is firmly in the category of so obvious it doesn't really bear too much mentioning. Turns out, guys, over 2024, AI got better and continued to get better. Number two, reinforcing what we were just talking about, they say that AI is increasingly embedded in everyday life.
And they actually have a pretty expansive view of AI here. They're not just talking about Gen AI, although a lot of the report obviously is dominated by Gen AI. But they point to areas including transportation, with Waymo coming online in a big way, as well as medical devices in the health field, to show just how much, again, AI is moving from opportunity to action.
Number three, one that will surprise absolutely no one who is a regular listener of this show, they write, business is all in on AI, fueling record investment and usage as research continues to show strong productivity impacts.
Now, one of the things that's interesting here is that, as you'll see, one of the big themes in the report is China catching up to the U.S. But when it comes to American enterprise, it is very clear that American companies are very much out on the vanguard when it comes to AI investment. The Stanford researchers write, in 2024, U.S. private AI investment grew to $109 billion, nearly 12 times China's $9.3 billion and 24 times the UK's $4.5 billion. They also point out that more broadly, business usage is accelerating. Whereas 55% of organizations used AI in 2023, that was up to 78% in 2024.
As an aside, I don't know about you guys, but I always wonder when I hear those statistics, who the hell are the 22% of people who are just sitting on their hands? Of course, I more than most, and even you guys as listeners of this show, live in something of a bubble. But still, it's so remarkable to imagine the segments of the business world for whom this technology just has not infiltrated yet, especially given, I think, what we know about what's coming for them.
Their number four bullet, and this is one that is a big theme throughout the report, is that while the US still leads in producing top AI models, China is closing the performance gap. You know, this is interesting because this is, like I said, a longitudinal 2024 report, but where this really exploded into people's awareness was at the end of 24 and especially the beginning of 25 with the launch of DeepSeek's models, especially their reasoning model and their mobile app to go with it.
In some ways, and certainly extrapolating out from a geopolitical perspective, this is one of the biggest trends. Number five, they point to the uneven evolution of the responsible AI ecosystem. This is something we've talked about a bit on the show. After ChatGPT in 2023, there was a big explosion in interest in risks and concerns around AI, and that sort of fell off across the course of 2024.
Not everywhere. I think in this case, I'm particularly focused on the U.S. And certainly the U.S. presidential election was a pretty big distraction for any sort of actual policy getting done. But it also seemed like attitudes shifted in a pretty dramatic way, away from concern around responsible AI and more towards just an out-and-out battle for AI supremacy.
Certainly, in some ways, this wars with the trend right before it, the rise of China, given that in many policy discourses, at least, the responses to these two particular trends are at loggerheads with one another. Number six, the researchers found that global AI optimism is rising, but a lot of that is concentrated in the developing world, particularly in Asia.
Countries like China, Indonesia, and Thailand have significant majorities of their population who see AI products and services as being more beneficial than harmful. In China, that number is 83%. Indonesia, 80%. Thailand, 77%. Compare that to the United States, where just 39% see AI as more beneficial than harmful.
Now, that number is up. In the United States, the balance has gone up 4%, whereas in other developed economies like Germany, France, and Canada, the sentiment has shifted positively by an even bigger margin. But still, I think that the differential between older Western economies and these Southeast Asian economies is extremely notable and could have serious downstream effects on the way that the whole thing shakes out.
Number seven, an extremely important trend, AI becomes more efficient, affordable, and accessible. This is something that they reinforced in a chart that they shared as well. Smaller models got a lot better over the course of 2024. And additionally, the models became cheaper to use. Those two things combined meant that categories of use cases that were previously inaccessible came online. The landscape of available AI uses, in other words, has expanded as the cost has gone down and the accessibility has come up.
Despite how much we talk about the high cost of AI, given things like the high cost of training new models, as well as the high cost of inference for the reasoning models, the reality is that the cost curve has come down on these technologies way faster than on previous curves.
Now we even have the potentially artificial pressure of the Chinese models, which are coming in even cheaper, although to what extent that's just them reaping new efficiencies versus artificially lowering their prices is something skeptics have different takes on. But the point is, net-net, for people who are building on these models, there has been a huge increase in what they and we can do. And I think that that's obviously creating extremely beneficial effects for the end users of all the things that come out of that new process.
Now, as I mentioned, in addition to the takeaways at the beginning of their report, they also shared these charts, which sort of tell the story of the AI Index in a handful of visuals. You see China's models catching up as another big trend, as well as a rise in what they call problematic AI, which is AI harm-related incidents. Although I will pause here, because while there is a pretty dramatic increase in 2024, I think part of the reason that people have shifted away from some of the more horrified risk narratives is that in raw state, these numbers just aren't as big as I think people thought they were going to be. Now, in many ways, that could lead us to have a false negative and underestimate the potential challenges and risks going forward. But we had a couple of instances last year where we assumed things were going to be terrible and they just weren't. I think most notably election tampering. Everyone thought that there were going to be huge problems and AI was going to absolutely impact the U.S. presidential elections, and it just did squat when it came to that.
Another chart that the Stanford team shares tracks the rise of more useful agents. And of course, I think when we zoom ahead to the 2026 index report, looking back on 2025, this is going to be the big story of this year, but it started in 2024. I actually just had a conversation with Swix, which will come out next week, where we talked about how for a fair chunk of 2024, agent was kind of a bad word, because the gap between what people were talking about and what was actually possible was just so immense. And it wasn't until the introduction of the reasoning models that that really started to shift.
So you can see there's a pretty dramatic increase in the utility of agents even starting last year. And I think that that's clearly speeding up and changing even more now. To the surprise of no one, AI continues to be an extremely high investment category. And as we discuss constantly here, AI has immense corporate adoption. And if anything, I think that these surveys tend to underestimate those numbers.
Overall, I don't think if you dug into this, you'd find anything super revolutionary. But I think that the important takeaways are absolutely in the trend lines. Corporate adoption going to 100, agents getting more and more performant, China rapidly reaching parity with the US. These are the things that we're seeing dominate the discussion in 2025 because of how significant they are in shaping the landscape and environment in which we all operate.
Anyways, big thanks to Stanford University's Institute for Human-Centered Artificial Intelligence, HAI, for this report. And a big thanks, of course, to you guys for listening. Till next time. Peace.