The GPT store was initially seen as a potential game-changer, akin to the Apple Store, but its impact diminished over time. While custom GPTs proved useful for personal workflows, the store aspect for sharing and monetization failed to gain traction. It became more about simplifying repetitive tasks rather than creating a new business model.
OpenAI's Sora preview in February 2024 showcased groundbreaking video generation capabilities, causing immediate industry disruption. Hollywood mogul Tyler Perry even paused an $800 million studio expansion after seeing Sora. Although Sora wasn't released until late 2024, its preview set the tone for significant advancements in video generation throughout the year.
Microsoft's acquisition of Inflection in March 2024 was a strategic hedge against its reliance on OpenAI. Inflection's IP, team, and leader Mustafa Suleyman (a DeepMind co-founder) bolstered Microsoft's internal AI capabilities. This move also highlighted a trend of 'non-acquisition acquisitions,' likely driven by antitrust considerations.
Meta's Llama 3 launch in April 2024 marked a significant milestone in narrowing the gap between open-source and closed-source AI models. By summer, Llama 3.1 405b achieved GPT-4 class performance, demonstrating that open-source AI could rival proprietary models, reshaping the AI landscape.
Ilya Sutskever's departure from OpenAI in May 2024 was significant because it signaled a shift in the frontier AI lab approach. He later founded Safe Superintelligence, focusing solely on achieving superintelligence without a business model. This departure also marked the beginning of a trend of OpenAI executives leaving throughout the year.
NVIDIA's rise to become the world's most valuable company in June 2024 underscored the massive scale of AI infrastructure buildout. This milestone highlighted NVIDIA's dominance in AI hardware and set the stage for debates about ROI and sustainability in AI investments later in the year.
In July 2024, the key debate centered on whether AI investments were yielding sufficient returns. Reports from Sequoia and Goldman Sachs questioned the ROI of the $600 billion spent on AI infrastructure. While these reports sparked discussions, they were seen as less negative upon deeper analysis, with Goldman Sachs' chief economist's comments particularly criticized.
California's SB 1047 legislation in August 2024 sparked national debate due to its focus on theoretical AI risks (X-risk) rather than immediate challenges. Critics argued it could harm California's competitiveness. Although it passed, Governor Gavin Newsom vetoed it, reflecting broader shifts in AI discourse from safety to national security and leadership.
OpenAI's o1 preview in September 2024 introduced a new class of reasoning models designed to 'think longer' before responding. This approach addressed emerging concerns about a plateau in AI scaling methods, setting the stage for future innovations in AI reasoning and problem-solving.
Notebook LM's recognition in October 2024 was significant due to its innovative audio overviews, which created personalized verbal discussions from uploaded materials. This feature, highlighted by former OpenAI leader Andrej Karpathy, marked a new frontier in LLM product formats, sparking widespread interest and discussion.
The US presidential election in November 2024 had a major impact on AI policy, with President-elect Trump appointing an AI czar to oversee AI initiatives. Questions remained about whether Biden's AI executive order would be repealed and what new policies might emerge, making AI a central issue in national discourse.
In December 2024, Ilya Sutskever declared the 'pre-training era' over, highlighting that AI labs were no longer seeing significant gains from traditional training methods. This marked a turning point, sparking renewed experimentation and exploration of alternative approaches like world models for achieving AGI.
Today, we are recapping 2024 by going month by month to see what the most important story in AI was each time. The AI Daily Brief is a daily podcast and video about the most important news and discussions in AI. To join the conversation, follow the Discord link in our show notes. All right, friends, as we close out this big 2024, I thought a fun way to recap the year would be to go month by month looking at what I think the most important story was each month.
To do this, I went back, obviously, through the AI Daily Brief episodes. I did look a little bit at the download numbers, but that wasn't the main criteria. It really is instead a combination of what seemed to be the most important thing at the time, as well as what has over time proven to be the most important thing.
Interestingly, where we kick off felt very important then and feels much less so now, and that is the launch of the GPT store. This was, of course, announced in the fall of 2023, but we didn't have access to it until January of 2024. To many, it seemed like we were headed to the next version of the Apple store. In point of fact, the way it played out, while custom GPTs were and are extremely useful on a personal use kind of level, the store aspect of them and sharing with others has been much less so.
This has not turned into any sort of business model. And in general, the way that we, for example, at Superintelligent talk about custom GPTs for our enterprise customers is entirely about how to simplify repeated workflows. So if there is a particular prompt that you come back to over and over again, you can embed it in a custom GPT to not have to copy and paste. It's that sort of thing. And so while this was a very useful feature and is something that we've continued to use throughout the year, it obviously wasn't as big a deal as it seemed back then.
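For anyone who wants to see the pattern concretely, here is a minimal sketch of what "embedding a repeated prompt" amounts to under the hood. The model name, instructions, and helper function are illustrative assumptions, not anything from the episode; a custom GPT essentially pins standing instructions so the user never re-pastes them.

```python
# A sketch of the reusable-prompt pattern that custom GPTs package up:
# standing instructions are fixed once, and only the new input varies.
# The instructions text and default model name here are assumptions.

REUSABLE_INSTRUCTIONS = (
    "You are a meeting-notes summarizer. Return exactly three sections: "
    "decisions, action items, and open questions."
)

def build_request(user_input: str, model: str = "gpt-4o") -> dict:
    """Build a chat-completions-style payload with the standing
    instructions baked in as the system message."""
    return {
        "model": model,
        "messages": [
            {"role": "system", "content": REUSABLE_INSTRUCTIONS},
            {"role": "user", "content": user_input},
        ],
    }

# The user only ever supplies the fresh material:
req = build_request("Raw notes from today's standup...")
print(req["messages"][0]["role"])
```

The point is simply that the boilerplate lives in one place; each use of the "GPT" is just a new user message against the same fixed system prompt.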
And this is part and parcel of what happens in a very fast-moving industry. Sometimes things that seem extremely significant are not, and sometimes things that don't even warrant notice become extremely important later on, as we'll see in just a minute. OpenAI, however, gets the nod again in February. There had been kind of a question about whether OpenAI was losing its edge, but then it previewed Sora, which at the time completely blew everyone out of the water in terms of its video generation capabilities.
Unlike most OpenAI products, it was not available right away. And in fact, as we know now, it would take the entire year before we actually got access to Sora. Still, it was so significant and the jump in capability so profound that it had immediate ripple effects. One very notable example, Hollywood mogul Tyler Perry had been building out an $800 million studio and straight up put it on hold after seeing a preview of Sora.
As we know, video generation would be a key theme throughout the year, but it would be other companies like Runway, Luma Labs, and Pika that were the standard bearers until Sora, and then, just after it, Google's Veo 2 finally became available in December. Still, it set the tone that there was going to be a lot of disruption this year, and on that front, 2024 did not disappoint. In March, I believe that the most important story was Microsoft's surprise acquisition slash not-exactly-acquisition of Inflection.
Basically, Inflection, which had raised more than a billion dollars less than a year before, effectively sold all of its IP and most of its team to Microsoft. And what it really was, was a wholesale hedge against Microsoft's over-reliance on OpenAI, following the absolute debacle of the OpenAI board from November of 2023. In many ways, that moment pretty much set the template for what would happen with Inflection during the time when it seemed like Sam Altman would be bringing over most of OpenAI and building out from within Microsoft.
As we know, that didn't happen. Sam Altman went back to OpenAI. But Microsoft clearly felt like it still needed to be doing more internally. And so with Inflection, they not only got tech and talent, they got a leader in Mustafa Suleyman, who had co-founded DeepMind before starting Inflection.
In addition to this being a big deal because of what it said about the Microsoft and OpenAI relationship, which has been a constant point of discussion throughout the year, it also started a trend of these non-acquisition acquisition deals, which seems to be about an antitrust consideration. Although how much it will actually prevent those inquiries is not exactly clear.
In April, my pick for the biggest story is the launch of Meta's Llama 3. And really, this wouldn't come to full fruition until Llama 3.1 405B came out over the summer. But basically, Llama 3 showed just how close open source AI was getting to closed source AI.
And by the time we got 405B over the summer, that gap had been almost entirely closed. Llama 3.1 405B was a GPT-4 class model. That closing of the gap between open and closed source, I think, is a major moment from this year. And that's why it ranks as my top story from April. In May, things started to heat up a little bit. My pick for top story, especially weighing it in terms of discussion at the time, was that OpenAI chief scientist and co-founder Ilya Sutskever finally officially left.
Now, no one had heard from Ilya for the six months following the November 2023 board shakeup. Ilya had initially been one of the firers of Sam Altman, but had then reversed course and supported bringing him back. Many speculated that he had agreed to stay quiet for six months before officially leaving, but that after Altman came back, he was always on his way out. There are a couple reasons why I think this story rates mention.
First of all, it was the beginning of a trend where a lot of OpenAI executives would be leaving throughout the year. But two, and more significant, what Ilya would resurface doing in his safe superintelligence company really represented something totally different in the frontier lab approach.
Safe Superintelligence is totally rejecting the idea of having a business model and releasing products. All they care about is getting to superintelligence, and they've raised a billion dollars to not have to deal with pesky things like a business model. I think this could be significant in the years ahead, and that's why I give the Ilya leaving OpenAI story the nod from May. Now, one thing I didn't include as my top story is Google I/O.
At the time, even though they announced a ton of things, they all felt kind of far out. There wasn't a huge new model announcement. They released a lighter weight version of Gemini 1.5 Flash and their 1.5 Pro was improved. Probably the biggest announcement at the time was the million token context window.
But Google, from a brand perspective, was still really struggling. It hadn't shaken off the idea that it was moving slower than OpenAI. And what's more, a lot of its year had been spent in a narrative battle around its image generator forcing diversity into historical images, most notoriously embodied in the Black Nazi images.
What I think is really interesting about the fact that Google I/O didn't feel like the biggest story in May was that if you look back at that blog post, 100 Things We Announced at I/O 2024, number nine was that they demoed an early prototype of audio overviews for Notebook LM, which uses a collection of uploaded materials to create a verbal discussion personalized for the user. It would be months before people really started to take notice, but when they did, they really, really did. Moving on to June, I have a tie.
The first is that Apple finally got off its duff and announced what it was going to be doing in AI. Of course, they had to rename it. It wouldn't be artificial intelligence. It would be Apple intelligence. And it seemed like it was trying to be the AI for normies, which makes a ton of sense from Apple's standpoint. They have a huge base of existing installs, and they're historically good at making complex experiences simple for the user. People were pretty enthusiastic about the idea that maybe Apple could really simplify AI for a general purpose audience.
As we know now, this vision has not come to fruition so far and has been one of the year's big disappointments. The other big thing that happened in June and really put a capstone on the market dimension of AI was that NVIDIA became the world's most valuable company, zooming up above Microsoft and Apple. NVIDIA had, of course, been on a two-year tear, but this really, really reinforced just how significant the AI build-out was.
It also, however, set us up for July, where the biggest story was actually a debate and a discussion.
At this point, it's been two summers in a row where the media jumps up and down on a narrative of AI slowing down. In 2023, it was all about ChatGPT having its first down month, which seemed to be attributable to students going on break for the summer. And this year, it was embodied in two blog posts slash reports. One was by Sequoia partner David Cahn called AI's $600 Billion Question. And the other was a report from Goldman Sachs called Gen AI: Too Much Spend, Too Little Benefit?
I spent a lot of time on these over the summer and basically argued that neither of them were as negative as they might seem. The key thrust of the Sequoia piece was that so much had been spent on this infrastructure buildout that it was going to be very hard to realize ROI from that. And the Goldman Sachs report was actually a much more diversified look at this, just with one very negative set of comments from their chief economist that I think will go down as very, very uninformed if anyone ever chooses to look back at this report, which frankly, they probably won't.
Today's episode is brought to you by Plumb.
Whether you're an operations leader, marketer, or even a non-technical founder, Plumb gives you the power of AI without the technical hassle. Get instant access to top models like GPT-4o, Claude 3.5 Sonnet, AssemblyAI, and many more. Don't let technology hold you back. Check out UsePlumb, that's Plumb with a B, for early access to the future of workflow automation. Today's episode is brought to you by Vanta. Whether you're starting or scaling your company's security program, demonstrating top-notch security practices and establishing trust is more important than ever.
Vanta automates compliance for ISO 27001, SOC 2, GDPR, and leading AI frameworks like ISO 42001 and the NIST AI Risk Management Framework, saving you time and money while helping you build customer trust. Plus, you can streamline security reviews by automating questionnaires and demonstrating your security posture with a customer-facing trust center, all powered by Vanta AI. Over 8,000 global companies like LangChain, Leela AI, and Factory AI use Vanta to demonstrate AI trust and prove security in real time.
Learn more at vanta.com slash nlw. That's vanta.com slash nlw. If there is one thing that's clear about AI in 2025, it's that the agents are coming. Vertical agents by industry, horizontal agent platforms, agents per function. If you are running a large enterprise, you will be experimenting with agents next year. And given how new this is, all of us are going to be back in pilot mode.
That's why Superintelligent is offering a new product for the beginning of this year. It's an agent readiness and opportunity audit. Over the course of a couple quick weeks, we dig in with your team to understand what type of agents make sense for you to test, what type of infrastructure support you need to be ready, and to ultimately come away with a set of actionable recommendations that get you prepared to figure out how agents can transform your business.
If you are interested in the agent readiness and opportunity audit, reach out directly to me, nlw at bsuper.ai. Put the word agent in the subject line so I know what you're talking about. And let's have you be a leader in the most dynamic part of the AI market.
By August, the conversation had moved to the policy and political dimension. California legislation SB 1047 had jumped from a state issue to a very national debate. Key national policymakers, including California's delegation to Congress and the Senate, started weighing in. Huge voices on either side shared their perspective. Ultimately, the bill passed in California but would be vetoed by California Governor Gavin Newsom.
Ultimately, the biggest challenge with it, I think, was that it spent too much time dealing with X risk and theoretical future issues rather than challenges that were right here right now. That was certainly what Democratic lawmakers who were against it pointed to in their disagreements with it.
The other piece, of course, was just a concern that this would impact California competitiveness. And another big key theme of the year, which we got to in our Long Reads end of year episode, which you might have listened to at this point, is that there was kind of a shift in general, I think, from an AI safety consideration from 2023 to a national security and American leadership consideration of AI in 2024. The debate around SB 1047 certainly exemplified that.
In September, we got OpenAI's o1 preview. o1, they said, was a new series of reasoning models for solving hard problems. And as we would learn in the not-too-distant future, the approaches that were being explored with o1, where the models were, quote, designed to spend more time thinking before responding, would actually become the centerpiece of a much larger conversation around a potential plateau in our previous scaling methodology.
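To make the "spend more time thinking" idea concrete, here is a minimal sketch of one public flavor of test-time compute: self-consistency voting, where you sample many independent reasoning chains and take the majority answer. To be clear, this is an illustration of the general concept of trading inference-time compute for accuracy; it is not a claim about how o1 actually works internally, and the toy solver below is an assumption standing in for a real model.

```python
import random
from collections import Counter

def sample_answer(question: str, rng: random.Random) -> str:
    # Stand-in for one stochastic reasoning chain. A real model would
    # generate a chain of thought here; this toy solver is right 75%
    # of the time on its one hard-coded question.
    return rng.choice(["4", "4", "4", "5"])

def think_longer(question: str, n_samples: int = 25, seed: int = 0) -> str:
    """Naive test-time compute: run many independent chains and return
    the majority answer (self-consistency voting). More samples means
    more 'thinking' per question at inference time."""
    rng = random.Random(seed)
    votes = Counter(sample_answer(question, rng) for _ in range(n_samples))
    return votes.most_common(1)[0][0]

print(think_longer("What is 2 + 2?"))
```

The design point is that nothing about the underlying solver changed; the only knob turned is how much compute is spent per question at inference time, which is the broad direction the o1 conversation opened up.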
I'm actually going to save much of the discussion of that for a little bit later, but that o1 preview, I think, was the big story from September. Moving on to October, there were two things that I thought were very significant. The first was the recognition of the power of Notebook LM and specifically its audio overviews.
Former OpenAI leader Andrej Karpathy actually got at this at the very end of September when he tweeted, It's possible that Notebook LM podcast episode generation is touching on a whole new territory of highly compelling LLM product formats. Feels reminiscent of ChatGPT. I talk extensively about what a big deal Notebook LM was and why in my Top Products episode, but suffice it to say that there was a ton of conversation around Notebook LM in October.
The other really important story though, and this one I think is certainly more forward-looking, is Anthropic's announcement of a computer use model. Basically, this is a generalized pre-agent that can actually control the mouse using some very clever strategies and points to our agentic future and how the big frontier labs are trying to get there. It wasn't so much significant because it changed how we could interact with Anthropic, although you could use it via API. It was much more a preview of what was coming down the line.
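For a sense of what "computer use" looks like at the API level, here is a sketch of the shape of such a request. The tool type string, parameter names, and model name below are assumptions drawn from Anthropic's public beta documentation at the time, not details from the episode; the sketch only builds the payload rather than sending it.

```python
# A sketch of an Anthropic computer-use-style request payload.
# The tool type "computer_20241022", the display parameters, and the
# model name are assumptions based on the public beta docs; verify
# against current documentation before relying on them.

def build_computer_use_request(instruction: str) -> dict:
    """Build (but do not send) a messages-API payload that gives the
    model a virtual screen, mouse, and keyboard tool."""
    return {
        "model": "claude-3-5-sonnet-20241022",
        "max_tokens": 1024,
        "tools": [{
            "type": "computer_20241022",  # screen/mouse/keyboard tool
            "name": "computer",
            "display_width_px": 1280,
            "display_height_px": 800,
        }],
        "messages": [{"role": "user", "content": instruction}],
    }

req = build_computer_use_request("Open the browser and search for flights.")
print(req["tools"][0]["name"])
```

The interesting design choice is that the "agent" is just a loop: the model returns proposed clicks and keystrokes, the caller executes them and sends back a screenshot, and the model continues, which is why it reads as a preview of the agentic future rather than a finished product.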
November might have been the easiest call of all of these when it comes to the biggest story. It was kind of the biggest story in every area, and AI was no exception. And that, of course, was what the results of the US presidential election were going to mean for AI.
Even now, a month and a half later, there are still big questions around this. We don't know if President Trump will actually repeal Biden's AI executive order, and if he does, will anything replace it? We do know, however, that AI is clearly a very important issue. We know that because Trump named an AI czar, a White House-level appointee who would have overall oversight around AI policy in the United States. Exactly what an AI czar does is a question for 2025, but there's no doubt that this was the big story from November.
Finally, as we round out the year, we hearken back to September a little bit, but really understand where that conversation came to. It turns out that o1 wasn't just a new class of reasoning models, but was an answer to a challenge that had clearly emerged across all the labs, which is that they simply weren't getting as much out of their training runs as they had in the past. A fine point was put on this when Ilya Sutskever made a very rare conference appearance to argue that we had achieved peak data and to declare that the pre-training era is over.
Where I think this leaves us heading into next year is this big open question around where the next great developments are going to come from. Frankly, that's pretty exciting. You can almost feel the reemergence of experimentation and excitement around some questions that for a time were assumed to be heading in a clear direction. We've got all this exploration now around world models and whether they're a better path to AGI. And I think 2025 is going to start with a lot of that exploration built in.
Overall, a pretty significant year, of course, in the history of generative AI. Can't wait to cover 2025 as it happens with you. Appreciate you listening. As always, thanks for a wonderful year here at the AI Daily Brief. And until next time, peace.