The latest update introduced an interactive mode, allowing users to engage with AI hosts during audio interviews. It also includes a complete redesign with improved UX, splitting notebooks into three panels: sources, chats, and a studio panel for outputs. Additionally, a premium enterprise version was launched with enhanced security, privacy, and team collaboration features.
The interactive mode provides a better starting point for learning by allowing users to listen to a conversation about a topic and engage in it, making knowledge acquisition more accessible and engaging compared to traditional deep reading and comprehensive learning.
The premium version focuses on enterprise needs with enhanced security and privacy features, allows five times more audio overviews to be generated, and enables team-wide sharing of notebooks. The standard version does not have these work-focused enhancements.
Ilya Sutskever argues that pre-training as a scaling method has reached its limits due to peak data availability. He suggests that future AI advancements will rely on new approaches like agents, synthetic data, and inference time compute, rather than just scaling compute and data.
Sutskever believes that current models have already been trained on the entire internet, and while private or synthetic data could expand datasets, they are unlikely to introduce novel concepts or ideas. He argues that once all human thought is memorized, there is little more to learn.
Sutskever predicts that future AI models will be fundamentally agentic, capable of reasoning and carrying out tasks without human supervision. These models will be unpredictable and will understand things from limited data without getting confused, representing a significant leap from current capabilities.
Sutskever draws a parallel between AI and human evolution, noting that while the human brain stopped growing in size, humanity continued to advance. He suggests that AI will follow a similar path, with progress fueled by agentic behavior and tools on top of LLMs, rather than just scaling compute and data.
Perplexity is projecting a doubling of annualized revenue next year to $127 million, with a goal of quintupling revenue by 2026. The company is seeking to raise $500 million at a $9 billion valuation, positioning itself as a major player in AI search, despite growing competition from Google.
Grok 2.0 is three times faster, offers improved accuracy and multilingual capabilities, and includes a new Grok button on the X platform for generating context about posts. The Grok API pricing was also reduced, and the Aurora image model will be added to the API soon.
Pika 2.0 enhances user control and customization of video clips, allowing users to upload images of elements like characters and props. It also enables refining clips by swapping scene elements and tweaking prompts, making it more accessible for content creators and small campaigns.
Today on the AI Daily Brief, a big pronouncement from one of the giants of the AI field. And before that, in the headlines, Notebook LM gets new functionality and an enterprise version. The AI Daily Brief is a daily podcast and video about the most important news and discussions in AI. To join the conversation, follow the Discord link in our show notes. Welcome back to the AI Daily Brief headlines edition, all the daily AI news you need in around five minutes.
One of the most popular new products of the year, Google's Notebook LM, got a slew of new features. Although why they chose to bury this on a Friday afternoon, I have no idea. Guys, you got to scream this a little bit louder and not on a Friday. In any case, this new update includes a bunch of new features plus an enterprise version.
Probably the biggest new feature is that it's got an interactive mode. And this is exactly what it sounds like. It allows users to interact with the AI hosts in their audio interviews. After generating an audio overview, users can request to join the conversation while it's being presented, similar to the way X Spaces work. Google wrote in their blog post, "...it's like having a personal tutor or guide who listens attentively and then responds directly, drawing from the knowledge in your sources."
The feature is still experimental, with Google noting the AI-generated hosts might respond inaccurately or pause awkwardly before answering. But this super reinforces the argument that I've been making, that this is likely to be the way that people start learning new things in the future. It's not that it's a replacement for deep, comprehensive learning and reading the big, complex reports for yourself. But the ability to start by listening to a conversation about a particular topic and now engage in it is just such a better starting point for any sort of new knowledge acquisition.
This follows a previous update where users were able to guide the host and edit the content. That was an important one for content creators like myself, who in the initial instance of the audio overviews didn't really get a say in what came out on the other side. Driven by Gemini's enhanced voice-to-voice capabilities, the new feature allows for a fundamentally different experience with Notebook LM.
Aside from this new feature, Notebook LM is also getting a complete redesign with improved UX. Notebooks are now split across three panels: sources, chats, and a studio panel that contains output documents like study guides and audio overviews. Google said: "From the start, we wanted Notebook LM to be a tool that would let you move effortlessly from asking questions to reading your sources to capturing your own ideas. Today, we're rolling out a new design that makes it easier than ever to switch between those different activities in a single, unified interface."
Finally, and this one will be very exciting for many of you listeners, Google is also rolling out a premium version of the app aimed at enterprises. It introduces work-focused security and privacy, as well as allowing five times as many audio overviews to be generated. Notebooks in the premium version can be shared across a whole team. The subscription is now available for businesses, schools, and universities. Given how much we at Super are talking about Notebook LM for enterprises, this I feel like is absolutely a slam dunk.
The response has been very positive, as has most everything been around Notebook LM. The AI for Success account on X says, can someone tell Google to calm down, please? They've dropped another major update to Notebook LM. Christian7 writes, this is actually insane. Huge kudos to the Notebook LM team for nailing the interrupt and participate UX. Super hard to get this right. Have been struggling with voice product stuff the past few months.
Even Professor Ethan Mollick, who is in no way a hype-ster, writes, Google has a knack for making non-chatbot interfaces for serious work with LLMs. When I demo them, both Notebook LM and Deep Research are instantly understandable and fill real organizational needs. They represent a tiny range of AI capability, but are easy for everyone to get. So one writes research reports and the other summarizes your documents and turns them into podcasts? Got it.
Krishna writes, I don't think Google leading with UI innovation was on my bingo card, but I totally agree with this. When Anthropic released Artifacts, it felt like a new interface, but that's still very chatbot-like. Notebook LM and Deep Research are net new. Google Labs designer Jason Spielman wrote, Thanks, I think we're just seeing the beginning of a lot of new UI innovation. I mentioned in a previous post that we're in a transition period. Chatbots are just the most digestible for humans to start exploring AI, but people are now ready to try new formats.
One company that you've got to think is watching what Google's doing closely is, of course, competitor Perplexity, a company that I honestly think is the first actual potential disruptor of Google search since Google search became dominant. The company is projecting booming growth and fat margins as they seek more funds to compete in AI search. The company is reportedly looking to raise $500 million at a $9 billion valuation, a process that began in October.
According to pitch decks viewed by The Information, the company is projecting a doubling in annualized revenue next year to reach $127 million. The company further projected a quintupling of revenue by the end of 2026 as their subscription model ramps up. When the fundraising round was first reported, the lofty valuation seemed like a no-brainer. The company was scaling fast and shipping relentlessly. Their November election coverage was the toast of the industry, standing alone as the only AI company that backed their product to serve solid answers during the controversial event.
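As a back-of-the-envelope check on those figures, here is a minimal sketch. It rests on two assumptions the pitch-deck excerpt doesn't spell out: that "doubling" is measured against current annualized revenue, and "quintupling" against the projected 2025 figure.

```python
# Back-of-the-envelope arithmetic on the reported projections.
# Assumption: "doubling" is relative to current annualized revenue,
# and "quintupling" is relative to the projected 2025 figure.
projected_2025 = 127_000_000           # dollars, per the pitch decks
implied_current = projected_2025 / 2   # doubling implies ~$63.5M annualized today
projected_2026 = projected_2025 * 5    # quintupling implies ~$635M by end of 2026

print(implied_current, projected_2026)
```

If "quintupling" is instead measured against today's implied run rate, the 2026 figure would be closer to $318 million, which is why the baseline assumption matters.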
Then again, since then, we've seen rapid advancements in reasoning models, AI search, and a deep research feature from Google, suggesting that the Mountain View giant will not go down without a fight. That could put pressure on Perplexity as they push to convert free users into paying customers over the coming years. Perplexity continues to be one of the most dynamic and exciting companies in the space, but the lumbering giant that they're competing against is a lot less sleepy right now than it was just a little while ago.
Getting into the spirit of the season, xAI has revealed a new upgrade to their flagship model, Grok 2. The new model is claimed to be three times faster and offers improved accuracy, instruction following, and multilingual capabilities. Once again shipping on a Friday night, xAI also unveiled a new Grok button on the X platform.
Users can click the button to generate additional context about a post or a trending discussion. Grok API prices were also slashed for developers, with input tokens now priced at $2 per million and output tokens at $10 per million. The company also announced that their cutting-edge Aurora image model will be added to the API in the coming weeks. Grok 2.0 access is free for all X users, with subscribers getting higher usage limits.
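At those rates, per-request costs are easy to estimate. Here is a minimal sketch; the function name and the example token counts are illustrative, not from xAI's documentation.

```python
# Cost estimate at the cited Grok API rates:
# $2 per million input tokens, $10 per million output tokens.
INPUT_PRICE_PER_M = 2.00    # USD per 1M input tokens
OUTPUT_PRICE_PER_M = 10.00  # USD per 1M output tokens

def request_cost(input_tokens: int, output_tokens: int) -> float:
    """Return the USD cost of a single request at the cited rates."""
    return (input_tokens / 1_000_000) * INPUT_PRICE_PER_M \
         + (output_tokens / 1_000_000) * OUTPUT_PRICE_PER_M

# A 10,000-token prompt with a 2,000-token reply costs 4 cents:
cost = request_cost(10_000, 2_000)  # 0.02 + 0.02 = 0.04
```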
Lastly today, hot on the heels of OpenAI releasing Sora, Pika Labs have unveiled the second generation of their video model. Pika 2.0 is focused on greatly improving user control and customization of generated video clips. Users can now upload images of elements to be used in clips, such as characters, props, and settings. One interesting example showed Van Gogh's self-portrait waltzing with the Girl with a Pearl Earring. Characters and aesthetics demonstrate very good consistency across scenes. You can also adjust individual elements, such as character posing and object interactions.
Clips can now be refined if the model doesn't get it right on the first try, with users able to swap scene elements and tweak the prompts. Access to Pika 2.0 is still priced extremely aggressively, positioning it as affordable for content creators and small advertising campaigns. You can even see that in their tagline: not just for pros, for actual people.
That is going to do it for today's AI Daily Brief Headlines Edition. Next up, the main episode. Today's episode is brought to you by Vanta. Whether you're starting or scaling your company's security program, demonstrating top-notch security practices and establishing trust is more important than ever.
Vanta automates compliance for ISO 27001, SOC 2, GDPR, and leading AI frameworks like ISO 42001 and NIST AI risk management framework, saving you time and money while helping you build customer trust. Plus, you can streamline security reviews by automating questionnaires and demonstrating your security posture with a customer-facing trust center all powered by Vanta AI.
Over 8,000 global companies like Langchain, Leela AI, and Factory AI use Vanta to demonstrate AI trust and prove security in real time. Learn more at vanta.com slash nlw. That's vanta.com slash nlw. Today's episode is brought to you, as always, by Superintelligent.
Have you ever wanted an AI daily brief but totally focused on how AI relates to your company? Is your company struggling with AI adoption, either because you're getting stalled figuring out what use cases will drive value or because the AI transformation that is happening is siloed at individual teams, departments, and employees and not able to change the company as a whole? Superintelligent has developed a new custom internal podcast product that inspires your teams by sharing the best AI use cases from inside and outside your company.
Think of it as an AI Daily Brief, but just for your company's AI use cases. If you'd like to learn more, go to bsuper.ai slash partner and fill out the information request form.
I am really excited about this product, so I will personally get right back to you. Again, that's besuper.ai slash partner. Welcome back to the AI Daily Brief. One of the really interesting conversations that we've been paying attention to for the last month or so has to do with the idea of whether we've hit a plateau in LLM performance based on the current methods for training AI models.
Former OpenAI co-founder and now founder of the company Safe Superintelligence, Ilya Sutskever, made a rare appearance in Vancouver on Friday to make some fairly ground-shaking predictions about the future of AI. Speaking at the NeurIPS conference, Ilya claimed, pre-training as we know it will unquestionably end.
Let's go back and get a little bit of context for these comments before we dig into exactly what Ilya had to say. All current foundation models rely on scaling up pre-training to make progress. Basically, they throw more data and more compute at the problem to achieve the next paradigm shift in model capability. A few months ago, however, sources inside of Frontier Labs started to express concerns that pre-training had hit a scaling wall. Training runs were starting to show diminishing returns from adding more compute to training clusters and more data to training sets.
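To make the diminishing-returns point concrete, here is a sketch using the Chinchilla-style scaling law from Hoffmann et al. (2022), the standard parameterization of pre-training loss; this is background context, not something cited in the episode. Loss falls as a power law in parameter count and training tokens, so each doubling buys a smaller improvement than the last.

```python
# Chinchilla-style scaling law (Hoffmann et al., 2022): predicted
# pre-training loss as a function of parameters N and training tokens D.
# The constants below are the fits published in that paper.
def scaling_loss(N: float, D: float) -> float:
    E, A, B = 1.69, 406.4, 410.7   # irreducible loss and fit coefficients
    alpha, beta = 0.34, 0.28       # fitted exponents
    return E + A / N**alpha + B / D**beta

# Each doubling of training tokens (at a fixed 70B-parameter model) helps less:
gain_1 = scaling_loss(70e9, 0.7e12) - scaling_loss(70e9, 1.4e12)
gain_2 = scaling_loss(70e9, 1.4e12) - scaling_loss(70e9, 2.8e12)
# gain_2 < gain_1: diminishing returns from adding more data.
```

This is exactly the shape of curve that makes "just add more data" stop paying off once the cheap doublings are used up.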
And what had originally just been reporting from The Information started to get credence from big CEO appearances. At the Microsoft Ignite conference last month, CEO Satya Nadella said, we're seeing the emergence of a new scaling law. He was, of course, referring to scaling test-time compute, which is the technology that underpins OpenAI's o1 model. Google CEO Sundar Pichai at the New York Times DealBook Summit said, I think that progress is going to get harder when I look at 2025. The low-hanging fruit is gone. The hill is steeper.
Now, for OpenAI's part, they think that the new opportunity of reasoning models and test-time compute means that, as Sam Altman put it, there is no wall. But it's clear that even they have shifted their strategy: instead of simply scaling computing power and adding additional data, new approaches that allow the models to, quote unquote, think longer represent a viable, if alternative, scaling strategy.
Ilya himself weighed in on the debate and really took it to a new level, given that he had been such a long-term proponent of just throwing more compute and data at it. Ilya told Reuters, the 2010s were the age of scaling. Now we're back in the age of wonder and discovery once again. Everyone is looking for the next thing. Scaling the right thing matters now more than ever.
So what is the right thing? Well, let's go back to these comments from last week at the NeurIPS conference. Ilya seems to believe that the end of the pre-training era is coming for more fundamental reasons. He believes the industry has reached the practical limit for scaling, stating, "While compute is growing, we've achieved peak data and there'll be no more. We have to deal with the data that we have. There's only one internet." Instead, Ilya is proposing a very different pathway to achieving the next generation of AI models.
He mentioned agents, synthetic data, and inference-time compute as experiments that are already being run. When it comes to agents, Ilya's belief is that the current crop of so-called agents is extremely limited and won't necessarily evolve much further using current methods. While these current agents are an impressive first stage, they're still prone to becoming confused and require human supervision to carry out tasks correctly.
Ilya said, "Right now, the systems are not agents in any meaningful sense. They're just beginning." He says that in the future, models will be able to reason more. Once again, he said we're in the early stages, claiming that current models only replicate human intuition rather than coming up with their own novel strings of logic. Ilya gave chess-playing AI as an example, noting that the leading models were completely unpredictable to human grandmasters. He said the more a system reasons, the more unpredictable it becomes.
Zooming all the way out, Ilya gave an example from nature, which to him suggests fundamental breakthroughs in AI sophistication are possible. He noted that most mammals display the same predictable relationship between body weight and brain size. Non-human primates are slightly above this curve but scale in the same manner. Hominids, like humans and their ancestors, show a completely different relationship between body mass and brain size. As humans evolved, brain size skyrocketed in a way that was unpredictable based on comparisons with other species.
Ilya claimed, quote, this means there is a precedent for biology figuring out some kind of different scaling.
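For listeners curious about the biology behind that example: relationships like the one Ilya describes are conventionally modeled as an allometric power law, brain mass E = c * M^a, where M is body mass and the exponent a is typically fitted at roughly 0.66 to 0.75 across mammals. (That form and those exponent ranges come from the general allometry literature, not from the talk itself.) Hominid brains sit far above the curve such a fit predicts, and that departure from the shared scaling rule is the precedent Ilya is pointing to.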
Ultimately, Ilya believes that the path to superintelligence will yield drastically different capabilities from the pre-training era of AI. He said he expects to see superintelligent models that are fundamentally agentic, meaning they will be natively capable of carrying out tasks in the same way that a human can. He also believes that they will be necessarily unpredictable, saying, "We will have to be dealing with AI systems that are incredibly unpredictable. They will understand things from limited data. They will not get confused. All of the things which are really big limitations."
Importantly, these were all very generalized predictions of how AI will evolve. Ilya said, "I'm not saying how and I'm not saying when. I'm saying that it will. When all of those things come together, we will have systems of radically different qualities and properties than exist today." And this is sort of the big point: that whatever happens next, it likely looks very different than what we have today.
One of the most important points is this observation that we've reached peak data. Current models have been trained on the entire internet at this stage. And while a lot of folks jumped to say maybe there are sources of data as yet untapped, entrepreneur Ibrahim Ahmed wrote, "...the one point I somewhat disagree with is that we've tapped all data. There's immense private data that's completely untapped."
But Ilya's point is somewhat different. It seems to me like he's saying that while private and synthetic data could theoretically expand the size of the dataset, they're unlikely to contain any novel concepts or ideas. Put another way, once you've memorized the entire catalog of human thought, what more is there to learn? Yovgo summed it up this way. Learning to complete partial observations is not sufficient to get intelligence. He said, I think this was kind of obvious to many, but maybe noteworthy that a true scale believer said it.
Some of the other commentary was frustration around what was not said, with researchers like Google DeepMind's Dumitru Erhan weighing in, and Nate Sanders going a step farther by suggesting the peak-data narrative is convenient for fundraising.
I will say that even if that's true, it's not necessarily just a fundraising thing, or at least the direction of the correlation isn't clear. In other words, is Ilya pushing this narrative because it's helpful for fundraising, or is it something that he believes that he's just capitalizing on? We also don't even have any confirmed reports that he actually is fundraising. This is all just speculation.
Perhaps most interesting is the part of the conversation where it's almost like Ilya has unlocked people to think more broadly about what might come next. John Rush wrote, Ilya finally confirmed scaling LLMs at the pre-training stage plateaued. The compute is scaling, but data isn't, and new or synthetic data isn't moving the needle. What's next? Same as the human brain. Stopped growing in size, but humanity kept advancing. The agents and tools on top of LLMs will fuel the progress.
Sequence-to-sequence learning, agentic behavior, teaching self-awareness. Think of it as the iPhone, which kept getting bigger and more useful from a hardware point, but plateaued and focus shifted to applications. I don't know if that's how it plays out, but I think it's great that we're starting to have that conversation. Really, really interesting stuff from Ilya. Glad he gave that talk and excited to see how this conversation proceeds into the new year. For now, though, that is going to do it for today's AI Daily Brief. Until next time, peace.