NotebookLM's latest update introduced an interactive mode, allowing users to engage with the AI hosts during audio overviews. It also includes a complete redesign with improved UX, splitting notebooks into three panels: sources, chat, and a studio panel for outputs. Additionally, a premium enterprise version was launched with enhanced security, privacy, and team collaboration features.
The interactive mode offers an easier entry point for learning: users can listen to a conversation about a topic and then join in, making knowledge acquisition more accessible and engaging than traditional deep reading.
The premium version targets enterprise needs with enhanced security and privacy, allows five times as many audio overviews to be generated, and enables team-wide sharing of notebooks; the standard version lacks these work-focused enhancements.
Ilya Sutskever argues that pre-training as a scaling method has reached its limits because the industry has hit peak data. He suggests that future AI advances will come from new approaches such as agents, synthetic data, and inference-time compute, rather than from simply scaling compute and data.
Sutskever believes that current models have already been trained on the entire internet, and while private or synthetic data could expand datasets, they are unlikely to introduce novel concepts or ideas. He argues that once all human thought is memorized, there is little more to learn.
Sutskever predicts that future AI models will be fundamentally agentic, capable of reasoning and carrying out tasks without human supervision. These models will be unpredictable and will understand things from limited data without getting confused, representing a significant leap from current capabilities.
Sutskever draws a parallel between AI and human evolution, noting that while the human brain stopped growing in size, humanity continued to advance. He suggests that AI will follow a similar path, with progress fueled by agentic behavior and tools on top of LLMs, rather than just scaling compute and data.
Perplexity is projecting a doubling of annualized revenue next year to $127 million, with a goal of quintupling revenue by 2026. The company is seeking to raise $500 million at a $9 billion valuation, positioning itself as a major player in AI search, despite growing competition from Google.
Grok 2.0 is three times faster, offers improved accuracy and multilingual capabilities, and includes a new Grok button on the X platform for generating context about posts. The Grok API pricing was also reduced, and the Aurora image model will be added to the API soon.
Pika 2.0 enhances user control and customization of video clips, allowing users to upload images of elements like characters and props. It also enables refining clips by swapping scene elements and tweaking prompts, making it more accessible for content creators and small campaigns.
At a recent conference appearance, SSI founder (and former OpenAI chief scientist) Ilya Sutskever claimed that we had reached peak data and that the era of pre-training as a scaling method had come to a close. NLW explores the implications. Plus, NotebookLM releases an enterprise edition.
Brought to you by:
Vanta - Simplify compliance - https://vanta.com/nlw
The AI Daily Brief helps you understand the most important news and discussions in AI.
Subscribe to the podcast version of The AI Daily Brief wherever you listen: https://pod.link/1680633614
Subscribe to the newsletter: https://aidailybrief.beehiiv.com/
Join our Discord: https://bit.ly/aibreakdown