Andrew Ng advises that the primary focus should be on building a valuable and functional AI model first. Cost optimization should only be considered after the model is proven to work and is in use. Initial costs are often trivial, and techniques such as supervised fine-tuning can reduce expenses later if needed.
AI and LLM engineers must select the appropriate model for a task, understand business requirements, evaluate data quality and quantity, and consider non-functional factors like budget and time to market. They often start with a baseline model before moving to more complex LLMs, ensuring the model aligns with business outcomes.
Closed-source models like GPT-4 are often recommended for initial prototyping due to their advanced capabilities, while open-source models are preferred for proprietary or sensitive data, cost optimization, or on-device applications. The choice depends on data privacy, budget, and specific use-case requirements.
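As a minimal illustration of that prototyping-versus-privacy trade-off (my sketch, not code from the episode), the snippet below prototypes against a hosted closed-source model and then routes the same prompt to a locally run open-weights model. The specific model names, the prompt, and the choice of the openai and transformers libraries are assumptions for illustration only.

```python
# Sketch: prototype on a hosted closed-source model, then swap in an
# open-weights model for sensitive data or cost control. Model names,
# the prompt, and the libraries used are illustrative assumptions.
from openai import OpenAI
from transformers import pipeline

PROMPT = "Summarize the key risks of deploying LLMs on sensitive data."

def prototype_with_hosted_model(prompt: str) -> str:
    """Fast prototyping path: hosted closed-source model via API."""
    client = OpenAI()  # reads OPENAI_API_KEY from the environment
    response = client.chat.completions.create(
        model="gpt-4o",  # assumed model name
        messages=[{"role": "user", "content": prompt}],
    )
    return response.choices[0].message.content

def run_with_open_model(prompt: str) -> str:
    """Privacy/cost path: open-weights model running locally."""
    generator = pipeline(
        "text-generation",
        model="mistralai/Mistral-7B-Instruct-v0.2",  # assumed open model
    )
    output = generator(prompt, max_new_tokens=200)
    return output[0]["generated_text"]

if __name__ == "__main__":
    print(prototype_with_hosted_model(PROMPT))
    # Once the workflow is proven, the same prompt can go to the local model:
    # print(run_with_open_model(PROMPT))
```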
Agentic AI systems interact with multiple data sources, some on-premise and others in the cloud, complicating data security. Innovations like homomorphic encryption are emerging to address these challenges, ensuring secure data processing across distributed systems.
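To make the homomorphic-encryption idea concrete (a toy example of mine, not a system discussed in the episode), the sketch below uses the python-paillier library (`pip install phe`), which implements the partially homomorphic Paillier scheme: an untrusted service can add encrypted values and scale them by plaintext constants without ever seeing the underlying data.

```python
# Toy sketch of partially homomorphic encryption with python-paillier (`phe`).
# Illustrative assumption, not from the episode: Paillier allows addition of
# ciphertexts and multiplication by plaintext scalars, so a remote service can
# aggregate encrypted numbers without decrypting them.
from phe import paillier

# Data owner: generates keys and encrypts sensitive values locally.
public_key, private_key = paillier.generate_paillier_keypair()
salaries = [52_000, 61_500, 58_250]
encrypted_salaries = [public_key.encrypt(s) for s in salaries]

# Untrusted server: computes on ciphertexts only; it never sees the raw
# salaries or the private key.
encrypted_total = sum(encrypted_salaries[1:], encrypted_salaries[0])
encrypted_scaled = encrypted_total * 2  # ciphertext times a plaintext scalar

# Data owner: decrypts the results with the private key.
print(private_key.decrypt(encrypted_total))   # 171750
print(private_key.decrypt(encrypted_scaled))  # 343500
```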
The biggest wow moment was Google's NotebookLM, which impressed users with its human-like conversational abilities. It was highlighted for its ability to make even mundane topics engaging, earning widespread praise and adoption.
Waymo treats its fleet of autonomous vehicles as a single, unified driver, emphasizing the scalability of machine intelligence. This approach demonstrates how focused, domain-specific AI models can revolutionize entire industries, such as transportation.
Tech agnosticism advocates for a balanced, skeptical approach to emerging technologies, avoiding blind faith in their potential. It emphasizes the importance of uncertainty and critical thinking, especially in the face of grandiose claims about AI and the singularity.
Greg Epstein warns that rushing AI development, driven by narratives of technological rapture, may lead to unintended consequences. He suggests that slowing down and focusing on human values and compassion could yield more sustainable and beneficial outcomes.
AI security, LLM engineering, how to choose the best LLM, and tech agnosticism: In our first “In Case You Missed It” of 2025, Jon Krohn starts the year with a round-up of our favorite recent interview moments. He selects from interviews with Andrew Ng, Ed Donner, Eiman Ebrahimi, Sadie St. Lawrence, and Greg Epstein, covering the latest in AI development and touching on agentic workflows, promising new roles in AI, and what blew our minds last year.
Additional materials: www.superdatascience.com/852
Interested in sponsoring a SuperDataScience Podcast episode? Email [email protected] for sponsorship information.