
853: Generative AI for Business, with Kirill Eremenko and Hadelin de Ponteves

2025/1/14

Super Data Science: ML & AI Podcast with Jon Krohn

People
Hadelin de Ponteves
Jon Krohn
Kirill Eremenko
Topics
Kirill Eremenko: We have launched a new company, BravoTech.ai, dedicated to helping businesses implement generative AI and advising them along the way. We offer end-to-end services, from proof of concept through deployment and change-management training, to ensure generative AI is applied successfully. We are also offering podcast listeners three complimentary hours of consulting to help them get started with generative AI. We focus on leveraging foundation models, which are like the base layer of a cake that can be customized for different needs. We help businesses choose the right foundation model and provide customization services such as fine-tuning to meet their specific business requirements. We also offer continued pre-training, domain-specific fine-tuning, instruction-based fine-tuning, and reinforcement learning from human feedback. For deployment, we use AWS services such as Amazon Q (an easy, plug-and-play solution), Bedrock (a platform offering a range of foundation models and customization options), and SageMaker (a platform for more granular model building, training, and deployment). We also use RAG (retrieval augmented generation) to integrate external data sources with foundation models, and AI agents to break complex tasks into steps and improve foundation model performance. We use prompt templates as well, to simplify how users interact with models.

Hadelin de Ponteves: I agree with Kirill. The services we offer at BravoTech.ai cover the entire lifecycle of a generative AI application, from data preparation through model selection, pre-training, fine-tuning, evaluation, deployment, monitoring, and iterative maintenance. I personally have extensive experience with AWS Bedrock and SageMaker. Bedrock is an easy-to-use platform for building all kinds of generative AI applications, such as the Master Yoda chatbot we recently created with students. SageMaker is a powerful tool for rapidly building and training machine learning models, even outperforming models I had previously spent hours training. SageMaker JumpStart provides a range of foundation models for different use cases. On model customization, I have experience with both domain-specific fine-tuning and instruction-based fine-tuning; instruction-based fine-tuning is an effective way to control the style and conciseness of a model's output. RAG also greatly simplifies putting models to work, as with the French pastry cooking assistant we recently built.

Jon Krohn: Both guests offered comprehensive insight into generative AI applications and foundation model customization. They stressed the importance of selecting the right foundation model and listed 12 key factors: cost, modality, customization options, inference options, latency, architecture, performance benchmarks, language support, size and complexity, scalability, compliance, and environmental impact. They also discussed the two main ways to modify foundation models: during training and during deployment. Training-time methods include domain-specific fine-tuning, instruction-based fine-tuning, and reinforcement learning from human feedback. Deployment-time methods include adjusting inference parameters, RAG, AI agents, and prompt templates. Finally, they introduced AWS's three main generative AI services: Amazon Q (a high-level, plug-and-play solution), Bedrock (a mid-level service with model customization options), and SageMaker (a low-level option offering fine-grained control for technical implementations).

Key Insights

What are foundation models and how do they relate to large language models?

Foundation models are pre-trained AI models that serve as a base layer for building custom applications. They include large language models (LLMs) like ChatGPT, which are a subset of foundation models. Foundation models can also include models for images, videos, and other modalities. They are called 'foundation' because they provide a pre-trained base that businesses can customize for specific use cases, similar to how a basic cake layer can be customized with different toppings.

What are the eight steps in the foundation model lifecycle?

The foundation model lifecycle consists of eight steps: 1) Data preparation and selection, 2) Model selection and architecture, 3) Pre-training, 4) Fine-tuning, 5) Evaluation, 6) Deployment, 7) Monitoring and feedback, and 8) Iteration and maintenance. The first three steps (data prep, model selection, and pre-training) are typically handled by large organizations, while businesses focus on fine-tuning, evaluation, deployment, monitoring, and maintenance to customize the model for their specific needs.
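The division of responsibilities described above can be sketched as data. This is a minimal illustration (the step names come from the episode; the `LIFECYCLE` structure and the provider/business labels are our own shorthand for the typical split the guests describe):

```python
# The eight lifecycle steps, tagged with who typically owns each one:
# large organizations (providers) handle the first three, businesses
# handle the remaining five to customize the model for their needs.
LIFECYCLE = [
    ("data preparation and selection", "provider"),
    ("model selection and architecture", "provider"),
    ("pre-training", "provider"),
    ("fine-tuning", "business"),
    ("evaluation", "business"),
    ("deployment", "business"),
    ("monitoring and feedback", "business"),
    ("iteration and maintenance", "business"),
]

# Steps a business adopting a foundation model would focus on.
business_steps = [step for step, owner in LIFECYCLE if owner == "business"]
```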

What are the 12 key factors to consider when selecting a foundation model?

The 12 key factors for selecting a foundation model are: 1) Cost, 2) Modality (text, image, video, etc.), 3) Customization options, 4) Inference options (real-time, batch, etc.), 5) Latency, 6) Architecture, 7) Performance benchmarks, 8) Language support, 9) Size and complexity, 10) Ability to scale, 11) Compliance and licensing agreements, and 12) Environmental impact. These factors help businesses choose the right model for their specific use case and requirements.

What are some methods to customize foundation models during training?

Customization during training can be done through methods like domain-specific fine-tuning (narrowing the model's focus to a specific industry or dataset), instruction-based fine-tuning (training the model to respond in a specific way), and reinforcement learning from human feedback (RLHF), where humans evaluate and provide feedback on the model's responses. These methods allow businesses to tailor foundation models to their specific needs without having to pre-train a model from scratch.
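To make instruction-based fine-tuning concrete, here is a hedged sketch of what the training data typically looks like: pairs of instructions and desired responses, often serialized as JSON Lines. The field names (`instruction`, `input`, `output`) vary by tuning service and are illustrative only:

```python
import json

# Example instruction-tuning records: each one teaches the model to
# respond in a specific way (here: concise, polite customer support).
examples = [
    {"instruction": "Summarize the ticket in one sentence.",
     "input": "Customer reports login failures since Monday on two devices...",
     "output": "Customer has been unable to log in since Monday."},
    {"instruction": "Reply politely and concisely.",
     "input": "Where is my refund?",
     "output": "Your refund has been issued and should arrive within 5 business days."},
]

# Many fine-tuning services ingest this data as JSON Lines (one record per line).
jsonl = "\n".join(json.dumps(record) for record in examples)
```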

What is retrieval augmented generation (RAG) and how does it work?

Retrieval augmented generation (RAG) is a method to enhance foundation models during inference by allowing them to pull information from external data stores, such as documents or databases, to augment their responses. The data is stored in a vector database, enabling the model to quickly retrieve relevant information and integrate it into its responses. This is particularly useful for applications like customer support or internal knowledge bases, where the model can dynamically access and use organizational data.
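The retrieval-then-augment flow can be sketched in a few lines of Python. This is a toy: a word-overlap score stands in for the vector-database similarity search a real RAG system would use, and the prompt format is illustrative:

```python
def score(query, doc):
    """Toy relevance score: number of shared lowercase words.
    A real system would compare neural embeddings in a vector database."""
    return len(set(query.lower().split()) & set(doc.lower().split()))

def retrieve(query, docs, k=1):
    """Return the k most relevant documents for the query."""
    return sorted(docs, key=lambda d: score(query, d), reverse=True)[:k]

def build_prompt(query, docs):
    """Augment the user's question with retrieved context before it
    reaches the foundation model."""
    context = "\n".join(retrieve(query, docs))
    return f"Context:\n{context}\n\nQuestion: {query}\nAnswer:"

docs = [
    "Refunds are processed within 5 business days.",
    "Our office is open Monday to Friday.",
]
prompt = build_prompt("How long do refunds take?", docs)
```

The foundation model then answers from the injected context rather than from its training data alone, which is what lets it use up-to-date organizational knowledge.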

What are the three main AWS services for generative AI?

AWS offers three main services for generative AI: 1) Amazon Q, a high-level, plug-and-play solution for businesses; 2) Amazon Bedrock, a mid-level service that provides access to foundation models and allows for customization; and 3) SageMaker, a low-level, granular control option for technical implementations, offering tools for building, training, and deploying machine learning models, including generative AI.
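As a rough illustration of the mid-level Bedrock option, here is the shape of an `invoke_model` request body. Body schemas differ per foundation model, the model ID shown is one possible choice, and no request is actually sent here; consult the Bedrock documentation for the model you select:

```python
import json

# A possible Bedrock model identifier; availability varies by region.
model_id = "anthropic.claude-3-haiku-20240307-v1:0"

# Request body in the schema Anthropic models expect on Bedrock.
body = json.dumps({
    "anthropic_version": "bedrock-2023-05-31",
    "max_tokens": 256,
    "messages": [
        {"role": "user", "content": "Explain RAG in one sentence."},
    ],
})

# With AWS credentials configured, the call would look roughly like:
#   import boto3
#   client = boto3.client("bedrock-runtime")
#   response = client.invoke_model(modelId=model_id, body=body)
```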

What is the role of agents in modifying foundation models during inference?

Agents, or agentic AI, involve breaking down complex tasks into logical steps that can be performed by one or several foundation models. This approach allows simpler models to perform tasks more effectively by handling them step-by-step, rather than relying on a single, more complex model. It is a cost-effective way to enhance the capabilities of foundation models during inference, making them more versatile and efficient for specific workflows.
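The agentic pattern described above can be sketched as a simple loop: a plan of steps, each handed to a foundation-model call, with each result feeding the next step. The `call_model` stub and the example step list are placeholders for real model calls and a real planner:

```python
def call_model(instruction, context):
    """Stand-in for a foundation model call (e.g. via an LLM API)."""
    return f"result of '{instruction}' given {context!r}"

def run_agent(task, steps):
    """Execute a plan step by step, feeding each step's output into
    the next, and return the full trace of intermediate results."""
    context = task
    trace = []
    for step in steps:
        context = call_model(step, context)
        trace.append(context)
    return trace

trace = run_agent(
    "Summarize last quarter's support tickets",
    ["fetch relevant tickets", "extract key issues", "draft summary"],
)
```

Because each step is simple, a smaller (cheaper) model can often handle it, which is the cost-effectiveness point made above.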

What is the significance of temperature in inference parameters for foundation models?

Temperature is an inference parameter that controls the variability and creativity of a foundation model's responses. A higher temperature results in more diverse and creative outputs, while a lower temperature produces more deterministic and predictable responses. For example, setting the temperature to zero ensures the model always provides the same response, while increasing it allows for more varied and imaginative answers.
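The effect of temperature can be shown with a minimal sampler over raw model scores (logits). This is a generic sketch of temperature-scaled softmax sampling, not any particular provider's implementation:

```python
import math
import random

def sample_with_temperature(logits, temperature, rng=random):
    """Sample a token index from logits scaled by temperature.
    Temperature 0 means greedy (always the top token); higher values
    flatten the distribution, producing more varied outputs."""
    if temperature == 0:
        # Deterministic: always pick the highest-scoring token.
        return max(range(len(logits)), key=lambda i: logits[i])
    scaled = [l / temperature for l in logits]
    m = max(scaled)  # subtract the max for numerical stability
    exps = [math.exp(s - m) for s in scaled]
    total = sum(exps)
    probs = [e / total for e in exps]
    # Draw from the resulting probability distribution.
    r = rng.random()
    cumulative = 0.0
    for i, p in enumerate(probs):
        cumulative += p
        if r <= cumulative:
            return i
    return len(probs) - 1

logits = [2.0, 1.0, 0.5]
greedy = sample_with_temperature(logits, 0)  # always index 0
```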

Chapters
This chapter introduces foundation models, explaining their relationship with large language models and providing real-world examples. It uses the analogy of a cake to illustrate how foundation models serve as a base for building custom applications.
  • Foundation models are pre-trained AI models that serve as a base for custom applications.
  • Large language models are a subset of foundation models.
  • Foundation models can be used for text, image, and video data.

Show Notes

Kirill Eremenko and Hadelin de Ponteves, AI educators whose courses have been taken by over 3 million students, sit down with Jon Krohn to talk about how foundation models are transforming businesses. From real-world examples to clever customization techniques and powerful AWS tools, they cover it all.

bravotech.ai - Partner with Kirill & Hadelin for GenAI implementation and training in your business. Mention the “SDS Podcast” in your inquiry to start with 3 complimentary hours of consulting.

This episode is brought to you by ODSC, the Open Data Science Conference. Interested in sponsoring a SuperDataScience Podcast episode? Email [email protected] for sponsorship information.

In this episode you will learn:

  • (07:00) What are foundation models?

  • (15:45) Overview of the foundation model lifecycle: 8 main steps.

  • (29:11) Criteria for selecting the right foundation model for business use.

  • (41:35) Exploring methods to customize foundation models.

  • (53:04) Techniques to modify foundation models during deployment or inference.

  • (01:11:00) Introduction to AWS generative AI tools like Amazon Q, Bedrock, and SageMaker.

Additional materials: www.superdatascience.com/853