
AI and Safety: How Responsible Tech Leaders Build Trustworthy Systems (National Safety Month Special)

2025/6/26

AI and the Future of Work

People

  • Ben Kus
  • Eric Siegel
  • Navindra Yadav
  • Silvio Savarese
Topics
Silvio Savarese: As Chief Scientist at Salesforce, I believe it is essential to build AI that is safe for its users. That means our AI must follow trusted-AI principles, ensuring accuracy, safety, transparency, and sustainability. We never use customer data to train models, and we rely on technical safeguards to prevent data leakage and bias. In enterprise settings, we need to build smaller, more specialized models so we can better control both the data and the outputs, reducing hallucinations, toxicity, and bias. We should also recognize that AI belongs in a role that supports people rather than replacing human decision-making.

Ben Kus: As CTO of Box, I believe permission control is critical in enterprise applications. AI has to respect different access permissions; you cannot expect an AI model to understand permissions on its own, so they require special handling. We need to use AI safely and avoid becoming the biggest source of data leaks. At the same time, AI should not be used to make final decisions directly; people should make the decisions. While using AI to raise productivity, we also have to stay responsible about the current state of the technology. Our culture is to do no evil and to take our customers' trust seriously. Even when a demo looks cool, keep experimenting to find the boundaries where the AI fails.

Eric Siegel: I focus on ethical issues in prediction, especially discriminatory decisions that touch on civil rights. I believe AI and machine learning models are perfectly designed to replicate human bias. By quantifying and surfacing that bias, we can see a quantified reflection of the injustices in today's world. By transparently logging a model's activity, we can adjust for inequities and pursue social justice in how models are deployed. Adjusting for differences in false positive rates is similar to affirmative action.

Navindra Yadav: As CEO of Theom, I believe we are protecting data wherever it lives. Our technical challenge is moving with the data, alongside the data store, without installing agents or anything like them. We need to do data protection cost-effectively, with all data staying in the customer's environment. Our analytics automatically classify data and determine its criticality. We also assess the value of data by referencing trading prices on the dark web. We use NLP to examine the context around data so we can tag it more accurately. Our goal is to reduce false negatives and keep improving the product's precision and recall.

Chapters
A brief introduction to the episode, highlighting the focus on AI safety during National Safety Month and introducing the four guests who share their insights.
  • Episode focuses on AI safety during National Safety Month.
  • Features four experts discussing AI safety in various contexts.

Shownotes

In honor of National Safety Month, this special compilation episode of AI and the Future of Work brings together powerful conversations with four thought leaders focused on designing AI systems that protect users, prevent harm, and promote trust.

Featuring past guests:

  • Silvio Savarese, Chief Scientist at Salesforce
  • Ben Kus, CTO of Box
  • Eric Siegel
  • Navindra Yadav, CEO of Theom
What You’ll Learn: 

  • What it means to design AI with safety, transparency, and human oversight in mind
  • How leading enterprises approach responsible AI development at scale
  • Why data privacy and permissions are critical to safe AI deployment
  • How to detect and mitigate bias in predictive models
  • Why responsible AI requires balancing speed with long-term impact
  • How trust, explainability, and compliance shape the future of enterprise AI

  

Resources

 

**Other special compilation episodes**

  • Ethical AI in Hiring: How to Stay Compliant While Building a Fairer Future of Work (HR Day Special Episode)
  • Data Privacy Day Special Episode: AI, Deepfakes & The Future of Trust
  • The Future of AI Ethics Special: Perspectives from Women Leaders in AI on Bias, Accountability & Trust
  • World Health Day Special: How AI Is Making Healthcare Smarter, Cheaper, and Kinder