
EP 148: Safer AI - Why we all need ethical AI tools we can trust

2023/11/20

Everyday AI Podcast – An AI and ChatGPT Podcast


Do you trust the AI tools that you use? Are they ethical and safe? We often overlook the safety behind AI, and it's something we should pay attention to. Mark Surman, President of the Mozilla Foundation, joins us to discuss how we can trust and use ethical AI.

Newsletter: Sign up for our free daily newsletter
More on this Episode: Episode Page
Join the discussion: Ask Mark Surman and Jordan questions about AI safety
Upcoming Episodes: Check out the upcoming Everyday AI Livestream lineup
Website: YourEverydayAI.com
Email The Show: [email protected]
Connect with Jordan on LinkedIn

Timestamps:
[00:01:05] Daily AI news
[00:03:15] About Mark and Mozilla Foundation
[00:06:20] Big Tech and ethical AI
[00:09:20] Is AI unsafe?
[00:11:05] Responsible AI regulation
[00:16:33] Creating balanced government regulation
[00:20:25] Is AI too accessible?
[00:23:00] Resources for AI best practices
[00:25:30] AI concerns to be aware of
[00:30:00] Mark's final takeaway

Topics Covered in This Episode:
1. Future of AI regulation
2. Balancing interests of humanity and government
3. How to make and use AI responsibly
4. Concerns with AI

Keywords:
AI space, risks, guardrails, AI development, misinformation, national elections, deep fake voices, fake content, sophisticated AI tools, generative AI systems, regulatory challenges, government accountability, expertise, company incentives, Meta's responsible AI team, ethical considerations, faster development, friction, balance, innovation, governments, regulations, public interest, technology, government involvement, society, progress, politically motivated, Jordan Wilson, Mozilla, show notes, Mark Surman, societal concerns, individual concerns, authenticity, shared content, data, generative AI, control, interests, transparency, open source AI, regulation, accuracy, trustworthiness, hallucinations, discrimination, reports, software, OpenAI, CEO, rumors, high-ranking employees, Microsoft, discussions, Facebook, responsible AI team, Germany, France, Italy, agreement, future AI regulation, humanity, safety, profit-making interests