Batya Friedman's lab focuses on integrating human values and ethics into new technologies during the design process, ensuring they benefit society and the planet.
Design constraints, she argues, help shape technologies in ways that align with our moral and technical imaginations, leading to better engineering outcomes that serve society.
She acknowledges that unintended consequences are inevitable but stresses the importance of staying vigilant and accountable after deploying technologies, actively addressing new issues as they arise.
She distinguishes between the discovery of basic knowledge (e.g., splitting the atom) and the engineering of tools for society, advocating for diverse scientific exploration while focusing on ethical constraints in engineering.
She avoids framing ethical decisions as forced trade-offs, focusing instead on resolving tensions by exploring a range of better, though not necessarily perfect, solutions.
She discusses how her lab helped update Washington State's Access to Justice Technology Principles by involving diverse stakeholders, including formerly incarcerated individuals and immigrants, leading to new principles around human touch and language.
Steve Omohundro worries about the emergence of AI systems with basic drives such as self-preservation, resource acquisition, and self-replication, which could be dangerous in the human world if not properly controlled.
He suggests limiting AI systems to tools that assist humans without independent agency, and enforcing those limits technologically, particularly through hardware controls.
He notes that while governments are beginning to partner with AI companies, current oversight remains incoherent, and stronger governmental involvement is needed, possibly through a cabinet-level position focused on AI.
He thinks AI and quantum computing will advance hand in hand, with AI potentially breaking post-quantum encryption algorithms, bringing significant advancements but also new risks.
He envisions AI-designed hardware grounded in mathematical proofs and physical laws, which would provide provable guarantees about the safety and limitations of AI systems.
He worries that ethical guidelines may not be universally adopted, especially by bad actors such as China, Russia, and North Korea, which would render them ineffective in a global context.
He proposes leveraging AI to help create ethical and safety architectures, essentially using AI to save humanity from itself by countering bad actors.
How can we ensure technology evolves ethically in a rapidly advancing world? Neil deGrasse Tyson, Chuck Nice & Gary O’Reilly explore the challenges of designing a future where human values and AI coexist with The Future of Life Award’s 2024 recipients, Batya Friedman & Steve Omohundro.
Thanks to our friends at the Future of Life Institute for supporting today's episode. To learn more about FLI and this year's winners, make sure to visit FutureofLife.org.
NOTE: StarTalk+ Patrons can listen to this entire episode commercial-free here: https://startalkmedia.com/show/the-ethics-of-ai-with-batya-friedman-steve-omohundro/
Thanks to our Patrons Meech, Sara Hubbard, Jesse Thilo, Myles Stanton, Francisco Meza, Edvardas Žemaitis, Ronny Waingort, Cyrelle Sumner, Ticheal Murner, Chase, Karen Morlatt, Brian Kelley, Kevin Vail, Rob, Razey9, Mark Th, Kyle M, Zygmunt Wasik, Hulk, Jon McPeak, smiggy93, Tolulope Oladimeji Oyeniyi, Bernie Cat, David Conradt, Ian Mercado, Daniel Bielawski, Indika, and Aris for supporting us this week.
Subscribe to SiriusXM Podcasts+ on Apple Podcasts to listen to new episodes ad-free and a whole week early.