
Protecting Society From AI Harms: Amnesty International’s Matt Mahmoudi and Damini Satija (Part One)

2023/8/29

Me, Myself, and AI

People

Damini Satija
Matt Mahmoudi
Topics
Matt Mahmoudi: AI-driven surveillance technologies, particularly facial recognition, have led to discrimination and inequality against marginalized groups when used in law enforcement, violating their fundamental rights. In New York City, Hyderabad (India), and the occupied Palestinian territories, the deployment of these technologies has accelerated the erosion of vulnerable communities' rights. Amnesty International works to investigate and expose abuses of these technologies and advocates for stronger safeguards and regulations to protect human rights. Investigations of specific cases, such as the New York City police's harassment of activist Derek Ingram, reveal the discriminatory application of facial recognition in policing. Matt Mahmoudi also stresses the limits of technical fixes: addressing bias in AI requires a more holistic approach that attends to how technology interacts with people and their social environment. He argues that all technology is political, and that attention should focus on the politics and policies behind technology rather than a blind pursuit of technical solutions. He calls for stricter regulation of AI technologies to prevent their abuse and the resulting harm to human rights.

Damini Satija: The Algorithmic Accountability Lab focuses on the use of automation and AI in the public sector, particularly in welfare, investigating the discriminatory impacts of these technologies on marginalized groups. The case of a San Francisco public housing algorithm shows that even when a tool is not built with malicious intent, its application can produce harmful outcomes. She emphasizes that the problem of AI bias cannot be solved by technical means alone; it requires a more holistic approach that attends to the interaction between technology and people and to the social context. Even if bias were solved technically, the surveillance and data problems of these systems would remain, along with indirect negative effects. She argues that AI development and deployment suffer from power imbalances in which the interests of marginalized groups are routinely overlooked. In social welfare, investing in AI is often less effective than directly expanding resources such as social workers. She calls for rigorous scrutiny of AI technologies at their earliest stages to assess their potential human rights impacts and to ensure that the voices of affected stakeholders are heard.

Chapters
The episode introduces Matt Mahmoudi and Damini Satija of Amnesty International, who discuss their roles and Amnesty Tech's focus on AI technologies used in surveillance and public-sector automation, highlighting the potential risks and discriminatory impacts of these systems.

Shownotes Transcript

Amnesty International brings together more than 10 million staff members and volunteers worldwide to advocate for social justice. Damini Satija and Matt Mahmoudi work with Amnesty Tech, a division of the human rights organization that focuses on the role of government, Big Tech, and technologies like artificial intelligence in areas like surveillance, discrimination, and bias.

On this episode, Matt and Damini join Sam and Shervin to highlight scenarios in which AI tools can put human rights at risk, such as when governments and public-sector agencies use facial recognition systems to track social activists or algorithms to make automated decisions about public housing access and child welfare. Damini and Matt caution that AI technology cannot fix human problems like bias, discrimination, and inequality; that will take human intervention and changes to public policy. Read the episode transcript here.

For more on what organizations can do to combat the unintended negative consequences arising from the use of automated technologies, tune in to our next episode, Part 2 of our conversation with Matt and Damini, airing September 13, 2023.

*Me, Myself, and AI* is a collaborative podcast from MIT Sloan Management Review and Boston Consulting Group and is hosted by Sam Ransbotham and Shervin Khodabandeh. Our engineer is David Lishansky, and the coordinating producers are Allison Ryder and Sophie Rüdinger.

Stay in touch with us by joining our LinkedIn group, AI for Leaders, at mitsmr.com/AIforLeaders or by following Me, Myself, and AI on LinkedIn.

Guest bios:

Matt Mahmoudi is a lecturer, researcher, and organizer. He’s been leading Amnesty International’s research and advocacy efforts on banning facial recognition technologies and exposing their uses against racialized communities, from New York City to the occupied Palestinian territories. He was the inaugural recipient of the Jo Cox Ph.D. scholarship at the University of Cambridge, where he studied digital urban infrastructures as new frontiers for racial capitalism and remains an affiliated lecturer in sociology. His work has appeared in the journals *The Sociological Review* and *International Political Sociology* and the book *Digital Witness* (Oxford University Press, 2020). His forthcoming book is *Migrants in the Digital Periphery: New Urban Frontiers of Control* (University of California Press, 2023).

Damini Satija is a human rights and public policy expert working on data and artificial intelligence, with a focus on algorithmic discrimination, welfare automation, government surveillance, and tech equity. She is head of the Algorithmic Accountability Lab and a deputy director at Amnesty Tech. She previously worked as an adviser to the U.K. government on data and AI ethics and represented the U.K. as a policy expert on AI and human rights at the Council of Europe. She has a master’s degree in public administration from Columbia University’s School of International and Public Affairs.

We encourage you to rate and review our show. Your comments may be used in *Me, Myself, and AI* materials.