
Starting Now On Technology Ethics: Elizabeth Renieris

2021/6/22

Me, Myself, and AI

People

Elizabeth Renieris
Sam Ransbotham
Topics
Elizabeth Renieris: The Notre Dame-IBM Technology Ethics Lab is dedicated to translating academic research into practice, building practical tools such as open-source toolkits, model legislation, and explanatory videos to address real-world problems, for example returning to work and school during the pandemic. The lab's work centers on convening experts from different industries and disciplines to collaborate on real-world challenges. Discussions of technology ethics should address both the ethical questions raised by technology and the values of the individuals and societies building it, prompting us to reflect on what kind of society, companies, and individuals we want to be and on the values that follow. Tackling AI ethics should not simply pit innovation against other values such as safety and privacy; it calls for a broader conversation at the level of values and consideration of long-term consequences. Solving AI ethics problems requires interdisciplinary perspectives, drawing on history, anthropology, and other fields to weigh values, trade-offs, and priorities together. Technologists and managers should actively educate themselves, understand the core principles of technology ethics, and apply them in their work, rather than ignoring ethical questions or gambling on them. Individual responsibility and industry regulation should proceed in parallel; the absence of comprehensive regulation does not excuse individuals from their ethical responsibilities. Process-oriented regulation, such as mandatory algorithmic audits and transparency about outcomes, together with requirements on the composition and expertise of corporate boards, can help better align stakeholders' interests and advance technology ethics. In AI ethics, organizations should act on what is known today while remaining humble and recognizing that adjustments and improvements will likely be needed.

Sam Ransbotham: Addressing AI ethics requires conversations inside the organization and resources allocated to the problem, which in turn requires active engagement from management.

Shervin Khodabandeh: Many organizations lack the resources and incentives needed to have these conversations and act on technology ethics.

Chapters
Elizabeth Renieris discusses her role at the Notre Dame-IBM Technology Ethics Lab and how organizations can proactively address ethical AI practices without waiting for perfect solutions.

Shownotes

Technology presents many opportunities, but it also comes with risks. Elizabeth Renieris is uniquely positioned to advise the public and private sectors on ethical AI practices, so we invited her to join us for the final episode of Season 2 of the Me, Myself, and AI podcast.

Elizabeth has worked for the Department of Homeland Security and private organizations in Silicon Valley, and she founded the legal advisory firm Hackylawyer. She now serves as founding director of the Notre Dame-IBM Technology Ethics Lab, which is focused on convening leading academic thinkers and technology executives to help develop policies for the stronger governance of AI and machine learning initiatives. In this episode, Elizabeth shares her views on what public and private organizations can do to better regulate their technology initiatives. Read the episode transcript here.

**Thank you for joining us for Season 2 of *Me, Myself, and AI*. We'll be back this fall with new episodes, and may have a bonus for you this summer. In the meantime, stay in touch by joining our LinkedIn group, AI for Leaders, at mitsmr.com/AIforLeaders.**

Read more about our show and follow along with the series at https://sloanreview.mit.edu/aipodcast.

Me, Myself, and AI is a collaborative podcast from MIT Sloan Management Review and Boston Consulting Group and is hosted by Sam Ransbotham and Shervin Khodabandeh. Our engineer is David Lishansky, and the coordinating producers are Allison Ryder and Sophie Rüdinger.

Guest bio:

Elizabeth Renieris is the founding director of the Notre Dame-IBM Technology Ethics Lab, the applied research and development arm of the University of Notre Dame’s Technology Ethics Center, where she helps develop and oversee projects to promote human values in technology. She is also a technology and human rights fellow at the Carr Center for Human Rights Policy at Harvard’s Kennedy School of Government, a practitioner fellow at Stanford’s Digital Civil Society Lab, and an affiliate at the Berkman Klein Center for Internet and Society. Renieris’s work is focused on cross-border data governance as well as the ethical challenges and human rights implications of digital identity, blockchain, and other new and advanced technologies.

As the founder and CEO of Hackylawyer, a consultancy focused on law and policy engineering, Renieris has advised the World Bank, the U.K. Parliament, the European Commission, and a variety of international and nongovernmental organizations on these subjects. She is also working on a forthcoming book on the future of data governance, to be published by MIT Press.

Renieris holds a master of laws degree from the London School of Economics, a Juris Doctor from Vanderbilt University, and a bachelor of arts degree from Harvard College.

We encourage you to rate and review our show. Your comments may be used in Me, Myself, and AI materials.

We want to know how you feel about Me, Myself, and AI. Please take a short, two-question survey.