
Mini Episode: Redeeming AI, More Lessons in AI Bias, and a National AI Research Cloud

2020/7/5

Last Week in AI

People
Daniel Bashir
Topics
Daniel Bashir: This episode discusses the use of deepfake technology in the HBO documentary Welcome to Chechnya, as well as its malicious use in political propaganda. The technology can manipulate video and images in a way that appears real, which has raised concerns about its potential for harm. In Welcome to Chechnya, however, deepfakes are used to protect the identities of LGBTQ victims, demonstrating a potentially positive use of the technology. The episode also discusses the carbon emissions of AI systems and a tool for measuring the carbon footprint of machine learning projects, developed to help reduce AI's environmental impact. Finally, the episode covers the problem of AI bias and the debate between Timnit Gebru and Yann LeCun over the sources of that bias, a dispute that highlights how much the field still needs to improve in education and inclusivity.


Transcript


Hello and welcome, this is Daniel Bashir here with Skynet Today's Week in AI. This week, we'll look at deepfakes, AI's carbon footprint, educating researchers on AI bias, and the push for a recent bill in Congress. First, deepfakes, AI-created synthetic media often indistinguishable from reality, are one of the scariest technologies out there today.

They've already seen some pretty terrifying uses, such as in an Indian politician's campaign. They can be used to create fake videos of politicians and prominent figures that could easily be mistaken for the real thing. But there might be some redemption for this seemingly malicious technology. Vox reported on a new HBO documentary called Welcome to Chechnya, which interviews survivors of Chechnya's persecution of its LGBTQ population.

Because it's unsafe for survivors to reveal their identities, the documentary uses deepfake-like technology to overlay volunteers' faces onto the survivors' faces, allowing each survivor to speak their truth to the camera with a volunteer's face as a medium for their facial expressions and emotions.

In other ways, AI still needs to redeem itself: recent scrutiny of carbon emissions has brought into focus the fact that machine learning systems have a massive environmental impact. For example, a 2019 MIT Technology Review article reported that training a single large AI model could emit as much carbon as five cars over their lifetimes.

That focus has sparked a strong push towards developing leaner, less power-hungry AI models. The green AI movement in particular has pushed for using carbon emissions as a key metric in evaluating AI systems. But to know we're making progress, we need quantitative measurements of how much we're reducing our carbon footprint.

To that end, according to an article from Stanford's Institute for Human-Centered AI, a team of researchers from Stanford, Facebook AI Research, and McGill University has created a tool to measure the carbon emissions of a machine learning project.
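
The tool referenced here is the one built by the Stanford, Facebook AI Research, and McGill team; as a rough illustration of what this kind of tracking looks like in code, below is a minimal sketch using the open-source codecarbon package, a similar emissions tracker (the package choice, project name, and toy workload are illustrative assumptions, not the article's tool):

```python
# Minimal sketch of tracking a training run's carbon footprint.
# Assumes the open-source codecarbon package: pip install codecarbon
# (an illustrative stand-in, not the tool described in the article).
from codecarbon import EmissionsTracker

tracker = EmissionsTracker(project_name="toy_training_run")
tracker.start()
try:
    # Placeholder for real training work; any compute-heavy loop goes here.
    total = sum(i * i for i in range(10_000_000))
finally:
    emissions_kg = tracker.stop()  # estimated kilograms of CO2-equivalent

print(f"Estimated emissions: {emissions_kg:.6f} kg CO2eq")
```

Trackers like this sample hardware power draw while the code runs and convert the energy use into a CO2 estimate based on the local energy grid, which is what makes a single per-project number possible.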

Machine learning's ubiquity will ensure that its carbon impact represents a greater and greater fraction of total carbon emissions in the future. With tools such as this one, we can make sure that we make concrete progress towards mitigating the additional impact that AI systems will have on the environment.

And now, a follow-up from one of last week's stories. To recap, Timnit Gebru, a researcher who has extensively studied the racial and intersectional implications of AI bias, gave a presentation at the Computer Vision and Pattern Recognition conference on the various sources of AI bias. After Yann LeCun, another famed researcher, made the misinformed comment that machine learning systems owe their bias to data alone, an exchange erupted on Twitter.

Synced Review reports that the week-long back-and-forth between the two researchers drew in many other prominent figures who expressed dissatisfaction with LeCun's comments. The field has made progress toward diversity and inclusion, but many researchers lament that such prominent figures in the community still need to be educated on these issues.

The exchange ended with LeCun making his last substantive post on Twitter, after calling on others to stop attacking Gebru and others critical of his posts.

While AI researchers educate each other on bias, a number of organizations want to democratize access to compute resources. VentureBeat reports that a group of over 20 organizations, including Amazon Web Services, Google, IBM, and NVIDIA, joined schools such as Stanford and Ohio State University in backing the idea of a national AI research cloud.

The cloud would allow researchers across the United States to gain access to datasets freely available to companies like Google, but not to researchers in academia. The idea was first proposed last year by the co-directors of Stanford's Institute for Human-Centered AI, Dr. Fei-Fei Li and John Etchemendy, as a strategic investment to bolster the United States' competitiveness and status as a leader in AI.

While China leads the way in data-heavy applications such as facial and speech recognition, Li thinks that less data-heavy applications, such as genetic studies of rare diseases and drug discovery, may be fruitful ground for the US to make progress. That's all for this week. Thanks so much for listening. If you enjoyed the podcast, be sure to rate and share. If you'd like to hear more news like this, please check out SkyNetToday.com, where you can find our weekly news digests with similar articles.