Hello and welcome, this is Daniel Bashir here with Skynet Today's Week in AI. This week, we'll look at deepfakes, AI's carbon footprint, educating researchers on AI bias, and the push for a recent bill in Congress. First, deepfakes, AI-created synthetic media that are often indistinguishable from reality, are among the scariest technologies out there today.
They've already seen some pretty terrifying uses, such as in an Indian politician's campaign. They can be used to create fake videos of politicians and prominent figures that could easily be mistaken for the real thing. But there might be some redemption for this seemingly malicious technology. Vox reported on a new HBO documentary called Welcome to Chechnya, which interviews survivors of Chechnya's persecution of its LGBTQ population.
Because it's unsafe for survivors to reveal their identities, the documentary uses deepfake-like technology to overlay volunteers' faces onto the survivors' faces, allowing each survivor to speak their truth on camera, with a volunteer's face serving as a medium for the survivor's facial expressions and emotions.
AI needs to redeem itself in other ways, too. Recent scrutiny of carbon emissions has brought into focus the fact that machine learning systems have a massive environmental impact. For example, a 2019 MIT Technology Review article reported that training a single machine learning model can emit as much carbon as five cars over their lifetimes.
That focus has sparked a strong push towards developing leaner, less power-hungry AI models. The green AI movement in particular has pushed for using carbon emissions as a key metric in evaluating AI systems. But to know we're making progress, we need quantitative measurements of how much we're reducing our carbon footprint.
To that end, according to an article from Stanford's Institute for Human-Centered AI, a team of researchers from Stanford, Facebook AI Research, and McGill University has created a tool to measure the carbon emissions of a machine learning project.
Machine learning's ubiquity means that its carbon impact will represent an ever-greater fraction of total carbon emissions in the future. With tools such as this one, we can ensure we're making concrete progress towards mitigating the additional impact that AI systems will have on the environment.
And now, a follow-up from one of last week's stories. To recap, Timnit Gebru, a researcher who has extensively studied the racial and intersectional implications of AI bias, gave a presentation at the Computer Vision and Pattern Recognition Conference on the various sources of AI bias. After Yann LeCun, another famed researcher, made the misinformed comment that machine learning systems owe their bias to data alone, an exchange erupted on Twitter.
Synced Review reports that the week-long back and forth between the two researchers drew in many other prominent figures who expressed dissatisfaction with LeCun's comments. The field has made progress towards diversity and inclusion, but many researchers lament that such prominent figures in the community still need to be educated on these issues.
The exchange ended with LeCun calling on others to stop attacking Gebru and those critical of his posts, and then making what he said would be his last substantial post on Twitter.
While AI researchers educate each other on bias, a number of organizations want to democratize access to compute resources. VentureBeat reports that a group of over 20 organizations, including Amazon Web Services, Google, IBM, and NVIDIA, joined schools such as Stanford and Ohio State University in backing the idea of a national AI research cloud.
The cloud would allow researchers across the United States to gain access to datasets freely available to companies like Google, but not to researchers in academia. The idea was first proposed last year by the co-directors of Stanford's Institute for Human-Centered AI, Dr. Fei-Fei Li and John Etchemendy, as a strategic investment to bolster the United States' competitiveness and status as a leader in AI.
While China leads the way in data-heavy applications such as facial and speech recognition, Li thinks that less data-heavy applications, such as genetic studies of rare diseases and drug discovery, may be fruitful ground for the US to make progress. That's all for this week. Thanks so much for listening. If you enjoyed the podcast, be sure to rate and share. If you'd like to hear more news like this, please check out SkyNetToday.com, where you can find our weekly news digests with similar articles.