Hello and welcome to Skynet Today's Let's Talk AI podcast, where you can hear from AI researchers about what's actually going on with AI and what is just clickbait headlines. I am Andrey Kurenkov, a third-year PhD student at the Stanford Vision and Learning Lab. I focus mostly on learning algorithms for robotic manipulation in my research, and with me is my co-host...
I'm Sharon, a third-year PhD student in the Machine Learning group working with Andrew Ng. I do research on generative models, improving generalization of neural networks, and applying machine learning to tackling the climate crisis.
And this week we are going to continue the conversation from last week on the theme of facial recognition. It was a giant news week last week, with IBM, Microsoft, and Amazon all announcing that they will not sell facial recognition technology to the police.
And there's actually not too much news that deviates from that since then. So we're going to be discussing some more stories related to that, starting with "The two-year fight to stop Amazon from selling face recognition to the police" from MIT Technology Review.
So as a summary of this article: many stories in the last few months have covered the use of facial recognition by law enforcement. It's definitely been very troubling, and it's actually exacerbated by the growing list of companies that are offering these services to the police as well as others.
In 2018, the article says, quote, nearly 70 civil rights and research organizations wrote a letter to Jeff Bezos demanding that Amazon stop providing face recognition technology to governments. So these companies were also presented with the results of a 2018 project called Gender Shades, which looked at AI bias in gender classification tasks.
And recent moves by IBM, Microsoft, and Amazon have responded to such concerns to varying degrees, at long last, two years later. So IBM, for one, has stopped developing facial recognition technology. Microsoft has pledged to stop until regulations are put in place. And Amazon has placed a one-year moratorium on police use of Rekognition, its facial recognition system.
And facial recognition systems suffer from pretty terrible bias. In a gender classification task, IBM's system performed 34.4% worse at classifying dark-skinned women than light-skinned men.
But even accurate facial recognition systems could be deployed in dangerous ways. Thus the vocal arguments we've seen for both banning and regulating facial recognition technology, as opposed to merely improving it before its eventual use. And there's definitely been mixed optimism about whether companies like Amazon will stay committed to actions like this moratorium. As a result, there's a lot of support for regulation, and regulation fast.
Exactly. So this article, I think, is a pretty nice summary of the road to these events. And it showcases that it really does require researchers and also normal people putting pressure on these companies and scrutinizing these technologies so that, over time, these decisions can be made and these moves can happen.
One interesting thing noted by the article is that there have been various other actions leading up to this. For instance, after the Gender Shades project was published, IBM was one of the first companies that reached out to researchers to figure out how it could fix its bias problem.
Whereas on the other hand, Amazon, when the Gender Shades project encompassed its product and showed that it had this very bad bias, pretty much did not seem to work with the activists and was pretty unsupportive of the conclusions. So it's important to continue being skeptical
and having support for activists who are looking into this. A member of the ACLU actually said, the cynical part of me says Amazon is going to wait until the protests die down, until the national conversation shifts to something else in order to revert to its prior position, which is that of using facial recognition technology and selling that to the police.
So it's more important than ever for activists and researchers to push for regulation moving forward. Yeah, and I don't know about you, Sharon, but as I've been learning about and hearing this news, especially as an AI researcher, I feel more and more that I should support these movements, if only by signing petitions or keeping up with possible legislation. I think this is one topic I'll be keeping an eye on and probably trying to weigh in on, and trying to push for more regulation to be passed, because it's about time. Yeah.
Yes, I definitely think it's extremely important. It makes me think back to what Joseph Redmon, the author of YOLO, did in stepping back and away from doing computer vision research, because he saw systems like his being used for ill, and his work powers some of these technologies. And I can see why he chose that path.
Right. So it's cool to see that there are these other researchers who are also activists and have been pushing these companies, and hopefully this will be a point of inspiration for the broader community now that their actions have resulted in this large shift of these companies complying. And on that topic, we can move to our next piece, which is actually written by one of these activists. This was on Medium
and was by Joy Buolamwini, along with other members of the Algorithmic Justice League. It is titled "IBM Leads, More Should Follow: Racial Justice Requires Algorithmic Justice." So this is broadly the Algorithmic Justice League's response to last week's events, commenting on their view of the announcements and how we should see things moving forward.
So the systems reviewed in these studies, which included those from IBM, Microsoft, and Amazon, were indeed found to perform worse on darker faces than lighter faces in general, worse on female faces than male faces, and worst of all on darker female faces. And this highlights the often unseen yet critical implications of intersectionality.
Yeah, so that's one point made in the article. And beyond that, as the title implies, there's also a call for...
more support for racial justice as well as algorithmic justice. It states that to bolster public statements that Black Lives Matter, companies also need to commit resources to make those statements a reality. And it calls more specifically on tech companies that substantially profit from AI, starting with IBM, to commit at least $1 million each towards advancing racial justice in the tech industry.
Specifically, they have the Safe Face Pledge, and they're calling on these companies to become signatories to it. This is a mechanism they developed for making public commitments towards mitigating the abuse of this technology.
Yeah, so I think this is quite a good read. It's also pretty short, so if you have the time, I would recommend looking it up. Again, it's titled "IBM Leads, More Should Follow." And like the title says, it's really about next steps and continuing on this path of algorithmic justice and racial justice and...
Offering some very concrete steps for these large companies to continue taking action and not just sort of, you know, saying some nice statements and then not doing much more. And with that, our last article is called Amazon Can't Make Facial Recognition Go Away.
And at a high level, this article is getting at the point that even if we set a moratorium on facial recognition technology or curb it in some way, we probably still need regulation to make sure it's used responsibly, because we can't make it go away completely. The technology is there.
So essentially, the article says, even the best efforts of three big companies can't stop the technology's spread or misuse. Licensing agreements might allow police departments to use parts of this technology, even if they can't use specific algorithms. And there are also plenty of
other purveyors, such as Clearview AI, which we've talked about before, as well as Palantir, and they're available to essentially fill this void now that these three big companies have stepped away for at least a year.
And interestingly, this article also notes that, from a legal perspective, back in 2005 Congress adopted the Real ID Act to address a problem the 9/11 attacks exposed, which was that most of the terrorists involved had acquired fake IDs. That legislation required officials to verify that individuals hold only one license, which entails collecting biometric data and sharing it among different state and federal agencies. And biometric data can include facial photos. So it's legally a little bit tricky, and the Real ID Act in particular makes it kind of hard to legislate facial recognition, as the article makes the case. And so I suppose...
My takeaway is that it's a bit complicated, and hopefully, following up on these announcements from companies, we can start looking at particular legislation and action from politicians that we can fight for. Right, I agree. I think there's always this tricky line or tradeoff between security and privacy, in that with, let's say, a benevolent government,
a purely benevolent one that uses facial recognition technology only for good, then in that case it's great. It can help increase safety and do all these great things. But the problem is we're imperfect. We're human, and we're in charge of this technology, and this technology is trained on data produced by humans and is also used by humans. And so,
yeah, it's this fine line of what's appropriate. Definitely. It also brings to mind some stories we've covered before about Clearview AI sharing its technology not just with governments, but also with companies and individuals favorable to the company. So I think that's a case where we can pretty clearly state that
There should be legislation forbidding companies from just allowing or making an app that anyone can download to then recognize anyone else with just a photo of their face. That's a good starting point, at least.
And then from there, it'll be a tricky, but I think a necessary question of how we can restrict the government's use of facial recognition while still allowing it for cases of security where we might want it. I definitely see this
analogous to, or at least parallel to, the issues with eugenics in the past, or at least gene editing and what that would mean for society, and the regulations that have gone into genetics in general, which are very important in terms of how we move forward as a society to do something good. And yes, it does add a lot of extra approval layers and limitations. But I think
overall it's worth it, so that we don't completely degrade and degenerate into something that we really fear. Yeah, I guess it's inevitable with the development of any technology that we need to reckon with how it's being used and its applications. With AI, it's actually maybe past time to do that, given that companies like Clearview AI are already selling the technology to large groups of people and organizations.
So as we get a clearer picture and sort of get an understanding of it, hopefully over the next few years, we'll finally start to stabilize and understand how to control the situation while still getting the benefits of the technology. Agreed.
And with that, thank you so much for listening to this week's episode of Skynet Today's Let's Talk AI podcast. You can find the articles we discussed here today and subscribe to our weekly newsletter with similar ones at skynettoday.com. Subscribe to us wherever you get your podcasts and don't forget to leave us a rating if you like the show. Be sure to tune in next week.