
The Path to Facial Recognition Reform and Regulation

2020/6/19

Last Week in AI

People
Andrey Kurenkov
Sharon Zhou
Topics
Andrey Kurenkov: The big AI news this week is that IBM, Microsoft, and Amazon have stopped selling facial recognition technology to police. Law enforcement's use of facial recognition has raised many concerns in recent months, a problem exacerbated by the growing list of companies offering these services to police and other agencies. In 2018, nearly 70 civil rights and research organizations wrote to Jeff Bezos demanding that Amazon stop providing facial recognition technology to governments, and presented these companies with the results of the 2018 Gender Shades project, which studied AI bias in gender classification tasks. IBM has stopped developing facial recognition technology; Microsoft has pledged to stop until regulations are in place; and Amazon has placed a one-year moratorium on police use of its facial recognition system. These moves respond to earlier concerns to varying degrees. Facial recognition systems suffer from serious bias: in a gender classification task, IBM's system was 34.4% less accurate at classifying darker-skinned women than lighter-skinned men. Even accurate facial recognition systems can be deployed in dangerous ways, which is why there have been vocal calls to ban and regulate the technology rather than merely improve it before eventual use. It took researchers and ordinary people pressuring these companies and scrutinizing these technologies to bring about these decisions and actions. After Gender Shades was published, IBM was among the first companies to reach out to the researchers to try to fix the bias in its system, while Amazon responded coolly to the project's findings. We should remain skeptical and support the activists investigating this. Amazon may wait for the protests to die down and then resume selling facial recognition to police, so it is crucial that activists and researchers push for legislation. As AI researchers, we should support these movements, for example by signing petitions or following possible legislation, and push for more regulation to be passed. Even with moratoria or restrictions, regulation is needed to ensure facial recognition is used responsibly, because the technology cannot simply disappear. Even if the big companies pause, other companies can still offer these services, and the technology may continue to reach law enforcement through licensing agreements. The REAL ID Act requires collecting biometric data, including facial photos, which complicates legislating facial recognition.

Sharon Zhou: It took researchers and ordinary people pressuring these companies and scrutinizing these technologies to bring about these decisions and actions. Joseph Redmon (pjreddie), the author of YOLO, stepped away from computer vision research after seeing his systems put to ill use, which raises questions about AI researchers' responsibilities. There is a tradeoff between security and privacy: we need to restrict government use of facial recognition while still allowing it for security purposes. Legislation should forbid companies from letting anyone download an app to identify other people. Regulating facial recognition resembles past regulation of eugenics or gene editing and must be handled carefully to avoid negative consequences. As the technology keeps developing, its uses and applications need to be addressed promptly, especially given that companies like Clearview AI have already sold the technology to large numbers of individuals and organizations. The Algorithmic Justice League's response to IBM and other companies halting sales of facial recognition to police calls for attention to algorithmic justice and racial justice. Research shows that facial recognition systems perform worst on darker-skinned women, highlighting the importance of intersectionality in AI. The Algorithmic Justice League calls on tech companies to donate to racial justice causes and to sign the Safe Face Pledge to curb abuse of the technology.

Chapters
The episode discusses the recent decisions by IBM, Microsoft, and Amazon to halt the sale of facial recognition technology to police, highlighting the role of civil rights and research organizations in pressuring these companies.

Transcript


Hello and welcome to Skynet Today's Let's Talk AI podcast, where you can hear from AI researchers about what's actually going on with AI and what is just clickbait headlines. I am Andrey Kurenkov, a third-year PhD student at the Stanford Vision and Learning Lab. I focus mostly on learning algorithms for robotic manipulation in my research, and with me is my co-host...

I'm Sharon, a third-year PhD student in the Machine Learning group working with Andrew Ng. I do research on generative models, improving generalization of neural networks, and applying machine learning to tackling the climate crisis.

And this week we are going to continue the conversation we had last week on the theme of facial recognition. So it was a giant news week last week, with IBM, Microsoft and Amazon all announcing that they will not sell facial recognition technology to police.

And there's actually not too much news that deviates from that since then. So we're going to be discussing some more stories related to that, starting with the story "The Two-Year Fight to Stop Amazon from Selling Face Recognition to the Police" by MIT Technology Review.

So as a summary of this article: many, many stories in the last few months have covered the use of facial recognition by law enforcement. It's definitely been very troubling, and actually exacerbated by the growing list of companies that are offering these services to the police as well as others.

In 2018, the article says, quote, nearly 70 civil rights and research organizations wrote a letter to Jeff Bezos demanding that Amazon stop providing face recognition technology to governments. So these companies were also presented with the results of a 2018 project called Gender Shades, which looked at AI bias in gender classification tasks.

And recent moves by IBM, Microsoft and Amazon have responded to such concerns to varying degrees, at long last, two years later. So IBM, for one, has stopped developing facial recognition technology. Microsoft has pledged to stop until regulations are put in place. And Amazon has placed a one-year moratorium on police use of Rekognition, which is its facial recognition system.

And facial recognition systems suffer from pretty terrible bias. So in a gender classification task, IBM's system performed 34.4% worse at classifying dark-skinned women than light-skinned men.
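To make concrete what an audit like Gender Shades measures, here is a minimal sketch of computing per-subgroup accuracy and the gap between the best- and worst-served groups. The data and function name are invented for illustration; the real audit evaluated commercial APIs on a benchmark of parliamentarian photos.

```python
# Hypothetical sketch of an intersectional accuracy-gap computation.
# The records below are toy data, not Gender Shades results.
from collections import defaultdict

def subgroup_accuracy(records):
    """records: iterable of (subgroup, predicted_label, true_label)."""
    correct = defaultdict(int)
    total = defaultdict(int)
    for subgroup, predicted, actual in records:
        total[subgroup] += 1
        if predicted == actual:
            correct[subgroup] += 1
    return {g: correct[g] / total[g] for g in total}

records = [
    ("lighter_male", "male", "male"),
    ("lighter_male", "male", "male"),
    ("darker_female", "male", "female"),    # misclassification
    ("darker_female", "female", "female"),
]
acc = subgroup_accuracy(records)
gap = acc["lighter_male"] - acc["darker_female"]
print(f"accuracy gap: {gap:.1%}")  # → accuracy gap: 50.0%
```

The point of reporting the gap, rather than a single overall accuracy, is that aggregate numbers can hide exactly the kind of subgroup failure the audit exposed.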

But even accurate facial recognition systems could be deployed in dangerous ways. Thus the vocal arguments we've seen for both banning and regulating facial recognition technology, as opposed to merely improving it before its eventual use. And there's definitely been mixed optimism about whether companies like Amazon will stay committed to acts like this moratorium. As a result, there's a lot of support for regulation, and for regulation fast.

Exactly. So this article, I think, is a pretty nice summary of the road to these events. And it showcases that it really does require researchers and also normal people putting pressure on these companies and scrutinizing these technologies so that, over time, these decisions can be made and these moves can happen.

One interesting thing noted by the article is that there have been various actions leading up to this. So for instance, after the Gender Shades project was published, IBM was one of the first companies that reached out to researchers to figure out how it could fix its bias problem.

Whereas on the other hand, Amazon, when the Gender Shades project encompassed its product and showed that it had this very bad bias, pretty much did not seem to work with the activists and was pretty unsupportive of the conclusions. So it's important to continue being skeptical

and having support for activists who are looking into this. A member of the ACLU actually said, the cynical part of me says Amazon is going to wait until the protests die down, until the national conversation shifts to something else in order to revert to its prior position, which is that of using facial recognition technology and selling that to the police.

So it is ever more important for activists and researchers to push regulation moving forward. Yeah, and I don't know about you, Sharon, but as I've been learning and hearing this news, especially as an AI researcher, I feel it more and more.

If I can support these movements, if only by signing petitions or keeping up with possible legislation, I will. This is one topic I'll be keeping an eye on and probably trying to weigh in on, pushing for more regulation to be passed, because it's about time. Yeah.

Yes, I definitely think it's extremely important. It makes me think back to what Joseph Redmon (pjreddie) did, the author of YOLO, stepping back and away from doing computer vision research because he saw his systems being used for ill, and his systems are powering some of these technologies. And I can see why he chose that path.

Right. So it's cool to see that there are these other researchers who are also activists and have been pushing these companies, and hopefully this will be a point of inspiration for the broader community, now that their actions have resulted in this large shift of these companies complying. And on that topic, we can move to our next piece, which is actually written by one of these activists. So this was on Medium

and was by Joy Buolamwini and, more broadly, members of the Algorithmic Justice League. And it is titled "IBM Leads, More Should Follow: Racial Justice Requires Algorithmic Justice." So this is broadly the Algorithmic Justice League's response to last week's events, commenting on

their view of the announcements and how we should see things moving forward.

So the systems that were reviewed in these studies, which included those from IBM, Microsoft, and Amazon, were indeed found to perform worse on darker faces than lighter faces in general, worse on female faces than male faces, and the absolute worst on darker female faces. And this highlights the often unseen yet pretty critical implications of intersectionality.

Yeah, so that's one point made in the article. And beyond that, there's also, as the title implies, a call for...

more support for racial justice as well as algorithmic justice. So it states that to go beyond public statements that Black Lives Matter, companies also need to commit resources to make that statement a reality. And it calls more specifically on tech companies that substantially profit from AI, starting with IBM, to commit at least $1 million each towards advancing racial justice in the tech industry.

Specifically, they have a Safe Face Pledge, and they're calling on these companies to become signatories. This is a mechanism they developed for making public commitments towards mitigating the abuse of this technology.

Yeah, so I think this is quite a good read. It's also pretty short, so if you have the time, I would recommend you look it up. Again, it's titled "IBM Leads, More Should Follow." And yeah, like the title says, it's really about next steps and continuing on this path of algorithmic justice and racial justice and...

Offering some very concrete steps for these large companies to continue taking action and not just sort of, you know, saying some nice statements and then not doing much more. And with that, our last article is called Amazon Can't Make Facial Recognition Go Away.

And at a high level, this article is getting at even though maybe we're setting a moratorium on facial recognition technology or curbing it in some way, we probably still need regulation to make sure that it's used effectively because we can't make it go away completely. The technology is there.

So essentially, even the best efforts, the article says, of three big companies can't stop the technology's spread or misuse. Licensing agreements might allow police departments to use parts of this technology, even if they can't use specific algorithms. And there are also plenty of

other purveyors, such as Clearview AI, which we've talked about before, as well as Palantir, and they're available to essentially fill this void now that these three big companies have stepped away for at least a year.

And interestingly, this article also notes that, from a legal perspective, back in 2005 Congress adopted the REAL ID Act to address a problem that the 9/11 attacks exposed, which was that most of the terrorists involved had acquired fake IDs. So that legislation actually required officials to verify that individuals have only one license, which

entails collecting biometric data and sharing it among different state and federal agencies. And biometric data possibly includes facial photos. So it's legally a little bit tricky, and the REAL ID Act in particular makes it kind of hard to legislate facial recognition, as the article argues. And so I suppose...

My takeaway is it's a little bit kind of complicated and hopefully following up on these announcements from companies, we can start looking at particular legislation and action from politicians we can fight for. Right. I agree. I think there's always this tricky line or tradeoff between security and privacy and that with, let's say, a benevolent government,

purely benevolent government that uses facial recognition technology only for good, then in that case it's great. It can help increase safety; it can do all these great things. But the problem is we're imperfect. We're human, and we're in charge of this technology, and this technology is trained on data that is produced by humans, and it's also used by humans. And so,

Yeah, it's this fine line of what's appropriate. Definitely. It also brings to mind some stories we heard before of Clearview AI sharing its technology, not necessarily with the government, but also with companies and individuals that were favorable to the company. So I think that's a case where we can pretty clearly state that

There should be legislation forbidding companies from just allowing or making an app that anyone can download to then recognize anyone else with just a photo of their face. That's a good starting point, at least.

And then from there, it'll be a tricky, but I think necessary, question of how we can restrict the government's use of facial recognition while still allowing it for cases of security where we might want it. I definitely see this

analogous to or at least parallel to the issues with eugenics in the past or at least gene editing and what that would mean for society and the regulations that have gone into

genetics in general, and those regulations are very important in terms of how we move forward as a society to do something good. And yes, it does add a lot of extra approval layers and limitations. But I think

overall it's worth it, so that we don't completely degrade and degenerate into something that we really fear. Yeah, I guess it's inevitable with the development of any technology that we need to reckon with how it's being used and its applications. With AI, it's actually maybe past time to do that, given that companies like Clearview AI are already selling technology to large groups of people and organizations.

So as we get a clearer picture and sort of get an understanding of it, hopefully over the next few years, we'll finally start to stabilize and understand how to control the situation while still getting the benefits of the technology. Agreed.

And with that, thank you so much for listening to this week's episode of Skynet Today's Let's Talk AI podcast. You can find the articles we discussed here today and subscribe to our weekly newsletter with similar ones at skynettoday.com. Subscribe to us wherever you get your podcasts and don't forget to leave us a rating if you like the show. Be sure to tune in next week.