
Protecting Society From AI Harms: Amnesty International’s Matt Mahmoudi and Damini Satija (Part Two)

2023/9/13

Me, Myself, and AI

People
Damini Satija
Matt Mahmoudi
Topics
Damini Satija: The key to AI regulation is focusing on outcomes rather than the technology itself. Current AI regulatory frameworks tend to chase technology hype and lack attention to long-term goals; they should focus instead on preventing or promoting specific outcomes, such as protecting human rights. The rapid adoption of AI in the public sector, especially in welfare and social protection, creates an urgent need for regulation. These "austerity machines" can amplify negative impacts on vulnerable groups, so stronger regulation is needed to safeguard social fairness. Over the next 10 years, the ideal direction for AI is to dismantle power structures, empower marginalized groups, and use AI to reduce social inequality. Company employees have a responsibility to advocate internally for attention to AI's potential harms and to take part in the relevant decisions.
Matt Mahmoudi: Although AI legislation is needed at the global level, regulation at the local level has been more effective in driving change. Local rules, such as the New York City and Portland bans on facial recognition technology, have played an important role in addressing the harms of AI. Even if the accuracy of the technology itself improves, its use against a backdrop of power imbalance still carries serious risks, such as intensifying police violence and violating civil rights. The risks that future AI technologies may bring cannot be ignored, and mitigation measures need to be developed in advance.

Chapters
Damini Satija discusses the importance of AI regulation focusing on desired outcomes rather than tech hype, emphasizing the need for frameworks that prevent negative outcomes like discrimination and privacy violations, especially in public sector environments.

Transcript

Today, we're airing an episode produced by our friends at the Modern CTO Podcast, who were kind enough to have me on recently as a guest. We talked about the rise of generative AI, what it means to be successful with technology, and some considerations for leaders to think about as they shepherd technology implementation efforts. Find the Modern CTO Podcast on Apple Podcasts, Spotify, or wherever you get your podcasts.

What can corporations learn from an activist organization that works to protect people from the harms of AI? Find out on today's episode. I'm Damini Satija. And I'm Matt Mahmoudi from Amnesty International. And you're listening to Me, Myself, and AI. Welcome to Me, Myself, and AI, a podcast on artificial intelligence and business. Each episode, we introduce you to someone innovating with AI. I'm Sam Ransbotham, professor of analytics at Boston College.

I'm also the AI and business strategy guest editor at MIT Sloan Management Review.

And I'm Shervin Khodabandeh, senior partner with BCG and one of the leaders of our AI business. Together, MIT SMR and BCG have been researching and publishing on AI since 2017, interviewing hundreds of practitioners and surveying thousands of companies on what it takes to build, deploy, and scale AI capabilities and really transform the way organizations operate. Welcome back, everyone.

On our last episode, Damini and Matt joined us to share a bit about what their organization, Amnesty Tech, is doing to combat troublesome uses of AI. Today, we're picking up on that discussion and sharing more detail about how you can be more aware of the dangers of artificial intelligence and, importantly, how you can help.

Damini, let's pick up where we left off last episode. For our listeners, we recommend you go back first and listen to our last episode, if you haven't yet, to get some more context on Amnesty Tech and the work that Matt and Damini are doing. Damini, you and Matt were starting to talk about AI regulation and how it can help us address challenging tech problems like housing algorithms, social work, and facial recognition systems. Let's pick up from there. Can you share more about your perspective on regulating AI?

Regulation is a key part of the toolkit here. We're working really hard on the EU's Artificial Intelligence Act, which is right now one of the most comprehensive frameworks out there for regulating AI. I think what's really important with regulation and what we're really missing right now is regulatory frameworks which really focus on the outcomes that we want to prevent or even promote.

And by that, what I mean is that a lot of the regulation we're seeing, even in the case of the AI Act, which is a very advanced piece of AI regulation, is very often tied to tech hype cycles and the technology that is the hype of the moment. And the way we've seen this really clearly with the AI Act is that in the last few weeks and months, as the conversation has really picked up around generative AI, we've seen policymakers who are deep in the AI Act,

which is really in its last phases, not know how to absorb generative AI into the framework. And I think we don't have a very robust regulatory framework if it cannot absorb a new technological development. And that's not what the goal was. In the early days, there was a lot of work done upfront with the AI Act saying we want to quote unquote future-proof this regulation. It will be an instrument that is ready to impose the restrictions and protections we need as the technology

develops. But right now it seems like it's not doing that. And I think that's because the regulation attempt itself is so tied to the technology hype cycle, as I say. And what we need is to be more focused on the outcomes we want to prevent. And so many of those outcomes are embedded in the way we think about human rights. So, you know, the right to non-discrimination, the right to privacy, there are certain outcomes we know we need to get to, to protect human rights, regardless of what the technology we're talking about is. So,

That's what I would add on the regulation front and what I think is really missing right now. I'd also add to the urgency of this, given the rapid pace of technological development, but also, slightly tangentially, given how algorithmic and AI systems, and technology in general, have been picked up in public sector environments, which is much of my focus and a lot of Matt's focus. These are very constrained environments, and the tools have been called austerity machines for that reason.

And given where the world is right now, in the latest stages of the pandemic, with the global economy seeing multiple shocks, we can very easily anticipate that these austerity machines could become even more commonplace. And that's why this applies to AI in general. But just thinking about the area that my team works on very specifically, in the welfare and social protection context, that urgency feels very, very dire right now. And secondly, these efficiency tools are often designed to detect or weed out fraudulent applicants for welfare and public services.

These are really punitive tools as well in the name of efficiency. That's where the disproportionate impact happens on low-income groups, communities of color, et cetera. So this entire narrative drives really harmful outcomes. We see that narrative only accelerating given the context that we're in. And so the case and the urgency for that regulation is very strong right now.

So I think that's pretty interesting. Part of my backstory is that I used to work for the International Atomic Energy Agency in Vienna, and that's the nuclear regulator. You can point to a lot of difficulties with that model, but we have not had nuclear explosions since 1945, or rather, we've not had nuclear warfare.

But Matt, you also mentioned local regulation. And this was an idea; even the EU is, let's say, country or group-of-countries related. But it seems like this is going to have to be something at the global level versus that local level. Or where do you see this regulation taking place? With nuclear, it seemed to require the global level. And it did a nice job, I think, I'm biased, but it did a nice job of pairing positive uses of the technology with limiting the negative uses. What level should this regulation be at, then?

I mean, there's no counterargument to say that there shouldn't be global legislation on this, or that there shouldn't be global-level agreements and resolutions in place that impose binding obligations on states when it comes to the deployment, development, and usage of AI, not just in civilian matters but in the context of warfare as well. However, as far as the most progressive and immediate-term impacts we've seen when it comes to advocacy in terms of regulation, it has been at the local level, because constituents are very good at activating their local lawmakers toward taking decisive action at, for example, the city council level. We're seeing

movements at the New York City Council level, just as we speak, toward a ban on the usage of facial recognition in residential housing. We've also seen measures that will be introduced later on that will speak to it in the context of law enforcement.

And we've seen, in the context of Portland, Oregon, legislation being put into place at a moment that was so critical, especially leading up to the Black Lives Matter moment with the murder of George Floyd, when the kinds of racial and racializing impacts of these technologies were becoming even more clear: that by allowing the deployment of facial recognition, you're not simply allowing the usage of an experimental tool that has more of a tainted record than a record showing positive affordances of any sort,

but you're also enabling the institutionalized racism that does exist within police forces to be put on steroids in a great many ways. And of course, a lot of the claims that protesters were making during this moment were against police abuse. And so you can't have a challenge to policing and then also facial recognition, right? So that's all to say that the local level will drive a lot of the demand, even at the global level, for regulation. And I think

it is stitching those pieces together and being able to draw out these stories, to say: look, there is no instance of XYZ form of technology or AI-driven surveillance, in any context, that has shown us that we can safely just take our hands off the wheel and let it do its thing. That is what's going to galvanize the kinds of regulation we might want to see at that level. And

the kind of EU effect that certain civil society organizations refer to might be something to look for: the ways in which regulation and regulatory models jump from one space to another. I will also say that there have been processes at the UN that speak specifically to the usage of autonomous weapon systems, which has been a long-winded process so far, but which does seek to address issues of AI in the context of warfare.

Most of our podcasts to date have been on how these tools could help drive dramatic improvements in efficiency and effectiveness and also do positive things for the world and for the environment, right? There's another side of this, which Damini and Matt have so eloquently shone a light on, and that is

the power imbalance in the usage and the outcomes. And I think it's an important dialectic that has to happen over time. So as much as I'd like to push more on whether technology can at least be part of the solution, and I fully believe technology is part of the solution, I think the existential nature of the issue is such that you need to have this dialogue and this discourse.

Matt, you talked about image recognition, right? If you look at image recognition improvement over time, it has improved exponentially. To date, it still creates problems when it's used at such a wide scale, right? Obviously, if it's got a close-up view, it might not. But imagine a world maybe 10 years from now, maybe 20, who knows, where the instrumentation is so far advanced, and the algorithms are so far advanced, and the safeguards are there, that it actually trumps a human. The very people that went and counted those cameras, it trumps their ability to tell the difference between people. I would assume in that world,

You'd be okay with it being used or not. What would be your view, let's say, if you were to project like 10 years from now, if some of these tools just don't make mistakes anymore, and now you only have the bad actor situation, but the tool itself does not make a mistake, would that change your position?

That's terrifying. To me, that's terrifying, because it creates conditions under which institutions that are imperfect, and that come at this from varying positions on an ethical spectrum, are suddenly empowered to do things at great scale with great precision. So it's no longer that, you know, you have the NYPD being able to just find whoever they can using facial recognition that was provided to them as sort of a test case; it's that they can go and target someone

specifically, and they're able to do so at a massive scale, no longer with the kinds of false positives that we've heard about from Detroit and New Jersey and elsewhere, but actually enabling them to carry the existing patterns of targeting out to fruition. And there's data on this, right? There's data to support that, for example, stop-and-frisk incidents target upwards of 90% black and brown people, and that these happen in

mostly black and brown neighborhoods. That is not because black and brown neighborhoods are predominantly full of crime; it is because that is where the targeting happens, and so there's greater visibility. And as it so happens,

most of the cases of the stop-and-frisk incidents don't actually lead to an arrest. So that again shows you that there is no credence to the idea that these communities are inherently criminal in any form or way. So then imagine a future in which the police is empowered to do exactly that, that is to say, a digital stop and frisk. Everyone is virtually lined up without their knowledge and consent, simply because that's how this institution operates. And now it's given free rein to do so

at a scale that it hasn't been able to operate at before. That is terrifying to me. We can't have that. I think that's a very real scenario that we have to consider, and it has profound implications for, you know, Americans' First Amendment rights, for the rest of the planet's right to protest, and that becomes harder. And so what do we do when we are faced with, say, for example, a state or a government that has suddenly fallen out of favor

or, as some would say, is populist, but has been equipped with these sort of awe-inspiring technologies of horror? How do you then challenge that government if there aren't protections in place to ensure that those technologies weren't given to them in the first place? That would be my contention as to why this is a terrifying scenario. Yeah, I think you're right. I think you're right. And this is why I actually think this was very helpful, because

You cannot de-link the technology from the user and the imbalance of power and the fundamental possibility of corruption in certain situations. You mentioned the state and the government entities. You haven't even brought in the corporate world, where so much of this is happening in a very concentrated hegemony of power, so that whatever oversight concerns you may have about the NYPD in some of your examples, I think we have that on steroids with the lack of oversight, or the lack of ability to control what happens within the darkness of a corporation.

And that's true even when there are well-intended people there. It's not ascribing malice necessarily; it's just describing self-interest. And as I think about the power imbalance, we as individual people have so very little of it with any of these other collectives. There's exactly zero of these artificial intelligence algorithms or recommendation engines that care about what I want to see.

They care about what the corporation or the advertiser or whoever wants; their objective function is fundamentally not my objective function. Now, often it's similar enough that it's okay, but inherently their objective function is not necessarily my objective function. And that's where the power balance seems really difficult. Damini, I'm going to also ask you: 10 years from now, how do you see the future here?

What would be good? What would be terrifying? Matt depicted a very terrifying future. Yeah, I would hope for the non-terrifying future. I think the question, to me, gets back to this question of power, and we've all mentioned it now, the power imbalances in the current technological trajectories that we see. Yes, corporate power and government power. And those two things are not disconnected either, right?

Like, where do governments get these technologies? It's actually increasingly less and less commonplace for them to be developed in-house; they're procured from somewhere. Government creates demand for these technologies. Companies sell them. It's all connected, and it's all part of the system of where the power sits from start to finish: who is investing in what technology? How is that technology being developed? Who decides where it's deployed, where it's sold? Who's buying it?

If I try and envision a future 10 years from now, which is not the terrifying future that you and Matt have discussed, it is the opposite of that. It's one where we're able to dismantle some of those power structures which drive the current trajectory of technological development. It's where we're able to give power to those who typically have not had voice in what technologies are developed and how they're used to their benefit. Because I do believe to an earlier point that

There are ways we can use these technologies and develop these technologies to really lift people out of positions of systemic disadvantage and marginalization in society. But we need to bring out the visions for how that can happen. And right now, there is no way for those visions to exist,

no time for those of us working on the human rights impacts of tech, and more importantly those impacted by the use of these technologies, to put those visions out there. So I don't have a specific future to give you or outline for you, but rather a way that I would like to move toward defining what that future looks like. And I think in order to do that, it's really important for us to not

always take a position of what are the benefits and what are the risks of each technology and force ourselves into a position of assessing every new technological development we see from a place of this balance, because we've been doing that so far and it's led us to a point at which we have these corporations with huge hegemonic power. Social media is a really good example of this. I think for years now, we forced ourselves to say sort of, yes, there are all these

bad things happening, or that we can foresee happening, but also, think about all these benefits that social media has brought to society. Yes, but there is a reason that we need to focus on those harms. There is a reason that we need to shine more light on those risks, because if we don't, we're not seeing where we need to shift the way that technology is developing, what red lines we need to draw, and what we need to change. Most of our listeners are corporate or government workers.

Right now, let's say they buy into the things you're saying. What should they be doing? What should they be thinking about? What should each person be doing right now? I mean, wherever you work, I think you have a responsibility, especially if you are building new technologies or contributing to the development of tech, the use of tech, manufacturing.

Matt and I work on some very specific contexts that we've talked about today. But as we've also mentioned in this call, there are so many domains in which tech is used that we don't work on: education, health care, others. And these issues sit across all of those domains. So I would just say, think about your position, the power that you do have, the responsibility that you do have to bring these issues up

internally, in whatever organization or company you are at. I think people forget sometimes the power you can have in bringing these issues to light behind closed doors. Some of our work is very public, but a lot of these conversations and the really important decisions happen behind closed doors. Matt, Damini, this has been a great and different discussion for us. It seems especially important as consumer AI tools start to proliferate. While we often focus on the positive ways that organizations can use AI,

Positive uses, as we know now, are not the only ways that we can use AI. But the good thing is that we have considerable human agency in how we use these tools. Thanks for taking the time to talk with us. Thank you for having us on. That's a wrap on Me, Myself, and AI, Season 7. We're blown away by how popular the show is, and we greatly appreciate all of our listeners. Please feel free to continue to make suggestions as we continue to grow. We'll be back later this fall with more new episodes.

In the meantime, please consider joining our LinkedIn community and rate and review our show. Also, please suggest it to any friends or colleagues who might benefit from these conversations. We thank you for your support and we'll speak again with you soon.

Thanks for listening to Me, Myself, and AI. We believe, like you, that the conversation about AI implementation doesn't start and stop with this podcast. That's why we've created a group on LinkedIn specifically for listeners like you. It's called AI for Leaders, and if you join us, you can chat with show creators and hosts, ask your own questions, share your insights, learn more about AI,

and gain access to valuable resources about AI implementation from MIT SMR and BCG. You can access it by visiting mitsmr.com forward slash AI for Leaders. We'll put that link in the show notes, and we hope to see you there.