The surveillance systems, while extensive, are not foolproof. Mangione moved through the city by bike, taxi, and possibly train, and was captured on camera at each stage. However, the quality of the images and the lack of a matching photo in the NYPD's database likely hindered immediate identification and tracking.
Facial recognition technology relies on image quality and the presence of a matching photo in the database. In Mangione's case, the images were not high-quality, and since he had no prior arrests, his photo was not in the NYPD's database, preventing an immediate match.
The Domain Awareness System, developed with Microsoft, integrates various data sources, including arrest records, summons data, warrants, and camera feeds. It is designed to help prevent terrorism and solve crimes by providing real-time information to officers, allowing them to respond more effectively to suspicious activities or crime scenes.
Clearview AI scrapes images from the internet without consent, creating a database of billions of images. Critics argue that this violates privacy and could lead to misuse, such as identifying individuals at protests or targeting specific groups. The company has faced legal challenges and bans in several countries.
Despite extensive surveillance footage, it was a McDonald's customer who recognized Mangione and alerted authorities, leading to his arrest. This underscores the reliance on human intervention to identify and locate suspects, highlighting the limitations of technology in real-world scenarios.
Real-time facial recognition could allow the government to track individuals' movements continuously, raising concerns about civil liberties and privacy. It could be used to monitor activities such as attending religious services, accessing abortion providers, or visiting gun stores, leading to a dystopian surveillance state.
The NYPD's budget is notoriously opaque, making it difficult to assess how much is spent on surveillance technology. This lack of transparency hinders public oversight and evaluation of the efficacy and impact of these technologies, particularly in terms of racial bias and misuse.
This is On Point. I'm Meghna Chakrabarty. When UnitedHealthcare CEO Brian Thompson was gunned down in New York City on December 4th, security cameras caught the entire murderous act as it happened. That footage was subsequently released by the New York Police Department. The shooter, wearing a hoodie, mask, and backpack, steps out from behind a parked car, calmly points his gun at Thompson's back, and shoots twice.
As you know, since then, law enforcement officials identified 26-year-old Luigi Mangione as the alleged killer and arrested him in Pennsylvania. But that initial security camera footage isn't the only surveillance NYPD has of the shooter.
Late last week, police released new information on how they believe he got out of New York City. Investigators believe Luigi Mangione rode a bike through Central Park after allegedly killing Brian Thompson, then took a cab to a major bus terminal in Upper Manhattan just to go back downtown, possibly taking the subway to New York's Penn Station. From there, investigators question whether Mangione fled to Pennsylvania by train.
That report from CBS News' Jared Hill. And what's interesting about the report is that each stop Hill identifies, the bike, the taxi cab, Penn Station, is accompanied by surveillance images of Mangione.
Which makes perfect sense, because New York City is one of the most surveilled cities on the planet. In fact, here's how NYPD describes a part of their surveillance system themselves. Quote, the New York City Police Department has a tool developed with Microsoft that utilizes the largest networks of cameras, license plate readers, and radiological sensors in the world. End quote.
And yet, even with that tech dragnet, always on and always watching, Mangione somehow managed to make it to Altoona, Pennsylvania, and wasn't arrested until a sharp-eyed McDonald's customer saw him, told a worker, and that worker called 911, all of which happened five days after Thompson's murder.
So what is the purpose of all that mass surveillance in New York? What are its advantages? When does it work? And what are its limitations, as shown by that five-day manhunt for Luigi Mangione?
Well, Faiza Patel joins us now. She's senior director of the Liberty and National Security Program at the Brennan Center for Justice. Faiza Patel, welcome to On Point. Hi, Meghna. Thanks for having me. So first of all, can you bring me up to speed on what you think is most pertinent or salient in the information NYPD says it gathered on Mangione through surveillance over the past week or so?
I think what's really salient here is for people to understand that technology is not magic. Sometimes it works, sometimes it doesn't, and that there is a very large human element in how technology is used and how well it operates in particular circumstances.
So let's take the case of the images of Luigi Mangione that have been circulating across the media. You have a photograph from Starbucks. You have a photograph from when he's in and out of a taxi cab. You have a photograph from a hostel. And then you have a number of these kind of blurred images, which...
seem to be taken from street cameras, likely NYPD cameras. So you have a great deal of variation in the kinds of photographs that are available of Mangione.
And if you compare those photos to, for example, how you take a photograph of an individual, right, if you're taking a picture, you take a picture head on, right? You're not—sometimes you take it from the side, but normally you're trying to get a person's sort of full face in it. You know, and these images are not that, right? And that, I think, brings us to kind of one of the first limitations of facial recognition technology, which is what is—
you know, the sort of way in which the police will identify an unknown suspect, right? So here they would take the photographs and they would run them through the NYPD's database, which has, I think, several million images in it, and those are arrest photos and parole photos. Now, you've got sort of two fault points here, right? One is, is the image quality good enough to be able to match against the database, the library? The second is, does the library have this guy's photo in it? And if he hasn't been arrested in New York and he hasn't been on parole, it's not going to have that photo, so they're not going to get a match.
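To make those two fault points concrete, here's a minimal sketch of how a one-to-many face search works in principle. It's illustrative only: the embedding function, similarity threshold, and candidate count are assumptions for the sketch, not details of the NYPD's or any vendor's actual system.

```python
# Illustrative sketch of one-to-many face identification; the embed()
# stand-in, threshold, and top_k are assumptions, not any real system.
import numpy as np

def embed(face_image: np.ndarray) -> np.ndarray:
    """Stand-in for a trained face-embedding model that maps a cropped
    face to a unit-length feature vector."""
    vec = face_image.astype(np.float32).ravel()[:128]
    return vec / (np.linalg.norm(vec) + 1e-9)

def search_gallery(probe: np.ndarray, gallery: np.ndarray,
                   names: list[str], threshold: float = 0.6, top_k: int = 5):
    """Return the top-k gallery candidates whose cosine similarity to the
    probe clears the threshold: candidates for human review, not an answer."""
    sims = gallery @ probe                  # cosine similarity of unit vectors
    order = np.argsort(sims)[::-1][:top_k]
    # Fault point 1: a blurry probe image drags every similarity score down.
    # Fault point 2: if the person was never enrolled (no arrest or parole
    # photo), the best-scoring "match" is still a stranger.
    return [(names[i], float(sims[i])) for i in order if sims[i] >= threshold]
```

Note that the search returns a ranked list of candidates rather than a single identification, which is why, as discussed later in the hour, a trained human reviewer is supposed to sit between the algorithm and any arrest.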
That's not the only option, though, that the police have. They have two other options. One is, at least in around 2018, 2019, that time frame, the NYPD was testing a contract with Clearview AI, which you may remember is a well-known company that has come under fire and has been fined and even banned in several countries. It scrapes images from across the internet, just anybody's images, yours, mine, you know, people who are completely private figures, and has created a database of what it says are 50 billion images. That's a huge number of images, right? So at least at one point, NYPD could have run his image through Clearview AI's database.
Another option that the NYPD has is that it can request a search from the FBI. So the FBI has a very, very large database of images taken from driver's licenses, you know, all kinds of sources. And as of 2020, which was the last number I've seen, they had something like 640 million images. Still not Clearview AI's scale, but it is a pretty big database. So those are like the three options that the NYPD has. Can I just jump in here for a second? Because I want to be able to sort of very surgically go through each of these options as you lay them out for us. But before I do that, I want to be clear to listeners that we did reach out to the New York City Police Department to see if someone could speak with us or if they had a comment or a statement to make regarding the issues that we're raising in this hour. NYPD did not respond to our requests.
So first of all, Faiza, let me just back up here for a second, because the very first thing that you said is quite important, that technology does not equal magic. Right.
Right. And the reason why I think this is so fascinating to learn about in concrete detail is that I think the general public has been habituated into thinking that technology, especially when law enforcement uses it, is a kind of magic. Right. Because I was literally recalling an episode of Law & Order: SVU from like 15 years ago where there was a terrible crime committed, but they all gathered around a computer. They had like one license plate and a face. And through some magic system, they were able to trace a credit card. And then that credit card was connected to a MetroCard. And then that MetroCard was pinging in different places. And then, of course, the credit card was also linked to a tolling mechanism in the suspect's car, and within half a day they caught the person. Now, that's Hollywood.
But I think a lot of people do feel like it should be that fast, which is why this five-day lag in arresting Mangione, and that only coming after just an actual human recognized him, seems to call into question, well, what good is all of this surveillance if ultimately at the end you rely on an upstanding citizen to say, I think that guy is the guy I saw on TV? Right.
Right. I mean, I think that that's sort of the third piece of this too, right? You have to identify someone, but then you also have to find them, right? And if somebody is, you know, clever and doesn't use their credit card and doesn't drive a car whose license plate you could recognize, but rather, you know, uses public transport or in this case, I guess, a cab and some kind of public transport, and you can still pay in cash.
in most of these places. So there are ways to kind of avoid being found. So I think that's also a piece of it, right? There's the image, there's whether or not you have a digital image in a database that matches that image, and then there's, even if you identify the individual, actually locating the individual. So those are like three separate phases of the process. And each of those has fault points. And I think we've seen those, at least it seems from the outside, kind of play out in this story as well. Do you think that those fault points, now laid bare, as you're saying, with the Mangione case, call into question how much money is going into this kind of very high-tech law enforcement surveillance?
I think the efficacy of the systems that the NYPD and other police departments have in place has never been properly tested, right? So you do hear, and I feel like this has to be true, that facial recognition has helped the NYPD and other police departments and the FBI investigate crimes and even apprehend suspects. At the same time, you know, the question is, well, what is the cost of this technology? And to tell you the truth, we don't know how much the NYPD spends on these technologies, right? The NYPD's budget is notoriously opaque. So it's not that you can sort of draw it out and say, well, you know, in 2022, the NYPD spent X hundred million dollars on surveillance technology. You don't have that kind of granular information available from the NYPD. You also have only very limited transparency from the police department about its surveillance technology. Now, in 2021, the New York City Council passed a law called
the POST Act, which requires the NYPD to make annual disclosures about its surveillance technologies and to include also impact statements about like, well, who is this technology most affecting? Because one of the big concerns around many surveillance technologies, including facial recognition, is this issue of bias, right? I mean, there's been a concern that, well, it has been documented that
facial recognition algorithms tend to work better on white or Caucasian faces and on men than they do on people of color, and particularly black people. So, you know, if you look at the wrongful arrests that have been made based on facial recognition technology over the last couple of years, which have received a lot of media coverage, there have been six arrests that have been reported, and all six of them are black. So you also have this racial dimension playing into it. So Faiza Patel, hang on for just a moment. When we come back, what I want to do is take a deep dive into each of the known technologies that you went through that NYPD either has currently or has had at one time. So that's what we'll do in just a moment. This is On Point.
You're back with On Point. I'm Meghna Chakrabarty. And today, Faiza Patel joins us. She's senior director of the Liberty and National Security Program at the Brennan Center for Justice. And we're talking about the vast surveillance system or systems that are in place in New York City, run by the New York City Police Department, and their efficacy and their limits, as shown by the five-day-long manhunt for Luigi Mangione.
By the way, here's a little summation of the route that alleged killer Mangione took on the morning of December 4th, when UnitedHealthcare CEO Brian Thompson was murdered. This is the route as compiled from security images obtained by CNN. At 6:17 a.m., police say a camera at a nearby Starbucks shows the suspect buying a bottle of water and two energy bars. Two minutes after that, at 6:19, a surveillance camera near a deli on West 55th Street appears to show the suspect walking and briefly stopping at a pile of trash. Eleven minutes later, 6:30 a.m., surveillance cameras pick up what appears to be the gunman on the phone. You can see a potential witness walking right behind him. 6:44 a.m., the tragic moment: UnitedHealthcare CEO Brian Thompson leaves his hotel and crosses the street. He walks towards the Hilton Midtown. You can see the suspect, wearing a backpack, walk up right behind him. Police say the gunman shot Thompson in the back and leg. Then, seconds later, the suspect crossed the street and went through an alleyway between 54th and 55th Streets. Police say he then got on an electric bike and headed north on 6th Avenue. Four minutes later, police say a camera spots a person believed to be the suspect riding an electric bike in Central Park. Twelve minutes later, 7 a.m., about 30 blocks away, from a Nest camera, more video of what appears to show the suspect riding on West 85th Street, but now without the backpack.
Once again, that's from CNN, from the security camera footage on the day of the murder. OK, so let's go back in time, though, because, Faiza, as you said, there's been quite a bit of development and installation of lots of different kinds of mass surveillance systems in New York City. There's one that you specifically mentioned, back in 2012. So here's a bit of tape from it. This is August 8th, 2012. Then-Mayor Michael Bloomberg announces the launch of a new, quote, real-time crime prevention and counterterrorism technology solution.
Today we're announcing the full launch of the Domain Awareness System. This new system capitalizes on new, powerful policing software that allows police officers and other personnel to more quickly access relevant information gathered from existing technology and help them respond even more effectively. In other words, we're finding new ways to leverage already existing cameras, crime data and other tools to support the work of our investigators, making it easier for them to determine if a crime is part of an ongoing pattern. And it will allow the NYPD to better deploy its officers. That was then-Mayor Michael Bloomberg in 2012, touting the Domain Awareness System, which I understand at that time cost NYPD $40 million. So, Faiza, you've said a little bit about it. Tell us more. What exactly is the Domain Awareness System? Who did NYPD contract with or partner with to install it? And what was its intended use case?
So the Domain Awareness System was developed in conjunction with Microsoft, I believe. And what it does is pull in different kinds of information, right? So it's going to pull in arrest data. It's going to pull in summons data. It's going to pull in warrants, outstanding warrants.
And then also if somebody calls in, say, to 911, right, those reports will be pulled in. It will pull in the location of those calls sometimes. And then it will also pull in, for individuals, you know, any license plate information, right? So if you're caught going through one of the bridges or tunnels where you have to pay, that comes in, along with any associated address information, phone number, date of birth, whether or not you have a gun permit. So all of this kind of information is pulled together. Now, the theory here is that when you pull all this information together, you're going to be able to do sort of two things, right? And the camera feeds, of course, are kind of critical to this, right? This all started in downtown Manhattan after the 9/11 attacks, when a network of cameras
mainly in sort of private businesses, etc., was fed into the NYPD's DAS system. So you have all of this information coming in. And the theory is that, you know, this can be used to do two things. So one is that it can be used to prevent terrorism. So, you know, presumably if an NYPD officer is monitoring the system,
and he or she sees something that causes alarm bells to go off, they can then pull in additional data if they can identify the individual who's doing something of concern, and quickly, you know, sort of find out who they are and what they've been up to if they're in the system, right? The second thing that it's supposed to do is to help solve crimes. So the idea being that if you have a potential suspect, say you have a photograph, for example, you could then try to come up with that person's identity, or you can sort of correlate them with crime complaints, etc. So the idea is that all of this data is going to help you solve crimes, you know, when you don't know who's carried out the crime or the suspected crime. So that's sort of the theory of this.
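As a toy illustration of that record-fusion idea, here's a hedged sketch; the record layout, sources, and person IDs are invented for the example and aren't drawn from Microsoft's or the NYPD's actual design.

```python
# Toy sketch of "domain awareness"-style aggregation: heterogeneous
# records merged into one view per person. All fields are invented.
from collections import defaultdict

records = [
    {"source": "arrests",  "person_id": "P-417", "detail": "2019 arrest record"},
    {"source": "warrants", "person_id": "P-417", "detail": "outstanding warrant"},
    {"source": "plates",   "person_id": "P-982", "detail": "plate read, Midtown tunnel"},
    {"source": "911",      "person_id": None,    "detail": "call reporting shots, W 54th St"},
]

profiles = defaultdict(list)
for rec in records:
    # Records tied to a known person are merged into that person's profile;
    # unattributed reports (like many 911 calls) stay indexed by their detail.
    key = rec["person_id"] or "unattributed:" + rec["detail"]
    profiles[key].append((rec["source"], rec["detail"]))

for key, entries in profiles.items():
    print(key, entries)
```

The whole value of such a system lies in that join step: once disparate records share a key, one query surfaces everything attached to a person, which is exactly what makes it powerful for investigators and worrying to civil libertarians.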
Can we just pause there? Maybe you're about to answer the question I'm going to ask. Because, OK, so critically, you're saying in theory, that is how the domain awareness system is supposed to work, right? Just to underscore and to be sure I heard you correctly, to help solve crimes when you're not exactly sure who did it.
Is that right? Yeah. OK, good. That's the theory. I mean, I haven't seen it in operation. So it's actually unfortunate that the NYPD didn't send somebody, because they could probably explain it better than I could, because I'm looking at documents and reports. One hundred percent. Once again, for total transparency to listeners: the NYPD did not respond to our requests. But in this case, though, I guess where I was going is that you had said earlier that the efficacy of these systems has never properly been tested, even. Well...
So I think when you look at efficacy, right, it is a very complicated issue, right? How do you test efficacy? So the place where, you know, system efficacy has been most tested has been with facial recognition software. Right. So the National Institute of Standards and Technology, NIST, which sits in the Commerce Department and is sort of the premier testing body in the United States, has conducted a number of tests of facial recognition technology. I think they did a big report in 2013. They did another one in 2019 and then did some follow-ups in 2022 and then in 2024.
And what that has shown systematically, particularly between 2013 and 2019, is that facial recognition technology has improved dramatically, right? So the kinds of error rates that you're looking at for the best algorithms, right, for the best tools on the market, are, you know, below 1 or 2%. So it has become more and more accurate over time.
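A quick back-of-the-envelope calculation shows why even a sub-1% error rate still matters at database scale; the figures below are illustrative assumptions, not NIST's numbers.

```python
# Why a small false-match rate still matters in one-to-many search.
# Both figures are illustrative assumptions, not NIST results.
false_match_rate = 0.001     # assume a 0.1% chance any single comparison falsely matches
gallery_size = 5_000_000     # assume a gallery of several million photos

# A one-to-many probe is effectively millions of comparisons, so the
# expected number of false matches grows linearly with the gallery.
expected_false_matches = false_match_rate * gallery_size
print(expected_false_matches)  # 5000.0 candidate "hits" that are wrong
```

In other words, an algorithm that is nearly perfect per comparison can still surface thousands of wrong candidates per search, which is part of why headline accuracy figures don't settle the efficacy question.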
At the same time, you know, you have to acknowledge the limits of the kind of testing that NIST performs, right? So for one thing, it's voluntary testing, right? You decide as a company whether you want your algorithm to be tested. And the system that NYPD reportedly uses, which is something called DataWorks, I believe, was not among the list of tested systems. So I cannot tell you, sitting here, you know, does the NYPD have a state-of-the-art facial recognition system or does it not, right? There's a huge variation among vendors as to accuracy. Yeah. So I think that's one thing. On that point, one of the reasons why we don't know, and this is where I want to talk with you about the POST Act, right, is that according to the Surveillance Technology Oversight Project, which looks very, very closely at electronic surveillance, specifically in New York; they're New York-based.
They gathered a lot of FOIA data, essentially, and say that NYPD, up until 2020, purchased nearly $3 billion in secret surveillance equipment that they say had previously been hidden from the public, because NYPD was filing those purchases under a quote, special expenses program, which avoided scrutiny. And that was one of the things changed by the Public Oversight of Surveillance Technology Act.
No, the POST Act does not look at financial disclosures. The POST Act is a kind of basic transparency measure that says, you know, tell us what technology you're using. Tell us what your standards are. Tell us what rules you have in place to make sure the technology isn't abused. It also actually does require disclosures on efficacy, though I don't think the NYPD has ever really provided answers on that, and on impact, right? Like, who is it impacting? Because racial bias is a huge concern when it comes to surveillance technology, just as it is with policing generally.
Okay. Well, so then in that case, the POST Act is asking for a certain amount of oversight, but to be clear, you're saying that NYPD is simply not complying with that? Well, I think two things are happening, right? So the NYPD is putting out its POST Act-required statements, et cetera, but the statements themselves are often inadequate, right? And they're not sufficient to allow any kind of real oversight of this technology. So the NYPD inspector general, for example, I think this was last year, you know, did an audit, and they found that the NYPD really was not providing the kind of data that was needed in order to evaluate its use of technology. So you have a huge transparency gap over here. Now, you know, when the POST Act was being passed, I remember the NYPD, I don't think it was the police commissioner, it was their chief of counterterrorism, I believe, went on
MSNBC and said, well, you know, with this POST Act, you're just going to give terrorists the tools they need to avoid surveillance, which is kind of a joke, because the amount of transparency we're getting out of the NYPD based on the POST Act is quite limited. We have a rough overview of the kinds of technology they use, and I have to say there are some important things in there. But, you know, with technology, the devil is in the details, as we were talking about with facial recognition, right? It all sounds so easy when you see it on TV, but the reality is much messier. And so without having a better understanding of exactly what technology the NYPD uses in a particular scenario, it is difficult to evaluate, one, its efficacy; two, whether the PD has enough safeguards in place to prevent its abuse; and three, what its racial impact is. So all of these things really do come down to digging into the details of the technology. I'm Meghna Chakrabarty. This is On Point.
Now, Faiza, you've actually pointed out something which is extremely important, and that is New York City, right? It's one of the most vibrant, biggest, most diverse cities in the world, right? And it was also the target of the worst terrorist attack on U.S. soil. So counterterrorism is a major part of what the NYPD has to work on. And to that effect, I understand that some of this technology that's being used was developed for Iraq and Afghanistan. And so I wonder if there's an argument to be made that these technologies have been tested for efficacy, just not on U.S. soil. So I would make the opposite point, actually, which is that, you know, technology that may be appropriate for a battlefield is not appropriate for an American city. And that's because the standards generally on a battlefield are much lower. So take, for example, use of force, right? Civilian police departments have a much higher threshold, at least in theory, for using force than the military does, right? So you don't have the same constraints. On a battlefield,
you're not operating within the framework of a constitution, right, with people's privacy rights, people's First Amendment rights to gather and not be picked up and targeted on the basis of participating in protests. You have particular laws that apply, right? Civil rights laws, for example. So you have a framework within the United States that is not the framework that applies
you know, in the context of a war. You have the laws of war that apply in that context, but they are not nearly as constraining as the legal framework that's applicable on domestic soil. So I would say that's the first point. The second, and that kind of relates to your efficacy point, is that you're looking for efficacy in a particular context, right? You're not looking for sort of efficacy writ large. And I think that is, in fact, one of the big
concerns even about the NIST studies and facial recognition technology. There was a report earlier this year by the U.S. Commission on Civil Rights in which they pointed out that NIST had done these trials, but that NIST did not in any way replicate the real-world conditions in which facial recognition technology is deployed, right, which are very diverse. So, you know, if you are testing a facial recognition algorithm using mugshots, right, straight-on shots, like the one you saw in the newspaper of Luigi Mangione, that's one thing, right? The algorithm's going to be much better at those kinds of shots. Similarly, when you try to open your phone using facial recognition technology, that's going to be pretty accurate. It's just trying to do a one-to-one match.
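For contrast with the one-to-many search sketched earlier, phone unlock is the easy, one-to-one "verification" case: one claimed identity, one stored template, one yes/no answer. A minimal sketch, with an assumed threshold:

```python
# One-to-one verification (the phone-unlock case), sketched with the
# same unit-length embeddings as before; the threshold is an assumption.
import numpy as np

def verify(probe_embedding: np.ndarray,
           enrolled_embedding: np.ndarray,
           threshold: float = 0.8) -> bool:
    """One comparison against a single enrolled template: a far easier
    problem than ranking candidates out of millions of gallery photos."""
    similarity = float(probe_embedding @ enrolled_embedding)
    return similarity >= threshold
```

The verification case can afford a strict threshold because a false reject just means trying again, whereas a one-to-many police search faces blurry probes, huge galleries, and consequences on both kinds of error.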
But what police departments are usually doing is taking photographs that are not particularly good, whether they're taken from an ATM or a Starbucks or a hotel camera, right? They're not the kind of full-frontal photographs that you're seeing in testing. So efficacy has to be tested in the context in which the algorithm is going to be used for the testing to be truly useful in assessing the technology. Okay. Well, as we head towards the next break, there's one more moment from back under the Bloomberg administration in New York, of Bloomberg talking about the Domain Awareness System. This is former Mayor Michael Bloomberg.
Those systems include a network of cameras, many provided by private businesses in finance, banking, telecommunications and other industries that are programmed to sound an alarm if they spot anything suspicious, such as an unattended package at the entrance of a building. And most of those cameras are in lower Manhattan and midtown Manhattan.
The center also includes 2,600 radiation detectors that have been distributed to NYPD officers on patrol, as well as more than 100 license plate readers that are in place at bridges, tunnels, and streets. And several dozen mobile license plate readers are also deployed on the city's police cars, allowing suspected automobiles to be tracked in real time. We'll be back. This is On Point.
You're back with On Point. I'm Meghna Chakrabarty. And today we're joined by Faiza Patel, who's senior director of the Liberty and National Security Program at the Brennan Center for Justice. And we're talking about how the manhunt and arrest of Luigi Mangione, how that case shows the
limitations of the mass surveillance systems that are run in New York City by the New York City Police Department. And to repeat once again, we did reach out to the NYPD to see if someone from the department could join us or if they would answer questions that we had or even provide a statement. We did not hear back from the NYPD.
Faiza, I want to talk a little bit more. We talked about the Domain Awareness System from back in 2012. But moving forward in time, you had mentioned Clearview, right? Because facial recognition comes up a lot in this conversation. And I want to spend a minute talking about Clearview in more detail. So remind us, Clearview AI is essentially a company whose technology was pretty widely embraced by law enforcement agencies in multiple areas.
And as you said, they were scraping people's images from just about anywhere on the Internet.
Yeah, pretty much. Venmo, I didn't even know Venmo had pictures, but Facebook, Instagram, all the social media platforms, you know, they were basically scraping images without the consent of the individuals whose images were put into their database. And their database has grown very dramatically over the last few years, right? I remember...
I believe there was an article in The New York Times that first talked about it maybe three, four years ago, and they had three billion images. And then the next number I saw was 30 billion images. And now the most recent number I've seen is that they have 50 billion images in their database. And according to Clearview, I think some 3,000 police departments use its technology. That's about one in six of all police departments in the United States. So it has a very large presence in this country. And the promise of the technology, or the alleged promise, is that, well, if you are looking for a particular person who, let's say, a surveillance camera caught at the scene of a crime,
But you don't have that person's face on file already. And the FBI, with its comparatively meager 640 million images, doesn't have it either. You could do a quick search with Clearview and, voila, identify the person. Mm-hmm.
Yeah, that's the promise. Okay. Well, just to describe to listeners how problematic this was, there was a huge case against Clearview, right? And I believe just this past summer, in June, a proposed settlement was made public that would pay damages to class members in a class action lawsuit who said that their privacy essentially was violated. But just a couple of days ago, I'm seeing here, Reuters reported on December 13th that 22 U.S. states and the District of Columbia are telling a judge that they oppose the settlement and do not think that the privacy issues have been resolved by it. So that's still going on.
With Clearview AI, do we know if it's still in use by NYPD? So NYPD has said on the record that it does not use Clearview AI. On the other hand, you know, there were FOIA documents that were released that showed that the NYPD certainly trialed Clearview AI back in 2018, 2019. Officers even had it on their phones and could just run someone's face through it. And I think they conducted some 5,000 searches. So as far as we know, based on their public statements,
they don't have Clearview AI, but the FBI has access to Clearview AI. So they could, in theory, go through the FBI and get access to that database as well. So I think there are a lot of ways to get around the fact that they don't have access to Clearview AI. But one thing that's sort of interesting, maybe for your listeners to understand, is that when you run someone's face against a database, right, it's not going to give you the answer. It's going to give you a list of options. I think the FBI one, for example, generates up to 50 options. So then you have an officer who has to actually look through those and decide, you know, which one is the right one. So again, it's not magic. It always has this human component. But we do know that the NYPD, like other police departments, has also used facial recognition technology and even drones to monitor protests and the like. So that, I think, is something that's really worth thinking about, because we've spent all this time talking about, you know, what are the limitations of facial recognition technology, which I think the UnitedHealthcare case illustrates very well. But then there's the whole,
I think in some cases, even scarier issue of like, what about if facial recognition technology works really, really well all the time and we have it everywhere? And I think that piece of it is also really important for us to think about. But again, we don't know because there's still limited transparency in terms of how, why, when and to what effect NYPD is using these technologies.
True, but we do know that facial recognition technology is getting better and better, right? We know that it's becoming more and more ubiquitous. You know, more and more police departments are using it. We also know that it is not regulated in the United States. You know, there are a couple of jurisdictions that have banned it.
There is no federal law that regulates how, when, where facial recognition can be used. There's certainly no law in New York that would constrain the police department. So basically you have a kind of wild west of facial recognition technology where you have an incredibly potent and powerful technology which can be used to solve crimes but can also be used in ways that are really antithetical to a democratic society.
Can I go back to what could be argued as a catch-22 that police departments find themselves in and maintaining our focus on NYPD because of the counterterrorism part here? I mean, you kind of brushed off earlier the argument that NYPD makes that, well, if we talk too much about how well this stuff works, it's going to give potential terrorists insight into how to skirt the system. I don't actually think that that is an overblown argument.
Right. Because this is one of those situations in which a single failure is a catastrophic one, right, as we saw on 9/11. And so it seems actually quite understandable that law enforcement would be reluctant to give too much information to lawmakers or to the public about how these technologies work and when and how they're utilized. I mean, there is some justification to that argument.
I mean, yes and no, right? I mean, certainly, you know, you don't want operational details from the NYPD, right? So you don't necessarily need to know how they're conducting their operations. But you do, I think, in a democratic society, need to understand, you know, the capabilities. And you also really importantly need to understand the safeguards, right? So
you know, when we talk about facial recognition technology, I mentioned that it's not just a question of you put something in the computer and you get an answer; you get a series of options that an officer then has to evaluate, right? So it's important then, for example, that an officer has special training on how to actually utilize facial recognition results, and has training on how to avoid the well-known phenomenon of automation bias, by which, you know, you're like, oh, the computer said it, it must be right. So all of these kinds of things are really important. And then, you know, the NYPD says that it does have specially trained folks to look at FRT results, sorry, facial recognition results. But the Clearview AI documents show that officers were just using it, and they weren't just the specially trained officers. People just had it on their phones. So you do need to have some understanding of how it's being used. And I think it's really important, when we think about the justifications for the technology, to also spend some time thinking about, well, what are the
use cases of the technology that are clearly abusive, that we as a society don't want to see take place, and how do we prevent them? So, you know, I mentioned protests before, right? The use of drones and facial recognition technology to identify people at protests. This is something that has been done by the NYPD, by other police departments, and by at least six federal agencies. And what law enforcement will say is that, well, we've done it because sometimes at protests there's criminal activity taking place, right? January 6th is another example. And so we're using that to identify suspects, which,
you know, seems reasonable. At the same time, there's literally nothing on the books that's going to prevent the same agencies from using that technology simply to identify people who are at a protest and then surveil them or harass them thereafter. And you can easily imagine that kind of situation happening, right? Similarly, we know that China, for example, uses facial recognition technology that purports to be able to identify individuals who are Uyghurs. We all know about the way the Chinese government treats the Uyghurs.
Imagine some tech CEO is like, wow, this administration is going to do a mass deportation effort. Why don't I sell them a piece of software that can identify individuals who appear Hispanic, because they might be illegal immigrants? You've got to think about, well, what are the rules around this technology? How would we prevent that from happening? And right now we don't have any of those rules. Right.
You know, the scenarios that you just laid out, let's add a little thought experiment to this, right? Let's presume for a moment that these technologies are actually really excellent at what you said, right? Identifying people at a protest or, you know, maintaining databases on folks like that, etc. But at least, you know, here we have an example of a very high profile crime in which these surveillance systems were, as far as we know,
not good at helping law enforcement very quickly track down the perpetrator of a high-profile murder. You know, and I'm saying this sitting here; our home studio is in Boston, Massachusetts. And, you know, back in 2013, there was that terrible Boston Marathon bombing. And, OK, that was more than a decade ago now, so technology has gotten a lot better since then. But very quickly thereafter, there was surveillance footage of the bombers at the location where the bombs went off, and their identities were released to the public, etc. It still took a full week before they were found. And they were only found after they shot up another town. OK. And then law enforcement and state officials still took it upon themselves to lock down a million people in eastern Massachusetts in the name of finding these guys. And they ended up finding him after the lockdown was lifted, when a homeowner in the same town where the gunfight took place early that morning found Dzhokhar Tsarnaev bleeding in his boat. So I only raise that as an example, because it's a very visceral one to me, having lived through it, that the entire apparatus of law enforcement and the surveillance systems that were available at the time were totally ineffective for the highest-profile case in this region. It does call into question how good, even 11 years later, all of these things are at, you know, quickly tracking down a criminal or alleged criminal, let alone preventing crime. I mean, I think those are valid questions. And it's something that really does need to be studied. And unfortunately, what happens here is that law enforcement tends to be quite
resistant to having evaluations of its systems done in any sort of independent manner. That's why I think, for example, the NYPD IG is such a valuable part of our oversight system, because they, at least again in theory, do have the ability to kind of dig into these systems and to make public the good and the bad and all of that. But I also think, you know, that we have to ask ourselves, to some extent, how much surveillance we want, right? So, for example, you talked about the Boston Marathon bombers, right? In one sense, and I think this is true also in the case of Mangione, some of the technology worked well, right? Like, we had a picture of this guy up pretty quickly, right? And so we had the photo and it was everywhere. You couldn't miss it. We had lots of photos. So that's one piece of it. The second thing is, you know, being able to track somebody on an ongoing basis. Well, you know, if you're a police officer and you get a call, right, about Mangione, for example, that there's been a
shooting in Midtown outside the Hilton. There's this guy. This is the description. You know, he's wearing a backpack, et cetera, et cetera. Now you're looking at all of the surveillance cameras in the area and you're trying to identify, pick out this one individual. Faiza, hang on for just a second. I just have to quickly say I'm Meghna Chakrabarty. This is On Point. But go ahead. So you've got to pick out someone wearing a backpack in New York City. Okay.
Right. Right. I mean, you know, the technology helps you do that. Right. But it isn't going to do it all for you. And I'm someone who spends her life being kind of critical of surveillance technologies and law enforcement uses of them. But I think expectations are also sometimes a little higher than they need to be, given that, again, it's not magic. It requires a lot of footwork and a lot of grunt work as well. So I think it's important to keep that in mind. So we have about a minute left. And I'm wondering, given that AI itself is an area of technology that's advancing in just enormous leaps and bounds virtually every minute,
How much more, I don't know what the right word is, comprehensive, even invasive, do you think these kinds of law enforcement technologies could get?
I think the scary piece, particularly with facial recognition, is real-time facial recognition, right? So right now they can sort of basically track you through a city, like they did with Luigi Mangione, but they don't have real-time facial recognition. So it's not like China, where the camera sees you, the camera recognizes you, and the camera knows who you are in the moment. Now, that might be really helpful in a case like this, right? But at the same time, that I think is really a dystopian future, which puts way too much power in the hands of the government to track our everyday movements, to see whether we go to the mosque or the synagogue, if we go to an abortion provider, if we go to a gun store. Everybody's civil liberties are at stake when the government is tracking you consistently and persistently.
Well, Faiza Patel, Senior Director of the Liberty and National Security Program at the Brennan Center for Justice, thank you so much for being with us. Thanks for having me. I'm Meghna Chakrabarty. This is On Point.