Welcome to Intelligence Squared, where great minds meet. I'm producer Mia Sorrenti. On today's episode, we'll be exploring how AI is changing the face of war.
Autonomous weapons, facial recognition technologies and industrial-scale disinformation systems are already being deployed in the Middle East and Ukraine. As countries race to harness AI's military potential, we are faced with profound moral dilemmas and tough regulatory questions about the place of AI in modern conflict. To discuss this issue, Intelligence Squared hosted a live panel at Web Summit in Lisbon.
Researcher and Senior Policy Advisor Adam McCauley was joined by Agnès Callamard, Secretary General of Amnesty International, and Kenneth Cukier, Deputy Executive Editor at The Economist, to explore how AI is changing the face of warfare in the 21st century. Let's join our host, Adam McCauley, with more. Hi everybody. Welcome.
This is the Intelligence Squared podcast, How AI is Changing Warfare, and we are live from the Web Summit here in Lisbon, and I am your host, Adam McCauley. So this week at the summit, we've explored the near endless frontier of technological development, from robots folding laundry to advanced applications that leverage artificial intelligence to improve personal and professional life.
Conversations and presentations this week have also taken us to the intersection of technology and politics: a recognition that the way technologies are used has profound societal implications. Which brings us to our discussion today: AI and its influence in warfare. We live in interesting and uncertain times, and our so-called moment of increasing geopolitical and great power competition has emerged in parallel with the rise in rhetoric around transformational artificial intelligence. In this context, the pursuit of world-changing AI capabilities has increasingly been viewed through a military and political lens, fueling a narrative that the winner of this next great arms race, as it were, may just rule the world.
But at the heart of this puzzle is how AI technology collides with human institutions and the decision makers in those institutions, and how these tools already influence how we see, understand, and act in the world. So today I'm delighted to be joined by my panelists.
To my right, Kenneth Cukier is the Deputy Executive Editor at The Economist and a New York Times bestselling author of books on technology and society, including Big Data: A Revolution That Will Transform How We Live, Work, and Think, and Framers: Human Advantage in an Age of Technology and Turmoil.
To his right, I have the honor of welcoming Agnès Callamard, the Secretary General of Amnesty International and a lifelong human rights advocate. She is the former UN Special Rapporteur on extrajudicial, summary or arbitrary executions, the former Executive Director of the organization ARTICLE 19, and the former Director of Global Freedom of Expression at Columbia University in New York. And it's a pleasure to have you both here today.
So I'm going to open this up initially. I think the purpose of a conversation like this is to ground such a big and profound topic in the world as we see it. So to our panelists and maybe Agnes first, I wonder if you can take us into the story of AI as it's unfolding, perhaps in Ukraine or the Middle East as it were.
Thank you very much. It's a pleasure to be here and to participate in a hugely important discussion. I think the war in Ukraine has confirmed the transformation of warfare caused by drones, and I'm going to focus on the use of drones in particular, demonstrating that the war of the future will be about maximum drones and minimum humans. But not the victims, of course; the victims will remain human. I worked on drones when I was a Special Rapporteur, and I warned against the proliferation of drones then. That was five or six years ago. Since then, we have seen an acceleration in the use of drones, more and more sophisticated, including swarming drones, which will move towards lethal swarming drones. We know from what we are hearing about the situation in Ukraine that using drones does not mean you are not targeting civilians and civilian infrastructure. And I think we need to remember,
constantly, as we deal with artificial intelligence, what wars are for and what is in the nature of war. We need to recall that the vast majority of wars have not been fought only on and through the bodies of soldiers. They have been fought predominantly through the bodies of civilians: men, women, and children who had nothing to do with the conduct of warfare. We need to remember that wars, over the years of our so-called human civilization, have been directed at exterminating people because of their ethnicity, because of their race, because of their political dissent, and because of a range of other markers. And we need to remember that artificial intelligence will make that kind of war more effective.
It will not make it more humane. It may make it more deliberate, and it may make it more accurate, but not in terms of targeting what could be described as lawful targets. It will make it more accurate in targeting those who have been targeted over the long years of warfare in our human civilization. So let's not be fooled by those who will tell you, "Oh, but through artificial intelligence we will be more accurate." Accurate for what? Let's never forget what wars are for. They are not about soldiers only; World War I was a vast exception in our human history. Drones in Ukraine demonstrate that. Drones in Libya demonstrated that. And the use of facial recognition in Gaza is a clear demonstration of how artificial intelligence is being used to so-called target individuals on the basis of data that are very problematic, that are not transparent, and for which there is very little accountability.
We at Amnesty International are extremely concerned with those developments, at least the ones we are aware of. Over the last year I have attended a number of conferences which demonstrate to me that armies and military thinkers around the world are focusing on Ukraine and Gaza at the moment as a way of learning what the next wars are going to be.
So Agnès, thank you for that. And it's an important reminder that perhaps the nature of war hasn't changed, but what we're here to talk about is how the character of war has changed. On that, I'd love to bring in Kenneth to get your observations about what technological advancements look like when they're applied in this space. What should we be thinking and talking about in the context of an event like this? Sure.
Let's all first agree that war is heinous and we should devote all of our energies to avoiding war. That's not a platitude: those of us whose families lived through the bloodlust and terrible destruction of World War II, and of wars in Europe and anywhere in the world, know that in their DNA, and I know I do.
If we are going to have war, it is just a fact of life that militaries are going to try to get a technical advantage, among others, and AI is the next iteration of that. The doctrine of the Pentagon is that there should be a human on the loop or a human in the loop, and they use both terms, which suggests to me that there's not just a vagueness there but that it's probably a short-lived doctrine. Because if you think about how the technology of AI works, the whole purpose is to replace human cognition, in the same way that you don't have a PhD in library science look at every single Google query, approve it, and then serve it up to the person who made the request. So the idea that you're always going to have a human in the loop in a case of lethality
doesn't make much sense. So then the question becomes: how can we constrain or channel these technologies so that they are effective in the way military officers believe they need to be, while also minimizing the casualties and unintended consequences that bring no military advantage? I actually agree 100% with Agnès that the shift, actually a recent one of the last 100 or 150 years, in which war went from military against military to military against civilian, is one that we should try to reverse. In a very disgusting way, you would actually prefer a world in which militaries are fighting militaries and not bombing civilian populations, for obvious reasons. But think about the second-order effects: we will move to a world of M2M combat, which is not military-to-military but machine-to-machine combat. And that becomes a little more interesting, because first it will avoid the bloodshed that is so horrific for both sides.
But also, if you're going to have a conflict, at least it could have the potential to be more humane. And although that sounds somewhat absurd, let us remember that over the last 150 years we have been creating the Geneva Conventions and Protocols, in all their different iterations, so that there could be something like laws of war. And indeed, look at conflicts, whether Vietnam, Afghanistan, Iraq, or now Ukraine: chemical, biological, and nuclear weapons have not been used. Chemical weapons have been used in Syria, sadly, domestically, in a civil war. But it is to say that there is a modicum of restraint being applied in the case of Ukraine. It is not all-out war.
That gives us scant succor, if you will, because the point is that the nature of war is changing. It is hitting its Moore's Law moment: we are going to have a digitization of all weaponry, and things that are big, like tanks and planes, can be brought down by things that are small, like drones. That is going to change the nature of militaries, of procurement, of what it means to be a soldier, and of conflict itself.
I mean, as you can see, the implications of something like this are wide-ranging, but it also poses a challenge for most of us in trying to understand what these changes will look like.
So perhaps a useful metaphor here is to think in terms of both hardware and software. Agnès's example of the use, and the proliferation, of drones, and of how those drones navigate the world they encounter, is in some ways a function of the hardware innovations we've seen in technology. But there's an important software aspect to this, and I mean software in a wider sense: the ways in which we as humans, within institutions and through developed processes, try to make sense of what these tools enable, what they allow us to see, how they shape what we see, and where we still find ourselves, hopefully, with meaningful control over how they're used. It's important to distinguish here, and I think Agnès will have a lot to say on this, the use of algorithms or machine learning programs in targeting, or in identifying potential targets, and what that means. This takes us into a wide-ranging conversation about things like distinction and proportionality in conflict, which we won't have time to cover in full today. But the idea is that there's something important not only in what militaries are doing but in how they're doing it: whether they're doing it with the same due regard, and whether the moral and ethical implications and responsibilities are clear enough for those who hold this power.
But I'd love, starting with Agnès anyway, to maybe unpack some of the more tangible examples of the way in which we see these tools being deployed in the world and what the implications, at least at first glance, appear to be. Thank you very much. Well, we have seen the use of those so-called precise targeted killings throughout the world ever since the beginning of the war against terrorism.
I would like to highlight a few things. First, yes, the direct target may have been someone who was, in quotation marks, a criminal. That does not make their deliberate killing lawful, particularly if no war has been declared or is being waged. For the last twenty-some years, we have seen the deliberate targeting of individuals through the use of drones, extraterritorially, in cases and situations where there was no war. This is completely unlawful.
So, point number one. Point number two: the accuracy of those weapons is in fact largely open to question. For most of us monitoring these weapons, what we have found is that they are in fact less accurate than other, more mainstream weapons. We have seen very large numbers of civilians killed as so-called collateral damage. We have seen many mistakes, because the data upon which the targeted killings and the drones rely are not themselves accurate. The targeting can only be accurate if the data is accurate, if the data is collected in real time, if nothing has changed, and if the basis on which the data is collected is good enough. If you collect data on the basis of age, gender, and occupation, you are very likely to target individuals who have committed no crimes.
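A back-of-the-envelope calculation makes this point concrete. The following is an editorial illustration with entirely hypothetical numbers, not anyone's actual data: when the people a system is looking for are rare, even a classifier that sounds accurate flags mostly innocent people, because false positives from the huge innocent majority swamp the true positives.

```python
# Hypothetical base-rate arithmetic: all numbers are illustrative assumptions.
# A classifier that sounds "90% accurate" still produces a list that is
# overwhelmingly innocent people when the trait it hunts for is rare.

population = 1_000_000        # people matching a coarse profile (age, gender, occupation)
prevalence = 0.001            # fraction who are actually combatants (0.1%, assumed)
sensitivity = 0.90            # true positive rate of the classifier (assumed)
false_positive_rate = 0.05    # innocents wrongly flagged (assumed)

actual_positives = population * prevalence
actual_negatives = population - actual_positives

true_positives = sensitivity * actual_positives
false_positives = false_positive_rate * actual_negatives

flagged = true_positives + false_positives
precision = true_positives / flagged

print(f"People flagged: {flagged:,.0f}")
print(f"Of those, actual combatants: {true_positives:,.0f} ({precision:.1%})")
# -> ~50,850 flagged, of whom only ~900 (about 1.8%) are true positives:
#    the overwhelming majority of people on the list committed no crime.
```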
I will not even mention the fact that the constant presence of drones above people's heads is an enormous source of stress, mental illness, and so on and so forth. So that is the reality of drones, and we are making those weapons much more effective now. We are moving towards them becoming more autonomous, possibly even fully autonomous from human control, something that we at Amnesty oppose with great fervor. Luckily, a large number of countries are taking a stand against lethal autonomous weapons systems, that is, weapons systems that can select targets and determine when and how to strike completely autonomously from any kind of human control. For us, this is an absolute red line, for a range of reasons. I could also speak about
the use of facial recognition technology in Gaza, for instance, to target journalists through targeted killings, and to target a large number, possibly 15,000, of young men who, supposedly because of their age, their gender, and so on, must be affiliated with Hamas. This is just not the way we should pursue so-called military-to-military warfare.
So I'm going to bring in Kenneth here to respond. Yeah, Agnès, let me ask you a question.
Are you opposed to AI weapons, or are you opposed to weapons? I am opposed to weapons that are used unlawfully and in violation of international law, point number one. And I am opposed to weapons that are publicized and put out as so-called more accurate when in fact the evidence shows that they are not only not more accurate, they are more lethal. So if the AI weapon is more accurate,
Let's leave lethality to the side for a second. If the AI weapon is more accurate and there's a declaration of war, then it's fine. No.
Because what do we mean by accuracy? If you use artificial intelligence to target people on the basis of their ethnicity, on the basis of their race, on the basis of their gender, which is what war has always been about, then I'm sorry, but AI-powered weapons are extremely dangerous for our entire humanity. So I'm going to give Kenneth an opportunity to respond. He was asking me questions, so I replied. I question whether
war is always about racial and gender-related matters. Sometimes it's about territory; it could just be about nationalist passions. But I'm worried that the conversation is moving in a direction where we're simply in territory we would all agree on: that war is bad and regrettable. And the idea that we need a declaration of war to make a conflict somehow above board is strange, because declarations of war don't happen anymore. It's a very 19th-century diplomatic protocol that doesn't really take place in the 21st century.
AI weaponry is going to be a fact of life. So the question is going to be, can we make AI weaponry more humane, as heinous as that sounds, in any way that we can define it? And one way we would probably define it is to say that the weapon should be more accurate rather than less, so it actually attacks the target and not something that was not the target.
Many people probably remember one of WikiLeaks' early releases: the Apache helicopter that shot up what appeared to be militants, in Iraq, who were in fact journalists. What was thought to be an armament was actually a camera. There were children there, and you can hear the voice recording of one individual saying, well, that's what happens when you bring a child into the theater of war. It never occurred to those involved that this was a complete misidentification and that a tragedy was taking place.
In a situation like that, you can conceptually imagine what would happen if there were a little AI sentinel, call it a co-commander, watching what's going on. The humans are still making the decisions, and they can exercise mercy. They have to exercise some form of judgment; they have moral responsibility,
but it can say, "Check this. Take a second. Take a beat and consider this might be a mistaken identity." You could imagine that the pixels of the camera being better than the human eye will say, "Hey, this is a camera.
This is not a Kalashnikov. You could even imagine facial recognition saying, "Whoa, this is a member of the press. We have him accredited. He goes to our briefings. This is not a militant. He might be with militants, but it looks like, no, no, these are civilians. These are civilians. Back off. Stand down." That I think a lot of us would say, "Hey, that is, if we're going to have the regrettable environment of war, that is something that we probably want to have to reduce the casualties of war."
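As a thought experiment, the "co-commander" described here can be sketched as a veto gate: software that can only raise objections and hand the decision back to a human, never authorize force. Everything below, the names, the checks, and the thresholds alike, is a hypothetical editorial sketch, not any fielded system.

```python
# A minimal sketch of a "co-commander" veto gate: hypothetical names and
# thresholds throughout. The machine never authorizes force; it can only
# interrupt and return the decision, and the moral responsibility, to a human.
from dataclasses import dataclass

@dataclass
class Observation:
    object_confidence: dict[str, float]   # e.g. {"rifle": 0.31, "camera": 0.62}
    press_roster_match: bool              # hypothetical accreditation check
    civilians_nearby: bool

def co_commander_review(obs: Observation, engage_threshold: float = 0.95) -> str:
    """Return an advisory only; a human retains the decision."""
    best_label = max(obs.object_confidence, key=obs.object_confidence.get)
    best_score = obs.object_confidence[best_label]

    if obs.press_roster_match:
        return "STAND DOWN advisory: subject matches accredited press roster."
    if obs.civilians_nearby:
        return "HOLD advisory: possible civilians in frame; require human confirmation."
    if best_label != "rifle" or best_score < engage_threshold:
        return (f"HOLD advisory: identification is '{best_label}' at {best_score:.0%}; "
                "possible mistaken identity.")
    return "No objection raised; decision remains with the human operator."

# The misidentification scenario above, re-run through the gate (illustrative values):
print(co_commander_review(Observation({"rifle": 0.31, "camera": 0.62}, True, True)))
```

The design choice doing the work is asymmetry: the software can only say "stand down" or "hold", never "engage", which is one way to read "meaningful human control".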
So I'm going to jump off Kenneth's point and give Agnès a chance to respond, because I think the question is this: if there's an inevitability to conflict, if it is in some important but unhelpful way ingrained in how we as human societies engage each other, then the idea that we could do this with greater accuracy does seem to offer some degree of hope.
I think you're about to tell us, however, how the models in their present form, and the use of facial recognition in the present day, may provide counter-arguments and evidence against that. But I wonder if you could speak to the specifics of what we've found where these technologies are deployed, and the degree to which they help us know more about an environment that, like war, is preternaturally uncertain.
Well, I can tell you that outside war, facial recognition technology has been used disproportionately against racialized communities and against poor people, and there is absolutely no evidence that facial recognition technology in general can avoid violating the right to privacy.
If you're talking about the kind most in use right now, it is predicated on the scraping of millions, trillions of pieces of data about your faces. I'm sure this is happening as we speak. You have not given your authorization; that is how the technology is being developed. So it is inherently a violation of human rights, by design, by the way it's being created, and by the way it's being applied. Every implementation that we at Amnesty International have monitored has highlighted the racial dimension of its deployment, the unlawful use of the technology to back unlawful uses of force, and so on and so forth. We have documented the use of the technology
in the occupied territories of the West Bank and in Gaza. And in all those cases, we have seen it used to violate the rights of Palestinians, including freedom of movement in the West Bank. It is currently used in Gaza to violate the right to life of individuals who are targeted simply because they are men of a certain age and therefore seemingly associated with a so-called terrorist organization.
There is nothing about facial recognition technology that can be made to respect human rights, because by its very design it does not respect human rights. So these mass surveillance technologies, for Amnesty International, open the door to vast human rights violations.
You know, there is the accuracy issue, the ability to pick out one individual on the basis of their face, notwithstanding the questions that have been raised repeatedly about that accuracy. But I am again coming back to the nature of war. Yes, in an ideal world we could have machine against machine, but where in the world do we have such wars, now or in the future? Even in Russia's war on Ukraine, probably the most technologically advanced so-called war, we have Ukrainians dying every day, civilians being killed every day. So, you know, wars may be waged for territories. What does that mean? You're not going to kill the civilians who are on those territories? Of course you are.
Unless we change human nature, which is highly unlikely, to put this weapon in the hands of humanity is to kill humanity. We are already killing humanity with our wars. So the question is: if we have a technology that is going to operate far quicker than human speed,
that will have unknown effects at the outer edge of its abilities, and that will be able to play the game of strategy not just faster but far longer out than we do. In the same way, in a game of chess a grandmaster can think maybe five moves ahead across all the different pieces; if this can play 50 moves out, it gains an advantage farther out into the future.
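The lookahead point can be made concrete with a toy game. Below is a minimal depth-limited minimax search, a standard textbook technique offered purely as an editorial illustration, on the game of Nim (take one to three stones per turn; whoever takes the last stone wins). The only difference between the shallow player and the deep one is the depth parameter, and only the deep search sees the forced win.

```python
# Depth-limited minimax on the toy game of Nim: take 1-3 stones per turn,
# taking the last stone wins. Purely illustrative: the entire difference
# between the shallow and the deep player is the `depth` argument.
from functools import cache

@cache
def minimax(stones: int, depth: int, maximizing: bool) -> int:
    if stones == 0:
        # The previous player took the last stone and won.
        return -1 if maximizing else 1
    if depth == 0:
        return 0  # search horizon reached: a shallow player sees nothing here
    scores = [minimax(stones - take, depth - 1, not maximizing)
              for take in (1, 2, 3) if take <= stones]
    return max(scores) if maximizing else min(scores)

# From 17 stones, a depth-2 player rates every move 0 (it cannot tell them
# apart), while the deep search proves that taking 1 stone is a forced win:
for take in (1, 2, 3):
    print(f"take {take}: shallow eval {minimax(17 - take, 1, False):+d}, "
          f"deep eval {minimax(17 - take, 19, False):+d}")
```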
If we're going to have these tools... and of course I've not even mentioned the lethality. The lethality will be vastly larger, although actually AI doesn't change the lethality; lethality is a different question. AI simply changes the delivery, the when, where, and how of that lethality. If we're going to have that technology... If a drone can carry mass weapons, it does make a difference. It's still a piece of infrastructure. It's flying. Whether it flies because a human operator brings it there or AI brings it there, it's flying and it drops its payload. Except that there are some... The point is that if we are going to have these... Yes, and there are many of them. There will be 200 next year, 300 the year after that. They don't have super big planes; they have drones. I beg your pardon. The broader point is that if we are going to have these weapons,
then we're going to have to find a way to govern them responsibly, knowing that in some ways, in a weird way, they're smarter than we are: they do things better, faster, and in more complicated ways than we do. So what are those rules? Because simply saying war is bad leads us into a dead end. We are still going to have war, that war is going to involve AI weaponry, and we are going to be unprepared to manage it.
Now, I am certain that a human in the loop can't be the total answer, because of course the communications won't always be there. You'll still have what's called a loitering munition: it will see a piece of infrastructure, or a person, or a uniformed soldier that is on its target list, and it will want to swoop down and take action against it. That's a euphemism: it will want to kill it or blow it up.
No euphemisms, let's talk about the real thing. So if we are going to have this heightened capability, and our adversary is creating these weapons as well, and we feel the need, because it would be irresponsible not to, to match that capability so that we can have parity of force, hopefully to avoid conflict,
we're now in a conundrum, and the conundrum is that we don't have a doctrine with which to understand AI weaponry, and it looks very different from what happened in the 20th century with nuclear weapons. Nuclear weapons were big and required a state to make; because they were big, they were visible. The first thing you did in a nuclear arms treaty negotiation was count. You said, "I'll show you mine if you show me yours," and you saw the megatonnage. Then you started talking about what form of megatonnage, what distances, how long delivery takes, and how many each side has, and you started trying to limit them. By disclosing your capability, you could actually temper down the threat of war.
But with AI it's different. It's invisible. It's created in software, on people's laptops, as a model, and then it's used in a data center. And to disclose your capability is to give away an advantage, unlike in the world of nuclear arms negotiations. So
how do we respond to a world of AI weaponry, knowing that we're going to need to match our adversary because we don't want to become a victim of them? We're seeing a victim and an aggressor in the Ukraine war as an example. So we are in a conundrum. What could a doctrine look like for an age of AI war?
One of the ways we can bring this conversation down to earth, and tease out how the immediate effects of these technologies will play out in the context of conflict, is to understand how they change the way we as human decision makers engage with these questions. We've already talked a little about the speed and the complexity of the anticipated future environment; these are very much the driving forces behind the military's thinking about what comes next. There is the idea that we can't not compete, though what that competition looks like is, I think, yet to be determined. But we recognize that it will have an incredibly complex technological element. And that raises a further question, especially in a setting like this, where we talk more and more about how much better today's AI is than yesterday's and how close tomorrow's might come to exceeding our own capabilities: what does it mean for the human relationship when we try to understand the complexity of this future, a volume of data that we know humans are limited in processing? It centers a series of questions about what it means to be meaningfully engaged in decisions around conflict, if we think conflict, and the speed of conflict, is in fact increasing. So I want to start with Kenneth, because I think it will take us eventually to what this means for the international system more generally. Right. It's an essential question, and one that I think a lot of people don't get, so I'll try to simplify it and be brief.
There is something called a computer. A computer is something you program: you give it instructions, you define what you want, and it produces the output. And you can go back into the code, because you wrote the code, to see why it's doing what it's doing: it's doing that because you asked it to. It's instructions. It's explicit. But AI is making a prediction. It's an inference. The data goes in, the system creates the rules itself, the weightings, and the output becomes something that's implicit. It's not an explicit instruction. It's an inference that it makes: a prediction that this face is me and not you, that this handwriting says the check is for $100 and not $90, et cetera.
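This contrast can be shown in a few lines of code. In the first function below, a human wrote the rule, so the "why" is readable in the source; in the second, the rule is a set of fitted weights, and the "why" lives implicitly in those numbers. A minimal editorial sketch on toy data, assuming scikit-learn is available; none of it comes from the panel itself.

```python
# Explicit instruction vs. learned inference: a minimal contrast on toy data.
from sklearn.linear_model import LogisticRegression

# 1) Classical programming: a human writes the rule; the "why" is the code.
def is_large_check(amount: float) -> bool:
    return amount >= 100.0          # explicit, inspectable instruction

# 2) Machine learning: the machine fits weights from examples; the "why"
#    is implicit in the learned parameters, not in anything a human wrote.
amounts = [[90.0], [95.0], [99.0], [100.0], [105.0], [120.0]]
labels  = [0, 0, 0, 1, 1, 1]        # hypothetical training examples
model = LogisticRegression().fit(amounts, labels)

print(is_large_check(101.0))         # True, and you can point at the line that says why
print(model.predict([[101.0]]))      # [1] (very likely), but the reason is a weight vector
print(model.coef_, model.intercept_) # the learned "rule": just numbers
```

The learned model will usually agree with the hand-written rule here, but nowhere inside it is there a line that says amount >= 100; there are only weights.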
So if we're in a world in which we have these outcomes, what happens when we have a lot more data? Machine learning, and deep learning, have evolved to such a degree that they can spot things that we can't spot and therefore can't know. There is lots of research on this, but one example: if you take retina scans of eyes and put them into a machine learning algorithm to identify not eye disease but heart disease, the risk of cardiac arrest, it turns out that from the eye alone it can identify the likelihood that someone is going to have a heart attack or not. Now, what is it looking at? It is going to be hard to tease out that explainability, but it can do it, and with incredible accuracy. So in some instances AI has been able to identify things of which we have no causal knowledge, no idea why it works, yet in some instances it is 95%, nearly 100%, accurate. So, if you will, AI has already, in certain domains, surpassed our capacity to know.
We can validate that it's accurate, but we don't know why. That's why it can beat the world's best Go player, and why it can play a move, move 37, that seems preposterous and quote-unquote wrong, but that turns out, 15 or 25 moves later, to have been the linchpin move that enabled it to win. That was, if you will, a holy-shit moment in the world of Go, but also in the world of AI.
Every military planner has looked at that incident and realized: okay, just as in chess you might sacrifice your queen and win the game, what happens if I have to sacrifice an entire battalion because the AI system is telling me I'm going to win the war? Do I do it or not? The pressure on a commander is, on one hand, humane: I wouldn't want to sacrifice an entire battalion of my fellow comrades. On the other hand, if the AI system is right, then making that sacrifice, drawing all the adversary's attention there, allows me to do something else, something perhaps so intricate and far away that I don't see it but the machine does, and I'm able to win. There's a lot of pressure on me to do that.
And if the AI is wrong, I've got a bigger problem on my hands. But that is the conundrum we face: a technology that exceeds our ability to understand how it works, and that in many instances is indeed accurate and correct. So on this note, I'll bring back Agnès. I think one of the challenges here is that
model design, as it were, or understanding precisely what we think we know about the social space and then designing algorithms to navigate it with perhaps greater dexterity than we as individuals have, also complicates our understanding of the moral and ethical requirements around questions of the use of violence and harm, and of where violence and harm emerge in our society and around the world.
And this also takes us, somewhat obliquely, into the question of autonomy. So talk to us a little, from your perspective, about how we should understand this perhaps augmented decision-making capability we might gain through technology, and where it fits. What kinds of questions ought we to ask about when it gets used, and what might the consequences of using it be? So the first thing we need to recall is
that AI technology has repeatedly been found to develop its own computer models. It has the inherent ability to modify itself without humans quite understanding how or why: the so-called black box phenomenon. So we should never forget that we have a technology that we do not quite control, potentially or in actual fact. Now put that technology at the heart of a weapons system, and I think we need to be fully aware of the huge potential for a weapon powered by AI, with complete autonomy, to take on a life of its own. So the most crucial element is human control. If we are going to go in that direction, there needs to be meaningful human control over this kind of AI-powered weapon. What does that mean? There is a lot of discussion over what meaningful human control means.
For Amnesty International, we have in many ways embraced the two-tiered approach advocated by the International Committee of the Red Cross. It means, first, that autonomous systems which cannot be designed to allow meaningful human control must be completely prohibited. That includes systems designed or used in a manner such that their effects cannot be sufficiently understood, predicted, and explained. That is the first tier. Second, autonomous weapons systems must be regulated even when they do allow meaningful human control. So it's not an open door to everything, and that regulation must of course be predicated on international human rights law, on humanitarian law, and on whatever doctrine is developed in respect of international law. And third, there must be a positive obligation on states to maintain meaningful human control over the use of force in all circumstances. That is where
most of the thinking is going right now, where all of us working in this field are going. This is also what we are demanding underpin a treaty that is currently under discussion. There are 129 governments that have accepted the idea of the treaty, the idea of meaningful human control. There is still a lot of conversation over what 'meaningful' means.
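As a reading aid only, the two tiers described here can be written out as a toy decision procedure. The fields and the encoding are editorial assumptions, not treaty language and not Amnesty's or the ICRC's actual criteria.

```python
# A toy encoding of the two-tier logic described above: hypothetical fields
# and conditions; this is a reading aid, not treaty text.
from dataclasses import dataclass

@dataclass
class WeaponSystem:
    effects_understood: bool          # can its effects be understood, predicted, explained?
    meaningful_human_control: bool    # is a human meaningfully in control of force?

def two_tier_assessment(system: WeaponSystem) -> str:
    # Tier 1: systems that cannot allow meaningful human control, including
    # those whose effects cannot be sufficiently understood, predicted, and
    # explained, are prohibited outright.
    if not system.effects_understood or not system.meaningful_human_control:
        return "PROHIBITED"
    # Tier 2: everything else is still regulated under international human
    # rights and humanitarian law, with a positive state obligation to
    # maintain meaningful human control over the use of force.
    return "REGULATED: permitted only under meaningful human control"

print(two_tier_assessment(WeaponSystem(effects_understood=False, meaningful_human_control=False)))
print(two_tier_assessment(WeaponSystem(effects_understood=True, meaningful_human_control=True)))
```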
I would like to insist again that there can only be one path, which is around the preservation of human dignity and human autonomy, including over the most lethal weapon that can ever be used.
It may protect us, or we may have a notion of protection, if indeed those weapons are in the hands of governments that we trust. But we need to understand that we live in an environment where we cannot predict who will be our next prime minister, where we cannot predict who will have access to those weapons. We need to understand that drone technology is now in the hands of many armed groups that we would not want on our own territories, and that drone technology can easily be exported, multiplied, and used by individuals, individuals, not just groups or armies, who will not be touched by the treaty I'm talking about. This is why regulation now is absolutely key.
It's absolutely key. It cannot be a Wild West where everybody can do whatever they want. So, in light of how much time we have left, and I do wish we had two more days to unpack so many of these questions,
I do want to turn, especially with the audience we have here, to the public-private partnership aspect of this problem. We talk a lot about regulation, but enforcement in the international system is notoriously difficult, and it gets even harder as the number of actors increases. And as we know, technology has in some sense democratized the ability to participate in violence, or to use violence as you see fit.
So, for the panelists in the unfortunately two minutes we have left, I wonder if we could just talk through what are the major obstacles to regulation? What are the types of things we have to see happen, for instance, before we make meaningful change in this domain? And then I'm going to open it up to the audience for any questions that remain.
Yeah, I'll start. At a basic level, you need both sides to recognize that they are going to develop the weapons, but that they need to talk to each other and share some degree of information about capability and usage, what's called doctrine. Doctrine is really important. The Cold War was...
The Cold War always remained the Cold War, it never went hot, in large part because of this idea of doctrine. What that means is that both armies had a strategy; it was documented, and all the officers were trained on it. They always knew what it was. And so it was like a waltz: these were the rules, and it was tit for tat. When one sub went into someone else's territorial waters, another sub would do something else. It was managed in a very polite way. It was a cold war. So...
that degree of doctrine from the 20th century is hard to imagine existing today, with our current democracies verbally shouting at each other, whether it's America, China, et cetera. However, we still need the militaries, at a certain level, to be able to sit down with each other and say: this is what we have, and these are the cases in which we would use it. And we might disagree vehemently with one another, but
we, both sides, at a certain level have an interest in peace. So there is muscle-flexing that takes place among militaries, but there is also that small degree of sharing of information, so that when an American surveillance plane hugs the coast of China, or when a weather balloon, that is, a surveillance balloon, from Beijing hovers over the continental United States, it doesn't escalate into something radically terrible. Just that sort of state-to-state communication is essential. America famously did try to install a hotline with the Chinese, and actually tried to test it, and the Chinese wouldn't pick it up. So that's a big problem. If the entity that is your adversary won't even engage in that parley, as it's called in diplomatic speak, you can't even begin the tempering down of emotions that is going to be needed in a crisis.
So Agnès, I'll leave you with the last comment, and then we'll go to questions. So, you ask what the difficulties are in finding a way forward. The first has been highlighted before: we live in a world where whoever dominates, controls, and understands AI dominates the world. So there are very few incentives right now
to put an end to any kind of AI-powered research, including in relation to weaponry. There are very few incentives, because whoever seeks to regulate themselves risks falling behind other competitors, including other governments. So in an era where the technology is evolving so quickly, where almost every six weeks there is a new technology on the market, it is very difficult to convince governments and private actors to seek regulation. That is point number one. The second problem is that while there may be, at least in the Western world, ways of understanding and controlling what is happening in the militaries, including through democratic processes where they still exist, there is far less control over private actors, many of which are extremely powerful and very rich, so rich that they can send people to Mars. This AI power, this development by private actors, must be regulated. The European Union is trying to do that, but so far has not been very successful. The US has, over the last three years in fact, attempted to regulate at least one aspect of this industry, the part concerned with spyware and surveillance, and it has been relatively successful. But we don't know what's going to happen in a few months.
So there is a Wild West right now as far as private companies are concerned, in the development of surveillance technologies, spyware, weaponry, and so on and so forth. That is the second major problem. But governments should, in my view, and maybe this is naive,
at the end of the day, be animated by some form of self-preservation. It's one thing to want to control and dominate the world, but if you end up dominating a big black hole, there is nothing left. So I hope that people and companies and governments will, at some point, sooner rather than later, understand that self-preservation must take over, and that there is only one way forward: regulation, regulation, regulation.
All right, thank you. I'd like to thank our panelists for the conversation today, and the audience for all your questions and your attention. Thanks again to Intelligence Squared and to Web Summit for having us, and please enjoy the rest of your summit.
Thanks for listening to Intelligence Squared. This episode was produced by myself, Mia Sorrenti, and Leila Ismail. It was edited by Bea Duncan. You've been listening to Intelligence Squared. Thanks for joining us.