
Technocolonialism: when technology for good is harmful

2024/12/5

LSE: Public lectures and events

People
Mirca Madianou
Topics
Mirca Madianou: Digital innovations such as biometric technologies and chatbots not only exacerbate violence but also entrench power asymmetries between the Global South and North. These technologies are widely deployed in humanitarian operations, yet behind them lies a continuation of colonialism: especially among refugees and disaster-affected people in the Global South, the use of digital technologies often lacks transparency and meaningful consent, producing new inequities and structural violence.

Deep Dive

Key Insights

What is technocolonialism and how does it manifest in humanitarian operations?

Technocolonialism refers to the way digital innovation, data, and AI practices entrench power asymmetries and engender new forms of structural violence between the Global South and North. It highlights how digital infrastructures, humanitarian bureaucracies, state power, and market forces converge to reinvigorate colonial legacies. For example, biometric technologies and AI-powered chatbots in refugee camps often exacerbate inequities, leading to new forms of violence and control over vulnerable populations.

Why are biometric technologies controversial in refugee camps?

Biometric technologies are controversial because they often codify existing forms of discrimination and impose Western-centric frameworks. They require refugees to submit biometric data (e.g., facial recognition, iris scans) to access basic necessities like food, healthcare, and shelter. This raises concerns about consent, as refugees often have no alternative but to comply, risking their data being shared with governments or other entities, potentially endangering their safety. Additionally, biometric systems have higher error rates for non-white bodies, leading to daily humiliations and exclusions.

How do AI-powered chatbots contribute to epistemic violence in humanitarian settings?

AI-powered chatbots often impose Eurocentric values and invalidate local systems of knowledge. For example, mental health chatbots designed to address post-traumatic stress disorder may not account for the cultural specificity of emotions or the ongoing trauma of war and displacement. These chatbots, trained on English-language data sets, reflect Western perspectives, leading to a form of epistemic violence that marginalizes local knowledge and experiences.

What are the key logics driving digital interventions in humanitarian operations?

Six key logics drive digital interventions in humanitarian operations: 1) Humanitarian accountability, where digital technologies are seen as correcting deficiencies in aid delivery; 2) Audit, driven by the need for metrics and efficiency; 3) Capitalism, with private companies entering humanitarian spaces through public-private partnerships; 4) Technological solutionism, where technological fixes are prioritized over addressing root causes; 5) Securitization, where states use technologies to control populations; and 6) Resistance, where affected communities challenge datafication and automation.

What is surreptitious experimentation in humanitarian contexts?

Surreptitious experimentation refers to the implementation of digital pilots or experiments in humanitarian settings without formal announcement, clear boundaries, or meaningful consent. For example, the Building Blocks program, a blockchain-based cash assistance system, was rolled out in Jordan without being framed as a pilot, leaving refugees with no alternative method to access aid. This lack of transparency and accountability allows for the normalization of technological experiments among vulnerable populations.

How does technocolonialism reinforce structural violence?

Technocolonialism reinforces structural violence by systematically excluding and marginalizing groups based on race, gender, class, and other factors. For instance, algorithmic decision-making in aid distribution often leads to arbitrary exclusions, with no clear accountability for errors. The permanence and shareability of digital records amplify these risks, making structural violence more pervasive and diffused, particularly in refugee camps and disaster zones.

What role do donors play in perpetuating technocolonialism?

Donors play a significant role in perpetuating technocolonialism by driving the demand for digital interventions through funding requirements. The logic of audit and securitization, which prioritize metrics and control, is often imposed by donors. This leads to the adoption of technologies like biometrics and AI, which entrench power asymmetries and reinforce colonial legacies. Donors' influence is crucial in shaping the humanitarian landscape, often at the expense of local autonomy and justice.

What are some examples of resistance to technocolonialism in humanitarian settings?

Resistance to technocolonialism often takes the form of mundane resistance, where affected individuals engage in small, everyday acts of defiance. For example, refugees may refuse to use chatbots or feedback platforms, or creatively repurpose technologies like humanitarian radio for music and community bonding. Open protests, such as the Rohingya strike against biometric registrations, also highlight resistance, though such overt dissent is often risky and costly in highly asymmetrical settings.

How can humanitarian organizations address the harms of technocolonialism?

Humanitarian organizations can address the harms of technocolonialism by prioritizing meaningful consent, offering alternative methods for accessing aid, and listening to the needs of affected communities. Engaging in participatory action research, where refugees and disaster-affected individuals help design digital systems, can also ensure that technologies align with local values and priorities. Additionally, organizations must critically examine the logics driving digital interventions and challenge the structural inequities they reinforce.

Transcript

Welcome to the LSE Events podcast by the London School of Economics and Political Science. Get ready to hear from some of the most influential international figures in the social sciences. Good evening, everyone. I'm really pleased to welcome you to this evening's hybrid event, Technocolonialism, when technology for good is harmful. My name is Alison Powell.

and I'm an Associate Professor in the Department of Media and Communications here at the LSE. And I'm very pleased to welcome Professor Mirca Madianou to both our online audience and to all of us who are here in the Old Theatre. Mirca is Professor in the Department of Media, Communications and Cultural Studies and Co-Director of the Migrant Futures Institute at Goldsmiths, University of London.

Her research focuses on the social consequences of communication technologies, infrastructures and artificial intelligence in a Global South context. It especially looks at migration and humanitarian emergencies.

She's currently principal investigator on a British Academy grant on digital identity programs in refugee camps in Thailand. And her new book, which we're here to celebrate, is Technocolonialism: When Technology for Good is Harmful.

Based on her book, this evening Mirca Madianou will argue that digital innovations such as biometrics and chatbots engender new forms of violence and entrench power asymmetries between the Global South and the North. For those who are using Twitter or X in the audience, the hashtag for tonight's event is #LSEevents.

If you are in the room with us, I would ask you to please put your phones on silent so as not to disrupt the event, which is being recorded and will hopefully be made available as a podcast, as long as we haven't had any technical difficulties. As usual, there will be an opportunity after Mirca speaks for you to put your questions to her. For our online audience, you can submit questions via the Q&A feature, and I would love it if you could include your name and affiliation.

For those of you here in the theatre, I'll let you know when the floor opens for questions. Please raise your hand and wait for the stewards, who will have a roving microphone. And also please let us know your name and affiliation. I will try to ensure a range of questions from both the online audience and the audience here. So, after all of the important preliminaries, I would really like to welcome Mirca to introduce us to her new book. Thank you very much.

Thank you. Thank you, Alison, for the introduction. Thanks to the Department of Media and Communications for the invitation. And thanks to all of you here today and those joining remotely. I look forward to our conversations. It's a great pleasure actually to be back at the LSE and back in the Old Theatre. I have fond memories from this space as a student,

and it's one of the parts of the LSE that hasn't changed much, so it's really nice to be in this room.

So the book starts with an example, and I'll share that with you today because it sets the tone for what I'm trying to capture with this work. Between 2017 and 2018, over 1 million Rohingya refugees arrived in Bangladesh fleeing genocide in Myanmar. In 2018, the United Nations agency for refugees, UNHCR, together with the Bangladesh government and a private contractor,

organized a digital identity program aiming to collect the biometric data of all refugees. Now, biometric registrations are standard in humanitarian operations, and one of the first things that happens when a refugee comes into contact with UNHCR is to be asked to give their biometric data. Now, the Rohingya registration was controversial and

led to protests in the camps. The activists were protesting for two reasons. The first is that the system, the digital identity program, was not referring to them with their preferred name, Rohingya, which was perceived as a form of symbolic erasure. But second, the Rohingya were also concerned that their data could be shared with the Myanmar government, thus endangering them in the future.

In November 2018, the protests culminated in a strike, a sign of resistance. The Bangladesh government responded quite forcefully, sent in the police who clashed with the protesters, and the next day the biometric registrations continued as normal. Now the above example encapsulates both the pervasiveness of, but also the violence associated with digital technologies in the humanitarian sector.

With over 100 million displaced people globally and over 300 million people in need of assistance (these are figures from 2023), the sector is facing significant challenges. As humanitarian emergencies and climate disasters become more common, digital innovation and artificial intelligence are championed as solutions to the complex problems of the sector.

Biometric technologies are not just used to register refugees. There are myriad uses of biometric technology. So in a typical camp, a refugee will need to use facial recognition in order to receive rations, in order to collect food, but actually also basic resources like charcoal.

Refugees may also need to scan their iris in order to access a hospital and to see a doctor or a health professional. Blockchain technology underpins virtual cash transfers while cryptocurrencies are being piloted for cash disbursement. And feedback channels are increasingly digitized while AI-powered chatbots are used to communicate with affected people but also to offer services like psychotherapy.

Algorithms are employed to decide who is eligible for aid and who is not, while AI is being explored to predict future crises and future refugee flows. And the recent pandemic, the COVID-19 pandemic, accelerated the implementation of all these digital interventions as the need to deliver aid remotely became more pressing.

So several of these innovations I've just mentioned are part of the wider phenomenon of tech for good or AI for social good. These may be terms you have heard; humanitarian interventions, and also interventions in the international development sector, are often seen as part of this AI for good or technology for social good phenomenon.

So in order to make sense of these practices, I have been developing the term "technocolonialism" over the last few years. And this is the title of the book that I've just published. I drew here on ten years of research with humanitarian workers, donors, digital developers, entrepreneurs, volunteers,

and affected communities themselves. And the book synthesizes two research projects through which I have followed the trails of data from crisis affected people and refugees to humanitarian organizations, technology companies and donors. So following the trails of data illuminated the power relationships at stake. So this is the approach I took.

I won't say too much about the projects because I want to focus on the book's arguments, but I'm happy to elaborate more in the discussion.

So my argument is that digital innovation, data and AI practices entrench power asymmetries and engender new inequities between what we call the Global North and the Global South, or the global minority and global majority worlds. I prefer the latter terms because they recenter people in the Global South as a majority, but I'll use both sets of terms in my talk.

What do I mean by technocolonialism? Technocolonialism refers to how digital innovation, data and AI practices entrench power asymmetries and engender new forms of structural violence and new inequities between people in the global south and north. Technocolonialism illuminates the convergence of digital developments with humanitarian structures, state power and market forces and the extent to which they reinvigorate and rework colonial relations.

So through the term techno-colonialism, I recognize that phenomena like displacement, migration, humanitarianism, but actually technology itself are steeped in colonial relations. So let's look at humanitarianism, right? Humanitarianism is deeply entangled with colonial legacies. It emerged in the colonial expansion of the 19th and 20th centuries.

And although contemporary humanitarianism is popularly understood as the imperative to reduce suffering, and I'm referring here to Craig Calhoun's definition, the structural asymmetry between donors, humanitarian officers, and aid recipients reproduces the unequal social orders that shaped colonialism and empire. The emphasis on doing good occludes the fact that aid

including international development, is part of a wider liberal agenda that ultimately benefits countries in the north.

And more broadly, humanitarianism reproduces relationships of inequity between Western saviors and suffering former colonial subjects. That's attesting to the tenacity of colonialism. So although modern humanitarianism doesn't look like the images here in the slide, in some ways the inequity, the asymmetry is still there, right? So it's something that remains very present.

Now technology and science are also part of colonial genealogies. Science was integral to the civilizing mission of colonialism, a tool used to justify colonial rule. Science was the prime tool used to mold colonial subjectivity. So it's not just a tool, it actually produces subjectivity. It's constitutive of colonialism.

And AI is part of these larger genealogies of enumeration, measurement and classification that were originally developed by imperial powers to control colonial subjects. It's no coincidence that biometrics was first used in India as part of the British Empire's effort to control colonial subjects; and that biometrics is still used today to manage and control other bodies is, I think, evidence of the durability of these colonial legacies.

Now infrastructures, and it's a term that I will come back to in my talk, are also steeped in colonial relations. The typical example here is the telegraph, right, which mapped onto the geography of the British Empire and then provided the footprint for the internet.

Some of these observations echo the work of various scholars, including Anibal Quijano and his notion of the coloniality of power, a term that he developed to explain the ongoing subjugation of the colonized well after direct colonial rule. Stoler also reminds us that empires leave behind debris, and these ruins are durable and are reworked and

reactivated under different conditions, often in very oblique ways. My thinking here is also informed by the argument about colonialism as an ongoing structure, which helps explain how the legacies of settler colonialism persist and help shape contemporary formations of race, gender and class.

It's important here to clarify that the emphasis is not on a single sovereign empire. I'm not really referring to any one empire, right? But on an enduring structure of domination evidenced in persistent practices of othering, the epistemic violence of Eurocentric systems and their claims to universality, and the codification of racial forms of discrimination. So it's a much more diffused process that I'm really trying to capture here.

So let me be very clear here. When I use the term colonialism, I don't use it to refer to a radically new phase of colonialism. I do not use it as a historical prism through which to understand the present. I argue that colonialism and coloniality have never gone away. Empires collapsed, but their legacies and logics have survived and permeate processes like humanitarianism, migration, displacement, and technology itself.

and technology at the same time also reworks these processes. Likewise, I'm not using colonialism as a metaphor, and it wouldn't even be appropriate to do so given the violence associated with colonialism. To use colonialism as a metaphor would be to depoliticize the term and the phenomena that it describes.

We need colonialism as a framework in order to explain why technological experiments take place in refugee camps, typically in global majority worlds. We need colonialism as a framework to understand why there is no meaningful consent in a refugee camp, where refusing to submit your data, your biometric data, amounts to refusing to receive aid and shelter when there are no alternatives for survival.

At the same time, this isn't an argument about neocolonialism, although neocolonialism is obviously a related term, a cognate term. Technocolonialism shifts the attention to the constitutive role of data and digital practices in entrenching inequities, not only between refugees and humanitarian agencies, but also in the global context. So there's something about how technology gives sort of new impetus to these inequities and produces something new, and that's what I'm trying to capture with this work.

And to be clear, technocolonialism is about violence. It is a form of structural violence, violence linked to structural inequities and marginalization which affect whole groups of people on the basis of their gender, their race, their ethnicity, their class, their disability, their age and where they live in the world.

As the term suggests, structural violence is systematic, but it's experienced indirectly. And I think that's what is really important to remember about structural violence: it's a very systematic form of violence, but a very diffused, indirect one, experienced through everyday humiliations or everyday cruelties.

But techno-colonialism is actually also about physical violence, occasionally, or more often than occasionally, as was evident in the Rohingya example that I mentioned at the very beginning of my talk. Incidentally, a couple of years ago, it was revealed that these records, the Rohingya biometric records, were actually shared with the Myanmar government,

potentially risking further persecutions, this time supported with biometric data. So we can discern here the different actors involved. Humanitarian organizations are very key to what I'm trying to explain here. Humanitarian organizations could include UN agencies but also large NGOs. The other important

actor here is nation states: either host nations that host refugees, or donor governments which fund aid organizations. And of course here we have private companies as well, including tech companies, which are part of this

field. But it's important here to emphasize that techno-colonialism is not just about digital capitalism. It's about the convergence of humanitarian bureaucracies, state power, market forces and also imaginaries about technology. I have identified six logics

which explain the push for digital interventions in humanitarian operations. And I'm gonna go through these very quickly, because they provide sort of an analytical framework for what I want to share with you later in this talk.

So, first of all, the logic of humanitarian accountability is behind the assumption that the interactive nature of digital technologies will correct the deficiencies of humanitarianism. There have been long-standing criticisms of humanitarianism as a form of neocolonialism. There is this argument that

humanitarianism creates new dependencies. And so there has been some enthusiasm about digital technologies being part of the reform of humanitarianism, democratizing the project.

By contrast, we have the logic of audit, which stems from humanitarianism's enormous success and the fact that it has grown as a space in recent years. So the logic of audit stems from the constant demand for metrics which humanitarian organisations must submit to donors in order to secure more funding.

So digital technologies generate metrics and are associated with robust audit trails and efficiencies. For example, an often used argument is that biometric technologies will reduce the amount of low-level fraud that occurs when people, for example, claim aid twice. So this logic of audit is pushing a lot of these or driving a lot of these digital interventions.

Then we have the logic of capitalism, which explains the dynamic entry of the private sector in the humanitarian space through what are now ubiquitous public-private partnerships, such as the one between Palantir and the World Food Programme, the UN's largest agency,

and this received a lot of attention when it was signed, and rightly so given Palantir's track record. But this is just the tip of a much larger phenomenon, with thousands and thousands of such partnerships across the sector. A key observation here is that by turning themselves into agencies for humanitarianism, corporations reframe political problems in line with their business objectives,

and in so doing, depoliticize crisis, right? And this is just an example of this logic of capitalism. Now, the idea that there can be technological fixes to complex

problems exemplifies the logic of technological solutionism. Solutionism puts solutions before actual problems. So the emphasis is about finding problems for technological solutions rather than the other way around. And this is an example:

one of my interlocutors from the humanitarian space is referring here to how blockchain, at the height of its hype in 2018, was really driving a lot of these digital interventions. So it was finding problems for blockchain rather than having a problem and then trying to see what might be a way to address it.

So, coupled with the logic of capitalism, solutionism leads to the normalisation of technological pilots or experiments among vulnerable people.

And then we have the logic of securitization. And at first glance, this primarily concerns the role of the state and its desire to make populations legible and protect borders. But of course, securitization is big business. So this logic is also linked to the logic of capitalism. In fact, all the logics intersect, right?

But securitization is particularly relevant to the response to refugee crises and the use of biometric technologies. And we know from scholars like Katja Lindskov Jacobsen that

donor states often push for biometrics because of their securitization agendas. So, as always, structures of control produce contestation. So a sixth logic is resistance, which refers to the extent to which people challenge practices of datafication and automation. And it's really important to remember this is an inextricable part of what I mean by techno-colonialism. And the Rohingya protests were a good example of this.

The history of colonialism has always been a story of resistance and struggle. And this can range from open protest, like the Rohingya protests here, but it can also be encountered in the sort of micropolitics of everyday life, which is, I think, key to understand when we're trying to make sense of settings where power inequities are very central. So I will talk

later about the micropolitics of resistance, what I call mundane resistance. So these distinctions

among the logics are analytical. In practice they intersect; it's hard to separate them, but I think analytically it's important to see what is driving this phenomenon that I'm trying to unpack here. So let me now summarize the ways in which digital technologies rework colonial genealogies. And they do so in a number of ways:

by data extraction, you know, extracting data of refugees and other vulnerable people or crisis affected people for the benefit of stakeholders often in the global north, by extracting value from experimentation with untested technologies, through epistemic violence which is produced by imposing Eurocentric frameworks of knowledge and invalidating local knowledge,

by discrimination, imposing a Western gaze on other bodies; by reinforcing and occluding power relations through the enchantment of technology; by dehumanizing suffering and obliterating accountability through daily humiliations and injustices that arise from what I call infrastructural violence, which I hope I'll be able to explain in a minute; and finally, by justifying some of these interventions under the pretext, or the notion, of emergencies.

And I explore these different themes in the chapters of the book. And you have, I don't know if you can see it, this is a table of contents. I'm not going to talk about all of the chapters. I've decided to speak about biometric infrastructures and then the notion of experimentation and finally resistance. So I'm choosing three themes. But they bring together a lot of these ways in which these colonial genealogies are being reworked.

So I'm going to start with biometric infrastructures, and the reason for this is because biometric technologies are very ubiquitous in the sector. I've already hinted at that. And to understand the ubiquity of biometric technologies, I think we can take an example. So here's the example of one of my interlocutors. Let's call her Nomin. She's a Karen refugee in a camp in Thailand. So Nomin will need to authenticate herself every time she goes out:

at the grocery store, she will have to scan her face in order to buy, let's say, a kilo of rice. Similarly, she will have to scan her face in order to collect charcoal to cook her meal or to warm up her house. In order to see a doctor or a nurse, again, she will have to scan her iris. So, you know, everyday life is sort of full of these kinds of encounters with the machine.

So biometric technologies are used to underpin basic everyday practices. Now, rather than being the perfect identification technologies, as they're often described, biometric systems codify existing forms of discrimination. And we know from the work of scholars like Simone Browne and Shoshana Magnet and others that biometric technologies privilege whiteness,

with significantly higher margins of error when measuring or verifying other bodies, whether in terms of race, ethnicity, gender, class, disability or age. An authentication error can have quite significant consequences in situations of extreme precarity. But at the same time, we can see how errors also produce subjectivities, right? They are productive even if they don't work.

One of my interlocutors, who has struggled to get the machine to read her face (she always gets an error message), told me that "there must be something wrong with my face." So instead of really identifying the machine as the problem,

we see here how the machine imposes a particular gaze on the body. And this is a typical example of what Fanon was referring to when he developed his notion of epidermalization, right? This internalization of inferiority as a result of racism, and what Simone Browne later developed as digital epidermalization, the imposition of race on the body through digital means. Now,

these problems are compounded by the lack of meaningful consent in refugee biometric registrations. Even when, and this is a big when, refugees are asked if they consent to their data being processed, they're not offered alternatives to access aid. And yet the presence of a consent tick box often legitimates and normalizes some of these practices. I think it's also important to say here that the introduction of

some of these systems, especially the systems that involve blockchain technology, where refugees can actually shop using their digital wallets in refugee camps, is actually

giving people more choice: rather than receiving a ration, they can go to a grocery store and collect whatever they want. But in practice a lot of these grocery stores are not exactly full of goods, so the goods people buy tend to be very similar to the goods that they were given when they were receiving aid in kind. I mean, I realize this may vary from setting to setting, but it is a

legitimate question to raise about the argument around choice. Additionally, I think the other important dimension here is that because of inflation, people tend to get a lot less. So in the past, you would get your 25 kilos of rice per month, and that would probably last you for a good number of days, or last the whole family a good part of that month. But now that people have to buy

food with their aid stipend, people can actually buy a lot less with inflation. So I don't think it's surprising that in many settings there is a desire to return to the old system of ration books. Now, the permanent records afforded by digital technologies mean that biometric technologies

enable the surveillance of refugees in perpetuity. In that sense, encampment becomes a diffused process enacted through permanent and retrievable digital records. And again, it's important to remember here that it is an official policy of UN agencies not to delete data. It's not the same with other organizations; the ICRC, for example, has a different policy on this.

UNHCR, which is the main body dealing with refugee data, has a sort of forever policy at the moment. As Biometrics underpins a range of essential practices, from receiving your shark call to seeing a doctor to collecting food,

we begin to see that it changes the nature of aid. And we observe here that it's not just simply a matter of datafication or digitization of aid, but I think something more fundamental is happening. And I make sense of that as an infrastructure of aid. Now, infrastructure refers to the fact that humanitarian systems depend on privatized or government networks and systems. And I think that's really important to understand.

aid is no longer something that is contained within humanitarian agencies but is actually much more porous because you have a private contractor providing the hardware or the software system that is being used at the same time you may have states that require access to the data so biometric systems um yeah are are um you know really enhance or engender this porosity of

humanitarianism, and this has profound implications for the character of humanitarianism and its oft-cited sacrosanct principles of humanity, neutrality, independence and impartiality. How can you claim independence if a private company operating for profit is providing the infrastructure through which aid is delivered? And similarly,

the principle of humanity means that everyone is deserving of aid with no preconditions. But if submitting one's biometric data becomes a precondition, in this situation of conditionality, how can we then speak of the principle of humanity? So what we're seeing here is a shifting, a reworking, of what humanitarianism is.

And as digital infrastructures converge with humanitarian bureaucracies, they both inherit and amplify each other's limitations and change the nature of aid. Now, having explained what I mean by infrastructure here, I'm going to move on to talk about experimentation because infrastructure is very much linked to experimentation and especially this notion of surreptitious experimentation I want to share with you. So we often hear that refugee camps are...

treated as laboratories where new technologies are piloted by private companies, by humanitarian organizations, or by both in public-private partnerships.

Several of my interlocutors in the humanitarian sector have told me that we're witnessing a true pilotitis. And what they mean by that is this series of different technological experiments taking place globally. We've seen experiments involving mental health chatbots for refugees, or pilots involving algorithmic decision-making about who will receive aid and who will not.

I'll say a few words about this in a minute. So contemporary technological experimentation follows a similar geographic distribution to the medical and pharmacological experiments of the 20th century, which reveals

the persistence of global geometries of power, and why people in the majority world are considered ready subjects for experimentation. Just to show you these uneven flows:

A survey of over 1,000 AI for social good applications between 2008 and 2020 found that the majority of the projects were US-based. And my own survey of 49 AI applications in the humanitarian sector in 2021 also revealed that no project was led by a country in what we call the Global South. All projects were led by North American and European organizations.

And this is evident also in hackathons, which are key sites for designing new applications. Most of these hackathons take place in cities in the Global North. Here's one that took place in New York City.

What happened later is that this hackathon developed a chatbot that was rolled out in Kakuma refugee camp in Kenya. And I think this is quite typical: this kind of gap between those who design the technologies and those who are at the receiving end. In my fieldwork, I went to ten hackathons between 2016 and 2021, and

I very rarely met refugees there involved in the design process. So the geographic unevenness of technological pilots matters because it reveals the power dynamics involved. Computation depends on classifications, which are inherently political and reflect the dominant values of the environment. And because classifications are embedded in infrastructures, they become invisible, which renders them even more powerful.

And this is evident when we consider the question of language.

And when we talk about language, the example that comes to mind is really chatbots. We have seen several chatbots rolled out in recent years, especially in relation to refugees. Most of these are low- to medium-level chatbots, which means they run on natural language processing. But essentially,

they mainly handle answers to a list of predetermined questions. So they're not what we imagine when we hear 'chatbot' and think of ChatGPT; the technology is not there yet. The uses of chatbots vary widely. There are informational chatbots, which answer questions like 'Where can I go?' or 'How can I apply for asylum?'

Then there are educational chatbots that purport to help people develop new skills. We also have feedback chatbots that ask refugees to evaluate aid programmes, and mental health chatbots which aim to provide some form of psychotherapy. Now, the language in which chatbots operate is absolutely crucial. We know from the work of decolonial writers like Ngũgĩ wa Thiong'o

that language constitutes the body of values through which we perceive ourselves and our place in the world. So even if a chatbot is translated into a local language, if the algorithms and the classifications on which it is based are essentially trained in English, then

you still have a very Anglo-centric perspective. And this is particularly problematic in the case of mental health chatbots, where the cultural specificity of emotions is really key. Post-colonial scholars have long questioned psychiatric classifications for being Eurocentric.

But think of this, right? Producing a mental health chatbot to deal with post-traumatic stress disorder when people are fleeing war and are being displaced speaks volumes about how far removed some of these chatbots are from people's realities and how harmful they can be. There is no post-trauma if you are fleeing war, right? So these are some of the tensions that we see here. And this is a type of epistemic violence.

So when we look at experimentation, it's not just a matter of data extraction. Of course, data extraction happens, and companies that develop some of these chatbots benefit from the publicity that is generated. Here's a good example of how a mental health chatbot got good headlines for a Silicon Valley startup.

Ultimately, I think what is really troubling here is that chatbots and other AI-for-good solutions constitute a form of epistemic violence, imposing Eurocentric values and often invalidating local systems of knowledge. Now, the harm of experimentation is evident when we consider what I term surreptitious experimentation.

This type of experimentation is only possible because digital technologies increasingly underpin aid operations; it follows from the argument around infrastructure. Surreptitious experiments take place without a formal announcement, without clear boundaries, without consent, and without accountability. Let me illustrate this with the largest pilot study I encountered in my fieldwork: Building Blocks,

which I showed on a slide earlier. This is

a blockchain-based cash assistance program launched by the World Food Programme. Building Blocks authenticates refugees through biometric technologies, drawing on the vast biometric databases of UNHCR and the World Food Programme, and then releases their monthly aid allowance through blockchain technology. The way this works is that refugees shop in designated stores, scan their iris, and the system releases payments to the merchants.

Now, Building Blocks, as I said, has been one of the largest pilots in the humanitarian sector. It launched in 2017 with 10,000 people at the pilot stage before scaling up to over a million. It has since graduated from the pilot phase and is now being used in Bangladesh and Jordan, and more recently in Ukraine.

But I think what is important to say here is that when it was introduced in Jordan among 10,000 people, this became the new system through which people received aid. It wasn't announced as a pilot, which is still at a beta stage where, you know, it is being

tested. There wasn't really an alternative method to collect aid. And I think this is really what I'm trying to capture through this notion of surreptitious experimentation. Often when some of these pilots take place, they're not really framed as pilots, as a new way of trying things out, and that makes them quite invisible. So when experiments take advantage of digital infrastructures,

which are invisible, they too become invisible, and that makes them quite diffused, without clear boundaries. Now, the invisibility of pilots is a key aspect of surreptitious experimentation: because pilots are not framed as such, they're not registered as experiments.

And this is linked to the lack of meaningful consent. I've mentioned this many times before, but it's such an important issue, because we know that consent, in order to be meaningful, needs to have at least two dimensions: you need to understand what the system is doing and how your data are going to be used, and you need to be able to refuse to participate without detriment. And neither of these

conditions was met here. Even if you have a tick box that says 'yes, I consent to my data being used', and it doesn't always happen, if there isn't really an alternative, and if there is no full understanding, then I think we can't speak of meaningful consent. And that means that in this diffused

nature of experimentation without real consent, we have a real lack of accountability when something goes wrong. The lack of accountability is also a theme in other interventions, like algorithmic decision-making, which is increasingly used to make decisions, for example, about who is eligible for aid.

Before I give an example, I should say that aid has always been selective. This is not a new thing, and it's not really linked to technologies: aid has always been selective because demand outstrips supply. But increasingly, agencies are trying to make these decisions using algorithms. A system that is often used is proxy means testing.

It has been used in a number of different settings, and it essentially makes recommendations about which households will be included in a distribution and which will not. According to a UN-commissioned study, the exclusion rate in Kakuma camp in northern Kenya was 4.3%.

Given that the population of the camp is over 200,000 people, we can see immediately that thousands of people could potentially be wrongfully excluded. Now, the problem here is that there is an opacity about how these algorithms work, so people who are excluded experience this in a very arbitrary way. And frontline staff, the people who are there helping communities in need,

often feel very frustrated, because they can't explain why a decision has been taken; they too don't know why the algorithm has produced a particular result. My interlocutors shared their levels of frustration with me; one of them described this process as random. And that opacity is

possibly intentional, because to reveal how the algorithm works means that recipients could try to game the system, and the whole point is to prevent that. Even though computation and algorithms are probabilistic, their outcomes are treated as unambiguous and objective. And because AI is cast as scientific and neutral, it accentuates the already unequal power geometries in humanitarian settings.
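To get a sense of the scale implied by the figures quoted above, here is a back-of-the-envelope calculation; it simply applies the 4.3% exclusion rate to the 200,000 population figure, which the talk gives as a lower bound, so the true number could be higher.

```python
# Rough illustration of the scale of wrongful exclusion implied by the talk's figures:
# a 4.3% exclusion rate applied to a camp population of over 200,000 people.
population = 200_000      # lower bound for Kakuma camp's population, as cited
exclusion_rate = 0.043    # 4.3% exclusion rate from the UN-commissioned study

wrongly_excluded = population * exclusion_rate
print(f"{wrongly_excluded:,.0f} people")  # → 8,600 people
```

Even at the lower bound, an error rate that sounds small in percentage terms translates into thousands of individual exclusions.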


The labyrinthine system of supply chains and automated decision-making obliterates responsibility to affected communities. When someone is excluded from a distribution list due to an algorithmic error, whose fault is it? Is it the NGO? The designer of the algorithm? The government that provided an incomplete data set? Who is responsible?

Because algorithms depend on classifications that are structurally determined, their outcomes affect whole groups of people. This is an example of structural violence, right? When people are excluded because of their race or their gender or their class or their sexuality or religion or age, this is something that affects whole groups of people, but it's also experienced in a very individual way.

And this example of algorithmic decision making, as well as the example on surreptitious experimentation, illustrate what I mean by infrastructural violence.

Digital infrastructures permeate many aspects of everyday life in the refugee camp or in the disaster zone, from cash assistance to grocery shopping. As infrastructures become the ambient environment of everyday life, and I'm using Brian Larkin's phrase here, the harms are normalized. The coloniality of power implied in these interventions produces subjects. The errors of automation can be grave.

when an algorithm decides wrongly that a family should be excluded from aid, when an algorithm refuses to authenticate you, or

insists that you are someone you are not. But there are other errors which are much less spectacular, yet equally disturbing. They involve daily humiliations: the technology that doesn't work, the chatbot that doesn't include your problem in the drop-down list, or being given an e-voucher in an area that doesn't have designated shops where you can actually redeem it.

These are all examples of infrastructural violence. As infrastructures transcend institutional boundaries and humanitarian systems become interoperable with those of private companies or states, the opportunities for structural violence are multiplied. The shareability of data and the permanence of records amplify the risks to individuals.

So infrastructural violence shares all the characteristics of structural violence, but it is even more diffused and multiplied. As data infrastructures become ubiquitous, the violence becomes more present and more diffused, almost like a form of ambient violence. And what is also diffused, or almost obliterated, is accountability. Now,

As always, oppression and violence are met with resistance. Colonialism has always been a story of struggle. And the example of the Rohingya, with which I started the talk, reminds us that despite power asymmetries, in that case state power, techno-colonialism is contested. Most contestation in humanitarian settings is not overt. And I think there is a

a bias in Anglo-centric literatures of protest that privileges open dissent, which is largely tolerated in Western liberal democracies but is not really an option in many parts of the world, where the cost of outright rebellion is prohibitive.

So in order to make sense of resistance in asymmetrical settings, I developed this notion of mundane resistance, drawing on authors from the Black radical tradition like Orlando Patterson, Cedric Robinson, and Anton de Kom.

Patterson and Robinson observed that in very asymmetrical settings like slavery, where acts of outright defiance were impossible, resistance often took passive forms, including deliberate evasion, refusal to work, or satire.

And I prefer the term mundane resistance to passive resistance, because I don't think there's anything passive about refusing, for example, to use a chatbot or to give your biometric data; it can actually be very brave in some cases. So mundane resistance takes place below the radar, through small, ordinary acts. For example, in my fieldwork in the aftermath of Typhoon Haiyan in the Philippines, we observed how our interlocutors simply refused to use some of the platforms that had been put forward by NGOs. Refusing to engage with a chatbot or a feedback platform can be a political statement in a different voice.

But refusal is not an option for refugees, right, when they're asked to give their biometric data. In my current research, my interlocutors tell me that even if they are worried about the safety of their data, even if they're worried about even the bodily intervention, they do not feel able to raise these questions with NGO workers because they fear they will be excluded from aid.

So there are hierarchies of exclusion and agency within humanitarian settings and structures. There are other forms of resistance here; I should mention the everyday uses of platforms, which are always very creative. Charlie Hill, who has written a wonderful ethnography of Mae La camp in Thailand, has shown how encamped youth

produce rap music and upload it on YouTube in order to share with the world their stories and experiences. In the Typhoon Haiyan fieldwork, we found that people appropriated humanitarian radio, a medium typically used for very top-down information dissemination. I don't know if you've ever come across it. It's a very dry medium with kind of a series of announcements. But there was a two-hour music slot every Sunday

And that's what people really wanted from this radio station. You have to realize there were no media left; everything was destroyed by the typhoon and the storm surge. So this was the only radio station, the only broadcast medium in the city

of Tacloban, so that two-hour music slot was really important. People would send text messages to request songs and dedicate them to their loved ones, a way of rebuilding the bonds that had been ruptured after so much death and destruction. And there are so many other examples, like how refugees use their mobile phones to witness and document the injustices they face.

People's agency does not cancel structural violence, in the same way that structural violence doesn't obliterate human agency; they're co-constitutive. Sometimes people feel you should take sides, that it should be one or the other. Mundane resistance doesn't reverse the relations of power, but unless we recognize people's agency, how can we ever hope for any social and political change to take place?

So even though the micropolitics of everyday life don't reverse the power relations I was describing earlier, they can plant the seeds for the decolonial struggles to come. Now, I don't wish to suggest that the onus is on people to reverse the harms of technocolonialism; it's not on refugees or people affected by disaster. The priority should be to abolish the conditions that lead to these interventions, and this is a political project.

I end the book with some reflections on this question: what can be done? And I've struggled with this question for a long time. It was the hardest section of the book to write in an unstable world ravaged by conflict and war and displacement.

The book could not simply end with critique. I took inspiration from Ruha Benjamin's work, who encourages us to practice hope as well as critique the injustice that surrounds us. Of course, hope first and foremost comes through the people who are affected by displacement and disaster. Those who work in refugee camps know how these encounters can be very life-affirming; people do so much with so little.

They can also be very heartbreaking, of course, because of the injustices that people face. So there are no recipes, but I end the book with some thoughts, some reflections, as a means of opening up the conversation. I will end here with just two reflections, because I don't want to take up more time. The first is this notion of reimagining infrastructures, which is very much on my mind at the moment because of my current work.

So can technology for good ever be good?

AI will always depend on classifications that are subjective and mirror the values of its designers. As long as algorithms and large language models are trained in English, it will be impossible to decolonize AI; there has been a trend calling for that to happen, but we have a structural obstacle here. The narrow definition of good implied in the "for good" phrase is inherently problematic because it is tied to the UN Sustainable Development Goals,

which makes it sort of paternalistic and Eurocentric. So rather than for good, we need to design in the pluriverse, and I'm using here Arturo Escobar's term, for a world in which many worlds fit. We need to prioritize justice rather than ethics.

In my current project with Charlie Hill and Hazel Vako, we are developing a collaborative approach using participatory action research. We ask our participants: if they were to design a digital identity system, what would it look like, and what would it allow them to do? We're at the very beginning of this research,

so I can't share too much yet, but I'm already blown away by the clarity with which our interlocutors express what matters to them and the values they prioritize. If tech is animated by suspicion of refugees, then the outcomes will never be good for refugees, because any error in the system is just going to be interpreted as proof of guilt.

But if technology is centered around the values of the community, then it can start to look different, although I realize that can sound utopian in the context of camp securitization. The next thing to reimagine here is solidarities.

Structural and infrastructural violence affect whole groups of people, but are experienced individually. The onus is on the individual to seek redress, to try to prove to the machine, to the bureaucracy, that they are actually entitled to receive aid. By seeking out others, those affected can become stronger. Now, solidarity, of course, is not going to

reverse or solve the grievances of colonialism, but collective action is the only way to begin addressing some of the harms of humanitarian practices. So the most optimistic moments in the book come from these encounters: refugees organizing amongst themselves, people organizing in disaster-affected settings, or human rights activists

collaborating with refugees and NGO workers. This is a new form of political care, and here I'm using Miriam Ticktin's phrase, to refer to new ways of being together at a global scale, grounded in participation and labor, duty and obligation, and shared common resources. A few such spaces have emerged in recent years. I'll share an example with you, because it is instructive.

It highlights a few issues here. So during the war in Ukraine, a group of international NGO staff campaigned to refuse to collect biometric data as part of their response. And this was driven by the Ukrainian government. They didn't want the data of their citizens to be handled by international organizations.

But Ukrainian civil society organizations also expressed concerns, and GDPR, the General Data Protection Regulation, was brought into the frame. Because Ukrainian refugees reside in EU countries like Romania and Poland, GDPR applies, and therefore they argued that to use biometric data was against

the legislation. GDPR has special provisions regarding foundational data like biometrics. And the campaign was successful. The biometric registration stopped in July 2023, and it confirms something that many activists have been saying for a long time, that actually these aid distributions can happen without biometrics. You don't actually need biometrics to distribute aid. You can do it with a spreadsheet.

But it also suggests that GDPR, the legal framework we have in Europe, is a luxury reserved for countries in the minority world, not something that applies to the majority world. Still, we can see here that collective action can pay off, and that technocolonialism is not a given: its trajectory can be thwarted.

So technocolonialism is structural. Its harms will not be reversed if we simply tweak the machine or improve the algorithms of automated decision-making. It is only through political and collective action that we can address the violence of the machine, or the violence that the machine helps to produce, and imagine a more hopeful future. Thank you very much. Thank you so much, Mirka, what a fantastic conclusion.

We'll now open the floor to questions from the audience, both in the theatre and online. So if you're online, please type your questions into the Q&A box and we will try to answer as many as possible. And if possible, include your name and affiliation. For those here, I'm happy to address any hands up. There will be roving mics. So we'll start with a question in the middle.

If you can let us know your name and affiliation, that would be great. Thank you. Hello. Hi. Brilliant. Hi, I'm Vanrika. I'm from UCL. I'm studying international relations, and my current focus is on Global South politics.

I had a question about data training. Like you mentioned a lot about chatbots and AI and how data is currently coming up as a major cause of concern because AI is being trained on Western data sets that have a heavy bias towards democracy, towards specific conditions of what is defined as human rights or humanitarian intervention or intervention laws. And these are all trained according to, again, data sets which are produced primarily in the global north.

What information do you think will be left accessible if most of the data goes online and people are in fact using that data set to learn? I mean, this is especially true for students who are currently doing research. If the information that we are using is sourced out of this AI,

What information will be left accessible to the rest of the world? And also, what information will be systematically erased? How do you see solutions around that? Do we have other questions? Yes. And we've then got one in the back. So I'll take three. Then, Mirka, you can address three, and then we'll do another round. Hello, my name's Anna Maria. Thank you so much for the talk. That was so interesting. I'm also from UCL, from the same department, doing international relations with a focus on East Asia.

So I was just really interested in what you said about consent and sort of like how people can give their consent. Because I think if most of us think about how consent is sort of given in the West in like a very sort of like simple everyday way, like the apps, for example, most of us sort of just click through. We don't really look at what our...

what the impact of giving our data is. So in like a refugee camp, for example, what would that look like? Especially when you have barriers like language and if people are in a very desperate situation and also if you have things like time affecting that, what would giving consent look like to make sure that the individual understands what they're actually giving?

Great, thank you. We have one in the back. Hi, I'm Mike. I'm doing global investing and interested what's your take on a more kind of pragmatic or kind of game theoretic point of view of comparing different aid agencies and different types of countries' governance systems and even FDI as a way of uplifting some of these frontier emerging markets?

In other words, instead of coming from quite such a sort of utopian sort of criticizing everything is imperfect, as possibly more contrasting the different kinds of aid actors and nations and potential commercial activity, which is all kind of using each other, but maybe some of it is worth it. Thank you.

I'll start with the question of consent, because this is something I'm really thinking about a lot at the moment. In fact, in this participatory action research project, we are trying to work with our interlocutors to ask: what would consent look like for you? What kind of questions would you like to ask? And I think what is really key here is that people don't feel there is space for them to ask these questions. People often have concerns that range from

an invasion of the body, like people have told me, and I know this is the case in other settings too, that they've worried that when the machine was scanning their eye that they could go blind or there could be health issues, which is not true. But the fact that they have these serious concerns and they can't actually ask those simple questions, I think, speaks volumes about the power asymmetry. So I would say here, the first thing to do

is to create space where these questions can be asked and where a discussion can be had, rather than an announcement that this is the system being put in place and that it is a good system for you,

because this is how it's usually presented. The other thing to say is that, as with GDPR, when it comes to consent you need to be given an alternative if you have concerns. To be given an alternative would immediately make this better practice than what we're currently seeing. It wouldn't reverse power asymmetries, but these could be significant changes that one could

propose. And I can share more with you once I finish this new project, because that's one of the questions we're asking: what would consent look like from the point of view of refugees? But you're right that the question of consent doesn't just affect these settings. A lot of scholars have been writing about consent, talking about coercion rather than consent; it's just that I think it's much more accentuated when we're looking at the asymmetries in humanitarian situations.

On the other question: refugees and people in need in post-disaster settings are interested in entrepreneurial activities. But I think we tend to forget sometimes that the needs are very pressing and urgent. So rather than being given aid in cryptocurrencies, you probably need

food to survive, and you need tarpaulin to seal the tent so that it doesn't leak. The needs are very basic, and I think a lot of innovators in some of these industry events or hackathons sometimes

misunderstand what the real concerns are. It was one of the issues I was trying to capture in my talk: there is often this gap between what innovators think is good and what people actually need. That's not to suggest there is no entrepreneurship in these settings; there is, but it's very often at a much smaller scale, and it doesn't need to involve

cryptocurrencies. And I say this because there is a trend to deliver aid using these methods. To give you an example from my work in the Philippines: there was a

mobile phone company and a microfinance company that wanted to encourage saving. This was in a post-disaster setting where saving is not your first priority; your first priority is, again, to meet basic needs. And that, you could say, is an example of the lack of congruence between what these organizations wanted to achieve, rolling out a product, and what people on the ground needed, which was something very different.

So I would say the first thing you need to do is listen to what people want, and they will tell you. In relation to your question, I wasn't sure whether you were referring to humanitarian settings in particular or to large language models in general, and it was in general. This is a really good question, and many people have been writing about how these AI systems,

even though they present themselves as scientific, often drawing on very incomplete data sets that are very skewed towards the English language, the reading of white faces and so on. So obviously one

would want to correct that. But I think there is also the danger of this fantasy that there could ever be a complete data set. You know, the internet is a very asymmetrical place, and I don't think there is an easy solution to that, because it also depends on these classifications, which, you know, are often based on

what is the dominant culture or the values of the coders. So it is a very hard question that I think extends beyond the remit of the book, because I'm not really addressing AI in general. I'm looking at AI in a very specific setting, which is its uses in humanitarian emergencies and, to some extent, international development, which is very related.

Yes, next round of questions. We have one here in purple, one here in beige, and one in the front in the gray sweater. Hello. Hi. I'm Robin Mansell, LSE. Congratulations on the book. Thank you. I bought it. I haven't read it yet, so forgive me.

I have two questions really. The first one is: recently I've been in some settings where people who are developing these kinds of applications, not necessarily for humanitarian purposes, will argue that, although yes, there are biases and false positives et cetera when using these systems in the context you're describing, they have data to show that the

people who are excluded are far fewer than would have been excluded under the old spreadsheet systems, with all of the peculiarities of those systems. And so the question I have asked in those settings is: where do you get your data from, that second set of data? And I'm just wondering whether you've come across any instances of that in your context, because usually the way they get it is by sending some students out to ask people,

"Were you disadvantaged under the old system or not?", et cetera. So it becomes very anecdotal. So my question is whether you have evidence of that. And the second one has more to do with the human rights aspect of this, because recently I've come across a lot of criticism

coming from people in global majority contexts saying that the imposition of a kind of universalized notion of what their rights are or should be is a double imposition which just feeds into the colonization phenomenon, and that they would rather that not be the argument made for resisting some of these systems. Thanks. Yeah, thank you.

Hi, I'm Leah. I'm studying development and humanitarian emergencies. So my question would be regarding your argument about the infrastructuring of humanitarian organizations, which I found very compelling: that the humanitarian system is becoming ever more porous through that privatization and the increasing involvement

of private organizations and nation states as well. And I was just wondering, because you really stress the agency of the local people on the ground in resisting those developments, but I was

also wondering, because I think that might not be enough if the humanitarian system is ever more instrumentalized to control people, whether there are, within the humanitarian system, also more critical engagements with that whole development. Because, as you said, a lot of people try, and I guess it's the same in humanitarian organizations, they try to use those new innovations, and everything is framed in a very

solutionist, positive way: harnessing AI to do this, to do that. And I was just wondering, do you think there's enough critical engagement? Or how do you see the prospects within the system? Thanks. Shall we? There's quite a lot. All right. Can I just see how many more questions there are? I'm going to take one more in this round, in the front. Next to you. Next to you. Oh, was he before? OK. All right.

I'll try to be short. I like the idea of this data infrastructure. This means you don't need just the technologies; the data are driving the process in the end. Now, if you add to this data infrastructure the concept of community data infrastructure, then it's something that, I think, brings it to the community. You mentioned host communities and refugees, for example, no?

What about using the data, within that data infrastructure, to build innovation, to create jobs in the economy? So the concept of infrastructure, I do like it, but I do not see the involvement of people in driving the process. This is something I was involved with, in a refugee commission, for example, myself. And one of these issues is: how do you create jobs for refugees? And that is the question I'm asking.

Through data, you mean? I'm asking about community involvement in using the data. Yeah, right. Lots of great questions. So, back to... I'll start from the beginning. So, Robin's question about exclusion. I don't have comparative data from, as it were, the same response, but I can compare my fieldwork in the aftermath of Typhoon Haiyan

where distributions were happening through spreadsheets and through data provided by local government. And there was a lot of angst amongst communities that they were being excluded unfairly. And I remember, just to show you that there was

unfairness in the process: the distribution would follow the way households were defined in the Philippine census, which assumes a male head of household. So if you were an unmarried woman, you would be excluded. Of course, that's unfair, and it was an example of structural violence.

But what was really interesting then is that my interlocutors felt that they knew why this injustice was happening and, you know, where it came from. And therefore they could go to the barangay leader, the local community leader of their

neighborhood and say this is unfair. They could actually knock on somebody's door and argue with them. When it's an algorithm, it's very diffused. You don't know which part of the algorithm excluded you. You don't know how the algorithm came to be. And that's why I said there is this obliteration of accountability. So I can't quantify and say there were fewer exclusions and there are more exclusions, but what I say is something qualitatively different.

In the past you had this structural violence, where you would be excluded systematically because of a particular characteristic. Now you might be excluded because of certain characteristics, but it's very unclear where these come from; that's why it's much more diffused, and that's why I feel this notion of infrastructural violence helps capture that. So that was the first question. On human rights, I take a more justice-based approach, because I feel

it's about justice and redress rather than rights, which I can see can be more contested, although there is very great value in that approach too. But I feel that a justice approach in humanitarianism is something that is very contested. Humanitarians have always tried to distance themselves from anything political, but in fact to stay neutral is also political.

We know that very well. And maybe, and this is one of the things I discuss in the book towards the end, this opening up to questions of justice is something that perhaps needs to be embraced rather than considered such anathema, especially given this idea of emergency.

The reality is that refugee camps have been around for decades in most cases. You have people arriving in refugee camps now, but a lot of the camps have existed for decades. And therefore you can't have the imaginary of emergency determine the response when people have been in a place for 70 years or 40 years or whatever. So I think the question of justice rather than ethics, and perhaps

justice over rights is my way forward here. Thank you for your question because I really want to say that there is a lot of critique in humanitarianism. It is an extremely reflexive field and a lot of the thoughts I shared with you today come from my own interlocutors within the sector. They have shared some of these thoughts with me and I am

grateful for that, and, you know, I have learned a lot from them. So I do feel it is important to acknowledge that humanitarians are extremely reflexive, but it's also important to acknowledge that humanitarianism is a very hierarchical space, and there are different experiences in the humanitarian sector. So if you look at frontline workers, who are often from local communities

and who are often very critical about what is happening, they have very little room to maneuver, because they're often given a policy top-down and they have to implement it. They can't deviate from that. In fact, they have to demonstrate audit trails and metrics and everything. So we have to recognize that there is this differentiation within the sector. And the other thing I was trying to say here is that

what I'm trying to describe is not just about humanitarians. The private companies are driven by their own logics. Governments become part of this field with their own logics, and this produces ultimately what I call the machine in chapter 5. What

often happens is that, even though you have this reflexivity, the machine acquires its own logic and somehow absorbs all these critiques, and nothing really changes, right? So you have attempts to reform, but somehow these are being absorbed, and we see a repetition of some of the same issues time and again. But I really want to emphasize that a lot of my critique is

borrowed from the critique of my interlocutors. And I'm trying to remember the question. Oh yes, the question on using the data. As I said before, people in humanitarian settings are very interested in livelihoods. They're interested in,

you know, being respected, they want to find jobs, and so this is absolutely key. I'm a bit skeptical about necessarily using data in that process. I have come across examples where tech entrepreneurs come and say, let's get people to, you know, annotate data sets so we can develop AI, and they

bring this as a form of innovation, giving people a job, while actually paying them very little to essentially build the AI systems that we use globally. So that could be exploitation, right? Exploiting people who are trapped in a situation as cheap labor. Unfortunately, we've seen that happen.

If you're suggesting something more radical, which is to train people to code and to do something much more innovative with data, then yes, that could be something you can do, but I don't want to prioritize data as

the issue here. You know, it has to come from what people need. And sometimes, in my recent project, when I ask, what kind of digital identity system, what would it look like, what would be a just system for you, people say: I want a system that allows me to get out of the campsite and go play football in the nearby village, because there's no space here where I can do this. Or: I just want to go and have a job and be respected because I'm earning some money. It's sometimes a lot simpler than,

you know, doing the coding or something that is more technically sophisticated. So listening to what people want would be, for me, the first step. I think they know much more than we think, but you have to listen to what they want rather than going in and saying, you know, we've thought of it for you, we want to choose what you want. And I think starting from there is important.

Thank you so much to all the questioners so far. We have five minutes left, so I'm going to collect all the remaining questions and then I'll leave Mirka to make some final remarks based on what's in the room. Do we have online questions as well? Okay, so Loan, would you like to bring the online questions? And we also have a working question.

How does the speaker address the very nature of blockchain technologies being anti-government, reducing state power, which has brought immense freedom, especially to women in such difficult locations? Okay. Do we have others from online? There's a bit more, but... I think that seems fine. I'd like to just... I would like to hear from Mark. I think, Mark, you would like to ask a question. Yes. He also works on blockchain and has contributed to this discussion. So in the front, and then, everyone else who's got a question, can you put your hands up?

Okay, so we have three more. Thank you so much for your book. Go on. It's already a touchstone for everyone working in this area. Thank you for covering so much ground and clarifying concepts and logics. Yeah, I just had a question about the donor communities, the likes of the rich Western countries, EU, World Bank, wealthy philanthropists and so on.

It seems like an important part of the picture, reimagining these infrastructures and moving away from techno-colonialist projects,

would be to ask donors not to fund them in the first place. And I'd love to hear more about your interactions with donor communities during your 10 years of research. To what extent are they engaging with this localization agenda, which is talked about a lot, though we don't always hear of examples where power is actually being handed over to local organizations, local tech companies

with cultural knowledge to make decisions about infrastructures for themselves. How much of that are you hearing coming through? And, yeah, just impressions about where we're at with the donors. Yeah. Wait, I'm going to collect two final questions. There's one in the back and one in the middle in beige. Yeah, right there. Yeah. And then in the middle. There was one here too. Oh, sorry. I'm sorry. We'll have to speak outside if we didn't manage to get all the questions.

Hi, thank you so much for the lecture. My name is Dana, I'm an alum and I work in tech policy, so I'm very much a techno-optimist. Despite learning a lot about military technologies, I'm from Beirut in Lebanon and I saw that on one of the slides there was the blockchain-based aid example from the post-Beirut blast.

I would struggle to see it as a violent type of aid, because it was in a context where the state was absent and the nonprofit sector was very much there to support people. And I actually wrote my thesis about how the nonprofit sector really helped in that scenario.

So from that aspect, yeah, very much a techno-optimist. And like you said, there were short-term humanitarian needs. That's what the people want. The people don't want to know where their data is going in that moment. They want aid, basically. So that's just by way of background and how I'm an optimist in that sense.

And, you know, digitization, we're seeing it across all sectors, government, economic, and I'm not surprised that it has infiltrated the humanitarian sector, where I also worked back home as well. So my question will be very straightforward, because I work in policy now. If you were in front of aid organizations like UNHCR or WFP,

what would be your policy proposals or recommendations, besides resistance of course? You mentioned, for example, listening to the beneficiaries. But what others? If you had, say, two key policy proposals to put forward, to improve things and to have aid that's more efficient and ethical, of course.

Because it will not go back. Digitization will remain there. So how can we deal with it properly? Thanks. I'm so sorry to all the other questioners. We're going to have to end with those questions. Mirka, I will give you the last word. And then following her response, you're very welcome to join us outside for some drinks and for some book purchases and book signing. Thank you. Your question

about donors is absolutely key because they are extremely powerful in this context and to some extent a lot of these interventions are driven by donor demands and that's what I was trying to capture through the logic of audit and the logic of securitization. I feel that donors also are perhaps not as easy to engage with as humanitarians but I have sensed a

a slight change in the tone of language used in the last few months or years. And I do feel that the field has changed from when I started looking at these interventions where there was this huge techno-optimism to a more guarded approach today and a desire to really try to be a bit more

Yeah, a bit more careful. But I haven't seen a major transformation, of course, and I think any change will have to engage donors, because, as I said, they are absolutely key: they give the funding, right? So without that funding, no humanitarian organisation would actually exist.

In terms of the policy proposals, just to clarify: the examples I had about chatbots were to show you the range of chatbots, because I said there are informational chatbots, which are more about where you can go to get clean water or find a hospital, ranging to psychotherapy chatbots, which are doing something quite different. So I wasn't speaking in that particular slide about that very chatbot. But the truth is that

all chatbots will have that kind of problem, which is that often, you know, people need a human person, a human voice, to interact with, and not necessarily a drop-down list that doesn't include their problem in that context. And I saw that very much in earlier iterations of chatbots which were used as feedback mechanisms, where people were asked to comment on

how they experienced the intervention. And essentially the chatbot was already offering them answers, so this was almost pre-written feedback. So even the most simple chatbots that are often out there can be harmful, in the sense that they're not necessarily as participatory as they claim to be. On the question of policy, it's a really great question.

I think some of the issues we already mentioned would help, including thinking of consent in a more meaningful way: allowing people to receive aid through alternative methods if they have concerns,

or allowing people to ask questions and have a dialogue rather than be given something with no opportunity to contest it. That would be a fantastic improvement, I think. It wouldn't reverse the power relations I'm talking about, but it would make a difference, and we can think of consent more broadly. I think listening to people and what they want is also absolutely key here. Rather than assuming that this is

a program that's good for them, really allow them to decide, you know, give them space to express their agency. So these would be simple things that I would propose. But they're not really simple, because they're extremely costly, and that's where we get into these issues around efficiency, audit trails and all the pressures that humanitarian organizations face. So

I think we're going to conclude with those very thoughtful remarks on policy. I would like to thank everyone for attending. Thank you very much, Mirka, for your comments, and please join us outside for a drink. Welcome to the LSE Events podcast by the London School of Economics and Political Science. Get ready to hear from some of the most influential international figures in the social sciences.