Technocolonialism refers to the way digital innovation, data, and AI practices entrench power asymmetries and engender new forms of structural violence between the Global South and North. It highlights how digital infrastructures, humanitarian bureaucracies, state power, and market forces converge to reinvigorate colonial legacies. For example, biometric technologies and AI-powered chatbots in refugee camps often exacerbate inequities, leading to new forms of violence and control over vulnerable populations.
Biometric technologies are controversial because they often codify existing forms of discrimination and impose Western-centric frameworks. Refugees are required to submit biometric data (e.g., facial images, iris scans) to access basic necessities such as food, healthcare, and shelter. This raises concerns about consent: refugees often have no alternative but to comply, and their data may be shared with governments or other entities, potentially endangering their safety. Additionally, biometric systems have higher error rates for non-white bodies, leading to daily humiliations and exclusions.
AI-powered chatbots often impose Eurocentric values and invalidate local systems of knowledge. For example, mental health chatbots designed to address post-traumatic stress disorder may not account for the cultural specificity of emotions or the ongoing trauma of war and displacement. These chatbots, trained on English-language data sets, reflect Western perspectives, leading to a form of epistemic violence that marginalizes local knowledge and experiences.
Six key logics drive digital interventions in humanitarian operations: 1) Humanitarian accountability, where digital technologies are seen as correcting deficiencies in aid delivery; 2) Audit, driven by the need for metrics and efficiency; 3) Capitalism, with private companies entering humanitarian spaces through public-private partnerships; 4) Technological solutionism, where technological fixes are prioritized over addressing root causes; 5) Securitization, where states use technologies to control populations; and 6) Resistance, where affected communities challenge datafication and automation.
Surreptitious experimentation refers to the implementation of digital pilots or experiments in humanitarian settings without formal announcement, clear boundaries, or meaningful consent. For example, the Building Blocks program, a blockchain-based cash assistance system, was rolled out in Jordan without being framed as a pilot, leaving refugees with no alternative method to access aid. This lack of transparency and accountability allows for the normalization of technological experiments among vulnerable populations.
Technocolonialism reinforces structural violence by systematically excluding and marginalizing groups based on race, gender, class, and other factors. For instance, algorithmic decision-making in aid distribution often leads to arbitrary exclusions, with no clear accountability for errors. The permanence and shareability of digital records amplify these risks, making structural violence more pervasive and diffuse, particularly in refugee camps and disaster zones.
Donors play a significant role in perpetuating technocolonialism by driving the demand for digital interventions through funding requirements. The logics of audit and securitization, which prioritize metrics and control, are often imposed by donors. This leads to the adoption of technologies like biometrics and AI, which entrench power asymmetries and reinforce colonial legacies. Donors' influence is central to shaping the humanitarian landscape, often at the expense of local autonomy and justice.
Resistance to technocolonialism often takes the form of mundane resistance, where affected individuals engage in small, everyday acts of defiance. For example, refugees may refuse to use chatbots or feedback platforms, or creatively repurpose technologies like humanitarian radio for music and community bonding. Open protests, such as the Rohingya strike against biometric registrations, also highlight resistance, though such overt dissent is often risky and costly in highly asymmetrical settings.
Humanitarian organizations can address the harms of technocolonialism by prioritizing meaningful consent, offering alternative methods for accessing aid, and listening to the needs of affected communities. Engaging in participatory action research, where refugees and disaster-affected individuals help design digital systems, can also ensure that technologies align with local values and priorities. Additionally, organizations must critically examine the logics driving digital interventions and challenge the structural inequities they reinforce.
Contributor: Professor Mirca Madianou. In this talk, based on her new book, Mirca Madianou argues that digital innovations such as biometrics and chatbots engender new forms of violence and entrench power asymmetries between the Global South and North. Drawing on ten years of research on the uses of digital technologies in humanitarian operations, Madianou unearths the colonial power relations which shape ‘technology for good’ initiatives. The notion of technocolonialism captures how the convergence of digital infrastructures with humanitarian bureaucracy, state power, and market forces reinvigorates and reshapes colonial legacies. Technocolonialism shifts attention to the constitutive role that digital infrastructures, data, and AI play in accentuating inequities between aid providers and people in need.