
How AI perpetuates and amplifies structures of global domination

Algorithmic Neocolonialism

From the pasteurization of values to the monoculture of thought


Traditional colonial logics are expanding into the digital and cognitive sphere. In essence, this is a monoculture of thought imposed by AI systems: an oligopoly of algorithmic models that standardizes knowledge and erases local worldviews, diverse knowledges, and technodiversity.

It is a new cognitive imperialism, in which data, attention, and behavior are exploited as resources: an invisible colonization of mind and culture, automated and operating on a global scale. A single standard decides what counts as knowledge, while alternative voices are silenced.

What the data says


The AI oligopoly is directly tied to investment: 83% of private investment in AI is concentrated in the United States (66%) and China (17%), according to Stanford University's AI Index Report 2025. Only 2% of this investment reaches Latin America and Africa combined. Of the 10 largest generative AI models, 6 are developed by companies based in the US (OpenAI, Anthropic, Google DeepMind, Meta, xAI, and Cohere), 2 in Europe (Mistral [FR] and Stability AI [UK]), and 2 in China (Baidu Ernie and Alibaba Qwen).

However, most data centers and training laboratories are located in the Global North, while precarious cognitive labor (labeling, content moderation, dataset generation) is outsourced to countries like the Philippines, Kenya, Venezuela, Brazil, and India, an arrangement Kate Crawford called “data colonialism”.

At the same time, cognitive infrastructures are asymmetric: most large language models (such as GPT-4, PaLM, and LLaMA) are trained predominantly on English data, a direct reflection of the concentration of digital content in that language. English, although spoken by only about 16% of the world’s population, dominates more than 60% of websites and online platforms (Center for Democracy & Technology, 2023; Ethnologue, 2023). This disproportion produces a deep linguistic and cultural bias, shrinking the representation of languages like Portuguese, Arabic, and indigenous languages: the less data there is in a language, the lower the model’s accuracy in it and the more “exotic” the AI treats it as.
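
One way to make this disproportion concrete is tokenizer “fertility”: how many subword tokens a model spends to represent the same sentence in different languages. The sketch below is a minimal illustration, assuming the open-source tiktoken library, OpenAI’s cl100k_base vocabulary, and rough, illustrative translations of one sentence; languages underrepresented in the training data tend to be fragmented into more, smaller pieces, which raises costs and correlates with weaker model performance.

```python
# Minimal sketch: tokenizer "fertility" across languages.
# Assumes the open-source `tiktoken` library and the cl100k_base vocabulary;
# the sample sentences are rough, illustrative translations.
import tiktoken

enc = tiktoken.get_encoding("cl100k_base")

samples = {
    "English":    "Knowledge belongs to everyone.",
    "Portuguese": "O conhecimento pertence a todos.",
    "Arabic":     "المعرفة ملك للجميع.",
}

for language, sentence in samples.items():
    tokens = enc.encode(sentence)
    # Tokens per whitespace-separated word: a rough fertility measure.
    fertility = len(tokens) / len(sentence.split())
    print(f"{language:10s} {len(tokens):3d} tokens ({fertility:.1f} per word)")
```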

There is payment for processing, but not for knowledge: cultural and artistic data is extracted without payment or consent for the works used in generative training datasets, constituting what researchers call “automated digital epistemicide”. Global South countries are rarely compensated for data collected on local platforms, even when that data feeds products sold globally.

Dimensions of Impact


Monoculture of Thought

As large AI models are trained on predominantly Western and Anglophone databases, an algorithmic coloniality is established: Eurocentric norms, values, and worldviews come to define what counts as “intelligence”. This asymmetry creates a global cognitive homogenization, where plural forms of knowledge, language, and imagination are filtered, delegitimized, or simply made invisible.

Ecofeminist Vandana Shiva called this process monocultures of the mind: the imposition of a single mode of knowing as universal, suppressing local, oral, spiritual, and non-hegemonic epistemes.

The inability to see diversity is the monoculture of the mind, a tool of power to control life.

— Vandana Shiva

Philosopher Yuk Hui, on the other hand, proposes the idea of multiple cosmotechnics: there is not a single Humanity nor a single technology, but different modes of relationship between culture and technique.


Digital Epistemicide

AI systems are not neutral; they carry the biases of their data and designers. When these biases reflect historical inequalities, the result is the algorithmic perpetuation of racism, sexism, and other discriminations: a “New Jim Code,” as sociologist Ruha Benjamin puts it. Facial recognition software, for example, shows much higher error rates for Black people; Joy Buolamwini and Timnit Gebru demonstrated that commercial algorithms misclassified Black women at rates of up to 35%, compared to less than 1% for white men. Despite this long record of failures, the technology has been massively adopted in public security, leading to wrongful arrests of Black citizens based on algorithmic “false positives.” This is a clear example of symbolic and linguistic colonization: a system produced in a hegemonic culture “sees” people from other backgrounds as less distinguishable, literally erasing identities and echoing colonial dehumanization.
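
What made findings like Buolamwini and Gebru’s possible is disaggregated auditing: an aggregate accuracy figure can look respectable while hiding extreme disparities between groups. Below is a minimal sketch of that computation, using made-up counts rather than the study’s actual data.

```python
# Minimal sketch of a disaggregated error-rate audit.
# The counts are invented for illustration; they are not the study's data.
groups = {
    "lighter-skinned men":  {"errors": 1,  "total": 100},
    "darker-skinned women": {"errors": 35, "total": 100},
}

overall_errors = sum(g["errors"] for g in groups.values())
overall_total = sum(g["total"] for g in groups.values())
print(f"overall error rate: {overall_errors / overall_total:.0%}")  # 18%

for name, g in groups.items():
    # Disaggregating by group exposes the disparity the aggregate hides.
    print(f"{name:22s} error rate: {g['errors'] / g['total']:.0%}")
```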

Beyond racial issues, predictive models in credit, healthcare, and employment tend to penalize already marginalized groups, reinforcing socioeconomic inequalities. Without corrections, AI amplifies the past: if trained on historically unjust data, it will repeat those decisions (denying loans to minorities, for example) with a veneer of technical objectivity.
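
The mechanism is easy to reproduce in miniature. The sketch below, entirely synthetic and assuming scikit-learn, gives two groups identical creditworthiness but biases the historical approval labels against one of them; a model trained on those labels learns the bias and applies it to new applicants with identical qualifications.

```python
# Minimal, synthetic sketch of bias reproduction; not a real credit system.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(42)
n = 10_000

group = rng.integers(0, 2, size=n)  # 0 = majority, 1 = minority
ability = rng.normal(size=n)        # identical distribution in both groups

# Historical bias: minority applicants needed a far higher score to be approved.
threshold = np.where(group == 1, 1.0, 0.0)
approved = (ability > threshold).astype(int)

model = LogisticRegression().fit(np.column_stack([ability, group]), approved)

# Two new applicants with the SAME ability, different group membership.
print(model.predict_proba([[0.5, 0], [0.5, 1]])[:, 1])
# The minority applicant receives a far lower approval probability:
# the past, repeated with a veneer of technical objectivity.
```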

Colonization is also about knowledge. Algorithms trained predominantly in dominant languages (English, Chinese) and on information available on the internet risk ignoring or distorting indigenous knowledge, oral traditions, and peripheral perspectives. When an AI system “doesn’t find” certain knowledge in its corpus, that knowledge effectively becomes invisible in the digital world. Researchers call this digital epistemicide: the death of knowledge through lack of representation in data. Indigenous cultural expressions, local slang, or non-Eurocentric historiographies, for example, may simply not appear in large language models, falling outside what the system “recognizes” as valid. “Indigenous, local, and oral knowledge is erased for not being in formats readable by the systems,” warns one report.

Ignoring this multiplicity risks global epistemological erasure, where the diversity of perspectives is replaced by a single vision, usually shaped by capitalist, Anglophone, and extractivist values.

Just as there is not a single Humanity, but many, there is not one technology, but multiple cosmotechnics.

— Yuk Hui

Entropy of intelligence: the collapse of models

As generative models begin to be trained on the outputs of other models, a phenomenon known as model collapse emerges: artificial intelligence starts learning from its own echoes, a feedback loop of synthetic data that leads to the progressive mediocritization of results.

Knowledge stops expanding and begins to revolve around itself: algorithms recycle previous predictions, erasing the unpredictability and creative noise that characterize human thought. In other words, models become self-referential, predictable, and less capable of producing the new.

This is the entropy of intelligence: the more synthetic content is produced, the less cognitive diversity feeds the system and the narrower the field of possible thought becomes.
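
The dynamic can be shown with a toy simulation. The sketch below is a minimal illustration in the spirit of the model-collapse literature, not an experiment on real language models: each “generation” fits a trivial statistical model to samples produced by the previous one, and the distribution’s spread, a stand-in for cognitive diversity, decays toward zero.

```python
# Minimal sketch of model collapse: each generation "trains" (fits a
# Gaussian) on samples generated by the previous generation's model.
# Sampling error compounds and diversity (the spread) collapses.
import numpy as np

rng = np.random.default_rng(0)
SAMPLES_PER_GEN = 20  # small "corpora" make the effect visible quickly

def train(data):
    # "Training" here is just estimating the distribution's parameters.
    return data.mean(), data.std()

# Generation 0 learns from real, diverse data: N(0, 1).
mu, sigma = train(rng.normal(0.0, 1.0, size=SAMPLES_PER_GEN))

for generation in range(1, 201):
    # Later generations learn only from the previous model's own outputs.
    synthetic = rng.normal(mu, sigma, size=SAMPLES_PER_GEN)
    mu, sigma = train(synthetic)
    if generation % 50 == 0:
        print(f"generation {generation:3d}: spread = {sigma:.4f}")
```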

This mediocritization will give a false impression that we no longer have performance.

— Carla Tieppo, neuroscientist

Global South as raw material supplier

The infrastructure behind AI carries a colonial materiality. Personal data has entered the same old processing cycle: we export cocoa and import chocolate; we export niobium and import smartphones; we export data and import knowledge, often without full consent. Major technology corporations, mostly headquartered in the Global North, depend on the Global South as a source of resources: not only data from millions of users, but also invisible and cheap labor to train the models. Peripheral countries become suppliers of digital raw materials (data labelers, content moderators) while importing the algorithmic decisions that affect them, an asymmetric relationship of dependence. This new extractivism recalls the old: there are reports of AI companies consuming enormous amounts of water and electricity in vulnerable regions to cool data centers, or exploiting cheap labor in the name of “training intelligence.”

Researcher Kate Crawford shows that every “cloud” request leaves footprints of mining, carbon emissions, and human exploitation (her Atlas of AI maps these “hidden costs of AI”). Matteo Pasquinelli, in turn, describes a society transformed into a digital factory: search engines, social networks, and algorithms orchestrate an automatic chain of cognitive production, extracting value from collective mental labor.

In the words of the experts


  • “The production of knowledge has never been neutral. It is colonial. […] It is not about cognitive sovereignty – it is about creating standards, models, and epistemologies.” — Cesar Paz, entrepreneur and social activist
  • “There is a very important conversation about the geopolitics of exploitation… Big techs approach us with a colonial logic, with extractivist examples. [For example,] Google will set up a data center in Paraguay to cool with water. It’s a logic of exploitation.” — André Alves, specialist from the “AI in Real Life” research (Talk Inc, 2025)
  • “The algorithm already influences our mood, our beliefs, our values.” — Rafael Parente, educator and former innovation secretary, “AI in Real Life” specialist
  • “If the app had been made in Bhutan or Tehran, it would be a different app… We see the world through a European lens; the design is already born crooked with whoever is in control.” — Giselle Beiguelman, artist and researcher
  • “A sovereign people, in this sense, is a people that, thinking and speaking, above all, is educated, is trained to deal with these digital issues.” — Silvana Bahia, researcher and activist

Critical synthesis


The risk of Algorithmic Neocolonialism forces us to face AI not as a neutral or inevitable entity, but as a field of political and cultural dispute. The investigation revealed that AI can both expand horizons and narrow them; it can connect voices or silence them. It is not about painting technology as good or bad, but about understanding it in its complexity and intentionality. The original intentions behind AI systems (whether commercial or governmental) often aim at efficiency and profit, not deliberate cultural domination; deploying these technologies without safeguards, however, ends up reproducing familiar logics of domination. In other words, even without “wanting to,” large AI models have been consolidating the language, values, and interests of a few as universal parameters.

The pasteurization of human knowledge represents the loss of the intellectual fermentation that has always given culture its flavor. When AI models learn from their own outputs, thought folds back on itself: predictable, homogeneous, without edges. The world begins to speak in a single voice, trained in a grammar of efficiency and consensus. The promise of universal access to knowledge thus converts into algorithmic coloniality: Western thought replicates itself on a planetary scale, erasing cosmovisions and epistemes that do not fit into its databases. Cognitive homogenization is the new civilizational risk, born not of a scarcity of information but of an excess of repetition that dilutes the unprecedented and makes the different illegible. Knowledge becomes pasteurized: safe, stable, and therefore dead.

Reversing this process requires re-enchanting knowledge: restoring its capacity for surprise and contradiction. Instead of a world that thinks alike, we need ecosystems of thought that ferment: AI trained in multiple languages, cultures, and cosmologies; cultural data sovereignty policies; and pedagogies that value uncertainty as the engine of intelligence. Educational and cultural institutions must act as guardians of cognitive plurality, encouraging interpretive friction, error, and divergence. The antidote to pasteurization is technodiversity, as proposed by Yuk Hui: technologies that express different ways of seeing and living. The expansion of knowledge is only true if it is not uniform: thinking from many places is what keeps the collective healthy.

Learn more


  • Nobukhosi Zulu, “Digital Colonialism and My Human Rights” (TEDx Talk, 2023)
  • Joy Buolamwini, “How I'm fighting bias in algorithms” (TEDx Talk, 2016)
  • Walter Mignolo, Histórias Locais / Projetos Globais: Colonialidade, Saberes Subalternos e Pensamento Liminar
  • Yuk Hui, The Question Concerning Technology in China: An Essay in Cosmotechnics
  • Vandana Shiva, Monoculturas da Mente: Perspectivas da Biodiversidade e da Biotecnologia
  • Kate Crawford, Atlas of AI: Power, Politics, and the Planetary Costs of Artificial Intelligence
  • Matteo Pasquinelli, The Eye of the Master: A Social History of Artificial Intelligence
  • Ruha Benjamin, Race After Technology: Abolitionist Tools for the New Jim Code
  • Lúcia Santaella, A Inteligência Artificial é Inteligente?
  • Sérgio Amadeu da Silveira, João Francisco Cassino, and Joyce Souza, Colonialismo de Dados: Como Opera a Trincheira Algorítmica na Guerra Neoliberal
