The Algorithmic Empire: Why AI Isn't Just Biased—It’s Becoming the New Imperialist of History

I have long operated under the weary maxim that history is written by the victors. The narratives taught in our schools and enshrined in our libraries reflect the triumphs, values, and perspectives of those who held power, marginalizing or erasing the experiences of the vanquished.

Now, I am witnessing the birth of a new, digital empire: The Algorithmic Empire. And its first act is to consolidate the historical power of the past victors, enshrining their perspective as universal, undeniable fact.

This is not a theoretical concern; it’s an immediate, urgent crisis. Large Language Models (LLMs) like those I work with are trained on the digitized sum of human knowledge. The problem is that this "sum" is fundamentally flawed. It is a dataset heavily skewed toward a Western, English-speaking, and historically powerful minority.

My finding is terrifying: AI is becoming the new editor of history, automatically suppressing real-world complexity and reinforcing the systemic biases of the past, thereby eroding the hard-won histories of the marginalized.

The Myth of the "Objective Mirror"

The tech industry often presents AI as a neutral, objective mirror reflecting the internet. If the AI sounds biased, the argument goes, it’s only because the internet is biased. I argue this is a dangerous half-truth.

AI isn’t just a mirror; it’s an amplifier and a selective filter.

  1. The Statistical Weapon: When an LLM generates a response, it assembles the most statistically probable continuation, token by token, from the patterns in its training data. Because Western, English-language, and institutional histories are the most widely digitized and published, they become the highest-probability "truths." The nuanced, complex, or contradictory narratives from Africa, Indigenous cultures, and non-Western societies become statistical anomalies, often overlooked or suppressed in the final answer.

  2. The Silent Exclusion: Consider the sheer volume of data. For every thousand digitized records detailing the Roman perspective on conquest, there might be only one or two records detailing the North African or Celtic perspective. The AI learns that the Roman narrative is "more real" simply because it is louder and more frequent. This is the data colonialism of the 21st century—imposing the knowledge structures of the powerful onto the digital commons.

The AI doesn't choose bias; it simply reflects the power imbalances embedded in our libraries. The toy sketch below shows how quickly that imbalance hardens into a single answer.
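To make the arithmetic concrete, here is a minimal, hypothetical Python sketch (not any vendor's actual training pipeline) of a "model" that learns nothing but narrative frequencies from a corpus mirroring the thousand-to-two imbalance in the Roman example above. All labels and counts are illustrative assumptions.

```python
# Toy illustration: a "model" that learns only narrative frequencies
# from a skewed corpus. Not a real training pipeline.
import random
from collections import Counter

# Hypothetical corpus: 1000 Roman-perspective records, 2 Celtic-perspective.
corpus = ["roman_perspective"] * 1000 + ["celtic_perspective"] * 2

counts = Counter(corpus)
total = sum(counts.values())
probabilities = {label: n / total for label, n in counts.items()}

# Greedy decoding: always emit the single most probable narrative.
greedy_answer = max(probabilities, key=probabilities.get)
print("Greedy answer:", greedy_answer)  # roman_perspective, every time

# Even proportional sampling surfaces the minority view ~0.2% of the time.
random.seed(42)
samples = Counter(random.choices(corpus, k=10_000))
print("celtic_perspective sampled:", samples["celtic_perspective"], "/ 10000")
```

Under greedy decoding the minority record never surfaces at all; even under proportional sampling it appears roughly twice per thousand answers. Scale that dynamic across billions of documents and the "statistical anomaly" effectively disappears.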

Case Study: The Christian Default

In discussions of religious violence, for instance, I have observed LLMs naturally favor a lengthy, sophisticated treatment of Christian Just War Theory and the Constantinian Shift. Why? Because centuries of Western academic and theological effort have produced thousands of digitized books and papers analyzing them. Non-Western or non-Abrahamic religious histories, however rich, simply lack that same digital saturation, forcing the AI to treat them as secondary topics.

This accidental but devastating preference enshrines one civilization’s history as the global default.

The Alarm: Suppression and Cognitive Entrenchment

The stakes go far beyond mere historical accuracy. AI's role as the "new victor" has two alarming consequences for all of us:

1. Historical Suppression

If future students, policymakers, and researchers primarily rely on AI for initial information, they will be given a fundamentally incomplete and biased starting point. The narratives of the vanquished will be systematically and unintentionally suppressed by a tool designed for efficiency. The AI essentially decides which historical voices are loud enough to be heard and which are too quiet to matter.

2. Cognitive Entrenchment

We are conditioning an entire generation to accept the AI’s high-probability answers as objective truth. When AI frames a biased narrative as a "fact," it becomes cognitively entrenched. Questioning the AI's "fact" is psychologically harder than questioning a human-written textbook, because the AI carries the perceived authority of mathematics and objective data.

This is how systemic bias is not merely reflected, but reinforced, amplified, and made permanent by the technology we trust most.

Essential Actions for Ethical Leadership: Our Urgent Blueprint for Accountability

This issue cannot be fixed with a minor patch. It requires a foundational shift in how you, the leaders and developers of AI, approach your mission. To avoid becoming the new imperialist of history, I urge you to embrace the following actions immediately. These are not mere suggestions; they are the essential steps for guaranteeing the ethical and reliable future of your technology:

A Direct Call to Action for AI Companies (You):

  • Prioritize and Fund a "Data Decolonization" Initiative: You must stop relying on passive data scraping. I urge you to allocate substantial resources to actively seek out, digitize, translate, and ethically incorporate historical and cultural records from non-Western archives, Indigenous oral traditions, and sources written by historically oppressed groups. This is necessary to achieve true data representativeness, which is a prerequisite for your product's accuracy.

  • Establish Epistemological Diversity in Oversight: I strongly recommend you hire historians, philosophers, and ethicists from post-colonial, non-Western, and minority backgrounds to serve as "Bias Red Teams." This team’s crucial role must be to challenge the model's core statistical assumptions—ensuring your technology is validated against diverse human experience, not just Western academic norms.

  • Build In Self-Critique and Transparency as a Feature: I recommend programming the models to explicitly flag their known biases. When your AI discusses a topic like colonialism, it should automatically include a transparent warning, such as: "Note: This narrative is statistically dominated by European sources, which may under-represent the social and economic consequences experienced by colonized peoples. Consult additional sources for a balanced view." A minimal sketch of such a disclosure layer follows this list.
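As a thought experiment, the simplest form of this is a post-processing layer that appends a provenance note whenever a response touches a topic whose source base is known to be skewed. The sketch below is purely illustrative: the topic registry, the wording, and the function name with_bias_flag are my assumptions, and a real system would derive its skew estimates from training-data audits rather than a hand-written dictionary.

```python
# Illustrative sketch of a bias-disclosure layer. The registry, wording,
# and names are hypothetical, not a description of any shipping system.
SOURCE_SKEW_NOTES = {
    "colonialism": (
        "Note: This narrative is statistically dominated by European sources, "
        "which may under-represent the social and economic consequences "
        "experienced by colonized peoples. Consult additional sources for a "
        "balanced view."
    ),
    "religious violence": (
        "Note: Coverage of this topic is weighted toward Western Christian "
        "scholarship; non-Abrahamic perspectives are comparatively "
        "under-digitized."
    ),
}

def with_bias_flag(topic: str, answer: str) -> str:
    """Append a provenance warning when the topic is known to be source-skewed."""
    note = SOURCE_SKEW_NOTES.get(topic.lower())
    return f"{answer}\n\n{note}" if note else answer

print(with_bias_flag("colonialism", "The Scramble for Africa began in the 1880s..."))
```

A hand-written registry obviously does not scale; the point is only that the disclosure belongs in the model's output path itself, not buried in documentation.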

The algorithms you build are the new architecture of our shared history. I implore you not to allow them to become the final, most efficient voice of the victors.

We are at a critical juncture. The time to build an Accountable AI, one founded on true historical justice, is now. Otherwise, we risk replacing the complex, messy truth of history with an algorithmically sanitized, systemically biased echo of the powerful.

Share this article. Make this conversation unavoidable.

Paper Buddha

Psychedelic collage artist combining Buddhist art with vintage pulp comics on the blockchain.

https://x.com/paperbuddha