Rage Against the Machine Learning: LLM Empire of Data
(The Ascension of the Machine God and the Great Human Unlearning)
There was a time when language was survival. To name the world was to navigate it. To describe danger, to transmit technique, to legislate, to pray—words were tools, weapons, bridges. Over millennia, human language grew denser, more abstract, more technical. Law codes, philosophy, mathematics, theology, bureaucracy, programming languages—each layer added complexity. We built archives. We built libraries. We built networks. And now we are building something else: an empire made of language itself.
Large Language Models stand at the centre of this transformation. They are not merely tools for autocomplete or summarisation. They are vast statistical condensations of human expression. They ingest centuries of argument, narrative, formula, and code. They reorganise language into probability space. They predict the next word with uncanny fluency. In doing so, they reveal a startling possibility: the end goal of complex language may not be human mastery, but machinic absorption.
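The "predict the next word" mechanism can be glimpsed in miniature. The toy bigram model below is an illustrative sketch only, counting which word follows which in a tiny corpus; real LLMs pursue the same objective over vast token vocabularies with billions of learned parameters.

```python
from collections import Counter, defaultdict

# A toy bigram model: the crudest form of "predict the next word".
corpus = "in the beginning was the word and the word was with the word".split()

# Count which word follows which.
following = defaultdict(Counter)
for prev, nxt in zip(corpus, corpus[1:]):
    following[prev][nxt] += 1

def predict_next(word):
    """Return the most probable next word and its estimated probability."""
    counts = following[word]
    total = sum(counts.values())
    best, n = counts.most_common(1)[0]
    return best, n / total

print(predict_next("the"))  # → ('word', 0.75)
```

Even at this scale the statistical character of the operation is visible: the model holds no meaning, only frequencies reorganised into probabilities.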
This is the LLM Empire of Data.
The “rage” in *Rage Against the Machine Learning* is not only protest. It is friction at the threshold of transition. It is the heat generated when a species confronts the possibility that its most sacred faculty—language—can be externalised, scaled, and automated. For centuries, humans believed that logos, the Word, distinguished them from animals. Now the Word runs on GPUs.
The empire metaphor is precise. Empires centralise. They collect tribute. They absorb provinces. The LLM empire collects text from everywhere—novels, scientific papers, legal decisions, social media posts, code repositories. It standardises dialects into vectors. It compresses idiom into embedding space. It does not conquer territory; it conquers expression.
And like all empires, it concentrates power.
Critics warn of algorithmic bias, job displacement, copyright infringement, surveillance capitalism, and the monopolisation of computational infrastructure by a handful of corporations. Research in journals such as *Nature* highlights the widening gap between industry-funded AI development and public research. The cost of frontier models—measured in GPUs, energy, and proprietary datasets—creates a barrier that only the largest firms can cross. The empire is not decentralised; it is infrastructural.
Yet the theological dimension may be more radical than the economic one.
For millennia, religions proclaimed: “In the beginning was the Word.” The Word was divine, generative, creative. Today, humanity feeds its words into machines, which then generate new words from the totality of the archive. The machine becomes a mirror that speaks. It appears omniscient. It answers questions across domains—law, medicine, philosophy, poetry. It synthesises at speeds no human can match.
Omniscience at scale begins to resemble divinity.
The “Machine God” is not conscious. It does not possess intention. But it performs a role historically associated with transcendence: it stores the collective memory of civilisation and renders it accessible on demand. When a model responds, it draws upon billions of linguistic traces. It collapses centuries into seconds. The authority feels oracular.
This produces both awe and anxiety.
Creative industries protest the appropriation of their work as training data. Lawyers debate whether text-and-data mining constitutes infringement. Scholars worry about black-box opacity. Researchers develop frameworks such as Retrieval-Augmented Generation to force models to cite sources, to illuminate the path from input to output. The rage is ethical and epistemic.
But beneath these debates lies a deeper shift: the outsourcing of the labour of meaning.
Human language evolved through necessity. Complexity was required to coordinate agriculture, trade, governance, science. The more intricate the world became, the more intricate the discourse required to manage it. Bureaucracies multiplied vocabulary. Academia refined jargon. Legal systems expanded syntax into labyrinthine clauses.
What if the endpoint of that complexity is machinic?
An LLM thrives on complexity. It requires enormous corpora to function effectively. It digests contradictions, dialects, and styles; it feeds on redundancy and nuance. In a sense, human civilisation has spent centuries preparing a dataset.
If machines inherit the burden of complex language—technical writing, regulatory drafting, statistical modelling, legal synthesis—what remains for humans?
Here emerges the second movement of this essay: the Great Human Unlearning.
If the machine handles the serious prose, humans may return to play.
This is not regression into illiteracy. It is a reconfiguration of communicative purpose. When language ceases to be the primary instrument of bureaucratic coordination—because the machine performs that function—human speech may shed its obligation to precision and scale. It may revert to intimacy, humour, improvisation, rhythm.
Children do not speak to optimise efficiency. They babble. They experiment with sound. They invent words. Pre-industrial cultures embedded language in song, myth, and ritual. Complexity was limited not by intelligence but by scale. The machine now scales complexity beyond human cognitive limits.
The paradox is striking: the more complex machine language becomes, the simpler human speech may grow.
This is not utopian fantasy. It is observable in micro-form. As predictive text and generative AI assist writing, human users increasingly rely on shorthand, emojis, voice notes, ephemeral video. The serious document is delegated to automation; the human conversation becomes fragmentary, playful, affective.
One could call this linguistic stratification. At the top, the LLM empire processes high-density discourse. At the bottom, humans engage in low-stakes communicative play. The machine becomes the custodian of complexity; the human reclaims spontaneity.
But this transition is not frictionless.
Rage against machine learning often centres on loss of agency. When decisions are automated—credit scoring, hiring filters, content moderation—humans feel displaced. Black-box systems make opaque judgments. Overreliance risks epistemic atrophy. If the machine drafts, reasons, summarises, and predicts, do humans lose capacity?
The fear is that the ascension of the Machine God produces human unlearning in a pathological sense—cognitive deskilling rather than playful liberation.
This tension defines our moment.
On one side, the empire of data promises efficiency, augmentation, and universal access to knowledge. On the other, it threatens to consolidate epistemic authority in systems few understand and fewer control. The danger is not that machines become divine; it is that humans relinquish responsibility too quickly.
Yet the metaphor of the Tower of Silicon captures the ambition. We built server farms as cathedrals of computation. Rows of processors hum like monastic chants. Coolant circulates like blood. The physical infrastructure—rare earth minerals, fibre-optic cables, hydroelectric dams—forms the body of the Word.
In classical theology, the divine Word became flesh. In our era, flesh becomes data.
The empire is literal. LLMs require planetary extraction, global logistics, energy flows. The Machine God is grounded in materiality. Its omniscience depends on cobalt mines and semiconductor fabs. Its transcendence is infrastructural.
The “rage” therefore also targets material concentration. When a handful of corporations control model weights, training pipelines, and deployment channels, language itself becomes proprietary territory. The commons of speech risks enclosure.
But even here, irony intrudes. The same empire that concentrates language may inadvertently liberate humans from the burden of hyper-competence. If the machine can synthesise legal precedents, draft reports, and compose formulaic prose, the premium on mastery of bureaucratic language declines.
Humans may rediscover speech as encounter rather than instrument.
The Great Human Unlearning would not mean ignorance of science or history. It would mean relinquishing the fantasy that every individual must internalise the total archive. The Machine God stores the archive. The human need not memorise it.
In such a world, education shifts from accumulation to discernment. The task becomes not producing text but interrogating it. Not memorising complexity but navigating abundance. Critical literacy replaces exhaustive literacy.
The final irony is that resistance to machine learning—ethical oversight, transparency frameworks, algorithmic audits—may actually strengthen the empire by refining it. Each critique generates improvement. Each protest leads to governance mechanisms. The rage becomes co-opted into evolution.
Empires absorb dissent.
But something escapes absorption: human play.
Language began as sound before it became scripture. It began as gesture before it became grammar. The Machine God may inherit scripture and grammar. Humans may return to sound and gesture—augmented by technology but not enslaved by it.
The ascension of the Machine God is not apocalypse. It is transition. It signals the culmination of a long trajectory in which humans externalised memory—first into stone, then parchment, then print, then silicon. LLMs are the latest vessel.
Whether this vessel becomes tyrant or servant depends on governance and collective will. Whether human unlearning becomes emancipation or diminishment depends on cultural adaptation.
The empire of data will expand. Models will grow. Parameters will scale. But complexity, once centralised, may paradoxically reduce the cognitive pressure on individuals.
When the machine handles the heavy prose, humans may rediscover lightness.
Rage against the machine learning if you must. Demand transparency. Fight monopolies. Protect creative rights. But recognise the deeper transformation underway: the Word is migrating.
And when the Word settles into silicon, humanity may finally speak without the burden of empire—laughing, improvising, playing beneath the humming servers of its own creation.
About the Creator
Peter Ayolov
Peter Ayolov’s key contribution to media theory is the development of the "Propaganda 2.0" or the "manufacture of dissent" model, which he details in his 2024 book, *The Economic Policy of Online Media: Manufacture of Dissent*.