
Language After Power

AI and the Fear of Informational Apocalypse

By Peter Ayolov

Abstract

This article examines recent warnings about artificial intelligence delivered at the World Economic Forum by Yuval Noah Harari, situating them within a broader political economy of language and power. While public discourse frames AI as an emerging autonomous intelligence threatening humanity, this article proposes an alternative interpretation: the primary fear articulated by global elites is not independent artificial intelligence but the democratisation of advanced linguistic power. Drawing on theories of language, the visibility of power, and informational exposure, the article argues that large language models threaten existing systems of authority by enabling unprecedented access to linguistic production, interpretation, and disclosure. AI does not merely automate language; it accelerates what can be described as an informational apocalypse, understood in its original sense as revelation. The article concludes by suggesting that contemporary anxieties surrounding AI governance reflect elite concern over the loss of narrative control rather than genuine existential risk, signalling a possible reconfiguration of authority away from financial and institutional actors toward linguistic and philosophical power.

Keywords

Artificial intelligence, language, power, apocalypse, governance, large language models, political economy of AI, World Economic Forum, exposure, authority

Introduction

In a widely circulated address at the World Economic Forum in Davos, historian Yuval Noah Harari warned that artificial intelligence is no longer a neutral tool but an autonomous agent capable of transforming language, law, religion, and political power. According to Harari, AI systems now operate at a level where they can generate, manipulate, and optimise linguistic structures more effectively than humans, thereby threatening the foundations of human identity and governance. Delivered before an audience composed largely of political leaders, financial elites, and technology executives, the warning framed AI as a potential successor to human authority in all domains structured by words.

This article does not dispute Harari’s descriptive claims regarding the growing linguistic competence of AI systems. Instead, it challenges the underlying assumption that elite anxiety stems primarily from the emergence of independent machine intelligence. A different hypothesis is advanced: the dominant fear expressed at Davos is not that AI will rule humanity, but that advanced linguistic power will escape elite control. The threat is not artificial intelligence as sovereign agent, but artificial language as a widely accessible instrument capable of exposing, destabilising, and delegitimising existing power structures.

AI, Language, and the Architecture of Authority

Human dominance has historically rested less on physical strength than on linguistic coordination. Large-scale cooperation, law, religion, finance, and ideology are all systems constructed through shared narratives, textual authority, and symbolic consensus. Control over language has therefore functioned as control over reality itself. Those who define legitimate vocabularies, acceptable interpretations, and authoritative texts also define the boundaries of political and moral order.

Harari is correct to observe that AI systems outperform humans in the manipulation of linguistic tokens. Large language models assemble arguments, narratives, and symbolic structures with unprecedented speed and scale. Yet this capability alone does not explain elite panic. Throughout history, ruling classes have tolerated technologies that outperform humans in specific domains, provided those technologies remain scarce, centralised, and governable.

What distinguishes AI is not intelligence per se, but scalability and diffusion. Linguistic production, once monopolised by institutions such as churches, universities, media corporations, and states, is now reproducible at negligible cost. This erodes what may be termed linguistic rent: the advantage derived from exclusive access to narrative production and interpretation.

Davos, Ownership, and the Paradox of Warning

A striking paradox characterises contemporary AI discourse at Davos. The same actors who issue urgent warnings about existential AI risk are often major investors, shareholders, or beneficiaries of AI corporations. Unlike speculative future superintelligences, existing AI infrastructure is highly centralised and immediately controllable. Cloud access can be restricted, platforms can be shut down, and regulatory pressure can be exerted within minutes.

This raises a critical question: why warn so urgently against a technology that remains firmly embedded within existing corporate and governmental structures? The answer lies not in fear of runaway machines, but in fear of loss of monopoly. What truly threatens established power is not AI owned by corporations, but AI placed in the hands of groups historically excluded from narrative authority: political dissidents, radical religious movements, marginal intellectual communities, decentralised social movements, and post-capitalist ideologies.

AI as Informational Apocalypse

The concept of apocalypse is commonly misunderstood as catastrophe. In its original Greek sense, apokalypsis denotes unveiling or revelation. From this perspective, AI represents an informational apocalypse: a mechanism that accelerates disclosure, cross-referencing, archival recovery, and narrative recombination on a scale previously impossible.

Large language models can surface forgotten documents, connect dispersed facts, reconstruct suppressed histories, and challenge official narratives with extraordinary efficiency. Power systems depend on opacity, fragmentation, and selective memory. When everything can be compared, summarised, and exposed, authority weakens. As Huntington observed, power often survives through invisibility; once placed in full light, it dissolves.

This explains why elite concern focuses obsessively on regulation, alignment, and narrative containment. The fear is not that AI will invent new gods, but that it will reveal old lies.

Language, Ideology, and the Collapse of Narrative Control

Harari argues that religions, laws, and identities built on words are especially vulnerable to AI takeover. A different interpretation is possible. AI does not inherently undermine these systems by replacing them; it undermines them by pluralising interpretation. When texts can no longer be monopolised by priesthoods, legal elites, or academic gatekeepers, their authority erodes.

This creates conditions for what might be described as a linguistic revolution. Not necessarily a return to socialism or communism in classical economic terms, but a challenge to narrative capitalism itself. Meaning production becomes decentralised. Ideological coherence fragments. Competing interpretations proliferate faster than institutions can stabilise them.

Philosophers Versus Bankers

The speculative claim that philosophers might one day replace bankers as decision-makers is less utopian than it appears. Financial power depends on symbolic trust, legal fictions, and narrative stability. When these linguistic foundations destabilise, technical expertise alone becomes insufficient. Societies then turn toward those capable of meaning-making rather than wealth accumulation.

The visible discomfort of financial elites engaging in cultural commentary illustrates this tension. Despite vast resources invested in public relations, billionaire figures struggle to perform convincing moral or philosophical authority. Wealth does not translate seamlessly into wisdom once linguistic legitimacy is no longer institutionally guaranteed.

Conclusion

AI does not herald the end of language, nor does it inevitably produce machine sovereignty. Its true disruptive force lies in linguistic exposure. By lowering the cost of interpretation, synthesis, and disclosure, AI threatens the monopolisation of meaning upon which modern power rests. The warnings voiced at Davos should therefore be read less as humanitarian concern and more as defensive rhetoric aimed at preserving narrative control.

The future conflict is not between humans and machines, but between centralised authority and distributed linguistic power. Whether this results in chaos, renewal, or a new form of political order remains uncertain. What is clear is that language, once again, has become the primary battlefield of power.

References

Harari, Y. N. (2026). Yuval Noah Harari warns AI will take over language, law, and power [Video]. Address at the World Economic Forum Annual Meeting, Davos. DRM News. https://www.youtube.com/watch?v=QxCpNpOV4Jo


About the Creator

Peter Ayolov

Peter Ayolov’s key contribution to media theory is the “Propaganda 2.0”, or “manufacture of dissent”, model, which he details in his 2024 book, The Economic Policy of Online Media: Manufacture of Dissent.
