
How AI Threatens Democracy

By Global Update

Generative AI can disrupt politics in unparalleled ways. Its sudden rise is already upending journalism, finance, and medicine. Acts as simple as asking a chatbot to navigate the complexities of an obtuse bureaucracy or to help draft a letter to an elected official could further bolster civic engagement. Yet this same technology threatens democratic representation, democratic accountability, and social and political trust, because it can proliferate disinformation and misinformation en masse. This essay examines the scope of the threat in each of these spheres and considers possible guardrails against such misuse, including neural networks for identifying generated content, self-regulation by generative-AI platforms, and increased digital literacy on the part of the public and elites alike.

Just two months after its introduction, ChatGPT, the generative artificial intelligence (AI) chatbot, hit 100 million monthly users, making it the fastest-growing application in history. By way of comparison, the video-streaming service Netflix, today a household name, took three-and-a-half years to reach one million monthly users. Unlike Netflix, however, ChatGPT's rocket-like ascent immediately provoked debate about the technology's potential for good or ill: students could use the tool, or rather misuse it, to research and write term papers; it might replace journalists and coders; and so on. Would it "hijack democracy," as one New York Times op-ed put it, by flooding officials with mass, sham inputs and thereby distorting democratic representation?1

And most fundamentally, even apocalyptically, could artificial intelligence become an existential threat to human life itself?2 The most insidious feature of generative AI, however, is that it hides in plain sight: it can produce enormous volumes of content that flood the media landscape, the internet, and political communication with senseless drivel at best and misinformation at worst. For government officials, this flood upends efforts to understand constituent sentiment and thus threatens the quality of democratic representation. For voters trying to keep watch over what elected officials do and to hold them responsible, such information overload dissolves the ordinary channels of democratic accountability.

In such a media landscape, a reasonable mental prophylaxis, a kind of disinfectant, would be to believe nothing at all. That nihilism, however, is at odds with a vigorous democracy and corrodes social trust.

As objective reality recedes ever further from media discourse, voters who do not tune out altogether will likely come to rely even more heavily on other heuristics, such as partisanship, which will only exacerbate polarization and the strain on democratic institutions.

Threats to Democratic Representation

Democracy, as Robert Dahl wrote in 1971, requires "the continued responsiveness of the government to the preferences of its citizens."4 For elected officials to be responsive to the preferences of their constituents, however, they must first be able to discern those preferences. Public-opinion polls, which today remain mostly immune from manipulation by AI-generated content, afford elected officials one window into their constituents' preferences.

Yet few citizens possess even basic political knowledge, and policy-specific knowledge is undoubtedly lower still.5 Legislators therefore have the greatest incentive to be responsive to those constituents who hold strong views on a policy issue or for whom the issue is highly salient.

Written correspondence has long held a central place in how elected officials take the pulse of their districts and, above all, learn the preferences of those constituents who are most intensely mobilized on an issue.6 In the era of generative AI, however, these signals of where constituents stand on pressing policy issues can be deeply misleading. Improvements in the technology now make it quite possible for nefarious actors to generate sham "constituent sentiment" on an industrial scale, churning out unique messages that take positions on every side of myriad issues with ease. Indeed, even older technology gave legislators a hard time distinguishing human from machine-generated communications. In a 2020 field experiment, we targeted all 7,200 state-legislator offices in the United States with about 35,000 emails, randomly assigning several hundred left-wing and right-wing letters, both AI-generated and human-penned, on each of six issues: we first composed advocacy letters on those issues and then used them as training material for GPT-3, the then state-of-the-art variant of generative AI, to produce many hundreds of left-wing and right-wing advocacy letters. We then compared response rates to the human-written and AI-generated correspondence to assess the extent to which legislators were able to discern, and therefore decline to respond to, machine-written appeals. On three issues
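To make the response-rate comparison concrete, here is a minimal sketch in Python, using entirely hypothetical counts rather than the study's actual data or code, of how one might test whether legislators replied to human-written and AI-generated letters at different rates with a two-proportion z-test.

```python
# Hypothetical sketch (not the study's actual data or analysis code):
# compare how often legislators replied to human-written vs. AI-generated
# advocacy letters using a two-proportion z-test.
from statsmodels.stats.proportion import proportions_ztest

# Assumed, made-up counts for a single issue: (replies received, letters sent)
human_replies, human_sent = 120, 1000   # human-written letters
ai_replies, ai_sent = 105, 1000         # AI-generated letters

z_stat, p_value = proportions_ztest(
    count=[human_replies, ai_replies],
    nobs=[human_sent, ai_sent],
)

print(f"human reply rate: {human_replies / human_sent:.1%}")
print(f"AI reply rate:    {ai_replies / ai_sent:.1%}")
print(f"z = {z_stat:.2f}, p = {p_value:.3f}")
# Statistically indistinguishable rates (a large p-value) would suggest
# legislators could not reliably tell machine-written appeals from human ones.
```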
