
If AI Makes Work Obsolete, Who Controls the Food Supply?

A Guardian analysis argues that the real AI threat isn’t unemployment itself — it’s who holds power over income, taxes, and basic survival if wages disappear.

By Behind the Tech · Published about 5 hours ago · 4 min read

What Happened (Facts)

A Guardian analysis piece by economist-journalist Eduardo Porter argues that the biggest missing debate in today’s AI panic is not “Will AI take our jobs?” but a more basic question: if human labor becomes economically irrelevant, how will people afford to live — and who decides what they get?

Porter frames the current moment as familiar in one sense: since the Industrial Revolution, new technologies have repeatedly triggered fears of mass joblessness, yet most working-age adults have continued to find employment. Still, he warns that this historical pattern does not eliminate the possibility that AI could eventually reduce labor’s role far more dramatically than past automation waves.

The article highlights a common optimistic claim from some AI leaders: that AI could generate extraordinary prosperity. Porter cites OpenAI CEO Sam Altman as expressing a belief that “the future can be vastly better than the present” because AI will make society extremely wealthy. Porter presents this as a hopeful vision — but one that depends on how wealth is distributed, not merely on how much wealth is created.

The piece’s core factual points include:

AI’s economic upside is plausible, but distribution remains a political problem: if AI boosts productivity, the central issue becomes how the benefits are shared among people who may not own the technologies producing that wealth.

Porter breaks the policy challenge into two parts:

designing a functioning redistribution system as labor income shrinks

confronting the deeper political issue of power — who gets to make the rules in a post-labor economy

He argues that advanced economies currently rely heavily on labor income as a base for taxation. If wages decline substantially, governments may struggle to fund services and transfers using existing fiscal structures.

Porter references a warning made at the AI Impact Summit in New Delhi by UN Secretary General António Guterres: that AI’s future should not be decided by a handful of countries or “the whims of a few billionaires,” and that guardrails are needed to preserve human agency and accountability.

He cites proposals from economists Anton Korinek and Lee Lockwood (University of Virginia) about how public finance might adapt if labor income collapses. The ideas discussed include shifting first toward consumption taxes, and then more heavily toward capital taxation in an AI-dominant economy.

The article also describes other policy tools attributed to economists and commentators, such as:

taxes aimed at slowing labor-displacing automation in early stages

taxes on land, spectrum, data, or monopoly rents

collecting taxes in shares rather than cash to build a public stake in AI firms over time

more radical options, such as the government taking an equity stake upfront to distribute broadly

Porter also notes political obstacles, arguing that raising taxes significantly — especially taxes on capital — would require persuading powerful owners of AI systems to share the gains.

What It Means (Analysis)

Porter’s argument is ultimately about power, not technology.

His most important move is shifting the AI conversation away from whether jobs disappear and toward what happens to democracy when wages no longer matter. In modern societies, the political leverage of ordinary people is linked to their economic role: workers matter because their labor is necessary. If AI systems do most economically valuable work, that bargaining power could weaken — and the consequences could be structural.

1) “Prosperity” doesn’t answer “who eats”

A society can be rich in aggregate while still leaving many people insecure. Porter warns that a future where AI creates huge output could still be a future where only owners of AI infrastructure and models control the gains. In that scenario, the key question becomes: is consumption a right guaranteed by policy, or a privilege granted by those who own the machines?

This is why his framing (“who decides who gets to eat?”) is intentionally blunt. It forces attention onto governance, not gadgetry.

2) The tax-base problem is bigger than it sounds

Today’s states fund themselves largely by taxing work (income taxes, payroll taxes) and the spending that work enables. If wages shrink toward zero, the state’s ability to function becomes uncertain unless new taxation models emerge.

Porter’s cited ideas — shifting toward consumption taxes, then toward capital taxes, and possibly taxing fixed resources like land or data — are attempts to answer a hard question: how do you fund the state when people no longer earn? But he emphasizes that the technical design is easier than the politics. In other words, “We can imagine a tax system that works; can we imagine the people with power agreeing to it?”

3) Alignment isn’t only about machines — it’s about owners

AI labs often talk about “alignment”: ensuring AI systems behave according to human goals. Porter argues the bigger alignment problem is social: aligning the goals of AI systems and their owners with broader public interests.

That’s a crucial distinction. Even perfectly “aligned” AI could still amplify inequality if the ownership structure concentrates power. The risk isn’t only rogue AI behavior; it’s lawful, normalized governance by a small class of owners who can shape rules, tax policy, and resource allocation to suit themselves.

4) “Network-states” as an escape hatch

Porter’s mention of wealthy technologists exploring “network-states” (private governance experiments outside traditional democratic systems) carries an ominous implication: if the rich can exit public obligations, redistribution becomes harder. And if AI makes them even more economically dominant, the incentive to opt out of democratic constraints could grow stronger.

5) The article is a warning about timing

A subtle but important point is that many of the “big ideas” (equity-based taxation, public stakes in AI firms, stronger capital taxation) would need to be implemented before AI power becomes too concentrated. Porter implies that once the winners are entrenched, reform becomes far less likely.

