
When the Internet Forgets: 2 Million News Articles Deleted from AI’s Memory

Inside the Common Crawl controversy — and why the future of AI could depend on who controls the past.

By Shakil Sorkar · Published 2 months ago · 3 min read

[Cover image: vintage newspapers fading into lines of digital code, a symbol of how the internet's memory is being rewritten in the age of AI and data control.]

The internet never forgets — or so we thought.

In late 2025, a quiet yet seismic shift took place in the world of artificial intelligence. Common Crawl, the nonprofit that provides one of the largest open datasets used to train AI models like ChatGPT, suddenly deleted over 2 million news articles from its archives.

At first glance, that may sound like a technical clean-up.

But in truth, it’s a story about power, privacy, and who gets to decide what knowledge machines — and by extension, humans — are allowed to access.

🧠 What Is Common Crawl — and Why It Matters

Common Crawl isn’t a household name, but its fingerprints are all over the modern web.

Since 2008, it’s been quietly crawling billions of web pages each month, storing text that powers search engines, language models, and research tools.

When AI companies say their systems are trained on “public internet data,” Common Crawl is often where that data came from.

Think of it as the digital library of humanity’s collective memory.

So when millions of news articles vanished overnight, people noticed.
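Removals like this often hinge on mechanics far simpler than a legal takedown: a publisher can refuse Common Crawl's crawler, which identifies itself as "CCBot", with a couple of lines in robots.txt. As a minimal sketch (the robots.txt content below is a hypothetical example, not any real site's file), here is what such a rule looks like and how Python's standard library interprets it:

```python
# Sketch: blocking Common Crawl's crawler ("CCBot") via robots.txt,
# checked with Python's built-in robots.txt parser.
# The robots.txt text below is hypothetical, not from any real site.
from urllib.robotparser import RobotFileParser

robots_txt = """\
User-agent: CCBot
Disallow: /

User-agent: *
Allow: /
"""

parser = RobotFileParser()
parser.parse(robots_txt.splitlines())

# Common Crawl's bot is refused everywhere on the site...
print(parser.can_fetch("CCBot", "https://example.com/news/article-1"))       # False
# ...while ordinary crawlers (and thus search engines) remain welcome.
print(parser.can_fetch("Mozilla/5.0", "https://example.com/news/article-1"))  # True
```

This is why the story is about archives, not just crawling: a robots.txt rule only stops future visits, so publishers who wanted their journalism out of AI training data also had to demand deletion of pages Common Crawl had already stored.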

⚖️ Why 2 Million Articles Disappeared

According to reports, Dutch publishers and news organizations demanded their content be removed from Common Crawl’s archives, citing copyright and data-use concerns.

They argued that AI models were “learning” from their journalism without credit or payment.

In response, Common Crawl complied — erasing millions of pages of reporting, investigations, and commentary.

For some, it was a victory for digital rights.

For others, it was the first sign that the open web might be closing.

🤖 The AI Industry’s Silent Dependence

Large language models — from OpenAI's GPT series to Anthropic's Claude — rely on vast, diverse training data.

When datasets shrink, so does AI’s ability to reason about the world.

Removing millions of legitimate news stories means these systems lose not only facts but also context — the nuance that separates truth from noise.

If AI models can’t learn from journalism, how will they distinguish misinformation from credible reporting?

That’s the paradox: protecting creators might unintentionally blind the very tools that could help fight fake news.

💬 Publishers vs. Progress

The debate isn’t new. News outlets have spent years accusing tech companies of exploiting their work for ad revenue and engagement.

But this new frontier — AI training data — raises a deeper question:

Should knowledge created for the public good stay open, even when it’s inconvenient for business?

Some publishers argue they’re simply defending their rights in an unfair digital economy.

Others warn that restricting data access turns the internet from a public library into a series of gated communities.

Either way, the consequences are global.

🌍 The Domino Effect

Now that Dutch publishers have succeeded, others may follow.

If major media networks — say, in the US, UK, or Asia — make similar demands, entire decades of journalism could vanish from the AI datasets shaping future search engines and assistants.

That could mean AI trained on an incomplete memory — biased, blind to history, and easily manipulated by whoever controls the remaining data.

In other words, we may be witnessing the first battle in a new kind of censorship: not by governments, but by contracts.

🔍 What This Means for the Future of AI

This isn’t just a data story — it’s a cultural one.

AI systems mirror us. They learn from our stories, our mistakes, our discoveries.

When we start editing their memories, we edit our collective understanding of truth.

The real question isn’t whether AI should read our work.

It’s whether a future built on selective knowledge can ever truly be intelligent.

💭 A Future That Remembers

Maybe the solution lies somewhere in between — where journalists are compensated fairly, but access to information remains open.

Because every time we erase a headline, we don’t just protect rights — we risk losing history.

And an AI that forgets history might one day forget why truth matters at all.

#AI #CommonCrawl #DigitalRights #TechEthics #ArtificialIntelligence #NewsMedia #DataPrivacy #Censorship #VocalMedia #DigitalMemory #Technology #OpenInternet

© 2025 Shakil Sorkar. All rights reserved.

Originally written and published on Vocal Media.

Cover image created with AI assistance.



