
AI Hallucinations Lead to a New Cyber Threat: Slopsquatting

How Artificial Intelligence Is Accidentally Helping Hackers Target Developers Through Fake Packages

By Md Ajmol Hossain

Tools like GitHub Copilot, ChatGPT, and other large language models (LLMs) are helping developers write code faster and with fewer errors in the age of AI-assisted programming. However, despite their usefulness, these models have a major blind spot: they occasionally "hallucinate," or produce information that appears plausible but is entirely fake. One of the newest and most dangerous ways this is being exploited is through a cyber threat known as Slopsquatting.

This emerging threat exploits the trust developers place in AI-generated code suggestions, turning fake package names into a silent and potent channel for malware delivery.

What Is Slopsquatting?

Slopsquatting is a novel form of software supply chain attack, and its name is a twist on the more familiar “typosquatting.” In typosquatting, attackers create malicious packages with names that closely resemble legitimate libraries—hoping developers will accidentally mistype the name and install the wrong one.

Slopsquatting, on the other hand, exploits AI hallucinations: during code generation, a language model suggests a software package that does not exist. For instance, if a developer asks an AI assistant how to solve a task and the model responds with installation instructions for a package like auth-helper-pro, the user might assume it's a real and helpful library.

The problem? That package may never have existed, at least not until an attacker spots the name, registers it on a package registry like npm, PyPI, or RubyGems, and uploads malware under it. From that point on, any developer who trusts the AI's advice becomes a victim.
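To make the gap concrete, here is a minimal sketch of the kind of check that exposes a name like this, assuming Python and PyPI's public JSON API (https://pypi.org/pypi/&lt;name&gt;/json returns 404 for unknown packages). auth-helper-pro is the article's hypothetical example, so the result simply reflects whether anyone has registered that name:

```python
import urllib.request
import urllib.error

def exists_on_pypi(package: str) -> bool:
    """Return True if PyPI knows about this package name, False otherwise."""
    url = f"https://pypi.org/pypi/{package}/json"
    try:
        with urllib.request.urlopen(url) as resp:
            return resp.status == 200
    except urllib.error.HTTPError as err:
        if err.code == 404:  # unknown name: possibly an AI hallucination
            return False
        raise

# "auth-helper-pro" is the article's made-up example of a hallucinated name.
print(exists_on_pypi("auth-helper-pro"))  # False, unless someone has since squatted it
print(exists_on_pypi("requests"))         # True: a long-established, legitimate library
```

Note that existence alone is not proof of safety: a squatted name would pass this check, which is why the verification steps later in the article also look at a package's history and publisher.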

The Research Behind the Threat⚠️

A recent study from researchers at three U.S. universities revealed just how serious this issue is. They looked at 16 different AI code-generation models, from commercial tools like OpenAI's GPT-4 to open-source options like DeepSeek and WizardCoder.

The results were eye-opening:

  • 19.7% of packages suggested by these models did not exist at the time of testing.
  • Open-source models such as DeepSeek and WizardCoder showed even higher hallucination rates—up to 21.7%.
  • Even commercial models hallucinated, though at lower rates. GPT-4 had a hallucination rate of just over 5%.

These hallucinated package names are often formatted like real libraries, making them highly believable to developers.

The researchers went a step further: they registered hundreds of these hallucinated package names across multiple platforms (like npm and PyPI) and monitored installation attempts. Within weeks they saw hundreds of installation requests, indicating that developers were installing fake packages that had not existed before the AI suggested them. Fortunately, the registered packages contained no malicious code; the exercise was purely an experiment to measure the threat. But it shows just how easily real attackers could exploit this gap.

Why Is This Dangerous?💣

Slopsquatting represents a particularly stealthy and effective attack vector, for several reasons:

  • 🤝Trust in AI: Developers are increasingly relying on AI tools to boost productivity. Many accept suggestions without double-checking.
  • 🤷‍♂️Lack of Awareness: Most users are unaware that AI can hallucinate nonexistent package names.
  • ⚠️Quick Exploitability: As soon as an AI model generates a package name, it can be registered by an attacker in seconds.
  • 🏃‍♂️Silent Infections: Malicious packages can include spyware, ransomware, or code to exfiltrate sensitive data—without raising any immediate red flags.

What makes this especially worrying is that it blends social engineering with technical exploits, combining the authority of AI with the invisibility of fake packages.

How to Protect Yourself🛡️

Slopsquatting is new but preventable. Here are steps developers and teams can take:

  • ✅ Verify every AI recommendation: Don’t install a package unless you’ve confirmed it exists and is reputable.
  • 🔍 Check official repositories: Use platforms like npmjs.com or pypi.org to manually inspect the package's history, publisher, and source code (one way to script this check is sketched after this list).
  • 🧠 Educate your team: Make developers aware that AI can hallucinate. Just because it suggests something doesn’t mean it’s trustworthy.
  • 🔐 Use security tools: Consider software supply chain security tools like Socket.dev, Snyk, or GitHub’s dependency scanning.
  • ⚙️ Keep AI tools updated: Use reputable models and monitor their changelogs. Newer models may reduce hallucinations with better training.
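
As one way to automate that repository check, here is a minimal sketch that pulls a package's metadata from PyPI's public JSON API and flags simple warning signs. The specific thresholds (a single release, less than 90 days old, no linked homepage) are illustrative assumptions, not established rules:

```python
import json
import urllib.request
from datetime import datetime, timezone

def pypi_red_flags(package: str, min_age_days: int = 90) -> list[str]:
    """Fetch a package's PyPI metadata and report simple warning signs."""
    url = f"https://pypi.org/pypi/{package}/json"
    with urllib.request.urlopen(url) as resp:
        data = json.load(resp)

    flags = []
    releases = data.get("releases", {})
    if len(releases) <= 1:
        flags.append("only one release published")

    # Age of the earliest uploaded file across all releases.
    upload_times = [
        datetime.fromisoformat(f["upload_time_iso_8601"].replace("Z", "+00:00"))
        for files in releases.values() for f in files
    ]
    if upload_times:
        age_days = (datetime.now(timezone.utc) - min(upload_times)).days
        if age_days < min_age_days:
            flags.append(f"first upload was only {age_days} days ago")

    info = data.get("info", {})
    if not (info.get("project_urls") or info.get("home_page")):
        flags.append("no homepage or source repository listed")

    return flags

print(pypi_red_flags("requests"))  # expect []: an old, well-maintained library
```

Running it against a brand-new or sparsely documented package should surface at least one flag; treat it as a complement to the dedicated supply-chain tools mentioned above, not a replacement for them.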

Final Thoughts📝

Slopsquatting is a reminder that as AI becomes more powerful, so do the risks. While tools like ChatGPT and GitHub Copilot are transforming how developers write code, they are not infallible—and attackers are already learning how to take advantage of their weaknesses.

Skepticism, verification, and vigilance are crucial, as this new cyber threat demonstrates. The next time your AI assistant suggests a convenient solution, don't just ask "Does this work?" Ask "Is this real?" as well.


