🧨 Watch out for Ghost Packages: How AI and Typo Errors Can Put Your Code at Risk

🧑‍💻 "I just installed a package suggested by my AI assistant."
☠️ "And now my CI pipeline is compromised."

In modern software development, speed is everything. But this constant race hides pitfalls that are often invisible, capable of catching even the most experienced teams. Among them: package hallucination and slopsquatting.

These two phenomena, linked respectively to the use of artificial intelligence in development and to human typing errors, are increasingly being exploited by cybercriminals to attack the software supply chain.

Let's look in detail at what they are and how to defend against them.

🔗 Do you like Techelopment? Check out the site for all the details!

🤖 1. Package Hallucination: When AIs Invent Packages

🧠 What is it?

Package hallucination is the phenomenon whereby generative AI models, such as GitHub Copilot, ChatGPT, or Cody, suggest package names that don't actually exist in the official repositories (PyPI, npm, etc.).

๐Ÿ” Why does this happen?

Language models generate text based on probabilities. So, if a name seems plausible, they can suggest it even if it has never been published.

🧪 Example:

# Suggested by an AI assistant
import fastjwt # ❌ nonexistent package

⚠️ What's the risk?

An attacker can register that nonexistent package, knowing that it will likely be suggested by AI models. If they publish it with malicious code inside, anyone who installs it is at risk.

🧨 This type of attack exploits blind trust in automated suggestions and represents a new front for dependency security.
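A lightweight guard against hallucinated names is to check each import an assistant suggests against the registry before installing anything. The sketch below is illustrative, not a complete vetting tool: it extracts top-level module names from a snippet and offers a helper that queries PyPI's public JSON API. Keep in mind that an import name does not always match the PyPI package name, so treat a miss as a prompt for manual review.

```python
import re
import urllib.error
import urllib.request

def imported_packages(code):
    """Extract top-level module names from import statements in a snippet."""
    pattern = re.compile(r"^\s*(?:import|from)\s+([A-Za-z_]\w*)", re.MULTILINE)
    return set(pattern.findall(code))

def exists_on_pypi(name):
    """Return True if `name` is published on PyPI (uses the public JSON API)."""
    url = f"https://pypi.org/pypi/{name}/json"
    try:
        with urllib.request.urlopen(url, timeout=10) as resp:
            return resp.status == 200
    except urllib.error.HTTPError:
        return False

# Vet an AI-suggested snippet before running `pip install` on anything in it
snippet = "import fastjwt\nfrom requests import get"
for pkg in sorted(imported_packages(snippet)):
    print(pkg)  # check each name with exists_on_pypi(pkg) and by hand
```

Existence alone proves nothing, of course: a slopsquatted package exists precisely because an attacker registered it. The check only tells you which names deserve the manual verification described below.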


🧼 2. Slopsquatting: The AI Evolution of Typosquatting

✍️ What is it?

Slopsquatting is an emerging attack closely related to package hallucination. Unlike traditional typosquatting, it does not rely solely on human typing errors, but on AI-generated hallucinated names.

๐Ÿ“ Definition – Typosquatting:

A cyber attack in which an attacker registers a package with a name very similar to a legitimate one, hoping that someone will misspell the name and install it by mistake.

๐Ÿ“ Definition – Slopsquatting:

A technique that exploits package names invented by AI: names that have never existed but sound plausible. The attacker registers them in the hope that a developer will copy and paste the code suggested by the AI.

⚠️ Why is it dangerous?

Since AI hallucinations often repeat, attackers can take advantage by targeting those recurring names. They create a fake package with the same name and insert malicious code into it. The trick works even better because many of these names sound similar to real packages, making the deception difficult to spot.

Keep in mind:
  • These are no longer human errors, but repeatable systemic errors.

  • Attackers can monitor names suggested by AI assistants and register them in advance.

📈 Real data:

  • In a study of over 500,000 AI suggestions, more than 20% of the suggested packages did not exist.

  • Of these, many were subsequently registered with malicious code.

📃 Real-world examples:

| Type | Legitimate Name | Malicious Version |
| --- | --- | --- |
| Typosquatting | requests | requestss, reqeusts |
| Slopsquatting | (no original) | fastjwt, simpleapi |

🚀 Slopsquatting is more insidious because the user can't know if the name was correct or not: they trust the AI.
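Because many malicious names sit one typo away from a real package, a simple lookalike check can catch the typosquatting half of the problem. The sketch below uses `difflib` from the standard library against a short, purely illustrative list of popular packages; a real check would use a much larger list, and it would not catch slopsquatted names like fastjwt, which resemble nothing and must instead be verified for existence and reputation.

```python
import difflib

# Illustrative shortlist; a real check would use a full popularity ranking.
POPULAR_PACKAGES = ["requests", "numpy", "pandas", "flask", "django", "scipy"]

def suspicious_lookalike(name, known=POPULAR_PACKAGES, cutoff=0.85):
    """Return the popular package that `name` nearly matches, or None.

    An exact match is fine; a near match (e.g. one typo away) is suspicious.
    """
    if name in known:
        return None
    matches = difflib.get_close_matches(name, known, n=1, cutoff=cutoff)
    return matches[0] if matches else None

print(suspicious_lookalike("requestss"))  # near miss of "requests": flagged
print(suspicious_lookalike("fastjwt"))    # resembles nothing popular: None
```

The cutoff is a judgment call: too low and every name gets flagged, too high and single-character swaps like reqeusts slip through.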

 

🛡️ 3. How to Defend Yourself

✅ Best Practices

๐Ÿ” Manual Package Verification

  • Search on the official website (PyPI, npm, etc.)

  • Check maintainer, number of downloads, issues, updates

📦 Lock versions

  • Use files like requirements.txt, poetry.lock, package-lock.json

    • 🧾 requirements.txt (Python): Lists the exact libraries and versions to install, e.g. requests==2.31.0

    • 🧾 poetry.lock (Python with Poetry): Strictly locks all direct and indirect dependencies.

    • 🧾 package-lock.json (JavaScript with npm): Does the same for Node.js projects, locking the versions actually installed.

  • Avoid pip install packagename without pinning a specific version
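As a minimal illustration (the package names and version numbers here are just examples), a pinned requirements.txt looks like this:

```text
# requirements.txt: exact pins, one dependency per line
requests==2.31.0
flask==3.0.3
```

Installing with pip install -r requirements.txt then reproduces exactly these versions; tools like pip-tools or Poetry can generate and maintain such pins for you.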

🧪 Regular auditing

  • Recommended tools:

    • pip-audit

    • safety

    • Bandit

    • npm audit, yarn audit
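Typical invocations look like this (run from the project root; both tools check your declared dependencies against known-vulnerability databases):

```shell
# Python: audit the pinned dependencies in requirements.txt
pip-audit -r requirements.txt

# Node.js: audit the dependency tree resolved from the lockfile
npm audit
```

Running one of these in CI turns the audit from an occasional chore into a gate that every dependency change has to pass.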

🧠 Team education

  • Tell the team that not all AI suggestions are reliable

  • Train dev and DevOps on these emerging risks

❌ Don't blindly trust AI

  • Always verify the suggested names

  • If the package is unfamiliar to you, check carefully before installing


🔬 Conclusion: Trust, but Verify

The future of development lies in automation and AI. But like any powerful tool, these too must be used with awareness.

🧨 A poorly suggested or hastily typed package can seem like a small thing. But it can become a ticking time bomb for your infrastructure.

๐Ÿ” The security of your supply chain starts with the little things. And often, with the name of a package.



Follow me #techelopment

Official site: www.techelopment.it
facebook: Techelopment
instagram: @techelopment
X: techelopment
Bluesky: @techelopment
telegram: @techelopment_channel
whatsapp: Techelopment
youtube: @techelopment