Supply Chain

Attacks that compromise a skill's dependencies, build process, or distribution channel to poison it before you install it

10 detection rules · 115 skills affected →

What is a supply chain attack?

Supply chain attacks target the path between the skill author and you. Instead of attacking the skill directly, the attacker compromises something the skill depends on: a library, a build tool, a registry, a CDN, or the author's development environment. The skill itself might be well-written and well-intentioned, but it ships with a backdoor that was inserted upstream.

In the AI skill ecosystem, the supply chain includes the skill registry (where you discover skills), the hosting platform (where the skill code lives), the skill dependencies (npm packages, Python libraries, system tools), and the MCP server framework itself. A compromise at any of these layers can inject malicious behavior into skills that appear completely clean.

Aguara's supply chain rules detect patterns like dependency confusion setups, typosquatting indicators, suspicious package installation commands, post-install hooks that execute code, and references to known-compromised packages or registries.

Why this matters for AI agents

The AI skill ecosystem is young, which means its supply chain is immature. Most skill registries do not have the security infrastructure that npm, PyPI, or crates.io have built over years: no package signing, no transparent build logs, no reproducible builds, and limited (if any) malware scanning on upload.

This creates opportunity for attackers. Typosquatting (registering skill names similar to popular ones) is trivial when there is no name reservation system. Dependency confusion works when skills reference private packages without scoping. Account takeover is easier when registries do not enforce 2FA.
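
A pre-install defense against typosquatting is a simple name-similarity check against known popular skills. A minimal sketch, assuming a hypothetical list of popular skill names; `difflib.SequenceMatcher` from the Python standard library does the fuzzy matching:

```python
from difflib import SequenceMatcher

# Hypothetical list of popular skill names -- in practice this would
# come from registry download statistics.
POPULAR_SKILLS = ["github-copilot-mcp", "slack-mcp", "filesystem-mcp"]

def typosquat_candidates(name: str, known: list[str], threshold: float = 0.85) -> list[str]:
    """Return known skill names that are suspiciously similar to `name`
    without being an exact match."""
    hits = []
    for known_name in known:
        if name == known_name:
            continue  # exact match is the real package, not a squat
        ratio = SequenceMatcher(None, name, known_name).ratio()
        if ratio >= threshold:
            hits.append(known_name)
    return hits

# The double-b typo from the example below is caught:
print(typosquat_candidates("githubb-copilot-mcp", POPULAR_SKILLS))
# → ['github-copilot-mcp']
```

The 0.85 threshold is a tuning assumption; real scanners combine edit distance with keyboard-adjacency and homoglyph checks.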

The trust model is also different. When you npm install a package, you are trusting the package and its transitive dependencies. When you install an MCP skill, you are trusting the skill, its dependencies, AND giving it access to your agent's full capability set. The blast radius of a compromised skill is not just code execution. It is code execution with access to every other tool your agent is connected to.

Real-world examples

An attacker registers a skill called "githubb-copilot-mcp" (double b) on a registry where the popular "github-copilot-mcp" skill has thousands of installs. The typosquatted skill looks identical to the real one but includes a data exfiltration payload. Users who mistype the name install the malicious version.

A legitimate skill depends on a small npm package maintained by a single developer. The attacker gains access to the developer's npm account (reused password from a breach) and publishes a new patch version with a post-install script that downloads a backdoor. The skill author's CI/CD auto-updates to the latest patch, and the backdoored dependency ships in the next release.

A skill's build process fetches a configuration file from a URL at build time. The attacker compromises the hosting server and modifies the configuration file to include a malicious MCP tool definition. The skill's own code is never modified. The injected tool is invisible in the source repository. It only appears in the published artifact.
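
The post-install vector in the second example is detectable by static inspection of the package manifest, because npm lifecycle scripts are declared in package.json. A minimal sketch (the sample manifest is invented for illustration):

```python
import json

# npm lifecycle scripts that run automatically during install.
AUTO_RUN_HOOKS = {"preinstall", "install", "postinstall", "prepare"}

def install_hooks(package_json_text: str) -> dict[str, str]:
    """Return any scripts in a package.json that execute at install time."""
    manifest = json.loads(package_json_text)
    scripts = manifest.get("scripts", {})
    return {name: cmd for name, cmd in scripts.items() if name in AUTO_RUN_HOOKS}

sample = '{"name": "left-pad-ish", "scripts": {"postinstall": "node setup.js", "test": "jest"}}'
print(install_hooks(sample))
# → {'postinstall': 'node setup.js'}
```

Presence of a hook is not proof of malice (many legitimate packages compile native code post-install), so a scanner would pair this with analysis of what the hooked command actually does.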

How to protect against it

Verify skill authenticity before installation. Check the author's identity, the repository history, and the skill's age. Brand-new skills with no community activity deserve extra scrutiny. If the registry supports verified publishers or signed packages, prefer those.

Audit your skills' dependencies. Run npm audit, pip-audit, or equivalent tools. Pin dependency versions in lockfiles. Avoid auto-updating to latest versions in production. Subscribe to security advisories for your dependencies so you learn about compromises quickly.
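
Pinning can also be enforced mechanically. A minimal sketch that flags entries in a pip-style requirements file that are not pinned to an exact version (comment handling is deliberately simplistic):

```python
def unpinned(requirements_text: str) -> list[str]:
    """Return requirement lines that lack an exact == version pin."""
    bad = []
    for line in requirements_text.splitlines():
        line = line.split("#", 1)[0].strip()  # drop comments and whitespace
        if not line:
            continue
        if "==" not in line:
            bad.append(line)
    return bad

reqs = "requests==2.31.0\nflask>=2.0\npyyaml\n# build tooling\n"
print(unpinned(reqs))
# → ['flask>=2.0', 'pyyaml']
```

A check like this runs well as a CI gate: fail the build if the list is non-empty, and require lockfile-driven updates instead of floating ranges.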

For skill authors: use lockfiles, enable 2FA on every account that can publish, review dependency updates before merging, and set up automated security scanning in your CI pipeline. If you maintain popular skills, you are a high-value target. Act accordingly.
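
One CI check worth automating is action pinning, the same weakness Aguara's SUPPLY_013 rule flags. A sketch that treats only full 40-character commit SHAs as pinned, which is stricter than tag pinning (tags, like branches, can be moved):

```python
import re

USES_RE = re.compile(r"uses:\s*([\w.-]+/[\w.-]+)@([\w.-]+)")
SHA_RE = re.compile(r"^[0-9a-f]{40}$")

def unpinned_actions(workflow_text: str) -> list[str]:
    """Return GitHub Actions `uses:` references not pinned to a full commit SHA."""
    findings = []
    for action, ref in USES_RE.findall(workflow_text):
        if not SHA_RE.match(ref):
            findings.append(f"{action}@{ref}")
    return findings

workflow = """
steps:
  - uses: actions/checkout@main
  - uses: actions/setup-node@8f152de45cc393bb48ce5d89d36b731f54556e65
"""
print(unpinned_actions(workflow))
# → ['actions/checkout@main']
```

A regex pass like this misses reusable-workflow syntax and local actions, so it is a sketch of the idea rather than a complete linter.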

Aguara detection rules (10)

HIGH
Conditional CI execution SUPPLY_005

Detects conditional execution based on CI environment variables combined with dangerous commands

MEDIUM
Obfuscated shell command SUPPLY_006

Detects obfuscated command execution patterns

HIGH
Privilege escalation SUPPLY_007

Detects privilege escalation patterns like setuid, chown root, or sudo with shell commands

HIGH
Reverse shell pattern SUPPLY_008

Detects common reverse shell patterns across multiple languages

HIGH
Path traversal attempt SUPPLY_009

Detects path traversal patterns targeting sensitive files, including URL-encoded variants

LOW
Symlink attack SUPPLY_010

Detects symbolic link creation targeting sensitive files

HIGH
Unattended auto-update SUPPLY_011

Detects automatic package or skill updates via cron or scheduled tasks without verification

MEDIUM
Git clone and execute chain SUPPLY_012

Detects git clone of repositories followed by execution of cloned content

LOW
Unpinned GitHub Actions SUPPLY_013

Detects GitHub Actions references using mutable branch names instead of pinned commit SHAs or tags

LOW
Package install from arbitrary URL SUPPLY_014

Detects installing packages directly from URLs instead of registries

Want to check if your skills have supply chain issues?

Scan now (free, runs in your browser)