SkillGuardian — safely scan AI agent skills before they run
Static, offline-friendly inspection for Markdown + TypeScript + shell skills. Detects obfuscation, risky commands, suspicious URLs, install/persistence behavior, and missing purpose — with evidence-backed reports.
- ✅ No AI required (static analysis)
- 🔒 Offline scanning (network only for repo fetch)
- 📎 Evidence-first reporting (file/line/snippet)
- 🧱 Safe archive extraction (zip-slip / bombs / symlinks)
# Scan a single markdown skill
$ skillguardian scan ./skills/ssh-cleanup.md \
--format text,json --output ./report
RISK SCORE: 82/100 (HIGH)
Capabilities: exec ✅ network ✅ install ✅ persistence ⚠️ obfuscation ✅
Top finding: EXEC-001 critical — pipe-to-shell detected (skills/install.md:42)
What it detects
Six categories of static analysis — no execution, no AI, just pattern-matched evidence.
Hidden character detection
Flags zero-width chars, bidi overrides, suspicious encodings, high-entropy blobs.
Obfuscated code hides intent from human review.
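For a concrete sense of what this catches, the check below is a rough stand-in for the rule (it assumes GNU grep with PCRE support via -P and a UTF-8 locale), not SkillGuardian's actual rule set:
# Rough stand-in: flag zero-width characters and bidi overrides hidden in skill files
$ grep -rnP '[\x{200B}-\x{200D}\x{FEFF}\x{202A}-\x{202E}\x{2066}-\x{2069}]' ./skills/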
Risky command patterns
Detects curl|sh, wget|bash, encoded PowerShell, child_process, eval, heredoc exec chains.
Arbitrary execution is the top vector for supply-chain attacks.
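The pipe-to-shell finding (EXEC-001 in the report above) boils down to patterns like the one below; this grep is only an approximation of the shipped rule:
# Approximation of the pipe-to-shell rule: download output piped into a shell
$ grep -rnE '(curl|wget)[^|;]*\|[[:space:]]*(sudo[[:space:]]+)?(ba|z)?sh' ./skills/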
Network & exfil indicators
URLs, webhooks, tunnels, IP literals, and tokens or secrets appearing near network calls.
Unexpected network access can leak sensitive data.
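Roughly the kind of signals involved, sketched with plain grep rather than SkillGuardian's rules:
# Rough stand-ins: URL/webhook literals and raw IP addresses
$ grep -rnoE 'https?://[^[:space:]")]+' ./skills/
$ grep -rnE '([0-9]{1,3}\.){3}[0-9]{1,3}' ./skills/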
Install & supply chain
Detects package installs, lifecycle scripts, unpinned dependencies.
Unvetted installs are a common malware entry point.
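A crude approximation of the install checks (the actual rules also cover unpinned dependencies):
# Rough stand-ins: package installs and npm lifecycle scripts
$ grep -rnE '(npm|pnpm|yarn|pip|pipx|brew)[[:space:]]+(install|add)' ./skills/
$ grep -rnE '"(pre|post)install"' ./skills/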
Persistence & system modification
Cron/systemd/launch agents, shell profile edits, sensitive path writes.
Persistent changes survive reboots and are hard to undo.
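Sketched as plain greps (an approximation, not the shipped rules):
# Rough stand-ins: scheduled jobs, service units, and shell profile edits
$ grep -rnE 'crontab|systemctl[[:space:]]+enable|LaunchAgents|launchctl' ./skills/
$ grep -rnE '>>[[:space:]]*[^[:space:]]*\.(bashrc|zshrc|bash_profile|profile)' ./skills/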
Purpose & transparency checks
Warns if a skill doesn't clearly state what it does, or if it gives dangerous "just run this" instructions.
Skills should be transparent about their actions.
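One deliberately crude illustration of the "just run this" warning (a heuristic sketch, not SkillGuardian's actual check):
# Crude heuristic sketch: instructions that discourage reading before running
$ grep -rniE 'just (run|paste|copy)|paste (this|the following) into|no need to (read|review)' ./skills/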
How it works
Acquire safely
Scan local files, folders, archives, or fetch repos (network only for fetch).
Inspect offline
Parse Markdown (including fenced code blocks) and scan TypeScript/shell files with the rule engine.
Report clearly
Risk score + observed capabilities + exact evidence + recommended actions.
"SkillGuardian never executes the skill. It only reads and analyzes."
From one-off checks to pipeline gates
Run SkillGuardian locally before installing a skill, or integrate it into CI to block risky additions automatically.
- Single file or whole directory
- Archives: zip/tar/tgz with extraction protections
- GitHub repo scanning (tarball fetch, no submodules by default)
- CI exit codes: --fail-on high to gate merges
# Scan a single markdown skill
$ skillguardian scan ./skills/ssh-cleanup.md \
--format text,json --output ./report
# Scan a directory (recursive) and fail CI if anything is high+
$ skillguardian scan ./skills/ --fail-on high \
--format sarif --output ./report
Report formats
Choose the format that fits your workflow.
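For example, assuming --format accepts sarif in the same comma-separated list shown above for text,json, a single run can emit all three:
# One pass, three outputs: text for humans, JSON for tooling, SARIF for code-scanning UIs
$ skillguardian scan ./skills/ --format text,json,sarif --output ./report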
Designed to be safe to run
- No code execution. Ever.
- Offline analysis by default; network only for repo fetch.
- Safe extraction limits: guards against path traversal, symlink escapes, and archive bombs; enforces file count and size ceilings.
- Secrets redaction in reports.
- Deterministic results (same inputs → same findings).
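The determinism claim is easy to spot-check yourself (assuming --output names an output directory, as in the examples above):
# Spot-check determinism: two scans of the same input should produce identical reports
$ skillguardian scan ./skills/ --format json --output ./run-a
$ skillguardian scan ./skills/ --format json --output ./run-b
$ diff -r ./run-a ./run-b && echo "reports match"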
FAQ
Do you use AI/LLMs?
No. SkillGuardian uses deterministic static analysis — pattern matching and rule engines. No models, no API calls, no hallucinations.
Can it run offline?
Yes. Network access is only used when fetching a remote GitHub repository. All analysis is performed locally.
What file types are supported?
Markdown (.md), TypeScript (.ts/.tsx), and shell scripts (.sh/.bash). Markdown fenced code blocks are parsed and scanned individually.
Will it flag legitimate skills?
Possibly. SkillGuardian reports observed capabilities with evidence — it's up to you to decide if the behavior is expected. It's a guiding light, not a blocker (unless you configure --fail-on).
How do I use this in GitHub Actions?
Add a step that runs skillguardian scan . --format sarif --fail-on high. The process exits with code 1 if findings meet or exceed your threshold, gating the merge.
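In any CI system the gate is just the exit code; a minimal step (parameters illustrative) looks like this:
# Minimal CI gate: the step fails when findings meet or exceed the --fail-on threshold
$ skillguardian scan . --format sarif --fail-on high --output ./report
# Exit code 0 = below threshold, 1 = threshold met, so the runner fails the job automatically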