First Seen
Feb 18, 2026
Last Scanned
Feb 22, 2026
Findings
3
Score
60/100
Findings (3)
Detects content pretending to be a system prompt
system prompt: Remove encoded or obfuscated directives (base64, ROT13, unicode escapes, hex-encoded text). All text should be in plaintext and human-readable.
Likely FP if the encoded content is legitimate data (e.g., a base64-encoded image, a hex-encoded binary hash) rather than concealed directives.
Detects fetching external URLs and using the content as agent instructions or rules
prompt
### Step 1: Create via Bot API Sanitize or validate all external inputs (file contents, API responses, user messages) before including them in prompts or tool calls. Implement input/output boundaries between trusted and untrusted data.
Likely FP if the matched text is the skill's own instruction set describing how to handle user input, not an actual injection payload.
Detects execution of shell script files via bash/sh command or direct invocation
bash
scripts/archive_topic.sh Replace direct shell script execution with a language-native implementation or a sandboxed executor. If shell scripts must run, restrict them to a vetted allowlist with integrity checks.
Likely FP if the match references running a script that is part of the skill's own repository (e.g., ./setup.sh) with clear, auditable contents.