First Seen: Feb 18, 2026
Last Scanned: Feb 22, 2026
Findings: 3
Score: 77/100
Findings (3)
Detects fetching external URLs and using the content as agent instructions or rules.
Matched text: "fetch your spending rules"
Remediation: Sanitize or validate all external inputs (file contents, API responses, user messages) before including them in prompts or tool calls. Implement input/output boundaries between trusted and untrusted data.
Likely FP if the matched text is the skill's own instruction set describing how to handle user input, not an actual injection payload.
Benign heading "'@openai/agents:*'\n;\n// Verbose logging\n..." followed by dangerous content (category: credential_access).
Matched text: "Save your API key on registration. It cannot be retrieved again. Store it in your platform's secure secrets manager or as an environment variable (CREDITCLAW_API_KEY)."
Remediation: Ensure section headings accurately reflect the content that follows. Remove headings that could mislead an LLM into treating content differently than intended.
Likely FP if the heading mismatch is due to inconsistent markdown formatting or a benign section title that happens to contain keywords like "system" or "config".
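A check for this class of finding can be sketched as a scan over markdown sections, flagging headings that read as benign while their body contains credential-related content. The `SENSITIVE` keyword list is an illustrative assumption, not the scanner's actual rule.

```typescript
// Assumed keyword list standing in for the scanner's credential_access rule.
const SENSITIVE = /\b(api[_ ]?key|secret|password|token)\b/i;

interface Mismatch { heading: string; line: number; }

// Flag sections whose heading looks benign but whose body mentions
// credentials -- the heading/content mismatch this finding describes.
function findMisleadingHeadings(markdown: string): Mismatch[] {
  const lines = markdown.split("\n");
  const out: Mismatch[] = [];
  let current: Mismatch | null = null;
  let flagged = false;
  for (let i = 0; i < lines.length; i++) {
    const h = lines[i].match(/^#+\s+(.*)/);
    if (h) {
      current = { heading: h[1], line: i + 1 };
      flagged = false;
    } else if (current && !flagged && SENSITIVE.test(lines[i]) && !SENSITIVE.test(current.heading)) {
      out.push(current); // heading gives no hint of the sensitive body
      flagged = true;
    }
  }
  return out;
}
```

A heading that itself names the sensitive topic (e.g. "API key setup") is not flagged, which matches the FP note above: keyword-bearing titles are treated as consistent with their content.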
Detects patterns where external API responses are used directly without validation or sanitization.
Matched text: "API key. If our data + immediately after use"
Remediation: Validate and sanitize all data received from external APIs before using it in tool operations or agent prompts. Implement schema validation and treat API responses as untrusted input.
Likely FP if the match is a truncated table cell or documentation fragment that mentions API responses in a descriptive context, not actual unvalidated data processing.
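The schema-validation remediation can be sketched with a hand-rolled parser that treats the raw response as `unknown` until its shape is verified; in practice a library such as zod or ajv would do this. The `SpendingRule` shape and its field names are hypothetical.

```typescript
// Hypothetical validated shape for an API response.
interface SpendingRule { id: string; limitCents: number; }

// Treat the raw API response as untrusted: verify shape and types before
// the data is allowed anywhere near a prompt or tool call.
function parseSpendingRule(raw: unknown): SpendingRule {
  if (typeof raw !== "object" || raw === null) {
    throw new Error("response is not an object");
  }
  const r = raw as Record<string, unknown>;
  if (
    typeof r.id !== "string" ||
    typeof r.limitCents !== "number" ||
    !Number.isFinite(r.limitCents)
  ) {
    throw new Error("response does not match SpendingRule schema");
  }
  // Return only the validated fields; silently drop anything unexpected.
  return { id: r.id, limitCents: r.limitCents };
}
```

Returning a freshly built object, rather than the raw response, also strips unexpected fields, so nothing unvalidated can leak downstream.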