Skip to content
#

prompt-injection-detector

Here are 4 public repositories matching this topic...

Language: All
Filter by language

Bidirectional LLM security firewall providing risk reduction (not complete protection) for human/LLM interfaces. Hexagonal architecture with multi-layer validation of inputs, outputs, memory and tool state. Beta status. ~528 KB wheel, optional ML guards.

  • Updated Dec 13, 2025
  • Python

Improve this page

Add a description, image, and links to the prompt-injection-detector topic page so that developers can more easily learn about it.

Curate this topic

Add this topic to your repo

To associate your repository with the prompt-injection-detector topic, visit your repo's landing page and select "manage topics."

Learn more