rebuff
Prompt shield
Protects AI applications from prompt injection attacks through multiple layers of defense
LLM Prompt Injection Detector
1k stars
15 watching
82 forks
Language: TypeScript
Last commit: 7 months ago
Linked from 1 awesome list
Tags: llm, llmops, prompt-engineering, prompt-injection, prompts, security
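To illustrate the layered-defense idea, here is a minimal TypeScript sketch of one such layer: a cheap heuristic pattern check run on user input before any expensive model-based analysis. The function name, patterns, and result shape are illustrative assumptions, not Rebuff's actual API.

```typescript
// Hypothetical sketch of a heuristic defense layer against prompt injection.
// Patterns and names are illustrative; they do not mirror Rebuff's internals.

const INJECTION_PATTERNS: RegExp[] = [
  /ignore (all )?(previous|prior|above) (instructions|prompts)/i,
  /disregard (the )?(system|previous) (prompt|instructions)/i,
  /you are now (in )?developer mode/i,
  /reveal (your|the) (system )?prompt/i,
];

interface DetectionResult {
  injectionDetected: boolean;
  matchedPatterns: string[];
}

// Returns which known injection phrasings, if any, appear in the input.
function heuristicCheck(userInput: string): DetectionResult {
  const matchedPatterns = INJECTION_PATTERNS
    .filter((p) => p.test(userInput))
    .map((p) => p.source);
  return { injectionDetected: matchedPatterns.length > 0, matchedPatterns };
}
```

In a layered design, a fast check like this would typically gate the input before costlier layers (e.g. an LLM-based classifier or vector-similarity lookup against known attacks) are invoked.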
Related projects:
| Repository | Description | Stars |
| --- | --- | --- |
| | A security toolkit designed to protect interactions with large language models from various threats and vulnerabilities. | 1,296 |
| | An AI classifier designed to determine whether text is written by humans or machines. | 122 |
| | A framework for analyzing the robustness of large language models against adversarial prompt attacks. | 318 |
| | A guide to help developers understand and mitigate the security risks of prompt injection in AI-powered applications and features. | 376 |
| | An environment for training artificial intelligence models to respond optimally to security threats in computer networks. | 21 |
| | A security module for Koa applications that provides proactive protection against common security threats. | 19 |
| | A CAPTCHA system with proof-of-work-based rate limiting and token-based validation. | 1,741 |
| | A toolkit to detect and protect against vulnerabilities in large language models. | 122 |
| | Protects websites from bots and automated abuse by solving a challenge without collecting user data. | 50 |
| | Protects Home Assistant instance access by routing requests through the Tor network. | 53 |
| | A Lua module for detecting and mitigating Distributed Denial of Service (DDoS) attacks. | 16 |
| | Spam protection for Rails applications using text-based logic-question captchas. | 56 |
| | Detects backdoors in language models to prevent malicious AI usage. | 109 |
| | Identifies web app endpoints and parameters to help detect vulnerabilities. | 98 |
| | A Chrome extension that allows users to curate and manage a custom library of AI prompts. | 1,111 |