llm-guard
LM protection framework
A security toolkit designed to protect interactions with large language models from various threats and vulnerabilities.
The Security Toolkit for LLM Interactions
1k stars
19 watching
165 forks
Language: Python
last commit: 3 months ago
Linked from 1 awesome list
adversarial-machine-learningchatgptlarge-language-modelsllmllm-securityllmopsprompt-engineeringprompt-injectionsecurity-toolstransformers
Related projects:
Repository | Description | Stars |
---|---|---|
| A toolkit to detect and protect against vulnerabilities in Large Language Models. | 122 |
| Protects AI applications from prompt injection attacks through multiple layers of defense | 1,144 |
| Evaluates the confidentiality of Large Language Models integrated with external tools and services | 30 |
| An open-source toolkit for building and evaluating large language models | 267 |
| A high-performance LLM written in Python/Jax for training and inference on Google Cloud TPUs and GPUs. | 1,557 |
| A framework for managing and testing large language models to evaluate their performance and optimize user experiences. | 451 |
| A comprehensive toolset for building Large Language Model (LLM) based applications | 1,733 |
| Provides pre-trained language models and tools for fine-tuning and evaluation | 439 |
| A security scanner for Large Language Model prompts to detect potential threats and vulnerabilities | 326 |
| A set of tools and guidelines for assessing the security vulnerabilities of language models in AI applications | 28 |
| An API that provides a unified interface to multiple large language models for chat fine-tuning | 79 |
| A benchmark for evaluating large language models in multiple languages and formats | 93 |
| Enables users to engage with multiple large language models simultaneously and access their APIs | 256 |
| A framework and benchmark for training and evaluating multi-modal large language models, enabling the development of AI agents capable of seamless interaction between humans and machines. | 305 |