lm_risk_cards
Model risk assessment toolkit
A set of tools and guidelines for assessing the security vulnerabilities of language models in AI applications
Risks and targets for assessing LLMs & LLM vulnerabilities
25 stars
5 watching
7 forks
Language: Python
Last commit: 6 months ago
Linked from 1 awesome list
Tags: llm, llm-security, red-teaming, security, vulnerability
Related projects:
| Repository | Description | Stars |
| --- | --- | --- |
| safellama/plexiglass | A toolkit to detect and protect against vulnerabilities in Large Language Models | 121 |
| howiehwong/trustllm | A toolkit for assessing trustworthiness in large language models | 466 |
| vhellendoorn/code-lms | A guide to using pre-trained large language models in source code analysis and generation | 1,782 |
| protectai/llm-guard | A security toolkit designed to protect interactions with large language models from various threats and vulnerabilities | 1,242 |
| ucsc-vlaa/vllm-safety-benchmark | A benchmark for evaluating the safety and robustness of vision language models against adversarial attacks | 67 |
| melih-unsal/demogpt | A comprehensive toolset for building Large Language Model (LLM) based applications | 1,710 |
| aiplanethub/beyondllm | An open-source toolkit for building and evaluating large language models | 263 |
| deadbits/vigil-llm | A security scanner for Large Language Model prompts to detect potential threats and vulnerabilities | 309 |
| ethz-spylab/rlhf_trojan_competition | Detecting backdoors in language models to prevent malicious AI usage | 107 |
| mpaepper/llm_agents | Builds agents controlled by large language models (LLMs) to perform tasks with tool-based components | 931 |
| lzw-lzw/remoteglm | Develops a multimodal large-scale model for analyzing remote sensing images in scene analysis tasks | 97 |
| davidmigloz/langchain_dart | Provides a set of tools and components to simplify the integration of Large Language Models into Dart/Flutter applications | 425 |
| 13o-bbr-bbq/machine_learning_security | A collection of tools and techniques for applying machine learning to improve security in software applications | 1,979 |
| mlgroupjlu/llm-eval-survey | A repository of papers and resources for evaluating large language models | 1,433 |