# lm_risk_cards

Model risk assessment toolkit: a set of tools and guidelines for assessing the security vulnerabilities of language models in AI applications.

Risks and targets for assessing LLMs & LLM vulnerabilities.
- 28 stars
- 6 watching
- 7 forks
- Language: Python
- Last commit: 9 months ago
- Linked from 1 awesome list
Tags: llm, llm-security, red-teaming, security, vulnerability
## Related projects
| Repository | Description | Stars |
|---|---|---|
| | A toolkit to detect and protect against vulnerabilities in Large Language Models. | 122 |
| | A toolkit for assessing trustworthiness in large language models. | 491 |
| | A guide to using pre-trained large language models in source code analysis and generation. | 1,789 |
| | A security toolkit designed to protect interactions with large language models from various threats and vulnerabilities. | 1,296 |
| | A benchmark for evaluating the safety and robustness of vision language models against adversarial attacks. | 72 |
| | A comprehensive toolset for building Large Language Model (LLM) based applications. | 1,733 |
| | An open-source toolkit for building and evaluating large language models. | 267 |
| | A security scanner for Large Language Model prompts to detect potential threats and vulnerabilities. | 326 |
| | Detecting backdoors in language models to prevent malicious AI usage. | 109 |
| | Builds agents controlled by large language models (LLMs) to perform tasks with tool-based components. | 940 |
| | Develops a multimodal large-scale model for analyzing remote sensing images in scene analysis tasks. | 108 |
| | Provides a set of tools and components to simplify the integration of Large Language Models into Dart/Flutter applications. | 441 |
| | An open-source project that explores the intersection of machine learning and security to develop tools for detecting vulnerabilities in web applications. | 1,987 |
| | A repository of papers and resources for evaluating large language models. | 1,450 |