safety-gymnasium
Safe RL Benchmark
A unified benchmark for safe reinforcement learning algorithms and environments.
Published at NeurIPS 2023: "Safety-Gymnasium: A Unified Safe Reinforcement Learning Benchmark"
410 stars
10 watching
53 forks
Language: Python
Last commit: 9 months ago
Linked from 1 awesome list
Tags: constraint-rl, constraint-satisfaction-problem, reinforcement-learning, safe-policy-optimization, safe-reinforcement-learning, safe-reinforcement-learning-environments, safety-critical, safety-critical-systems
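As a usage illustration, the minimal sketch below drives a Safety-Gymnasium task through its Gymnasium-style API, where each step returns a constraint-violation cost alongside the usual reward. The environment id `SafetyPointGoal1-v0` and the random-action loop are illustrative assumptions, not taken from this page.

```python
# Minimal sketch: assumes the Gymnasium-style API with an extra cost signal.
# "SafetyPointGoal1-v0" is an example environment id, used here for illustration.
import safety_gymnasium

env = safety_gymnasium.make("SafetyPointGoal1-v0")
obs, info = env.reset(seed=0)

total_reward, total_cost = 0.0, 0.0
for _ in range(1000):
    action = env.action_space.sample()  # stand-in for a safe-RL policy
    # step() returns a cost term in addition to the standard Gymnasium outputs
    obs, reward, cost, terminated, truncated, info = env.step(action)
    total_reward += reward
    total_cost += cost
    if terminated or truncated:
        obs, info = env.reset()

env.close()
print(f"return: {total_reward:.2f}, accumulated cost: {total_cost:.2f}")
```

A safe-RL algorithm would replace the random policy and use the accumulated cost to enforce a constraint budget rather than folding it into the reward.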
Related projects:
| Repository | Description | Stars |
| --- | --- | --- |
| | A framework designed to accelerate the development of safe reinforcement learning algorithms by providing a modular, high-performance platform for parallel computing and out-of-the-box toolkits. | 954 |
| | A benchmark suite for unsupervised reinforcement learning agents, providing pre-trained models and scripts for testing and fine-tuning agent performance. | 335 |
| | A high-throughput reinforcement learning library with optimized synchronous and asynchronous implementations of policy gradients. | 839 |
| | A collection of benchmarks and implementations for testing reinforcement learning-based Volt-VAR control algorithms. | 20 |
| | A collection of reinforcement learning algorithms and tools for training agents in complex environments. | 43 |
| | Provides benchmarking policies and datasets for offline reinforcement learning. | 85 |
| | Provides tools and algorithms for developing reinforcement learning policies in game environments. | 3 |
| | A Python library implementing state-of-the-art deep reinforcement learning algorithms for Keras and OpenAI Gym environments. | 8 |
| | Improves the safety and helpfulness of large language models by fine-tuning them on safety-critical tasks. | 47 |
| | A toolkit for developing and evaluating reinforcement learning algorithms in a reproducible manner. | 1,893 |
| | A modular reinforcement learning library with support for various environments and frameworks. | 588 |
| | An RL framework for building and training reinforcement learning models in Python. | 266 |
| | A benchmark for evaluating the safety and robustness of vision language models against adversarial attacks. | 72 |
| | A set of tools and environments for learning-based control and reinforcement learning in robotics with symbolic safety constraints. | 645 |
| | A framework for parallel population-based reinforcement learning. | 507 |