Quantus
Quantus is an eXplainable AI toolkit for the responsible evaluation and interpretation of neural network explanations across various deep learning frameworks.
556 stars
10 watching
76 forks
Language: Jupyter Notebook
Last commit: 12 days ago
Topics: deep-learning, explainable-ai, interpretability, machine-learning, pytorch, quantification-evaluation-methods, reproducibility, tensorflow, xai
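Quantus exposes its evaluation metrics as callable objects that score a batch of explanations against a model. The snippet below is a minimal sketch of that pattern, assuming the metric-object API shown in the Quantus documentation (here `FaithfulnessCorrelation` with default parameters); the tiny PyTorch model, random inputs, and random attribution maps are placeholders standing in for a trained classifier and real attributions.

```python
import numpy as np
import torch
import quantus

# Placeholder model and data so the sketch is self-contained; in practice you
# would pass a trained classifier, real inputs, and attribution maps produced
# by an explanation method (e.g. Saliency or Integrated Gradients).
model = torch.nn.Sequential(torch.nn.Flatten(), torch.nn.Linear(28 * 28, 10))
x_batch = np.random.rand(8, 1, 28, 28).astype(np.float32)  # input batch
y_batch = np.random.randint(0, 10, size=8)                  # integer labels
a_batch = np.random.rand(8, 1, 28, 28).astype(np.float32)  # attribution maps

# FaithfulnessCorrelation perturbs highly attributed features and checks how
# well attribution mass correlates with the resulting change in prediction;
# it returns one score per sample.
metric = quantus.FaithfulnessCorrelation()
scores = metric(
    model=model,
    x_batch=x_batch,
    y_batch=y_batch,
    a_batch=a_batch,
    device="cpu",
)
print(scores)
```

In a real workflow the attributions would come from an explanation library such as Captum, and Quantus metrics can alternatively recompute explanations themselves when given an `explain_func`.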
Related projects:
| Repository | Description | Stars |
|---|---|---|
| trusted-ai/aix360 | A toolkit for explaining complex AI models and data-driven insights | 1,633 |
| deel-ai/xplique | An explainable AI toolbox that provides various methods and tools to understand and interpret the behavior of neural networks | 644 |
| tensorflow/tcav | An interpretability method that explains neural network predictions by highlighting high-level concepts relevant to classification tasks | 632 |
| andreysharapov/xaience | An online repository providing resources and information on explainable AI, algorithmic fairness, ML security, and related topics | 107 |
| pbiecek/xaiaterum2020 | An R package and workshop materials for explaining machine learning models using explainable AI techniques | 52 |
| csinva/hierarchical-dnn-interpretations | An implementation of hierarchical explanations for neural network predictions | 125 |
| ethicalml/xai | An eXplainability toolbox for machine learning that enables data analysis and model evaluation to mitigate biases and improve performance | 1,125 |
| explainx/explainx | A framework to understand and explain the behavior of machine learning models used in data science applications | 417 |
| csinva/imodels | An open-source package of interpretable machine learning models compatible with scikit-learn | 1,399 |
| dianna-ai/dianna | A Python package providing an explainable AI interface for research projects | 48 |
| h2oai/mli-resources | Tools and techniques for interpreting machine learning models | 484 |
| interpretml/dice | Counterfactual explanations for machine learning models to support interpretability and understanding | 1,364 |
| ibm/aihwkit | An open-source toolkit for developing and training neural networks on analog computing devices | 363 |
| jbloomaus/decisiontransformerinterpretability | An open-source project providing tools and utilities to understand how transformers are used in reinforcement learning tasks | 73 |
| modeloriented/dalex | A tool to help understand and explain the behavior of complex machine learning models | 1,375 |