interpret
AI model explainer
An open-source package for explaining machine learning models and promoting transparency in AI decision-making.
Fit interpretable models. Explain blackbox machine learning.
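As a sketch of what that tagline means in practice: the package's Python API follows scikit-learn conventions, so a glassbox model such as an Explainable Boosting Machine can be fit and explained in a few lines. The dataset and variable names below are illustrative, assuming the `interpret` Python package is installed (`pip install interpret`).

```python
# Minimal sketch: fit a glassbox model and inspect its explanations.
from interpret.glassbox import ExplainableBoostingClassifier
from interpret import show
from sklearn.datasets import load_breast_cancer
from sklearn.model_selection import train_test_split

# Illustrative dataset; any tabular classification data works the same way.
X, y = load_breast_cancer(return_X_y=True, as_frame=True)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

# Explainable Boosting Machine: an interpretable model with a scikit-learn API.
ebm = ExplainableBoostingClassifier()
ebm.fit(X_train, y_train)

# Global explanation: overall feature contributions learned by the model.
show(ebm.explain_global())

# Local explanation: per-prediction contributions for a few test samples.
show(ebm.explain_local(X_test[:5], y_test[:5]))
```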
6k stars
 146 watching
 736 forks
 
Language: C++
Last commit: 11 months ago
Linked from 4 awesome lists
Topics: ai, artificial-intelligence, bias, blackbox, differential-privacy, explainability, explainable-ai, explainable-ml, gradient-boosting, iml, interpretability, interpretable-ai, interpretable-machine-learning, interpretable-ml, interpretml, machine-learning, scikit-learn, transparency, xai
Related projects:
| Repository | Description | Stars | 
|---|---|---|
|  | Teaching software developers how to build transparent and explainable machine learning models using Python | 673 | 
|  | Provides counterfactual explanations for machine learning models to facilitate interpretability and understanding. | 1,373 | 
|  | A comprehensive resource for explaining the decisions and behavior of machine learning models. | 4,811 | 
|  | An interactive tool for analyzing and understanding machine learning models | 3,500 | 
|  | Provides tools and techniques for interpreting machine learning models | 483 | 
|  | A framework to explain and debug blackbox machine learning models with a single line of code. | 419 | 
|  | An open-source package that provides interpretable machine learning models compatible with scikit-learn. | 1,406 | 
|  | Provides tools to understand and interpret the decisions made by XGBoost models in machine learning | 253 | 
|  | A Python toolbox for developing and diagnosing interpretable machine learning models with low-code and high-code APIs. | 1,221 | 
|  | An eXplainability toolbox for machine learning that enables data analysis and model evaluation to mitigate biases and improve performance | 1,135 | 
|  | A Python library for building interactive dashboards to explain machine learning models | 2,321 | 
|  | A tool to help understand and explain the behavior of complex machine learning models | 1,390 | 
|  | An implementation of a method to interpret ensemble models by learning compact representations from them | 8 | 
|  | This project provides tools to induce rules from trained neural networks to explain model predictions and data patterns. | 21 | 
|  | An exploratory tool for analyzing and understanding machine learning models | 14 |