shap
Model explainer
A game-theoretic approach to explain the output of any machine learning model, attributing each prediction to individual features via Shapley values.
23k stars
245 watching
3k forks
Language: Jupyter Notebook
Last commit: 2 months ago
Linked from 6 awesome lists
Tags: deep-learning, explainability, gradient-boosting, interpretability, machine-learning, shap, shapley
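The Shapley value behind SHAP comes from cooperative game theory: a feature's attribution is its average marginal contribution over all subsets of the other features. A minimal sketch of the exact computation in pure Python (the toy value function and player labels are illustrative, not part of the shap library):

```python
from itertools import combinations
from math import factorial

def shapley_values(players, v):
    """Exact Shapley values:
    phi_i = sum over S not containing i of
            |S|! * (n - |S| - 1)! / n! * (v(S ∪ {i}) - v(S))."""
    n = len(players)
    phi = {}
    for i in players:
        others = [p for p in players if p != i]
        total = 0.0
        for r in range(len(others) + 1):
            for S in combinations(others, r):
                weight = factorial(len(S)) * factorial(n - len(S) - 1) / factorial(n)
                total += weight * (v(set(S) | {i}) - v(set(S)))
        phi[i] = total
    return phi

# Toy "model": feature 0 contributes 10, feature 1 contributes 20,
# and together they add a +5 interaction term.
def v(S):
    out = 0.0
    if 0 in S:
        out += 10
    if 1 in S:
        out += 20
    if 0 in S and 1 in S:
        out += 5
    return out

phi = shapley_values([0, 1], v)
# Additive parts go to each feature; the interaction splits equally.
# → {0: 12.5, 1: 22.5}
```

The exact computation is exponential in the number of features; shap's contribution is a set of fast approximations (e.g. tree-based and kernel-based estimators) of these same values.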
Related projects:
| Repository | Description | Stars |
| --- | --- | --- |
| | Provides an efficient approach to computing Shapley values for explaining machine learning model predictions. | 116 |
| | A Python library for explaining machine learning models. | 2,421 |
| | Provides visualizations and explanations to help understand machine learning model interactions and decisions. | 2,749 |
| | An open-source Python library for evaluating and explaining the contribution of individual classifiers in machine learning ensembles. | 219 |
| | A Python library for building interactive dashboards to explain machine learning models. | 2,321 |
| | Calculates fair valuation of individual training data points in machine learning models. | 259 |
| | Teaches software developers how to build transparent and explainable machine learning models using Python. | 673 |
| | A minimal PyTorch implementation of a transformer-based language model. | 20,474 |
| | An open-source package for explaining machine learning models and promoting transparency in AI decision-making. | 6,324 |
| | Provides counterfactual explanations for machine learning models to facilitate interpretability and understanding. | 1,373 |
| | Implements popular machine learning algorithms from scratch to build a deeper understanding of their mathematics. | 23,191 |
| | An automation tool for machine learning workflows in Python. | 9,026 |
| | A library for generating synthetic tabular data based on real-world patterns. | 2,416 |
| | Provides implementations of fundamental machine learning models and algorithms from scratch in Python. | 24,092 |
| | A package for computing asymmetric Shapley values to assess causality in machine learning models. | 72 |