lime
 Classifier explainer
An R package that provides explanations of predictions made by black box classifiers.
Local Interpretable Model-Agnostic Explanations (an R port of the original Python package)
486 stars · 31 watching · 110 forks

Language: R
Last commit: about 3 years ago
Linked from 2 awesome lists
Tags: caret, model-checking, model-evaluation, modeling, r
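To illustrate what the package does, here is a minimal sketch of typical usage, assuming the API of the R lime package (`lime()` to build an explainer, `explain()` to produce local explanations) and a classifier trained with caret:

```r
library(caret)
library(lime)

# Train any black box classifier -- here a random forest on iris,
# holding out a few rows to explain later
idx <- 1:145
model <- train(iris[idx, 1:4], iris$Species[idx], method = "rf")

# Build an explainer from the training data and the fitted model
explainer <- lime(iris[idx, 1:4], model)

# Explain individual predictions with a local interpretable model
explanation <- explain(iris[146:150, 1:4], explainer,
                       n_labels = 1, n_features = 4)

# Visualize which features drove each prediction
plot_features(explanation)
```

The explainer perturbs each observation, queries the black box model on the perturbations, and fits a simple weighted model locally; `plot_features()` shows the resulting per-feature contributions.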
 Related projects:
| Repository | Description | Stars | 
|---|---|---|
|  | Provides a method to generate explanations for predictions made by any black box classifier. | 798 |
|  | An open-source Python library for evaluating and explaining the contribution of individual classifiers in machine learning ensembles. | 219 |
|  | An R package for explaining the predictions made by machine learning models in data science applications. | 2 |
|  | A Python implementation of a method to explain the predictions of machine learning models. | 42 |
|  | An R package and workshop materials for explaining machine learning models using explainable AI techniques. | 52 |
|  | Provides methods to interpret and explain the behavior of machine learning models. | 494 |
|  | A Python package implementing an interpretable machine learning model for text classification with visualization tools. | 336 |
|  | A general classifier module with Bayesian and LSI classification capabilities. | 554 |
|  | A tool to help understand and explain the behavior of complex machine learning models. | 1,390 |
|  | A Redis-backed system for classifying documents into categories based on their content. | 38 |
|  | An R package to provide interpretability features for LightGBM models. | 23 |
|  | A tool for creating interactive, model-agnostic explanations of machine learning models in R. | 328 |
|  | Provides an efficient approach to computing Shapley values for explaining machine learning model predictions. | 116 |
|  | An ALE plot generation tool for explaining machine learning model predictions. | 160 |
|  | A tool for explaining predictions from machine learning models by attributing them to specific input variables and their interactions. | 82 |