mli-resources
Model interpreter
Provides tools and techniques for interpreting machine learning models
H2O.ai Machine Learning Interpretability Resources
484 stars
150 watching
131 forks
Language: Jupyter Notebook
last commit: almost 4 years ago
Linked from 1 awesome list
Topics: accountability, data-mining, data-science, explainable-ml, fairness, fatml, h2o, iml, interpretability, interpretable-ai, interpretable-machine-learning, interpretable-ml, jupyter-notebooks, machine-learning, machine-learning-interpretability, mli, python, transparency, xai, xgboost
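As a taste of the kind of technique collections like this cover, the sketch below fits a shallow decision-tree surrogate to mimic a black-box classifier and reports how faithfully it reproduces the black-box predictions. It is only an illustration: scikit-learn, the synthetic data, and the specific models are assumptions here, not code from this repository.

```python
# A minimal sketch of a global surrogate model, assuming scikit-learn
# and synthetic data (both illustrative, not part of this repository).
from sklearn.datasets import make_classification
from sklearn.ensemble import GradientBoostingClassifier
from sklearn.tree import DecisionTreeClassifier, export_text

# Fit a "black-box" model on synthetic data.
X, y = make_classification(n_samples=1000, n_features=5, random_state=0)
black_box = GradientBoostingClassifier(random_state=0).fit(X, y)

# Train a shallow, human-readable tree to mimic the black-box predictions.
surrogate = DecisionTreeClassifier(max_depth=3, random_state=0)
surrogate.fit(X, black_box.predict(X))

# How faithfully does the surrogate reproduce the black-box model?
fidelity = (surrogate.predict(X) == black_box.predict(X)).mean()
print(f"surrogate fidelity: {fidelity:.2f}")
print(export_text(surrogate, feature_names=[f"x{i}" for i in range(5)]))
```

Keeping the surrogate shallow (here `max_depth=3`) trades some fidelity for a tree a human can actually read.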
Related projects:
| Repository | Description | Stars |
| --- | --- | --- |
| h2oai/article-information-2019 | A framework for building and evaluating machine learning systems with high accuracy and interpretability, particularly in human-centered applications | 13 |
| jphall663/interpretable_machine_learning_with_python | Teaching software developers how to build transparent and explainable machine learning models using Python | 673 |
| csinva/imodels | An open-source package that provides interpretable machine learning models compatible with scikit-learn | 1,399 |
| h2oai/h2o-tutorials | Provides tutorials and training materials for machine learning with H2O, a platform for building predictive models | 1,483 |
| h2oai/h2o-3 | An in-memory machine learning platform that supports various algorithms and provides tools for building, deploying, and scaling machine learning models | 6,922 |
| andreysharapov/xaience | An online repository providing resources and information on explainable AI, algorithmic fairness, ML security, and related topics | 107 |
| h2oai/h2o-llm-eval | An evaluation framework for large language models with an Elo rating system and A/B testing capabilities | 50 |
| ethicalml/xai | An eXplainability toolbox for machine learning that enables data analysis and model evaluation to mitigate biases and improve performance | 1,125 |
| mayer79/flashlight | A toolset for understanding and interpreting complex machine learning models | 22 |
| trusted-ai/aix360 | A toolkit for explaining complex AI models and data-driven insights | 1,633 |
| interpretml/dice | Provides counterfactual explanations for machine learning models to facilitate interpretability and understanding | 1,364 |
| applieddatasciencepartners/xgboostexplainer | Provides tools to understand and interpret the decisions made by XGBoost models in machine learning | 252 |
| jianbo-lab/l2x | A Python framework for learning to interpret models using information-theoretic methods | 124 |
| marcelrobeer/explabox | An exploratory tool for analyzing and understanding machine learning models | 15 |
| pbiecek/xai_resources | A collection of resources and papers related to Explainable Artificial Intelligence (XAI) for machine learning model interpretability | 822 |
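For instance, the h2oai/h2o-3 platform listed above exposes a Python API whose estimators report global variable importance out of the box. The snippet below is a hedged sketch rather than a canonical example: the CSV path and the choice of the last column as the target are placeholders.

```python
# A hedged sketch of using h2oai/h2o-3 (listed above): train a GBM and
# inspect its global variable importances. Dataset path and column roles
# are placeholders, not anything prescribed by this repository.
import h2o
from h2o.estimators import H2OGradientBoostingEstimator

h2o.init()

frame = h2o.import_file("your_data.csv")   # placeholder dataset
target = frame.columns[-1]                 # assume the last column is the target
predictors = frame.columns[:-1]

model = H2OGradientBoostingEstimator(ntrees=50, max_depth=5, seed=42)
model.train(x=predictors, y=target, training_frame=frame)

# Variable importance gives a first, global view of what drives the model.
print(model.varimp(use_pandas=True))
```

Global importance is only a starting point; the tools in the table above pair it with local explanation methods for individual predictions.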