lit

ML Model Analyzer

An interactive tool for analyzing and understanding machine learning models

The Learning Interpretability Tool: interactively analyze ML models to understand their behavior in an extensible, framework-agnostic interface.

GitHub

4k stars
68 watching
357 forks
Language: TypeScript
Last commit: 26 days ago
Topics: machine-learning, natural-language-processing, visualization

Related projects:

Repository | Description | Stars
interpretml/interpret | An open-source package for explaining machine learning models and promoting transparency in AI decision-making | 6,324
h2oai/mli-resources | Provides tools and techniques for interpreting machine learning models | 483
christophm/interpretable-ml-book | A comprehensive resource for explaining the decisions and behavior of machine learning models | 4,811
jphall663/interpretable_machine_learning_with_python | Teaching software developers how to build transparent and explainable machine learning models using Python | 673
mayer79/flashlight | A toolset for understanding and interpreting complex machine learning models | 22
csinva/imodels | An open-source package that provides interpretable machine learning models compatible with scikit-learn | 1,406
interpretml/dice | Provides counterfactual explanations for machine learning models to facilitate interpretability and understanding | 1,373
pair-code/what-if-tool | An interactive tool for exploring and understanding the behavior of machine learning models | 928
cemoody/lda2vec | A framework for creating interpretable natural language models by combining word embeddings and topic modeling | 3,152
selfexplainml/piml-toolbox | A Python toolbox for developing and diagnosing interpretable machine learning models with low-code and high-code APIs | 1,221
marcotcr/lime | A tool for explaining the decisions of machine learning models | 11,663
brightmart/text_classification | An NLP project offering various text classification models and techniques for deep learning exploration | 7,881
ianarawjo/chainforge | An environment for battle-testing prompts to Large Language Models (LLMs) to evaluate response quality and performance | 2,413
explainx/explainx | A framework to explain and debug black-box machine learning models with a single line of code | 419
h2oai/article-information-2019 | A framework for building and evaluating machine learning systems with high accuracy and interpretability, particularly in human-centered applications | 13