ExplainaBoard

Model Comparer

An interactive tool to analyze and compare the performance of natural language processing models

Interpretable Evaluation for AI Systems

GitHub: 361 stars · 11 watching · 36 forks · Language: Python · Last commit: over 1 year ago
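
The core idea behind interpretable evaluation is to go beyond a single aggregate score: test examples are bucketed by attributes such as input length, and systems are compared bucket by bucket to reveal where one outperforms the other. Below is a minimal, self-contained sketch of that idea in plain Python; the data, the `bucket_accuracy` helper, and the bucketing function are hypothetical illustrations of the concept, not ExplainaBoard's actual API.

```python
from collections import defaultdict

def bucket_accuracy(examples, predictions, bucket_fn):
    """Group examples into buckets and compute per-bucket accuracy."""
    correct = defaultdict(int)
    total = defaultdict(int)
    for ex, pred in zip(examples, predictions):
        bucket = bucket_fn(ex)
        total[bucket] += 1
        correct[bucket] += int(pred == ex["label"])
    return {b: correct[b] / total[b] for b in total}

# Hypothetical sentiment data: each example has an input text and a gold label.
examples = [
    {"text": "good", "label": "pos"},
    {"text": "a long and winding review of the film", "label": "neg"},
    {"text": "fine", "label": "pos"},
    {"text": "not the masterpiece everyone claims it is", "label": "neg"},
]
system_a = ["pos", "neg", "pos", "pos"]  # predictions from system A
system_b = ["pos", "pos", "neg", "neg"]  # predictions from system B

# Bucket by input length to see *where* each system wins or loses,
# rather than comparing one overall accuracy number.
by_length = lambda ex: "short" if len(ex["text"].split()) <= 3 else "long"
print("system A:", bucket_accuracy(examples, system_a, by_length))
print("system B:", bucket_accuracy(examples, system_b, by_length))
```

Here the bucketed view shows system A beating system B on short inputs while the two tie on long ones, the kind of fine-grained contrast a single aggregate score would hide.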

Related projects:

| Repository | Description | Stars |
| --- | --- | --- |
| neulab/compare-mt | A tool for comparing the performance of different language generation systems | 467 |
| maluuba/nlg-eval | A toolset for evaluating and comparing natural language generation models | 1,347 |
| interpretml/dice | Provides counterfactual explanations for machine learning models to facilitate interpretability and understanding | 1,364 |
| modeloriented/dalex | A tool to help understand and explain the behavior of complex machine learning models | 1,375 |
| pair-code/what-if-tool | An interactive tool for exploring and understanding the behavior of machine learning models | 917 |
| blobcity/autoai | A Python-based framework that automates finding and training the best-performing machine learning model for regression and classification tasks on numerical data | 174 |
| tensorflow/model-analysis | Evaluates and visualizes the performance of machine learning models | 1,258 |
| blent-ai/alepython | An ALE plot generation tool for explaining machine learning model predictions | 158 |
| johnsnowlabs/langtest | A tool for testing and evaluating large language models, with a focus on AI safety and model assessment | 501 |
| openai/simple-evals | A library for evaluating language models using standardized prompts and benchmarking tests | 1,939 |
| marcelrobeer/explabox | An exploratory tool for analyzing and understanding machine learning models | 15 |
| pbiecek/xaiaterum2020 | An R package and workshop materials for explaining machine learning models using explainable AI techniques | 52 |
| marcelrobeer/contrastiveexplanation | Explains why an instance has a certain outcome by contrasting it with what would have happened had the outcome been different | 45 |
| thomasp85/lime | An R package for explaining predictions made by black-box classifiers | 485 |
| giuseppec/iml | Provides methods to interpret and explain the behavior of machine learning models | 492 |