lofo-importance

Feature evaluator

A tool that estimates feature importance by removing each feature in turn and measuring the resulting change in model performance on validation sets.
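As a rough illustration of the leave-one-feature-out idea (this is a generic sketch, not the lofo-importance package's own API), the snippet below computes a cross-validated baseline score on all features, then re-scores the model with each feature dropped; the drop in validation score is treated as that feature's importance. The dataset, model, and scoring metric are placeholder choices.

```python
# Minimal sketch of leave-one-feature-out (LOFO) importance.
# Illustrative only; the lofo-importance package provides its own interface.
import pandas as pd
from sklearn.datasets import load_breast_cancer
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import KFold, cross_val_score

data = load_breast_cancer(as_frame=True)
X, y = data.data, data.target
features = list(X.columns)

model = RandomForestClassifier(n_estimators=200, random_state=0)
cv = KFold(n_splits=4, shuffle=True, random_state=0)

# Baseline: validation score with all features present.
baseline = cross_val_score(model, X, y, cv=cv, scoring="roc_auc").mean()

# LOFO: re-fit and re-score with each feature removed, one at a time.
importances = {}
for feature in features:
    score = cross_val_score(model, X.drop(columns=[feature]), y,
                            cv=cv, scoring="roc_auc").mean()
    importances[feature] = baseline - score  # positive = removing it hurts

result = pd.Series(importances).sort_values(ascending=False)
print(result.head(10))
```

Because each feature is evaluated by its effect on held-out performance, this approach accounts for interactions with the remaining features, at the cost of one full cross-validation run per feature.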

Leave One Feature Out Importance

GitHub
817 stars, 13 watching, 85 forks
Language: Python
Last commit: 10 months ago
Linked from 3 awesome lists

Tags: data-science, explainable-ai, feature-importance, feature-selection, machine-learning

Related projects:

Repository | Description | Stars
giuseppec/featureimportance | A tool to assess feature importance in machine learning models | 33
iancovert/sage | A Python package for calculating global feature importance using Shapley values in machine learning models | 253
allenai/olmo-eval | An evaluation framework for large language models | 310
modeloriented/ingredients | Provides tools to assess and visualize the importance and effects of features in machine learning models | 37
koalaverse/vip | A package that provides a consistent interface for computing feature importance in machine learning models from various R packages | 186
kundajelab/deeplift | A Python library implementing methods for visualizing and interpreting the importance of features in deep neural networks | 826
yeolab/anchor | An algorithm to identify unimodal, bimodal, and multimodal features in data | 27
chenllliang/mmevalpro | A benchmarking framework for evaluating large multimodal models with rigorous metrics and an efficient evaluation pipeline | 22
h2oai/h2o-llm-eval | An evaluation framework for large language models with an Elo rating system and A/B testing capabilities | 50
ethicalml/xai | An eXplainability toolbox for machine learning that enables data analysis and model evaluation to mitigate biases and improve performance | 1,125
referefref/aiocrioc | An automated tool that extracts and analyzes indicators of compromise from text data using natural language processing and OCR techniques | 31
freedomintelligence/mllm-bench | Evaluates and compares the performance of multimodal large language models on various tasks | 55
huggingface/lighteval | A toolkit for evaluating large language models across multiple backends | 804
openai/simple-evals | A library for evaluating language models using standardized prompts and benchmarking tests | 1,939
koraykv/fex | A Lua-based library for feature extraction in computer vision applications using the SIFT algorithm | 10