# lofo-importance
Feature evaluator

A tool to evaluate feature importance by iteratively removing each feature and measuring model performance on validation sets ("Leave One Feature Out" importance).
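The idea behind LOFO can be sketched framework-agnostically: score the model with all features via cross-validation, then re-score it with each feature removed; the drop in validation score is that feature's importance. This is an illustrative sketch using scikit-learn and synthetic data, not the lofo-importance package's own API (its class and function names are not shown here and should be taken from the repository's README).

```python
import numpy as np
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import cross_val_score

# Synthetic data: 6 features, 4 of them informative.
X, y = make_classification(n_samples=300, n_features=6, n_informative=4,
                           n_redundant=0, random_state=0)

model = RandomForestClassifier(n_estimators=50, random_state=0)

# Baseline: cross-validated score with all features present.
baseline = cross_val_score(model, X, y, cv=3, scoring="accuracy").mean()

# Leave one feature out: importance = baseline score minus the score
# obtained after dropping that single feature.
importances = {}
for i in range(X.shape[1]):
    X_drop = np.delete(X, i, axis=1)
    score = cross_val_score(model, X_drop, y, cv=3, scoring="accuracy").mean()
    importances[f"feature_{i}"] = baseline - score
```

A large positive value means the model relies on that feature; a value near zero (or negative) suggests the feature is redundant or noisy on the validation folds.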
821 stars
13 watching
85 forks
Language: Python
last commit: about 1 year ago
Linked from 3 awesome lists
Tags: data-science, explainable-ai, feature-importance, feature-selection, machine-learning
Related projects:
Repository | Description | Stars |
---|---|---|
giuseppec/featureimportance | A tool to assess feature importance in machine learning models | 33 |
iancovert/sage | A Python package for calculating global feature importance using Shapley values in machine learning models | 256 |
allenai/olmo-eval | A framework for evaluating language models on NLP tasks | 326 |
modeloriented/ingredients | Provides tools to assess and visualize the importance and effects of features in machine learning models | 37 |
koalaverse/vip | A package that provides a consistent interface for computing feature importance in machine learning models from various R packages | 187 |
kundajelab/deeplift | A Python library implementing methods for visualizing and interpreting the importance of features in deep neural networks | 837 |
yeolab/anchor | An algorithm to identify unimodal, bimodal, and multimodal features in data | 27 |
chenllliang/mmevalpro | A benchmarking framework for evaluating Large Multimodal Models with rigorous metrics and an efficient evaluation pipeline | 22 |
h2oai/h2o-llm-eval | An evaluation framework for large language models with Elo rating system and A/B testing capabilities | 50 |
ethicalml/xai | An eXplainability toolbox for machine learning that enables data analysis and model evaluation to mitigate biases and improve performance | 1,135 |
referefref/aiocrioc | Automates the extraction of indicators of compromise from text-based reports | 31 |
freedomintelligence/mllm-bench | Evaluates and compares the performance of multimodal large language models on various tasks | 56 |
huggingface/lighteval | An all-in-one toolkit for evaluating Large Language Models (LLMs) across multiple backends | 879 |
openai/simple-evals | Evaluates language models using standardized benchmarks and prompting techniques | 2,059 |
koraykv/fex | A Lua-based library for feature extraction in computer vision applications using the SIFT algorithm | 10 |