CLIP_benchmark
Model comparator
Evaluates and compares the performance of various CLIP-like models on different tasks and datasets (a minimal evaluation sketch follows the related-projects table below).
CLIP-like model evaluation
632 stars
12 watching
80 forks
Language: Jupyter Notebook
last commit: over 1 year ago

Related projects:
| Repository | Description | Stars |
|---|---|---|
| | A library for learning audio embeddings from text and audio data using contrastive language-audio pretraining | 1,457 |
| | A collection of large image-text datasets for training and evaluating AI models on image-text matching | 239 |
| | Comparative benchmarks of various machine learning algorithms | 169 |
| | A platform for comparing and evaluating AI and machine learning algorithms at scale | 1,779 |
| | Analyzes LLM responses side by side to highlight differences in generated text | 347 |
| | A benchmark for evaluating large language models in multiple languages and formats | 93 |
| | Automates comparison and synchronization of XML documents across directories | 21 |
| | An interactive tool to analyze and compare the performance of natural language processing models | 362 |
| | Evaluates and benchmarks multimodal language models' ability to process visual, acoustic, and textual inputs simultaneously | 15 |
| | Evaluates and benchmarks large language models' video understanding capabilities | 121 |
| | A series of large language models trained from scratch to excel in multiple NLP tasks | 7,743 |
| | Provides pre-trained face detection and analysis models using large-scale image-text data | 281 |
| | An open-source benchmarking framework for evaluating cross-style visual capability of large multimodal models | 84 |
| | Automates the model-building and deployment process by optimizing hyperparameters and compressing models for edge computing | 200 |
| | A benchmarking framework for large language models | 81 |
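For context, here is a minimal sketch of the kind of zero-shot classification evaluation that CLIP_benchmark automates across many models, tasks, and datasets. It assumes the `open_clip` and `torchvision` packages and an illustrative ViT-B-32 / laion2b_s34b_b79k checkpoint; it is a sketch of the general technique, not the project's own evaluation code.

```python
import torch
import open_clip
from torchvision.datasets import CIFAR10
from torch.utils.data import DataLoader

# Assumed checkpoint: ViT-B-32 trained on LAION-2B; any open_clip-compatible
# model/pretrained pair could be substituted.
model, _, preprocess = open_clip.create_model_and_transforms(
    "ViT-B-32", pretrained="laion2b_s34b_b79k"
)
tokenizer = open_clip.get_tokenizer("ViT-B-32")
model.eval()

# CIFAR-10 test split, preprocessed with the model's own image transform.
dataset = CIFAR10(root="./data", train=False, download=True, transform=preprocess)
loader = DataLoader(dataset, batch_size=64)

# Build one text embedding per class from a simple prompt template.
prompts = [f"a photo of a {name}" for name in dataset.classes]
with torch.no_grad():
    text_features = model.encode_text(tokenizer(prompts))
    text_features /= text_features.norm(dim=-1, keepdim=True)

    correct, total = 0, 0
    for images, labels in loader:
        image_features = model.encode_image(images)
        image_features /= image_features.norm(dim=-1, keepdim=True)
        # Predict the class whose text embedding is most similar to the image.
        preds = (image_features @ text_features.T).argmax(dim=-1)
        correct += (preds == labels).sum().item()
        total += labels.numel()

print(f"Zero-shot top-1 accuracy on CIFAR-10: {correct / total:.3f}")
```

Repeating this protocol over many model/dataset pairs and collecting the scores is, in essence, what makes the benchmark results comparable across checkpoints.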