CoBSAT
ML Model Benchmarker
Provides a benchmarking framework and dataset for evaluating multimodal large language models (MLLMs) on text-to-image in-context learning tasks.
Implementation and dataset for the paper "Can MLLMs Perform Text-to-Image In-Context Learning?"
30 stars
1 watching
1 fork
Language: Jupyter Notebook
Last commit: 4 months ago

Related projects:
| Repository | Description | Stars |
| --- | --- | --- |
| | A benchmarking suite for multimodal in-context learning models | 31 |
| | Evaluates and benchmarks multimodal language models' ability to process visual, acoustic, and textual inputs simultaneously | 15 |
| | Evaluating and improving large multimodal models through in-context learning | 21 |
| | An LLM-free benchmark suite for evaluating hallucination in MLLMs across various tasks and dimensions | 98 |
| | Evaluates and compares the performance of multimodal large language models on various tasks | 56 |
| | An all-in-one web-based IDE for machine learning and data science | 3,446 |
| | A benchmark for evaluating large language models' ability to process multimodal input | 322 |
| | Automates tasks involving MarkLogic using Gradle | 72 |
| | An interactive platform for exploring and comparing various machine learning algorithms and techniques using visualizations and example code | 1,667 |
| | A benchmark for evaluating large language models in multiple languages and formats | 93 |
| | A high-performance statistical machine learning library written in Common Lisp | 261 |
| | Tutorials and resources for learning Common Lisp machine learning with CLML | 31 |
| | Provides training materials and tools for building machine learning applications | 72 |
| | An implementation of a multimodal learning approach that improves language models' ability to recognize unseen images and understand novel concepts | 91 |
| | Provides an interface to train and predict with machine learning models using LIBLINEAR | 83 |