MIC

Multimodal learner

Develops a multimodal vision-language model that enables machines to understand complex relationships between instructions and images across a variety of tasks.

MMICL: a state-of-the-art VLM with in-context learning ability, from ICL, PKU

337 stars
10 watching
15 forks

Language: Python
Last commit: about 1 year ago

Related projects:
| Repository | Description | Stars |
| --- | --- | --- |
| | Develops a large-scale dataset and benchmark for training multimodal chart understanding models using large language models | 87 |
| | A multi-modal large language model that integrates natural language and visual capabilities, with fine-tuning for various tasks | 73 |
| | A benchmarking suite for multimodal in-context learning models | 31 |
| | A multi-modal language model that integrates image, video, audio, and text data to improve language understanding and generation | 1,568 |
| | Improves performance on vision-language tasks by integrating computer vision capabilities into large language models | 314 |
| | Extends pretrained models to handle multiple modalities by aligning language and video representations | 751 |
| | An end-to-end image captioning system that uses large multi-modal models and provides tools for training, inference, and demo usage | 1,849 |
| | Evaluates the capabilities of large multimodal models using a set of diverse tasks and metrics | 274 |
| | An end-to-end trained model capable of generating natural language responses integrated with object segmentation masks for interactive visual conversations | 797 |
| | A large vision-language model using a mixture-of-experts architecture to improve performance on multi-modal learning tasks | 2,023 |
| | Evaluates and improves large multimodal models through in-context learning | 21 |
| | Develops a multimodal task and dataset to assess vision-language models' ability to handle interleaved image-text inputs | 33 |
| | Evaluates and benchmarks multimodal language models' ability to process visual, acoustic, and textual inputs simultaneously | 15 |
| | Trains and deploys large language models on computer vision tasks using region-of-interest inputs | 517 |
| | Trains a multimodal chatbot that combines visual and language instructions to generate responses | 1,478 |