Monkey

Image captioner

An end-to-end image captioning system built on large multi-modal models, providing tools for training, inference, and demo usage.

[CVPR 2024 Highlight] Monkey (LMM): Image Resolution and Text Label Are Important Things for Large Multi-modal Models

GitHub

2k stars
22 watching
132 forks
Language: Python
last commit: about 2 months ago

Related projects:

| Repository | Description | Stars |
| --- | --- | --- |
| kohjingyu/fromage | A framework for grounding language models to images and handling multimodal inputs and outputs | 478 |
| lyuchenyang/macaw-llm | A multi-modal language model that integrates image, video, audio, and text data to improve language understanding and generation | 1,568 |
| yuliang-liu/multimodalocr | An evaluation benchmark for OCR capabilities in large multimodal models | 484 |
| pleisto/yuren-baichuan-7b | A multi-modal large language model that integrates natural language and visual capabilities, with fine-tuning for various tasks | 73 |
| openbmb/viscpm | A family of large multimodal models supporting multimodal conversation and text-to-image generation in multiple languages | 1,098 |
| yfzhang114/slime | Develops large multimodal models for high-resolution understanding and analysis of text, images, and other data types | 143 |
| zhourax/vega | Develops a multimodal task and dataset to assess vision-language models' ability to handle interleaved image-text inputs | 33 |
| bytedance/lynx-llm | A framework for training GPT4-style language models with multimodal inputs using large datasets and pre-trained models | 231 |
| yuweihao/mm-vet | Evaluates the capabilities of large multimodal models using a set of diverse tasks and metrics | 274 |
| runpeidong/dreamllm | A framework for building versatile multimodal large language models with synergistic comprehension and creation capabilities | 402 |
| phellonchen/x-llm | A framework that enables large language models to process and understand multimodal inputs from various sources, such as images and speech | 308 |
| ailab-cvc/seed | An implementation of a multimodal language model with capabilities for comprehension and generation | 585 |
| multimodal-art-projection/omnibench | Evaluates and benchmarks multimodal language models' ability to process visual, acoustic, and textual inputs simultaneously | 15 |
| mbzuai-oryx/groundinglmm | An end-to-end trained model that generates natural language responses integrated with object segmentation masks for interactive visual conversations | 797 |
| yuxie11/r2d2 | A framework for large-scale cross-modal benchmarks and vision-language tasks in Chinese | 157 |