DreamLLM
Multimodal Model Builder
A framework to build versatile Multimodal Large Language Models with synergistic comprehension and creation capabilities
[ICLR 2024 Spotlight] DreamLLM: Synergistic Multimodal Comprehension and Creation
394 stars
16 watching
6 forks
Language: Python
last commit: 7 months ago

Related projects:
Repository | Description | Stars |
---|---|---|
phellonchen/x-llm | A framework that enables large language models to process and understand multimodal inputs from various sources such as images and speech. | 306 |
yuliang-liu/monkey | A toolkit for building conversational AI models that can process image and text inputs. | 1,825 |
bytedance/lynx-llm | A framework for training GPT-4-style language models with multimodal inputs, using large datasets and pre-trained models. | 229 |
ailab-cvc/seed | An implementation of a multimodal language model with capabilities for both comprehension and generation. | 576 |
pleisto/yuren-baichuan-7b | A multimodal large language model that integrates natural language and visual capabilities, with fine-tuning for various tasks. | 72 |
openbmb/viscpm | A family of large multimodal models supporting multimodal conversational capabilities and text-to-image generation in multiple languages. | 1,089 |
nvlabs/eagle | High-resolution multimodal LLMs built by combining multiple vision encoders and supporting various input resolutions. | 539 |
elanmart/psmm | An implementation of a neural network model for character-level language modeling. | 50 |
csuhan/onellm | A framework for training and fine-tuning multimodal language models on various data types. | 588 |
mbzuai-oryx/groundinglmm | An end-to-end trained model capable of generating natural language responses integrated with object segmentation masks. | 781 |
vita-mllm/vita | A large multimodal language model designed to process and analyze video, image, text, and audio inputs in real time. | 961 |
damo-nlp-mt/polylm | A polyglot large language model designed to address limitations in current LLM research and provide better multilingual instruction-following capability. | 76 |
facebookresearch/spiritlm | An end-to-end language model capable of generating coherent text from both spoken and written inputs. | 777 |
multimodal-art-projection/omnibench | A benchmark that evaluates multimodal language models' ability to process visual, acoustic, and textual inputs simultaneously. | 14 |
luogen1996/lavin | An open-source implementation of a vision-language instructed large language model. | 508 |