VITA
Multimodal processor
A large multimodal language model designed to process and analyze video, image, text, and audio inputs in real time.
✨✨VITA: Towards Open-Source Interactive Omni Multimodal LLM
961 stars
40 watching
59 forks
Language: Python
last commit: 29 days ago
Topics: large-multimodal-models, multimodal-large-language-models
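The repository's own scripts define the real inference entry points; as a rough illustration of the kind of mixed-modality request such a model handles, here is a minimal, self-contained Python sketch. The `MultimodalTurn` class, the `build_prompt` helper, and the `<image>`/`<video>`/`<audio>` placeholder tokens are hypothetical illustrations, not VITA's actual API.

```python
# Illustrative sketch only: VITA's real interface lives in its repository and may
# differ. This stub shows the shape of a single mixed-modality request, where one
# user turn carries text plus optional image, video, and audio attachments.
from dataclasses import dataclass, field
from typing import List


@dataclass
class MultimodalTurn:
    """One user turn combining text with optional media attachments (hypothetical)."""
    text: str
    images: List[str] = field(default_factory=list)  # paths to image files
    videos: List[str] = field(default_factory=list)  # paths to video files
    audios: List[str] = field(default_factory=list)  # paths to audio files


def build_prompt(turn: MultimodalTurn) -> str:
    """Interleave placeholder tokens for each attachment ahead of the text,
    mirroring a common pattern in open multimodal LLMs (assumed, not VITA-specific)."""
    tokens = (
        ["<image>"] * len(turn.images)
        + ["<video>"] * len(turn.videos)
        + ["<audio>"] * len(turn.audios)
    )
    return " ".join(tokens + [turn.text])


if __name__ == "__main__":
    turn = MultimodalTurn(
        text="What is happening in this clip, and what sound do you hear?",
        videos=["clip.mp4"],
        audios=["clip.wav"],
    )
    # Prints: "<video> <audio> What is happening in this clip, and what sound do you hear?"
    print(build_prompt(turn))
```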
Related projects:
| Repository | Description | Stars |
| --- | --- | --- |
| openbmb/viscpm | A family of large multimodal models supporting multimodal conversational capabilities and text-to-image generation in multiple languages | 1,089 |
| llava-vl/llava-interactive-demo | An all-in-one demo for interactive image processing and generation | 351 |
| multimodal-art-projection/omnibench | A benchmark evaluating multimodal language models' ability to process visual, acoustic, and textual inputs simultaneously | 14 |
| lyuchenyang/macaw-llm | A multimodal language model that integrates image, video, audio, and text data to improve language understanding and generation | 1,550 |
| neulab/pangea | An open-source multilingual large language model designed to understand and generate content across diverse languages and cultural contexts | 91 |
| ailab-cvc/seed | An implementation of a multimodal language model with capabilities for comprehension and generation | 576 |
| alpha-vllm/wemix-llm | An LLaMA-based multimodal language model with various instruction-following and multimodal variants | 17 |
| mlpc-ucsd/bliva | A multimodal LLM designed to handle text-rich visual questions | 269 |
| damo-nlp-mt/polylm | A polyglot large language model designed to address limitations in current LLM research and provide better multilingual instruction-following capability | 76 |
| runpeidong/dreamllm | A framework for building versatile multimodal large language models with synergistic comprehension and creation capabilities | 394 |
| pleisto/yuren-baichuan-7b | A multimodal large language model that integrates natural language and visual capabilities, with fine-tuning for various tasks | 72 |
| yuliang-liu/monkey | A toolkit for building conversational AI models that can process image and text inputs | 1,825 |
| yfzhang114/slime | Large multimodal models for high-resolution understanding and analysis of text, images, and other data types | 137 |
| mlo-lab/muvi | A software framework for multi-view latent variable modeling with domain-informed structured sparsity | 29 |
| haozhezhao/mic | A multimodal vision-language model that enables machines to understand complex relationships between instructions and images across various tasks | 334 |