VITA
Multimodal processor
A large multimodal language model designed to process and analyze video, image, text, and audio inputs in real time.
✨✨VITA: Towards Open-Source Interactive Omni Multimodal LLM
1k stars
40 watching
58 forks
Language: Python
last commit: 4 months ago
Topics: large-multimodal-models, multimodal-large-language-models
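For orientation, the sketch below shows in plain Python how a single chat turn combining the modalities listed above (text plus image, video, and audio attachments) might be packaged before being sent to an omni-modal model. The class and field names are hypothetical placeholders chosen for illustration and are not taken from the VITA codebase.

```python
# Hypothetical illustration only: these names are NOT from the VITA repository.
# The sketch shows how one user turn mixing text, image, video, and audio inputs
# (the modalities VITA is described as handling) might be bundled into a payload.
from dataclasses import dataclass, field
from typing import List


@dataclass
class MultimodalTurn:
    """One user turn carrying text plus optional media attachments."""
    text: str
    image_paths: List[str] = field(default_factory=list)
    video_paths: List[str] = field(default_factory=list)
    audio_paths: List[str] = field(default_factory=list)


def build_request(turn: MultimodalTurn) -> dict:
    """Flatten a turn into a generic JSON-style payload for a model server."""
    return {
        "prompt": turn.text,
        "images": turn.image_paths,
        "videos": turn.video_paths,
        "audio": turn.audio_paths,
    }


if __name__ == "__main__":
    turn = MultimodalTurn(
        text="Describe what is happening in this clip.",
        video_paths=["example.mp4"],  # placeholder file names
        audio_paths=["example.wav"],
    )
    print(build_request(turn))
```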
Related projects:
| Repository | Description | Stars |
|---|---|---|
| | A family of large multimodal models supporting multimodal conversational capabilities and text-to-image generation in multiple languages | 1,098 |
| | An all-in-one demo for interactive image processing and generation | 353 |
| | Evaluates and benchmarks multimodal language models' ability to process visual, acoustic, and textual inputs simultaneously | 15 |
| | A multimodal language model that integrates image, video, audio, and text data to improve language understanding and generation | 1,568 |
| | An open-source multilingual large language model designed to understand and generate content across diverse languages and cultural contexts | 92 |
| | An implementation of a multimodal language model with capabilities for comprehension and generation | 585 |
| | A LLaMA-based multimodal language model with various instruction-following and multimodal variants | 17 |
| | A multimodal LLM designed to handle text-rich visual questions | 270 |
| | A polyglot large language model designed to address limitations in current LLM research and provide better multilingual instruction-following capability | 77 |
| | A framework to build versatile Multimodal Large Language Models with synergistic comprehension and creation capabilities | 402 |
| | A multimodal large language model that integrates natural language and visual capabilities with fine-tuning for various tasks | 73 |
| | An end-to-end image captioning system that uses large multimodal models and provides tools for training, inference, and demo usage | 1,849 |
| | Develops large multimodal models for high-resolution understanding and analysis of text, images, and other data types | 143 |
| | A software framework for multi-view latent variable modeling with domain-informed structured sparsity | 27 |
| | Develops a multimodal vision-language model to enable machines to understand complex relationships between instructions and images in various tasks | 337 |