Macaw-LLM: Multi-Modal Language Modeling with Image, Video, Audio, and Text Integration
A multi-modal language model that integrates image, video, audio, and text data to improve language understanding and generation.
2k stars
33 watching
128 forks
Language: Python
Last commit: over 1 year ago
Topics: deep-learning, language-model, machine-learning, multi-modal-learning, natural-language-processing, neural-networks
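The integration described above is typically done by projecting each modality encoder's features into the language model's token-embedding space. The sketch below illustrates that general idea in miniature; all names, dimensions, and the fusion strategy are illustrative assumptions, not Macaw-LLM's actual API or architecture.

```python
# Hypothetical sketch of modality alignment: each modality encoder yields
# feature vectors, a learned linear projection maps them into the LLM's
# token-embedding space, and the projected "soft tokens" are prepended to
# the text embeddings. Names and dimensions are illustrative only.

from typing import List

Vector = List[float]


def project(features: Vector, weights: List[Vector]) -> Vector:
    """Apply a linear projection (one weight row per output dimension)."""
    return [sum(w * x for w, x in zip(row, features)) for row in weights]


def fuse_modalities(text_embeds: List[Vector],
                    modality_feats: List[Vector],
                    weights: List[Vector]) -> List[Vector]:
    """Prepend projected modality features to the text embedding sequence."""
    soft_tokens = [project(f, weights) for f in modality_feats]
    return soft_tokens + text_embeds


# Toy usage: 3-dim image features projected into a 2-dim "token" space.
image_feats = [[1.0, 0.0, 2.0]]
proj_weights = [[0.5, 0.0, 0.5],   # output dim 0
                [0.0, 1.0, 0.0]]   # output dim 1
text_embeds = [[0.1, 0.2], [0.3, 0.4]]
sequence = fuse_modalities(text_embeds, image_feats, proj_weights)
# sequence[0] is the projected image soft token: [1.5, 0.0]
```

In practice the projection weights are trained so the LLM treats the soft tokens like ordinary word embeddings; the toy example only shows the data flow.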
Related projects:
| Repository | Description | Stars |
|---|---|---|
| | A multi-modal large language model that integrates natural language and visual capabilities, with fine-tuning for various tasks | 73 |
| | A framework that enables large language models to process and understand multimodal inputs from various sources such as images and speech | 308 |
| | An open-source multilingual large language model designed to understand and generate content across diverse languages and cultural contexts | 92 |
| | A framework for training GPT4-style language models with multimodal inputs using large datasets and pre-trained models | 231 |
| | A polyglot large language model designed to address limitations in current LLM research and provide better multilingual instruction-following capability | 77 |
| | An LLaMA-based multimodal language model with various instruction-following and multimodal variants | 17 |
| | An end-to-end image captioning system that uses large multi-modal models and provides tools for training, inference, and demo usage | 1,849 |
| | An open bilingual LLM developed using the LingoWhale model, trained on a large dataset of high-quality bilingual text and fine-tuned for specific tasks such as conversation generation | 134 |
| | Evaluates and benchmarks multimodal language models' ability to process visual, acoustic, and textual inputs simultaneously | 15 |
| | An implementation of a multimodal language model with capabilities for comprehension and generation | 585 |
| | An end-to-end trained model that generates natural language responses integrated with object segmentation masks for interactive visual conversations | 797 |
| | An open-source implementation of a vision-language instructed large language model | 513 |
| | A framework for training and fine-tuning multimodal language models on various data types | 601 |
| | An approach that extends pretrained models to handle multiple modalities by aligning language and video representations | 751 |
| | An API that provides a unified interface to multiple large language models for chat fine-tuning | 79 |