Macaw-LLM

Multimodal LLM

A multi-modal language model that integrates image, video, audio, and text data to improve language understanding and generation

Macaw-LLM: Multi-Modal Language Modeling with Image, Video, Audio, and Text Integration
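The core idea named in the title, aligning image, video, and audio features with text, is commonly done by projecting each modality encoder's output into the LLM's token-embedding space and prepending the result to the text tokens. The sketch below illustrates that general pattern only; the function names, dimensions, and projection setup are illustrative assumptions, not Macaw-LLM's actual API.

```python
# Hypothetical sketch of multimodal alignment: each modality's feature
# vector is linearly projected into a shared embedding dimension and
# prepended to the text token embeddings. All names and sizes are
# illustrative, not Macaw-LLM's actual implementation.
import random

random.seed(0)

def linear_project(features, weight_rows):
    """Plain matrix-vector product: map a feature vector to embed_dim."""
    return [sum(f * w for f, w in zip(features, row)) for row in weight_rows]

def fuse(modality_features, text_embeddings, projections):
    """Prepend one projected token per modality to the text embeddings."""
    prefix = [linear_project(feat, projections[name])
              for name, feat in modality_features.items()]
    return prefix + text_embeddings

def rand_matrix(feat_dim, out_dim):
    """Random projection weights: out_dim rows of feat_dim each."""
    return [[random.uniform(-1, 1) for _ in range(feat_dim)]
            for _ in range(out_dim)]

embed_dim = 4
projections = {
    "image": rand_matrix(6, embed_dim),   # image encoder emits 6-dim features
    "audio": rand_matrix(3, embed_dim),   # audio encoder emits 3-dim features
}
features = {"image": [0.1] * 6, "audio": [0.5] * 3}
text = [[0.0] * embed_dim for _ in range(5)]  # 5 text token embeddings

sequence = fuse(features, text, projections)
print(len(sequence))     # 2 modality tokens + 5 text tokens = 7
print(len(sequence[0]))  # every token lives in the shared embed_dim = 4
```

In practice the projection layers are trained so the LLM treats the projected modality tokens like ordinary context tokens; this sketch only shows the data flow.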

GitHub

2k stars
33 watching
127 forks
Language: Python
last commit: 5 months ago
Topics: deep-learning, language-model, machine-learning, multi-modal-learning, natural-language-processing, neural-networks

Related projects:

pleisto/yuren-baichuan-7b (72 stars): A multi-modal large language model that integrates natural language and visual capabilities, with fine-tuning for various tasks
phellonchen/x-llm (306 stars): A framework that enables large language models to process and understand multimodal inputs from sources such as images and speech
neulab/pangea (91 stars): An open-source multilingual large language model designed to understand and generate content across diverse languages and cultural contexts
bytedance/lynx-llm (229 stars): A framework for training GPT4-style language models with multimodal inputs using large datasets and pre-trained models
damo-nlp-mt/polylm (76 stars): A polyglot large language model designed to address limitations in current LLM research and provide better multilingual instruction-following capability
alpha-vllm/wemix-llm (17 stars): An LLaMA-based multimodal language model with various instruction-following and multimodal variants
yuliang-liu/monkey (1,825 stars): A toolkit for building conversational AI models that can process image and text inputs
deeplangai/lingowhale-8b (134 stars): An open bilingual LLM trained on a large dataset of high-quality Chinese and English text and fine-tuned for tasks such as conversation generation
multimodal-art-projection/omnibench (14 stars): A benchmark that evaluates multimodal language models' ability to process visual, acoustic, and textual inputs simultaneously
ailab-cvc/seed (576 stars): An implementation of a multimodal language model with capabilities for both comprehension and generation
mbzuai-oryx/groundinglmm (781 stars): An end-to-end trained model that generates natural language responses integrated with object segmentation masks
luogen1996/lavin (508 stars): An open-source implementation of a vision-language instructed large language model
csuhan/onellm (588 stars): A framework for training and fine-tuning multimodal language models on various data types
pku-yuangroup/languagebind (723 stars): Extends pretrained models to multiple modalities by aligning language and video representations
victordibia/llmx (79 stars): An API that provides a unified interface to multiple chat fine-tuned large language models