yuren-baichuan-7b

Multimodal LLM

A multimodal large language model that combines natural-language and visual capabilities and can be fine-tuned for a variety of tasks

An open-source multimodal large language model based on baichuan-7b

GitHub

73 stars
5 watching
8 forks
Language: Python
last commit: about 1 year ago
Topics: baichuan-7b, deep-learning, llm, multimodal, transformers

Related projects:

| Repository | Description | Stars |
| --- | --- | --- |
| lyuchenyang/macaw-llm | A multi-modal language model that integrates image, video, audio, and text data to improve language understanding and generation | 1,568 |
| damo-nlp-mt/polylm | A polyglot large language model designed to address limitations in current LLM research and provide better multilingual instruction-following capability | 77 |
| phellonchen/x-llm | A framework that enables large language models to process and understand multimodal inputs from sources such as images and speech | 308 |
| openbmb/viscpm | A family of large multimodal models supporting multimodal conversation and text-to-image generation in multiple languages | 1,098 |
| xverse-ai/xverse-7b | A multilingual large language model developed by XVERSE Technology Inc. | 50 |
| deeplangai/lingowhale-8b | An open bilingual LLM trained on a large volume of high-quality text and fine-tuned for tasks such as conversation generation | 134 |
| neulab/pangea | An open-source multilingual large language model designed to understand and generate content across diverse languages and cultural contexts | 92 |
| yuliang-liu/monkey | An end-to-end image captioning system built on large multi-modal models, with tools for training, inference, and demos | 1,849 |
| multimodal-art-projection/omnibench | A benchmark that evaluates multimodal language models' ability to process visual, acoustic, and textual inputs simultaneously | 15 |
| mbzuai-oryx/groundinglmm | An end-to-end trained model that generates natural-language responses integrated with object segmentation masks for interactive visual conversations | 797 |
| ailab-cvc/seed | An implementation of a multimodal language model with both comprehension and generation capabilities | 585 |
| bytedance/lynx-llm | A framework for training GPT4-style language models with multimodal inputs using large datasets and pre-trained models | 231 |
| damo-nlp-sg/m3exam | A benchmark for evaluating large language models across multiple languages and formats | 93 |
| luogen1996/lavin | An open-source implementation of a vision-language instructed large language model | 513 |
| alpha-vllm/wemix-llm | A LLaMA-based multimodal language model with various instruction-following and multimodal variants | 17 |