yuren-baichuan-7b
Multimodal LLM
A multimodal large language model that combines natural-language and visual capabilities and can be fine-tuned for various tasks
An open-source multimodal large language model based on baichuan-7b
72 stars
5 watching
8 forks
Language: Python
Last commit: 12 months ago
Topics: baichuan-7b, deep-learning, llm, multimodal, transformers
Related projects:
Repository | Description | Stars |
---|---|---|
lyuchenyang/macaw-llm | A multi-modal language model that integrates image, video, audio, and text data to improve language understanding and generation | 1,550 |
damo-nlp-mt/polylm | A polyglot large language model designed to address limitations in current LLM research and provide better multilingual instruction-following capability. | 76 |
phellonchen/x-llm | A framework that enables large language models to process and understand multimodal inputs from various sources such as images and speech. | 306 |
openbmb/viscpm | A family of large multimodal models supporting multimodal conversational capabilities and text-to-image generation in multiple languages | 1,089 |
xverse-ai/xverse-7b | A multilingual large language model developed by XVERSE Technology Inc. | 50 |
deeplangai/lingowhale-8b | An open bilingual (Chinese-English) LLM trained on a large dataset of high-quality text and fine-tuned for specific tasks such as conversation generation | 134 |
neulab/pangea | An open-source multilingual large language model designed to understand and generate content across diverse languages and cultural contexts | 91 |
yuliang-liu/monkey | A toolkit for building conversational AI models that can process images and text inputs. | 1,825 |
multimodal-art-projection/omnibench | Evaluates and benchmarks multimodal language models' ability to process visual, acoustic, and textual inputs simultaneously. | 14 |
mbzuai-oryx/groundinglmm | An end-to-end trained model capable of generating natural language responses integrated with object segmentation masks. | 781 |
ailab-cvc/seed | An implementation of a multimodal language model with capabilities for comprehension and generation | 576 |
bytedance/lynx-llm | A framework for training GPT4-style language models with multimodal inputs using large datasets and pre-trained models | 229 |
damo-nlp-sg/m3exam | A benchmark for evaluating large language models in multiple languages and formats | 92 |
luogen1996/lavin | An open-source implementation of a vision-language instructed large language model | 508 |
alpha-vllm/wemix-llm | An LLaMA-based multimodal language model with various instruction-following and multimodal variants. | 17 |