polite-flamingo

Model trainer

Develops training methods to improve the politeness and natural flow of multi-modal large language models

🦩 Visual Instruction Tuning with Polite Flamingo - training multi-modal LLMs to be both clever and polite! (AAAI-24 Oral)

GitHub

63 stars · 5 watching · 3 forks
Language: Python
Last commit: 12 months ago
Topics: large-language-models, multimodal-large-language-models, visual-instruction-tuning

Related projects:

| Repository | Description | Stars |
| --- | --- | --- |
| openai/finetune-transformer-lm | Code and model for improving language understanding through generative pre-training with a transformer-based architecture | 2,160 |
| vpgtrans/vpgtrans | Transfers visual prompt generators across large language models to reduce training costs and enable customization of multimodal LLMs | 269 |
| flagai-open/aquila2 | Pre-trained language models and tools for fine-tuning and evaluation | 437 |
| csuhan/onellm | A framework for training and fine-tuning multimodal language models on various data types | 588 |
| kendryte/toucan-llm | A 70-billion-parameter large language model designed for chatbot and conversational AI tasks | 29 |
| llava-vl/llava-plus-codebase | A platform for training and deploying large language and vision models that can use tools to perform tasks | 704 |
| brightmart/xlnet_zh | Trains a large Chinese language model on massive data and provides a pre-trained model for downstream tasks | 230 |
| clue-ai/chatyuan | A large language model supporting dialogue in multiple languages | 1,902 |
| openbmb/cpm-live | A live training platform for large-scale deep learning models, enabling community participation in model development and deployment | 511 |
| peremartra/large-language-model-notebooks-course | A practical course on large language models and their applications, with hands-on projects using the OpenAI API and the Hugging Face library | 1,281 |
| huggingface/nanotron | A library for training large language models with parallel computing and mixed-precision training | 1,244 |
| microsoft/mpnet | A pre-training method for language understanding models that combines masked and permuted language modeling, with code for implementation and fine-tuning | 288 |
| open-mmlab/multimodal-gpt | Trains a multimodal chatbot that combines visual and language instructions to generate responses | 1,477 |
| bobazooba/xllm | A tool for training and fine-tuning large language models using advanced techniques | 380 |
| thu-coai/eva | Pre-trained chatbot models for Chinese open-domain dialogue systems | 305 |