OneLLM
Language model trainer
A framework for training and fine-tuning multimodal language models that align a wide range of input modalities with language
[CVPR 2024] OneLLM: One Framework to Align All Modalities with Language
588 stars
11 watching
32 forks
Language: Python
Last commit: about 1 month ago
Linked from 1 awesome list
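As a rough illustration of the "align all modalities with language" idea named in the paper title, the sketch below shows the recipe commonly used by such frameworks: features from a modality encoder are mapped into the language model's embedding space by a small learned projector and prepended to the text tokens. This is a minimal, hypothetical sketch; the `ModalityProjector` class, its layer sizes, and the tensor shapes are illustrative assumptions, not OneLLM's actual API.

```python
# Hypothetical sketch of modality-to-language alignment: project encoder
# features into the LLM's embedding space and prepend them to text tokens.
# Shapes and names are illustrative, not taken from the OneLLM codebase.
import torch
import torch.nn as nn


class ModalityProjector(nn.Module):
    """Maps encoder features (e.g. from a vision tower) to the LLM embedding size."""

    def __init__(self, feature_dim: int, llm_dim: int):
        super().__init__()
        self.proj = nn.Sequential(
            nn.Linear(feature_dim, llm_dim),
            nn.GELU(),
            nn.Linear(llm_dim, llm_dim),
        )

    def forward(self, features: torch.Tensor) -> torch.Tensor:
        # features: (batch, num_patches, feature_dim) -> (batch, num_patches, llm_dim)
        return self.proj(features)


if __name__ == "__main__":
    batch, patches, feat_dim, llm_dim = 2, 16, 1024, 4096
    projector = ModalityProjector(feat_dim, llm_dim)

    image_features = torch.randn(batch, patches, feat_dim)  # stand-in encoder output
    text_embeddings = torch.randn(batch, 32, llm_dim)       # stand-in token embeddings

    # Prepend projected modality tokens to the text sequence before the LLM.
    multimodal_input = torch.cat([projector(image_features), text_embeddings], dim=1)
    print(multimodal_input.shape)  # torch.Size([2, 48, 4096])
```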
Related projects:
Repository | Description | Stars |
---|---|---|
microsoft/mpnet | Develops a pre-training method for language understanding that combines masked and permuted language modeling, with code for pre-training and fine-tuning. | 288 |
yunwentechnology/unilm | Provides pre-trained models for natural language understanding and generation tasks based on the UniLM architecture. | 438 |
bobazooba/xllm | A library for training and fine-tuning large language models with modern efficiency techniques. | 380 |
elanmart/psmm | An implementation of a neural network model for character-level language modeling. | 50 |
openai/finetune-transformer-lm | Code and a pre-trained model for improving language understanding through generative pre-training with a Transformer architecture. | 2,160 |
vhellendoorn/code-lms | A guide to using pre-trained large language models for source code analysis and generation. | 1,782 |
bilibili/index-1.9b | A lightweight, multilingual language model with a long context length. | 904 |
llava-vl/llava-plus-codebase | A platform for training and deploying large language and vision models that can invoke external tools to perform tasks. | 704 |
bytedance/lynx-llm | A framework for training GPT-4-style language models with multimodal inputs using large datasets and pre-trained models. | 229 |
openai/lm-human-preferences | Training methods and tools for fine-tuning language models from human preferences. | 1,229 |
brightmart/xlnet_zh | Trains a large Chinese language model on large-scale corpora and provides a pre-trained model for downstream tasks. | 230 |
yiren-jian/blitext | Develops and trains models for vision-language learning with decoupled language pre-training. | 24 |
pleisto/yuren-baichuan-7b | A multimodal large language model that integrates natural-language and visual capabilities, with fine-tuning for various tasks. | 72 |
lyuchenyang/macaw-llm | A multimodal language model that integrates image, video, audio, and text data to improve language understanding and generation. | 1,550 |
luogen1996/lavin | An open-source implementation of a vision-language instruction-tuned large language model. | 508 |