Chinese-Vicuna
LLaMA model trainer
An instruction-following Chinese LLaMA-based model project aimed at training and fine-tuning models on low-resource, consumer-grade hardware for efficient deployment.
Chinese-Vicuna: A Chinese Instruction-following LLaMA-based Model: a low-resource Chinese LLaMA + LoRA solution, with a structure modeled on Alpaca
4k stars
58 watching
419 forks
Language: C
last commit: 3 months ago
Linked from 1 awesome list
Tags: alpaca, chinese, llama, vicuna
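The project's core technique is LoRA (low-rank adaptation): instead of updating a full weight matrix during fine-tuning, only two small low-rank matrices are trained, which is what makes fine-tuning feasible on low-resource hardware. A minimal sketch of that idea, using pure-Python matrices (all names here are illustrative, not the project's actual API):

```python
# LoRA sketch: for a frozen weight W (d x k), train small matrices
# A (r x k) and B (d x r) and merge them as W' = W + (alpha / r) * B @ A.
# With r << d, k the trainable parameter count drops from d*k to r*(d + k).

def matmul(X, Y):
    """Naive matrix product of X (m x n) and Y (n x p)."""
    return [[sum(X[i][t] * Y[t][j] for t in range(len(Y)))
             for j in range(len(Y[0]))] for i in range(len(X))]

def lora_merge(W, A, B, alpha):
    """Merge a low-rank update into W: W + (alpha / r) * B @ A."""
    r = len(A)                      # LoRA rank
    scale = alpha / r
    BA = matmul(B, A)               # d x k, same shape as W
    return [[W[i][j] + scale * BA[i][j] for j in range(len(W[0]))]
            for i in range(len(W))]

# Toy example: 2x2 base weight with a rank-1 adapter. Here d*k = 4 and
# r*(d + k) = 4, but the saving grows dramatically for LLaMA-sized layers.
W = [[1.0, 0.0], [0.0, 1.0]]
A = [[1.0, 2.0]]                    # r x k = 1 x 2
B = [[0.5], [0.25]]                 # d x r = 2 x 1
merged = lora_merge(W, A, B, alpha=1.0)
# merged == [[1.5, 1.0], [0.25, 1.5]]
```

In practice Chinese-Vicuna applies this through the Hugging Face PEFT library rather than hand-rolled matrix code; the sketch only shows why the approach is cheap enough for consumer GPUs.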
Related projects:
| Repository | Description | Stars |
| --- | --- | --- |
| Develops and maintains a Chinese language model finetuned on LLaMA, used for text generation and summarization tasks. | 711 |
| A system that uses large language and vision models to generate and process visual instructions | 20,683 |
| Tuning a large language model on consumer hardware using low-rank adaptation | 18,710 |
| A tool for efficiently fine-tuning large language models across multiple architectures and methods. | 36,219 |
| Develops a multimodal Chinese language model with visual capabilities | 429 |
| An audio-visual language model designed to understand and respond to video content with improved instruction-following capabilities | 2,842 |
| An implementation of a method for fine-tuning language models to follow instructions with high efficiency and accuracy | 5,775 |
| A deep learning project providing an open-source implementation of the LLaMA2 model with Chinese and English text data | 2,235 |
| A Chinese language large language model built from OpenLLaMA and fine-tuned on various datasets for multilingual text generation. | 65 |
| Provides a unified interface for fine-tuning large language models with parameter-efficient methods and instruction collection data | 2,640 |
| A Chinese finance-focused large language model fine-tuning framework | 596 |
| Sharing technical knowledge and practical experience on large language models | 11,871 |
| An optimized version of the Stable Vicuna language model with Chinese support, developed to advance China's domestic AI and GPT capabilities. | 65 |
| This repository provides large language models and chat capabilities based on pre-trained Chinese models. | 14,797 |
| An incremental pre-trained Chinese large language model based on the LLaMA-7B model | 234 |