Chinese-LLaMA-Alpaca
Chinese LLaMA & Alpaca models
Develops and deploys Chinese LLaMA and Alpaca large language models for Chinese natural language processing, with support for local CPU/GPU training and deployment.
Chinese LLaMA & Alpaca large language models + local CPU/GPU training and deployment (Chinese LLaMA & Alpaca LLMs)
19k stars
184 watching
2k forks
Language: Python
last commit: 9 months ago
Topics: alpaca, alpaca-2, large-language-models, llama, llama-2, llm, lora, nlp, plm, pre-trained-language-models, quantization
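The project's headline feature is running these models locally on CPU or GPU. Below is a minimal inference sketch using the Hugging Face transformers API; the ./chinese-alpaca-merged path is a hypothetical placeholder for weights that have already been merged and converted to Hugging Face format (the repository provides its own merging and deployment scripts, which this sketch does not reproduce).

```python
# Minimal local-inference sketch using the Hugging Face transformers API.
# MODEL_PATH is a hypothetical placeholder for Chinese-Alpaca weights that
# have already been merged into a standalone HF-format model directory.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

MODEL_PATH = "./chinese-alpaca-merged"  # hypothetical local directory

tokenizer = AutoTokenizer.from_pretrained(MODEL_PATH)
model = AutoModelForCausalLM.from_pretrained(
    MODEL_PATH,
    torch_dtype=torch.float16,  # fp16 on GPU; use float32 for CPU-only runs
    device_map="auto",          # requires accelerate; uses GPU if available, else CPU
)

prompt = "请简要介绍一下大语言模型。"  # "Briefly introduce large language models."
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
outputs = model.generate(**inputs, max_new_tokens=128, do_sample=True, top_p=0.9)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```

For pure-CPU use, the repository also documents quantized deployment through llama.cpp (see the quantization topic above), which this sketch does not cover.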
Related projects:
Repository | Description | Stars |
---|---|---|
cvi-szu/linly | A collection of pre-trained language models for Chinese text processing and dialogue generation. | 3,034 |
liguodongiot/llm-action | Sharing technical knowledge and practical experience on large language models | 11,871 |
lc1332/chinese-alpaca-lora | Develops and maintains a Chinese language model fine-tuned from LLaMA, used for text generation and summarization tasks. | 711 |
facico/chinese-vicuna | An instruction-following Chinese LLaMA-based model project aimed at training and fine-tuning models on specific hardware configurations for efficient deployment. | 4,152 |
paddlepaddle/paddlenlp | A comprehensive NLP and LLM library that provides an easy-to-use interface for a wide range of tasks, including text classification, neural search, question answering, information extraction, and more. | 12,224 |
tloen/alpaca-lora | Tuning a large language model on consumer hardware using low-rank adaptation | 18,710 |
openbmb/minicpm | A language model designed to surpass the capabilities of GPT-3.5-Turbo on various tasks such as text generation, tool calling, and long-text processing | 7,209 |
ymcui/chinese-llama-alpaca-2 | Develops and deploys Chinese LLaMA-2 and Alpaca-2 models, including 64K long-context variants, for Chinese language processing tasks | 7,117 |
phoebussi/alpaca-cot | Provides a unified interface for fine-tuning large language models with parameter-efficient methods and instruction collection data | 2,640 |
meta-llama/llama3 | Provides pre-trained and instruction-tuned Llama 3 language models and tools for loading and running inference | 27,527 |
opengvlab/llama-adapter | An implementation of a method for fine-tuning language models to follow instructions with high efficiency and accuracy | 5,775 |
antimatter15/alpaca.cpp | Runs an instruction-tuned, chat-style LLM locally by combining a foundation model with Alpaca-style fine-tuning behind a simple chat interface. | 10,249 |
hiyouga/llama-factory | A tool for efficiently fine-tuning large language models across multiple architectures and methods. | 36,219 |
lightning-ai/lit-llama | An implementation of the LLaMA language model based on the nanoGPT codebase | 6,013 |
meta-llama/llama | A collection of tools and utilities for deploying, fine-tuning, and using Meta's Llama large language models. | 56,832 |
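Several of the related projects above (tloen/alpaca-lora, phoebussi/alpaca-cot, hiyouga/llama-factory), like this repository's lora topic, center on parameter-efficient fine-tuning via low-rank adaptation. The sketch below shows the general LoRA setup with the PEFT library; the base-model path and hyperparameters are illustrative assumptions, not any listed project's actual training configuration.

```python
# Generic LoRA fine-tuning setup with the PEFT library (illustrative sketch;
# the model path and hyperparameters are assumptions, not a project's recipe).
from transformers import AutoModelForCausalLM
from peft import LoraConfig, TaskType, get_peft_model

BASE_MODEL = "./llama-base"  # hypothetical local base-model directory

model = AutoModelForCausalLM.from_pretrained(BASE_MODEL)

lora_config = LoraConfig(
    task_type=TaskType.CAUSAL_LM,
    r=8,                                  # rank of the low-rank update matrices
    lora_alpha=32,                        # scaling factor applied to the update
    lora_dropout=0.05,
    target_modules=["q_proj", "v_proj"],  # attention projections typically adapted
)

# Wrap the base model; only the small LoRA adapter weights remain trainable.
model = get_peft_model(model, lora_config)
model.print_trainable_parameters()
# From here, a standard transformers Trainer (or a project's own scripts)
# can train the adapter on instruction data at a fraction of full fine-tuning cost.
```

Because only the adapter matrices are trained, this approach fits on the consumer-grade hardware that several of the listed projects target.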