alpaca-lora
Language model tuning
Instruct-tune LLaMA on consumer hardware using low-rank adaptation (LoRA)
19k stars
154 watching
2k forks
Language: Jupyter Notebook
Last commit: 8 months ago
Linked from 2 awesome lists
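The repository applies LoRA fine-tuning to LLaMA through the Hugging Face `peft` library. Below is a minimal sketch of that setup, assuming the `transformers`, `peft`, and `bitsandbytes` packages are installed; the checkpoint name and hyperparameters are illustrative, not taken from the repo's training script.

```python
# Minimal LoRA instruct-tuning setup sketch; values are illustrative.
import torch
from transformers import LlamaForCausalLM, LlamaTokenizer
from peft import LoraConfig, get_peft_model

base = "decapoda-research/llama-7b-hf"  # illustrative base checkpoint
tokenizer = LlamaTokenizer.from_pretrained(base)
model = LlamaForCausalLM.from_pretrained(
    base,
    load_in_8bit=True,          # 8-bit weights (via bitsandbytes) keep 7B within consumer VRAM
    torch_dtype=torch.float16,
    device_map="auto",
)

# LoRA: freeze the base weights and train small low-rank matrices
# injected into the attention projections.
config = LoraConfig(
    r=8,                        # rank of the update matrices
    lora_alpha=16,              # scaling factor
    target_modules=["q_proj", "v_proj"],
    lora_dropout=0.05,
    bias="none",
    task_type="CAUSAL_LM",
)
model = get_peft_model(model, config)
model.print_trainable_parameters()  # typically well under 1% of the base model's parameters
```

From here, training proceeds with a standard `transformers.Trainer` loop over an instruction dataset; only the LoRA matrices receive gradient updates, which is what makes single-GPU consumer hardware sufficient.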
Related projects:
| Repository | Description | Stars |
| --- | --- | --- |
| | Provides a unified interface for fine-tuning large language models with parameter-efficient methods and collected instruction data. | 2,640 |
| | A locally run, instruction-tuned, chat-style LLM that combines foundation models with fine-tuning to create a chat interface. | 10,249 |
| | An instruction-following LLaMA model for research use only, fine-tuned and released under specific licenses and restrictions. | 29,663 |
| | An instruction-following Chinese LLaMA-based model, trained and fine-tuned on specific hardware configurations for efficient deployment. | 4,152 |
| | A tool for efficiently fine-tuning large language models across multiple architectures and methods. | 36,219 |
| | An implementation of a large language model built on the nanoGPT architecture. | 6,013 |
| | A method that adapts large language models by training small low-rank update matrices instead of the full weights, sharply reducing the number of trainable parameters (see the sketch after this table). | 10,959 |
| | A system that uses large language and vision models to generate and process visual instructions. | 20,683 |
| | Provides tools and examples for fine-tuning the Meta Llama model and building applications with it. | 15,578 |
| | An implementation of a method for fine-tuning language models to follow instructions with high efficiency and accuracy. | 5,775 |
| | Shares technical knowledge and practical experience on large language models. | 11,871 |
| | Enables LLM inference with minimal setup and high performance on a wide range of hardware. | 69,185 |
| | Optimizes large language model inference on limited GPU resources. | 5,446 |
| | Develops and maintains a Chinese language model fine-tuned on LLaMA, used for text generation and summarization tasks. | 711 |
| | An open-source toolkit for pretraining and fine-tuning large language models. | 2,732 |
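The low-rank adaptation method referenced in the table (and used by this repository) fits in a few lines; the sketch below follows the notation of the LoRA paper, omitting the `alpha/r` scaling factor for brevity.

```latex
% LoRA freezes the pretrained weight W_0 and learns a low-rank update:
\[
  h = W_0 x + \Delta W x = W_0 x + B A x,
  \qquad B \in \mathbb{R}^{d \times r},\;
         A \in \mathbb{R}^{r \times k},\;
         r \ll \min(d, k).
\]
% Trainable parameters per adapted matrix drop from d \cdot k
% (full fine-tuning of W_0) to r \cdot (d + k). For d = k = 4096
% and r = 8 that is 8 \cdot (4096 + 4096) = 65{,}536 parameters
% versus 4096 \cdot 4096 \approx 16.8\text{M}.
```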