stanford_alpaca
Research model
An instruction-following model fine-tuned from LLaMA, intended for academic research only; the model and its weights are released under licenses that restrict commercial use.
Code and documentation for training Stanford's Alpaca models and for generating the instruction-following training data.
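The training data pairs each instruction (optionally with an input) with a fixed prompt template before fine-tuning. A minimal sketch of Alpaca-style prompt formatting, assuming the commonly published template text and the standard `{instruction, input, output}` record layout:

```python
# Sketch of Alpaca-style prompt formatting. The template wording below is the
# commonly circulated Alpaca template; treat the exact text as an assumption.

PROMPT_WITH_INPUT = (
    "Below is an instruction that describes a task, paired with an input "
    "that provides further context. Write a response that appropriately "
    "completes the request.\n\n"
    "### Instruction:\n{instruction}\n\n### Input:\n{input}\n\n### Response:\n"
)

PROMPT_NO_INPUT = (
    "Below is an instruction that describes a task. Write a response that "
    "appropriately completes the request.\n\n"
    "### Instruction:\n{instruction}\n\n### Response:\n"
)

def format_example(example: dict) -> str:
    """Render one {instruction, input, output} record into a training string."""
    if example.get("input"):
        prompt = PROMPT_WITH_INPUT.format(**example)
    else:
        prompt = PROMPT_NO_INPUT.format(instruction=example["instruction"])
    # The target completion is appended after the "### Response:" marker.
    return prompt + example["output"]

record = {"instruction": "Name the capital of France.", "input": "", "output": "Paris."}
text = format_example(record)
```

Records with an empty `input` field use the shorter template, so the `### Input:` section never appears with empty content.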
30k stars
343 watching
4k forks
Language: Python
Last commit: over 1 year ago
Linked from 2 awesome lists
Topics: deep-learning, instruction-following, language-model
Related projects:
| Repository | Description | Stars |
|---|---|---|
| | Tuning a large language model on consumer hardware using low-rank adaptation. | 18,710 |
| | Provides a unified interface for fine-tuning large language models with parameter-efficient methods and instruction-collection data. | 2,640 |
| | Recreated weights for the Stanford Alpaca model, fine-tuned for a specific task. | 406 |
| | Provides inference code and tools for fine-tuning large language models, designed specifically for code-generation tasks. | 16,097 |
| | A locally run, instruction-tuned chat-style LLM that combines foundation models and fine-tuning to provide a chat interface. | 10,249 |
| | An instruction-following Chinese LLaMA-based model, trained and fine-tuned on specific hardware configurations for efficient deployment. | 4,152 |
| | Develops and maintains a Chinese language model fine-tuned on LLaMA, used for text generation and summarization. | 711 |
| | An automatic evaluation tool for large language models. | 1,568 |
| | A collection of tools and utilities for deploying, fine-tuning, and using large language models. | 56,832 |
| | A cleaned and curated version of an Alpaca dataset used to train a large language model. | 1,525 |
| | A Python client for Alpaca's trade API. | 1,745 |
| | A research project developing a Traditional-Chinese instruction-following language model based on Alpaca. | 134 |
| | ONNX models and tools for running inference with the LLaMA transformer model on various devices. | 356 |
| | A dataset for training and fine-tuning large language models on Chinese text prompts. | 392 |
| | Generates instruction-following data with GPT-4 to fine-tune large language models for real-world tasks. | 4,244 |
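Several rows above concern Alpaca-format datasets, including a cleaned and curated variant; cleaning such a dataset typically means dropping degenerate records. A minimal sketch of such a filter, assuming the standard `{instruction, input, output}` JSON layout (the specific heuristics are illustrative assumptions, not the rules of any particular cleaned dataset):

```python
# Illustrative filter for Alpaca-format records ({"instruction", "input", "output"}).
# The heuristics are assumptions chosen for demonstration.

def is_clean(record: dict) -> bool:
    instruction = record.get("instruction", "").strip()
    output = record.get("output", "").strip()
    if not instruction or not output:
        return False  # drop records with an empty prompt or answer
    if "<noinput>" in record.get("input", ""):
        return False  # drop leftover generation placeholders
    return True

def clean(records: list[dict]) -> list[dict]:
    """Keep only records that pass the heuristics above."""
    return [r for r in records if is_clean(r)]

data = [
    {"instruction": "Add 2 and 3.", "input": "", "output": "5"},
    {"instruction": "", "input": "", "output": "orphan answer"},
]
kept = clean(data)  # only the first record survives
```

In practice, cleaned datasets also deduplicate near-identical instructions and fix factual or formatting errors in outputs, which requires more than per-record checks.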