peft
Parameter adaptation
An efficient method for fine-tuning large pre-trained models by adapting only a small fraction of their parameters
🤗 PEFT: State-of-the-art Parameter-Efficient Fine-Tuning.
17k stars
111 watching
2k forks
Language: Python
last commit: about 2 months ago
Linked from 2 awesome lists
Topics: adapter, diffusion, llm, lora, parameter-efficient-learning, python, pytorch, transformers
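The core idea behind PEFT methods such as LoRA is to freeze the pretrained weight matrix and train only a low-rank update, so the trainable parameter count falls from `d*k` to `r*(d+k)`. A minimal NumPy sketch of that idea (the dimensions and initialization scheme here are illustrative, not taken from the library):

```python
import numpy as np

# Low-rank adaptation (LoRA) sketch: the pretrained weight W stays frozen,
# and only the low-rank factors B and A are trained. The effective weight
# becomes W + B @ A.
d, k, r = 768, 768, 8  # illustrative layer size and rank

rng = np.random.default_rng(0)
W = rng.standard_normal((d, k))         # frozen pretrained weight
A = rng.standard_normal((r, k)) * 0.01  # trainable down-projection
B = np.zeros((d, r))                    # trainable up-projection, zero-init
                                        # so the update starts as a no-op

def adapted_forward(x):
    # Only A and B would receive gradients during fine-tuning.
    return x @ (W + B @ A).T

full = W.size          # 768 * 768 = 589,824 parameters
lora = A.size + B.size # 8 * 768 + 768 * 8 = 12,288 parameters
print(f"trainable fraction: {lora / full:.2%}")  # roughly 2% of full fine-tuning
```

In the library itself this wrapping is done for you: you pass a pretrained model and a LoRA configuration to PEFT, and it injects the low-rank adapters into the targeted modules while freezing the base weights.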
Related projects:
| Repository | Description | Stars |
|---|---|---|
| huggingface/trl | A library for training transformer language models with reinforcement learning, supporting a range of optimization techniques and fine-tuning methods. | 10,308 |
| huggingface/transformers | A collection of pre-trained models for natural language and computer vision tasks, enabling developers to fine-tune and deploy them in their own projects. | 136,357 |
| modelscope/ms-swift | A framework for efficient fine-tuning and deployment of large language models. | 4,659 |
| huggingface/diffusion-models-class | A course covering the theory and hands-on implementation of diffusion models for image and audio generation using PyTorch. | 3,722 |
| huggingface/optimum | A toolkit for optimizing and accelerating the training and inference of machine learning models on various hardware platforms. | 2,618 |
| optimalscale/lmflow | A toolkit for fine-tuning and inference of large machine learning models. | 8,312 |
| huggingface/diffusers | A PyTorch-based library for training and using state-of-the-art diffusion models to generate images, audio, and 3D structures. | 26,676 |
| microsoft/lora | A method for adapting large language models by training low-rank update matrices while freezing the original weights, greatly reducing the number of trainable parameters. | 10,959 |
| huggingface/lerobot | A platform providing pre-trained models, datasets, and tools for robotics, with a focus on imitation learning and reinforcement learning. | 7,874 |
| facebookresearch/metaseq | A codebase for working with Open Pre-trained Transformers, enabling deployment and fine-tuning of transformer models on various platforms. | 6,519 |
| huggingface/alignment-handbook | Provides recipes and guidelines for training language models to align with human preferences. | 4,800 |
| huggingface/autotrain-advanced | A no-code solution for training state-of-the-art machine learning models quickly and easily. | 4,151 |
| eleutherai/gpt-neox | A framework for training large-scale language models on GPUs, with advanced features and optimizations. | 6,997 |
| huggingface/accelerate | A tool that simplifies training and deployment of PyTorch models across devices and configurations. | 8,056 |
| tloen/alpaca-lora | Tuning a large language model on consumer hardware using low-rank adaptation. | 18,710 |