LoRA

Parameter-efficient fine-tuning

A method to adapt large language models by freezing the pretrained weights and training small low-rank adaptation matrices, drastically reducing the number of trainable parameters

Code for loralib, an implementation of "LoRA: Low-Rank Adaptation of Large Language Models"
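The core idea can be sketched in a few lines: the frozen pretrained weight W is augmented with a trainable low-rank product B·A, scaled by alpha/r. This is a minimal NumPy illustration of the math, not loralib's actual API (loralib wraps PyTorch layers such as `lora.Linear`); the names and shapes here are illustrative assumptions.

```python
import numpy as np

# LoRA: keep the pretrained weight W frozen and learn a low-rank update
# B @ A. Only r * (d_in + d_out) parameters are trainable, instead of
# the full d_in * d_out (here: 384 vs 2048).
def lora_forward(x, W, A, B, alpha=16, r=4):
    """x @ W.T plus the scaled low-rank correction (x @ A.T) @ B.T."""
    scaling = alpha / r
    return x @ W.T + (x @ A.T) @ B.T * scaling

rng = np.random.default_rng(0)
d_in, d_out, r = 64, 32, 4
W = rng.standard_normal((d_out, d_in))      # frozen pretrained weight
A = rng.standard_normal((r, d_in)) * 0.01   # trainable, small random init
B = np.zeros((d_out, r))                    # trainable, zero init

x = rng.standard_normal((1, d_in))
# With B initialized to zero, the adapted model reproduces the frozen
# model's output exactly, so training starts from the pretrained behavior.
assert np.allclose(lora_forward(x, W, A, B), x @ W.T)
```

Zero-initializing B is what makes the adapter a no-op at the start of training; after training, the update B·A can be merged into W so inference adds no extra latency.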

GitHub

11k stars
70 watching
686 forks
Language: Python
last commit: 3 months ago
Linked from 1 awesome list

Topics: adaptation, deberta, deep-learning, gpt-2, gpt-3, language-model, lora, low-rank, pytorch, roberta

Related projects:

| Repository | Description | Stars |
|---|---|---|
| tloen/alpaca-lora | Tuning a large language model on consumer hardware using low-rank adaptation | 18,651 |
| phoebussi/alpaca-cot | Provides a unified interface for fine-tuning large language models with parameter-efficient methods and instruction collection data | 2,619 |
| huggingface/peft | An efficient method for fine-tuning large pre-trained models by adapting only a small fraction of their parameters | 16,437 |
| huggingface/lerobot | A platform providing pre-trained models, datasets, and tools for robotics, with a focus on imitation learning and reinforcement learning | 7,518 |
| git-cloner/llama2-lora-fine-tuning | Fine-tuning the LLaMA 2 chat model using DeepSpeed and LoRA for improved performance on a large dataset | 167 |
| adapter-hub/adapters | A unified library for parameter-efficient and modular transfer learning in NLP tasks | 2,577 |
| lich99/chatglm-finetune-lora | A codebase for fine-tuning the ChatGLM-6b language model using low-rank adaptation (LoRA), with finetuned weights provided | 724 |
| meta-llama/codellama | Provides inference code and tools for fine-tuning large language models, specifically designed for code generation tasks | 16,039 |
| microsoft/flaml | Automates machine learning workflows and optimizes model performance using large language models and efficient algorithms | 3,919 |
| optimalscale/lmflow | A toolkit for finetuning large language models and providing efficient inference capabilities | 8,273 |
| ermlab/politbert | Trains a language model using a RoBERTa architecture on high-quality Polish text data | 33 |
| rasbt/llms-from-scratch | Developing and pretraining a GPT-like large language model from scratch | 32,908 |
| wybiral/micropython-lora | A MicroPython library for controlling Semtech SX127x LoRa radio modules over SPI | 36 |
| peremartra/large-language-model-notebooks-course | A practical course teaching large language models and their applications through hands-on projects using the OpenAI API and the Hugging Face library | 1,281 |
| rdspring1/pytorch_gbw_lm | Trains a large-scale PyTorch language model on the 1-Billion Word dataset | 123 |