LLaMA-Adapter
Instruction-following model tuner
An implementation of a parameter-efficient method for fine-tuning LLaMA-family language models to follow instructions, using a small set of adapter parameters instead of updating the full model
[ICLR 2024] Fine-tuning LLaMA to follow Instructions within 1 Hour and 1.2M Parameters
6k stars
79 watching
375 forks
Language: Python
last commit: 11 months ago
Linked from 1 awesome list
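The headline claim on the paper title line above (fine-tuning with only 1.2M parameters) rests on the adapter idea: small learnable prompt tokens are attended to through a zero-initialized gate, so at the start of training the frozen pretrained model's behaviour is exactly preserved. Below is a minimal NumPy sketch of that gating mechanism; all names, shapes, and the single-head formulation are illustrative assumptions, not this repository's actual API.

```python
import numpy as np

def softmax(x, axis=-1):
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def gated_adapter_attention(q, k, v, pk, pv, gate):
    """Single-head attention plus a learnable prompt prefix.

    The prompt's contribution is scaled by tanh(gate); with gate == 0
    the layer reduces exactly to vanilla attention, so fine-tuning
    starts from the frozen pretrained behaviour (zero-init gating).
    pk/pv are the keys/values of the learnable prompt tokens."""
    d = q.shape[-1]
    base = softmax(q @ k.T / np.sqrt(d)) @ v        # attention over real tokens
    prompt = softmax(q @ pk.T / np.sqrt(d)) @ pv    # attention over prompt tokens
    return base + np.tanh(gate) * prompt

# Illustrative shapes: 4 token positions, hidden size 8, 2 prompt tokens.
rng = np.random.default_rng(0)
q = rng.normal(size=(4, 8))
k = rng.normal(size=(4, 8))
v = rng.normal(size=(4, 8))
pk = rng.normal(size=(2, 8))   # learnable prompt keys (trainable)
pv = rng.normal(size=(2, 8))   # learnable prompt values (trainable)

out_zero = gated_adapter_attention(q, k, v, pk, pv, gate=0.0)
out_base = softmax(q @ k.T / np.sqrt(8)) @ v
# With the gate at zero, the adapted layer matches plain attention exactly.
assert np.allclose(out_zero, out_base)
```

Only the prompt tokens and the gate are trained while the backbone stays frozen, which is what keeps the trainable-parameter count in the low millions.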
Related projects:
| Repository | Description | Stars |
| --- | --- | --- |
| | An open-source toolkit for pretraining and fine-tuning large language models | 2,732 |
| | A tool for efficiently fine-tuning large language models across multiple architectures and methods | 36,219 |
| | A system that uses large language and vision models to generate and process visual instructions | 20,683 |
| | An implementation of a large language model using the nanoGPT architecture | 6,013 |
| | Enables LLM inference with minimal setup and high performance on various hardware platforms | 69,185 |
| | Provides tools and examples for fine-tuning the Meta Llama model and building applications with it | 15,578 |
| | An audio-visual language model designed to understand and respond to video content with improved instruction-following capabilities | 2,842 |
| | An incremental pre-trained Chinese large language model based on the LLaMA-7B model | 234 |
| | Provides inference code and tools for fine-tuning large language models, specifically designed for code generation tasks | 16,097 |
| | An efficient C#/.NET library for running Large Language Models (LLMs) on local devices | 2,750 |
| | Develops large multimodal models for various computer vision tasks including image and video analysis | 3,099 |
| | An open-source framework for training large language models with vision capabilities | 3,229 |
| | An inference and serving engine for large language models | 31,982 |
| | A framework for training and serving large language models using JAX/Flax | 2,428 |
| | An open-source Python client for running Large Language Models (LLMs) locally on any device | 71,176 |