ChatGLM-finetune-LoRA
LM fine-tuner
A codebase for fine-tuning the ChatGLM-6b language model using low-rank adaptation (LoRA), with finetuned weights provided.
727 stars
8 watching
64 forks
Language: Jupyter Notebook
last commit: over 1 year ago
Linked from 1 awesome list
Related projects:
| Repository | Description | Stars |
|---|---|---|
|  | Fine-tuning the LLaMA 2 chat model using DeepSpeed and LoRA for improved performance on a large dataset. | 171 |
|  | An experimental project for fine-tuning the NLB language model on a specific dataset and evaluating its translation performance. | 7 |
|  | Provides code and a model for improving language understanding through generative pre-training with a transformer-based architecture. | 2,167 |
|  | Explores various LLMs and their applications in natural language processing and related areas. | 1,854 |
|  | Builds composable LLM applications with Java. | 295 |
|  | Fine-tunes a pre-trained language model to generate responses in the medical domain with improved accuracy. | 973 |
|  | A practical course teaching large language models and their applications through hands-on projects using the OpenAI API and the Hugging Face library. | 1,338 |
|  | Training methods and tools for fine-tuning language models using human preferences. | 1,240 |
|  | A tool for training and fine-tuning large language models using advanced techniques. | 387 |
|  | Develops and maintains a Chinese language model finetuned on LLaMA, used for text generation and summarization tasks. | 711 |
|  | Fine-tunes a language model to generate human-like text based on visual instructions. | 85 |
|  | A shared task for fine-tuning large language models to answer questions and generate responses in Ukrainian. | 13 |
|  | Fine-tuning a Chinese doctor chat model based on ChatGLM-6B. | 788 |
|  | Develops and fine-tunes a math-based conversational AI model for generating solutions to arithmetic operations. | 164 |
|  | Enables users to interact with large language models within Neovim for tasks like code optimization and translation. | 65 |