LaWGPT
Legal Language Model
A Chinese large language model trained on legal knowledge to improve legal understanding and application
🎉 Repo for LaWGPT, Chinese-LLaMA tuned with Chinese legal knowledge (a large language model based on Chinese legal knowledge)
6k stars
48 watching
533 forks
Language: Python
last commit: 5 months ago

Related projects:
| Repository | Description | Stars |
|---|---|---|
| cvi-szu/linly | A collection of pre-trained language models for Chinese text processing and dialogue generation. | 3,029 |
| liguodongiot/llm-action | A comprehensive resource on large language model (LLM) engineering and applications, covering training, inference, and deployment. | 10,677 |
| tloen/alpaca-lora | Tunes a large language model on consumer hardware using low-rank adaptation (LoRA). | 18,651 |
| lc1332/chinese-alpaca-lora | A Chinese language model fine-tuned from LLaMA, used for text generation and summarization. | 711 |
| phoebussi/alpaca-cot | A unified interface for fine-tuning large language models with parameter-efficient methods and a collection of instruction-tuning data. | 2,619 |
| lightning-ai/lit-llama | An implementation of the LLaMA language model based on the nanoGPT architecture. | 5,993 |
| linksoul-ai/chinese-llama-2-7b | An open-source implementation of the Llama 2 model trained on Chinese and English text. | 2,228 |
| jittor/jittorllms | A high-performance deep learning framework for deploying large models efficiently on low-end hardware. | 2,374 |
| meta-llama/codellama | Inference code and fine-tuning tools for large language models specialized for code generation. | 16,039 |
| meta-llama/llama | Tools and utilities for deploying, fine-tuning, and using large language models. | 56,437 |
| git-cloner/llama2-lora-fine-tuning | Fine-tunes the Llama 2 chat model with DeepSpeed and LoRA. | 167 |
| andrewzhe/lawyer-llama | An AI model trained on legal data to provide answers and explanations on Chinese law. | 851 |
| meta-llama/llama3 | Pre-trained and instruction-tuned Llama 3 models with tools for loading and running inference. | 27,138 |
| hiyouga/llama-factory | A unified platform for fine-tuning many large language models with a variety of training methods. | 34,436 |
| openbmb/minicpm | A language model designed to surpass GPT-3.5-Turbo on tasks such as text generation, tool calling, and long-text processing. | 7,131 |
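Several of the projects above (tloen/alpaca-lora, lc1332/chinese-alpaca-lora, git-cloner/llama2-lora-fine-tuning, and LaWGPT itself) rely on low-rank adaptation (LoRA): the base weight matrix W is frozen, and only a low-rank update ΔW = B·A is trained. A minimal dependency-free sketch of the idea follows; the toy sizes and variable names are illustrative, not any project's actual code:

```python
import random

def matmul(X, Y):
    """Naive matrix product of two lists-of-lists."""
    rows, inner, cols = len(X), len(Y), len(Y[0])
    return [[sum(X[i][k] * Y[k][j] for k in range(inner)) for j in range(cols)]
            for i in range(rows)]

def add(X, Y):
    """Element-wise sum of two equally shaped matrices."""
    return [[x + y for x, y in zip(rx, ry)] for rx, ry in zip(X, Y)]

d, r = 8, 2  # hypothetical hidden size and LoRA rank (real models: d in the thousands)
random.seed(0)

W = [[random.gauss(0, 1) for _ in range(d)] for _ in range(d)]    # frozen base weight, d x d
A = [[random.gauss(0, 0.1) for _ in range(d)] for _ in range(r)]  # trainable, r x d
B = [[0.0] * r for _ in range(d)]                                 # trainable, d x r, zero-init

# Effective weight is W + B @ A. Because B starts at zero, the adapter is a
# no-op before training: the adapted model initially matches the base model.
W_eff = add(W, matmul(B, A))
assert W_eff == W

# Parameter savings: train 2*d*r values instead of d*d.
full_params = d * d
lora_params = 2 * d * r
print(full_params, lora_params)  # 64 vs 32 at toy scale; the gap grows with d
```

At realistic scale (e.g. d = 4096, r = 8) the adapter is well under 1% of the base weights, which is what makes fine-tuning on consumer hardware feasible.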