Cornucopia-LLaMA-Fin-Chinese
LLaMA fine-tuner
A Chinese finance-focused large language model fine-tuning framework
Cornucopia (聚宝盆): a series of open-source, commercially usable Chinese financial large language models, together with an efficient, lightweight training framework for vertical-domain LLMs (pretraining, SFT, RLHF, quantization, etc.)
596 stars
5 watching
63 forks
Language: Python
Last commit: over 1 year ago
Topics: chinese, finance, large-language-models, llama, nlp, qa, rlhf, sft, text-generation, transformers
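The repository advertises an SFT stage alongside pretraining, RLHF, and quantization. As a rough illustration of what a LoRA-based SFT step typically looks like with Hugging Face `transformers` and `peft`, here is a minimal sketch; the checkpoint path, dataset file `finance_sft.jsonl`, and all hyperparameters below are illustrative assumptions, not the repo's actual configuration.

```python
# Minimal LoRA SFT sketch (assumptions: a local LLaMA checkpoint in HF format
# and a JSONL instruction dataset with a pre-formatted "text" field).
import torch
from datasets import load_dataset
from peft import LoraConfig, get_peft_model
from transformers import (
    AutoModelForCausalLM,
    AutoTokenizer,
    DataCollatorForLanguageModeling,
    Trainer,
    TrainingArguments,
)

BASE_MODEL = "path/to/llama-7b-hf"  # assumption: any LLaMA checkpoint converted to HF format

tokenizer = AutoTokenizer.from_pretrained(BASE_MODEL)
tokenizer.pad_token = tokenizer.eos_token  # LLaMA has no pad token by default
model = AutoModelForCausalLM.from_pretrained(BASE_MODEL, torch_dtype=torch.float16)

# Wrap the base model with low-rank adapters so only a tiny fraction of
# weights is trained; target modules are LLaMA's attention projections.
lora_cfg = LoraConfig(
    r=8,
    lora_alpha=16,
    lora_dropout=0.05,
    target_modules=["q_proj", "v_proj"],
    task_type="CAUSAL_LM",
)
model = get_peft_model(model, lora_cfg)

# Assumption: each JSONL record has a "text" field containing
# instruction + input + expected output, already concatenated.
dataset = load_dataset("json", data_files="finance_sft.jsonl")["train"]

def tokenize(example):
    return tokenizer(example["text"], truncation=True, max_length=512)

dataset = dataset.map(tokenize, remove_columns=dataset.column_names)

trainer = Trainer(
    model=model,
    args=TrainingArguments(
        output_dir="lora-fin-sft",
        per_device_train_batch_size=4,
        gradient_accumulation_steps=8,
        num_train_epochs=3,
        learning_rate=2e-4,
        fp16=True,
        logging_steps=10,
    ),
    train_dataset=dataset,
    # mlm=False gives standard causal-LM label shifting.
    data_collator=DataCollatorForLanguageModeling(tokenizer, mlm=False),
)
trainer.train()
model.save_pretrained("lora-fin-sft")  # saves only the small adapter weights
```

Because only the adapter weights are saved, the resulting artifact is a few megabytes and must be loaded on top of the original base model at inference time.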
Related projects:
| Repository | Description | Stars |
|---|---|---|
| | Develops and maintains a Chinese language model fine-tuned on LLaMA, used for text generation and summarization tasks. | 711 |
| | Fine-tunes the LLaMA 2 chat model using DeepSpeed and LoRA for improved performance on a large dataset. | 171 |
| | A deep learning project providing an open-source implementation of the LLaMA 2 model with Chinese and English text data. | 2,235 |
| | Develops a multimodal Chinese language model with visual capabilities. | 429 |
| | Improves pre-trained Chinese language models by incorporating a correction task to alleviate inconsistency with downstream tasks. | 646 |
| | Improves the safety and helpfulness of large language models by fine-tuning them on safety-critical tasks. | 47 |
| | A custom Chinese version of the Meta Llama 2 model for improved Chinese language support and application. | 748 |
| | An incrementally pre-trained Chinese large language model based on the LLaMA-7B model. | 234 |
| | A Chinese large language model built from OpenLLaMA and fine-tuned on various datasets for multilingual text generation. | 65 |
| | Trains a large Chinese language model on massive data and provides a pre-trained model for downstream tasks. | 230 |
| | A tool to streamline fine-tuning of multimodal models for vision-language tasks. | 1,415 |
| | A large language model with 70 billion parameters designed for chatbot and conversational AI tasks. | 29 |
| | Reproduces results from a paper on efficient multilingual language model fine-tuning, using a rewritten framework on top of the fastai library. | 284 |
| | Debiasing techniques to minimize hallucinations in large vision-language models. | 75 |
| | A codebase for fine-tuning the ChatGLM-6B language model with low-rank adaptation (LoRA), providing fine-tuned weights. | 727 |