Aquila2
Language model toolkit
Provides pre-trained language models and tools for fine-tuning and evaluation
The official repository of the Aquila2 series by BAAI, including pretrained and chat large language models.
439 stars
5 watching
30 forks
Language: Python
last commit: about 1 year ago
Topics: llm, llm-inference, llm-training
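For context, here is a minimal sketch of loading an Aquila2 chat model with the Hugging Face Transformers library. The checkpoint name `BAAI/AquilaChat2-7B` and the generation settings are assumptions based on BAAI's published Hugging Face releases, not instructions from this repository; `trust_remote_code=True` is commonly required for models with custom modeling code.

```python
# Minimal sketch: loading an Aquila2 chat checkpoint with Hugging Face Transformers.
# The model ID "BAAI/AquilaChat2-7B" is an assumption based on BAAI's published
# Hugging Face checkpoints; adjust it to the checkpoint you actually use.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_name = "BAAI/AquilaChat2-7B"  # assumed checkpoint name
device = "cuda" if torch.cuda.is_available() else "cpu"

tokenizer = AutoTokenizer.from_pretrained(model_name, trust_remote_code=True)
model = AutoModelForCausalLM.from_pretrained(
    model_name,
    # Half precision on GPU keeps memory use manageable; full precision on CPU.
    torch_dtype=torch.float16 if device == "cuda" else torch.float32,
    trust_remote_code=True,
).to(device).eval()

prompt = "Briefly explain what a language model is."
inputs = tokenizer(prompt, return_tensors="pt").to(device)
with torch.no_grad():
    output_ids = model.generate(**inputs, max_new_tokens=64)
print(tokenizer.decode(output_ids[0], skip_special_tokens=True))
```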
Related projects:
| Description | Stars |
|---|---|
| Code and models for improving language understanding through generative pre-training with a transformer-based architecture | 2,167 |
| A guide to using pre-trained large language models for source code analysis and generation | 1,789 |
| Training methods and tools for fine-tuning language models using human preferences | 1,240 |
| An open-source toolkit for building and evaluating large language models | 267 |
| A collection of pre-trained language models for natural language processing tasks | 989 |
| An evaluation toolkit and platform for assessing large models across various domains | 307 |
| 8B and 13B language models based on the Llama architecture with multilingual capabilities | 2,031 |
| A comprehensive toolset for building applications based on large language models (LLMs) | 1,733 |
| A repository of pre-trained language models for various tasks and domains | 121 |
| Training methods to improve the politeness and natural flow of multi-modal large language models | 63 |
| An incrementally pre-trained Chinese large language model based on LLaMA-7B | 234 |
| A lightweight, multilingual language model with a long context length | 920 |
| A platform for training and deploying large language and vision models that can use tools to perform tasks | 717 |
| A 70-billion-parameter language model designed for chatbot and conversational AI tasks | 29 |
| An open-source implementation of a vision-language instructed large language model | 513 |