Pretrained-Language-Model
Language models
A collection of pre-trained language models and optimization techniques for efficient natural language processing
Pretrained language models and related optimization techniques developed by Huawei Noah's Ark Lab.
3k stars
56 watching
626 forks
Language: Python
last commit: 10 months ago
Topics: knowledge-distillation, large-scale-distributed, model-compression, pretrained-models, quantization
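The repository's topics center on compressing pretrained models. As a minimal sketch of the idea behind the knowledge-distillation topic, the snippet below implements the classic temperature-scaled distillation loss in PyTorch; the function name and the `temperature` and `alpha` parameters are illustrative assumptions, not this repository's actual API.

```python
import torch.nn.functional as F

def distillation_loss(student_logits, teacher_logits, labels,
                      temperature=2.0, alpha=0.5):
    """Hypothetical helper: blend soft-target KL loss with hard-label cross-entropy."""
    # Soft targets: KL divergence between the temperature-scaled teacher and
    # student distributions, scaled by T^2 to keep gradients comparable.
    soft = F.kl_div(
        F.log_softmax(student_logits / temperature, dim=-1),
        F.softmax(teacher_logits / temperature, dim=-1),
        reduction="batchmean",
    ) * (temperature ** 2)
    # Hard targets: ordinary cross-entropy against the gold labels.
    hard = F.cross_entropy(student_logits, labels)
    return alpha * soft + (1.0 - alpha) * hard
```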
Related projects:
Repository | Description | Stars |
---|---|---|
thunlp/PLMpapers | Compiles and organizes key papers on pre-trained language models, providing a resource for developers and researchers. | 3,328 |
huawei-noah/HEBO | An open-source library for Bayesian optimization and reinforcement learning, used to tune complex systems and improve machine-learning models. | 3,286 |
huawei-noah/Efficient-AI-Backbones | A collection of efficient AI backbone architectures developed by Huawei Noah's Ark Lab. | 4,054 |
huawei-noah/Pretrained-IPT | A pre-trained transformer for image processing tasks such as denoising, super-resolution, and deraining. | 448 |
brightmart/text_classification | An NLP project offering a variety of text classification models and techniques for deep learning exploration. | 7,861 |
huawei-noah/Efficient-Computing | A collection of research methods and techniques developed by Huawei to improve the efficiency of neural networks in computer vision and other applications. | 1,202 |
brightmart/xlnet_zh | Trains a large Chinese language model on large-scale data and provides the pre-trained model for downstream tasks. | 230 |
CLUEbenchmark/CLUEPretrainedModels | Provides pre-trained models for Chinese language tasks with improved performance and smaller model sizes than existing models. | 804 |
Ethan-yt/guwenbert | A RoBERTa-based pre-trained language model for classical Chinese, trained on ancient literature. | 506 |
ymcui/MacBERT | Improves pre-trained Chinese language models by incorporating a correction task that alleviates the inconsistency between pre-training and downstream fine-tuning. | 645 |
dbiir/UER-py | A toolkit for building pre-training models and fine-tuning them on downstream natural language processing tasks. | 3,001 |
QwenLM/Qwen | Provides the Qwen family of large language models and their chat variants. | 14,164 |
ymcui/Chinese-XLNet | Provides pre-trained models for Chinese natural language processing tasks using the XLNet architecture. | 1,653 |
google-research/bert | Provides pre-trained models and code for natural language processing tasks using TensorFlow (see the usage sketch after this table). | 38,204 |
THUDM/GLM | A general-purpose language model pre-trained with an autoregressive blank-filling objective, designed for a range of natural language understanding and generation tasks. | 3,199 |
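Many of the repositories above distribute checkpoints that can be loaded in a few lines. As a hedged illustration, the sketch below loads the google-research/bert weights through the Hugging Face `transformers` package; the package and the `bert-base-uncased` checkpoint name are assumptions, and not every project in the table publishes weights in this format.

```python
from transformers import AutoModel, AutoTokenizer

# Assumed checkpoint: the original google-research/bert weights, republished
# on the Hugging Face Hub as "bert-base-uncased".
tokenizer = AutoTokenizer.from_pretrained("bert-base-uncased")
model = AutoModel.from_pretrained("bert-base-uncased")

inputs = tokenizer("Pretrained language models are reusable.", return_tensors="pt")
outputs = model(**inputs)
print(outputs.last_hidden_state.shape)  # (batch, sequence_length, hidden_size)
```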