Pretrained-Language-Model


A collection of pre-trained language models and related optimization techniques for efficient natural language processing, developed by Huawei Noah's Ark Lab.

GitHub stats:

- 3k stars
- 57 watching
- 628 forks
- Language: Python
- Last commit: 12 months ago
- Topics: knowledge-distillation, large-scale-distributed, model-compression, pretrained-models, quantization

Related projects:

| Repository | Description | Stars |
| --- | --- | --- |
| thunlp/plmpapers | Compiles and organizes key papers on pre-trained language models, providing a resource for developers and researchers. | 3,331 |
| huawei-noah/hebo | An open-source library providing tools and frameworks for optimizing complex systems and improving machine learning models through Bayesian optimization and reinforcement learning. | 3,306 |
| huawei-noah/efficient-ai-backbones | A collection of efficient AI backbone architectures developed by Huawei Noah's Ark Lab. | 4,098 |
| huawei-noah/pretrained-ipt | Develops a pre-trained transformer model for image processing tasks such as denoising, super-resolution, and deraining. | 451 |
| brightmart/text_classification | An NLP project offering various text classification models and techniques for deep learning exploration. | 7,881 |
| huawei-noah/efficient-computing | A collection of research methods and techniques developed by Huawei to improve the efficiency of neural networks in computer vision and other applications. | 1,218 |
| brightmart/xlnet_zh | Trains a large Chinese language model on massive data and provides a pre-trained model for downstream tasks. | 230 |
| cluebenchmark/cluepretrainedmodels | Provides pre-trained models for Chinese language tasks with improved performance and smaller model sizes compared to existing models. | 806 |
| ethan-yt/guwenbert | A pre-trained language model for classical Chinese texts based on the RoBERTa architecture. | 511 |
| ymcui/macbert | Improves pre-trained Chinese language models by incorporating a correction task to alleviate inconsistency with downstream tasks. | 646 |
| dbiir/uer-py | A toolkit for building pre-training models and fine-tuning them on downstream natural language processing tasks. | 3,018 |
| qwenlm/qwen | Provides large language models and chat capabilities based on pre-trained Chinese models. | 14,797 |
| ymcui/chinese-xlnet | Provides pre-trained models for Chinese natural language processing tasks using the XLNet architecture. | 1,652 |
| google-research/bert | Provides pre-trained models and code for natural language processing tasks using TensorFlow. | 38,374 |
| thudm/glm | A general-purpose language model pre-trained with an autoregressive blank-filling objective, designed for various natural language understanding and generation tasks. | 3,207 |