pretrained-models

NLP toolkit

Open Language Pre-trained Model Zoo: a collection of pre-trained language models for natural language processing tasks.


987 stars · 15 watching · 136 forks · last commit about 3 years ago

Related projects:

| Repository | Description | Stars |
| --- | --- | ---: |
| balavenkatesh3322/nlp-pretrained-model | A collection of pre-trained natural language processing models | 170 |
| langboat/mengzi | Develops lightweight yet powerful pre-trained models for natural language processing tasks | 534 |
| thunlp/openclap | A repository of pre-trained Chinese language models for natural language processing tasks | 979 |
| yunwentechnology/unilm | A pre-trained Chinese language model for natural language understanding and generation tasks | 438 |
| brightmart/xlnet_zh | Trains a large Chinese language model on massive corpora and provides the pre-trained model for downstream tasks | 230 |
| 01-ai/yi | A series of large language models trained from scratch to excel at multiple NLP tasks | 7,722 |
| ymcui/chinese-xlnet | Provides pre-trained models for Chinese natural language processing tasks based on the XLNet architecture | 1,652 |
| ymcui/pert | Develops a pre-trained language model that learns semantic knowledge from permuted text without mask labels | 354 |
| zhuiyitechnology/gau-alpha | An implementation of a Transformer model with a Gated Attention Unit, designed for natural language processing tasks | 97 |
| zhuiyitechnology/wobert | Provides pre-trained Chinese language models for natural language processing tasks | 458 |
| flagai-open/aquila2 | Provides pre-trained language models and tools for fine-tuning and evaluation | 438 |
| cluebenchmark/cluepretrainedmodels | Provides pre-trained models for Chinese language tasks with improved performance and smaller model sizes than existing models | 805 |
| baai-wudao/model | A repository of pre-trained language models for various tasks and domains | 121 |
| turkunlp/wikibert | Provides pre-trained language models derived from Wikipedia texts for natural language processing tasks | 34 |
| yandex/faster-rnnlm | A toolkit for training efficient neural network language models on large datasets with hierarchical softmax and noise contrastive estimation | 560 |
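
Most of the checkpoints listed above can be consumed through the Hugging Face `transformers` library. Below is a minimal sketch of that workflow; the hub ID `hfl/chinese-xlnet-base` is an assumption (a checkpoint published alongside ymcui/chinese-xlnet), so substitute whichever ID the repository you pick actually documents.

```python
# Minimal sketch: loading a pre-trained checkpoint from the table above
# with Hugging Face transformers and extracting contextual embeddings.
from transformers import AutoModel, AutoTokenizer

# Assumed hub ID for ymcui/chinese-xlnet; replace with the ID documented
# by the repository you choose.
model_name = "hfl/chinese-xlnet-base"

tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModel.from_pretrained(model_name)

# Encode a sentence and run it through the encoder; the hidden states can
# feed a downstream task head (classification, tagging, retrieval, etc.).
inputs = tokenizer("自然语言处理", return_tensors="pt")
outputs = model(**inputs)
print(outputs.last_hidden_state.shape)  # (batch, seq_len, hidden_size)
```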