pretrained-models

NLP toolkit

A collection of pre-trained language models for natural language processing tasks

Open Language Pre-trained Model Zoo

GitHub: 989 stars · 15 watching · 138 forks · last commit about 3 years ago
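
Checkpoints gathered in model zoos like this one are usually consumed through a standard loading API. Below is a minimal sketch using Hugging Face transformers to load a generic Chinese BERT-style checkpoint and fill in a masked token; the model identifier is an illustrative assumption, not a checkpoint shipped by this repository.

```python
# Minimal sketch: load a pre-trained masked-language model and predict a
# masked token. "bert-base-chinese" is an illustrative public checkpoint,
# not one distributed by this repository.
from transformers import AutoTokenizer, AutoModelForMaskedLM

model_name = "bert-base-chinese"  # assumption for illustration

tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForMaskedLM.from_pretrained(model_name)

# Ask the model to fill the [MASK] slot in a short Chinese sentence.
inputs = tokenizer("今天天气很[MASK]。", return_tensors="pt")
outputs = model(**inputs)

# Find the [MASK] position and take the highest-scoring vocabulary entry.
mask_index = (inputs["input_ids"] == tokenizer.mask_token_id).nonzero(as_tuple=True)[1]
predicted_id = outputs.logits[0, mask_index].argmax(dim=-1)
print(tokenizer.decode(predicted_id))
```

Most of the repositories listed below publish checkpoints that can be loaded through this same interface once mirrored on the Hugging Face Hub, though some originally target other frameworks.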

Related projects:

| Repository | Description | Stars |
|---|---|---|
| balavenkatesh3322/nlp-pretrained-model | A collection of pre-trained natural language processing models | 170 |
| langboat/mengzi | Develops lightweight yet powerful pre-trained models for natural language processing tasks | 533 |
| thunlp/openclap | A repository of pre-trained Chinese language models for natural language processing tasks | 977 |
| yunwentechnology/unilm | Pre-trained models and tools for Chinese natural language understanding (NLU) and generation (NLG) tasks | 439 |
| brightmart/xlnet_zh | Trains a large Chinese language model on massive data and provides a pre-trained model for downstream tasks | 230 |
| 01-ai/yi | A series of large language models trained from scratch to excel in multiple NLP tasks | 7,743 |
| ymcui/chinese-xlnet | Pre-trained models for Chinese natural language processing tasks using the XLNet architecture | 1,652 |
| ymcui/pert | A pre-trained language model that learns semantic knowledge from permuted text without mask labels | 356 |
| zhuiyitechnology/gau-alpha | An implementation of a transformer-based NLP model using gated attention units | 98 |
| zhuiyitechnology/wobert | A word-based Chinese BERT model trained on large-scale text data, built on existing pre-trained models | 460 |
| flagai-open/aquila2 | Pre-trained language models and tools for fine-tuning and evaluation | 439 |
| cluebenchmark/cluepretrainedmodels | Pre-trained models for Chinese language tasks with improved performance and smaller sizes than existing models | 806 |
| baai-wudao/model | A repository of pre-trained language models for various tasks and domains | 121 |
| turkunlp/wikibert | Pre-trained language models derived from Wikipedia texts for natural language processing tasks | 34 |
| yandex/faster-rnnlm | A toolkit for training efficient neural network language models on large datasets with hierarchical softmax and noise contrastive estimation | 560 |