xlnet_zh
Chinese language model
Trains a large Chinese language model on massive data and provides a pre-trained model for downstream tasks
Chinese pre-trained XLNet model: Pre-Trained Chinese XLNet_Large
230 stars
6 watching
35 forks
Language: Python
last commit: about 5 years ago
Topics: bert, language-model, pre-train, roberta, xlnet
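The model this repo pre-trains follows XLNet, whose core objective is permutation language modeling: each token is predicted conditioned only on the tokens that precede it in a randomly sampled factorization order, rather than in left-to-right sequence order. A minimal NumPy sketch of the attention mask such an order induces (the function name and shapes are illustrative, not taken from this repo's code):

```python
import numpy as np

def permutation_lm_mask(seq_len, rng):
    """Build an XLNet-style permutation-LM attention mask.

    A random factorization order is sampled; token i may attend only to
    tokens that come before it in that order, not in the original sequence.
    """
    order = rng.permutation(seq_len)          # random factorization order
    position = np.empty(seq_len, dtype=int)   # rank of each token in the order
    position[order] = np.arange(seq_len)
    # mask[i, j] is True when token i may attend to token j
    mask = position[:, None] > position[None, :]
    return order, mask

rng = np.random.default_rng(0)
order, mask = permutation_lm_mask(5, rng)
# The first token in the factorization order attends to nothing;
# the last attends to all four other tokens.
assert mask[order[0]].sum() == 0
assert mask[order[-1]].sum() == 4
```

Because every permutation is equally likely, each token is eventually trained to be predicted from every possible subset of the other tokens, which is what lets XLNet capture bidirectional context without masked-token corruption.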
Related projects:
| Repository | Description | Stars |
| --- | --- | --- |
ymcui/chinese-xlnet | Provides pre-trained models for Chinese natural language processing tasks using the XLNet architecture | 1,653 |
cluebenchmark/cluepretrainedmodels | Provides pre-trained models for Chinese language tasks with improved performance and smaller model sizes than existing models | 804 |
shawn-ieitsystems/yuan-1.0 | Large-scale language model with improved performance on NLP tasks through distributed training and efficient data processing | 591 |
ymcui/macbert | Improves pre-trained Chinese language models by incorporating a correction task to alleviate inconsistency issues with downstream tasks | 645 |
tencent/tencent-hunyuan-large | Releases a large language model for research and development | 1,114 |
cluebenchmark/electra | Trains and evaluates a Chinese ELECTRA language model with replaced-token detection on a large corpus | 140 |
zhuiyitechnology/wobert | A pre-trained Chinese language model that uses word-level rather than character-level tokenization to process Chinese text | 458 |
yunwentechnology/unilm | Provides pre-trained models for natural language understanding and generation tasks using the UniLM architecture | 438 |
nkcs-iclab/linglong | A pre-trained Chinese language model with a modest parameter count, designed to be accessible and useful for researchers with limited computing resources | 17 |
zhuiyitechnology/pretrained-models | A collection of pre-trained language models for natural language processing tasks | 987 |
hit-scir/chinese-mixtral-8x7b | An implementation of a large language model for Chinese text processing, built on a Mixture-of-Experts (MoE) architecture with an expanded Chinese vocabulary | 641 |
01-ai/yi | A series of large language models trained from scratch to excel in multiple NLP tasks | 7,699 |
ymcui/pert | Develops a pre-trained language model to learn semantic knowledge from permuted text without mask labels | 354 |
ymcui/lert | A pre-trained language model designed to leverage linguistic features and outperform comparable baselines on Chinese natural language understanding tasks | 202 |
langboat/mengzi | Develops lightweight yet powerful pre-trained models for natural language processing tasks | 534 |