xlnet_zh
Chinese language model
Trains a large Chinese language model on a massive corpus and releases the pre-trained model for downstream tasks
Chinese pre-trained XLNet model: Pre-Trained Chinese XLNet_Large
230 stars
6 watching
35 forks
Language: Python
last commit: over 5 years ago
Topics: bert, language-model, pre-train, roberta, xlnet
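The released weights are intended for downstream use. Below is a minimal feature-extraction sketch with Hugging Face transformers, assuming a checkpoint already converted to that format; the identifier `hfl/chinese-xlnet-base` belongs to the related Chinese-XLNet project listed below, not to this repository, and stands in for a locally converted xlnet_zh checkpoint.

```python
# Minimal feature-extraction sketch with Hugging Face transformers.
# Assumption: a Chinese XLNet checkpoint in transformers format is available;
# "hfl/chinese-xlnet-base" (from the related Chinese-XLNet project) is used
# here as a stand-in for a converted xlnet_zh checkpoint.
import torch
from transformers import AutoModel, AutoTokenizer

model_id = "hfl/chinese-xlnet-base"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModel.from_pretrained(model_id)
model.eval()

# Encode a short Chinese sentence ("The weather is very nice today.")
inputs = tokenizer("今天天气很好。", return_tensors="pt")
with torch.no_grad():
    outputs = model(**inputs)

# Contextual token embeddings, usable as features for downstream tasks.
print(outputs.last_hidden_state.shape)  # (1, sequence_length, hidden_size)
```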
Related projects:
| Repository | Description | Stars |
| --- | --- | --- |
| | Provides pre-trained models for Chinese natural language processing tasks using the XLNet architecture | 1,652 |
| | Provides pre-trained models for Chinese language tasks with improved performance and smaller model sizes than existing models | 806 |
| | Large-scale language model with improved performance on NLP tasks through distributed training and efficient data processing | 591 |
| | Improves pre-trained Chinese language models by incorporating a correction task that alleviates inconsistencies with downstream tasks | 646 |
| | Makes a large language model accessible for research and development | 1,245 |
| | Trains and evaluates a Chinese language model with adversarial training on a large corpus | 140 |
| | A word-based Chinese BERT model trained on large-scale text data, built on top of existing pre-trained models | 460 |
| | Provides pre-trained models and tools for Chinese natural language understanding (NLU) and generation (NLG) tasks | 439 |
| | A pre-trained Chinese language model with a modest parameter count, designed for researchers with limited computing resources | 18 |
| | A collection of pre-trained language models for natural language processing tasks | 989 |
| | An implementation of a large Chinese language model built on a Mixture-of-Experts (MoE) architecture with a very large vocabulary | 645 |
| | A series of large language models trained from scratch to excel in multiple NLP tasks | 7,743 |
| | Develops a pre-trained language model that learns semantic knowledge from permuted text without mask labels (see the sketch after this table) | 356 |
| | A pre-trained language model that leverages linguistic features and outperforms comparable baselines on Chinese natural language understanding tasks | 202 |
| | Develops lightweight yet powerful pre-trained models for natural language processing tasks | 533 |
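Several of these projects, like xlnet_zh itself and the permuted-text model noted above, build on XLNet's permutation language-modeling objective: each position predicts its token while attending only to positions that precede it in a randomly sampled factorization order, so no [MASK] tokens are needed. Below is a minimal sketch of the resulting attention mask in PyTorch; the function name and shapes are illustrative and not taken from any of the repositories.

```python
import torch

def permutation_attention_mask(seq_len: int) -> torch.Tensor:
    """Illustrative XLNet-style permutation mask.

    mask[i, j] is True when position i may attend to position j,
    i.e. when j precedes i in a randomly sampled factorization order.
    """
    order = torch.randperm(seq_len)         # random factorization order
    rank = torch.empty(seq_len, dtype=torch.long)
    rank[order] = torch.arange(seq_len)     # rank of each position in that order
    return rank.unsqueeze(1) > rank.unsqueeze(0)

# Each row i marks the positions token i may see; because the order is
# permuted, every token is eventually predicted from bidirectional context
# without any explicit [MASK] token.
print(permutation_attention_mask(5).int())
```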