ELECTRA
Chinese Language Model
Trains and evaluates a Chinese language model using adversarial training on a large corpus.
Chinese pre-trained ELECTRA model: pretraining a Chinese model with adversarial learning
140 stars
9 watching
11 forks
last commit: over 4 years ago
Topics: adversarial-networks, albert, bert, electra, gan, language-model, pretrained-models
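ELECTRA-style pretraining produces a discriminator that labels each token as original or replaced. As a quick smoke test of such a checkpoint, the sketch below scores tokens with the Hugging Face `transformers` library; the checkpoint name `hfl/chinese-electra-base-discriminator` (published by the ymcui/chinese-electra project listed under related projects) is an assumption, not an artifact of this repository.

```python
# A minimal sketch, not this repository's code: assumes the Hugging Face
# `transformers` library and the community checkpoint
# "hfl/chinese-electra-base-discriminator" from ymcui/chinese-electra.
import torch
from transformers import ElectraForPreTraining, ElectraTokenizerFast

model_id = "hfl/chinese-electra-base-discriminator"  # assumed checkpoint name
tokenizer = ElectraTokenizerFast.from_pretrained(model_id)
model = ElectraForPreTraining.from_pretrained(model_id)

# The discriminator emits one logit per token; positive means "replaced".
inputs = tokenizer("今天天气很好", return_tensors="pt")
with torch.no_grad():
    logits = model(**inputs).logits
print((logits > 0).long())  # 1 = flagged as replaced, 0 = judged original
```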
Related projects:
Repository | Description | Stars |
---|---|---|
cluebenchmark/cluepretrainedmodels | Provides pre-trained models for Chinese language tasks with improved performance and smaller model sizes compared to existing models. | 804 |
cluebenchmark/cluecorpus2020 | A large-scale pre-training corpus for Chinese language models | 925 |
ymcui/chinese-electra | Provides pre-trained Chinese language models based on the ELECTRA framework for natural language processing tasks | 1,403 |
clue-ai/chatyuan | A large language model for dialogue, supporting both Chinese and English | 1,902 |
clue-ai/chatyuan-7b | An updated version of a large language model designed to improve performance on multiple tasks and datasets | 13 |
brightmart/xlnet_zh | Trains a large Chinese language model on massive data and provides a pre-trained model for downstream tasks | 230 |
clue-ai/promptclue | A pre-trained language model for multiple natural language processing tasks with support for few-shot learning and transfer learning. | 654 |
cluebenchmark/supercluelyb | A benchmarking platform for evaluating Chinese general-purpose models through anonymous, random battles | 141 |
shannonai/chinesebert | A deep learning model that incorporates visual and phonetic features of Chinese characters to improve its ability to understand Chinese language nuances | 542 |
felixgithub2017/mmcu | Evaluates the semantic understanding capabilities of large Chinese language models on a massive multitask dataset. | 87 |
yunwentechnology/unilm | Provides pre-trained models for natural language understanding and generation tasks using the UniLM architecture. | 438 |
ymcui/macbert | Improves pre-trained Chinese language models by incorporating a correction task to alleviate inconsistency issues with downstream tasks | 645 |
ethan-yt/guwenbert | A pre-trained language model for classical Chinese based on RoBERTa and ancient literature. | 506 |
baai-wudao/model | A repository of pre-trained language models for various tasks and domains. | 121 |
hit-scir/chinese-mixtral-8x7b | A large language model for Chinese text processing built on the Mixtral-8x7B Mixture-of-Experts (MoE) architecture with an extended Chinese vocabulary. | 641 |
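For reference, the objective behind this kind of ELECTRA pretraining combines a generator's masked-language-modeling loss with a heavily weighted discriminator loss, L = L_MLM + λ·L_disc. The sketch below is a toy PyTorch rendition under assumed interfaces (a `generator` returning per-token vocabulary logits and a `discriminator` returning per-token logits); it illustrates the technique and is not this repository's training loop. Masking of special tokens and embedding sharing are omitted for brevity.

```python
# A minimal sketch of ELECTRA's replaced-token-detection objective, assuming
# PyTorch and two toy model stubs passed in by the caller; real pretraining
# pairs a small MLM generator with a full-size discriminator.
import torch
import torch.nn.functional as F

def electra_step(generator, discriminator, input_ids, mask_prob=0.15,
                 mask_token_id=103, disc_weight=50.0):
    # 1. Randomly mask a subset of tokens for the generator (MLM corruption).
    is_masked = torch.rand_like(input_ids, dtype=torch.float) < mask_prob
    masked_ids = input_ids.masked_fill(is_masked, mask_token_id)

    # 2. Generator predicts the masked tokens (standard MLM loss).
    gen_logits = generator(masked_ids)                      # (B, T, vocab)
    mlm_loss = F.cross_entropy(gen_logits[is_masked], input_ids[is_masked])

    # 3. Sample replacements from the generator to corrupt the input.
    with torch.no_grad():
        sampled = torch.distributions.Categorical(logits=gen_logits).sample()
    corrupted = torch.where(is_masked, sampled, input_ids)

    # 4. Discriminator labels each token as original (0) or replaced (1).
    labels = (corrupted != input_ids).float()
    disc_logits = discriminator(corrupted)                  # (B, T)
    disc_loss = F.binary_cross_entropy_with_logits(disc_logits, labels)

    # 5. Joint objective; the ELECTRA paper weights the discriminator term
    #    with lambda = 50.
    return mlm_loss + disc_weight * disc_loss
```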