LERT
Linguistic Model
A pre-trained language model designed to leverage linguistic features and outperform comparable baselines on Chinese natural language understanding tasks.
LERT: A Linguistically-motivated Pre-trained Language Model (LERT, a pre-trained model enhanced with linguistic information)
202 stars
3 watching
15 forks
Language: Python
last commit: almost 2 years ago
Topics: bert, lert, nlp, plm, pre-train, pytorch, tensorflow, transformer
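
Since LERT follows the standard BERT architecture, its released checkpoints can be loaded through Hugging Face Transformers. The sketch below is illustrative only and is not taken from the repository; the checkpoint name `hfl/chinese-lert-base` is an assumption based on the project's usual release naming.

```python
# Minimal sketch: loading a LERT checkpoint with Hugging Face Transformers.
# The model name "hfl/chinese-lert-base" is an assumed release identifier.
import torch
from transformers import AutoTokenizer, AutoModel

tokenizer = AutoTokenizer.from_pretrained("hfl/chinese-lert-base")
model = AutoModel.from_pretrained("hfl/chinese-lert-base")

# Encode a Chinese sentence and run a forward pass without gradients.
inputs = tokenizer("哈尔滨是黑龙江的省会", return_tensors="pt")
with torch.no_grad():
    outputs = model(**inputs)

# outputs.last_hidden_state holds one contextual vector per token,
# which can feed a downstream Chinese NLU head (classification, NER, etc.).
print(outputs.last_hidden_state.shape)
```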
Related projects:
Repository | Description | Stars |
---|---|---|
ymcui/pert | Develops a pre-trained language model to learn semantic knowledge from permuted text without mask labels | 356 |
ymcui/macbert | Improves pre-trained Chinese language models by incorporating a correction task to alleviate inconsistency issues with downstream tasks | 646 |
ymcui/chinese-xlnet | Provides pre-trained models for Chinese natural language processing tasks using the XLNet architecture | 1,652 |
ieit-yuan/yuan2.0-m32 | A high-performance language model designed to excel in tasks like natural language understanding, mathematical computation, and code generation | 182 |
ymcui/chinese-mobilebert | An implementation of the MobileBERT pre-trained language model for Chinese NLP tasks | 81 |
ymcui/chinese-electra | Provides pre-trained Chinese language models based on the ELECTRA framework for natural language processing tasks | 1,405 |
brightmart/xlnet_zh | Trains a large Chinese language model on massive data and provides a pre-trained model for downstream tasks | 230 |
zhuiyitechnology/pretrained-models | A collection of pre-trained language models for natural language processing tasks | 989 |
ymcui/chinese-mixtral | Develops and releases Mixtral-based models for natural language processing tasks with a focus on Chinese text generation and understanding | 589 |
yunwentechnology/unilm | Provides pre-trained models and tools for natural language understanding (NLU) and generation (NLG) tasks in Chinese | 439 |
bilibili/index-1.9b | A lightweight, multilingual language model with a long context length | 920 |
nkcs-iclab/linglong | A pre-trained Chinese language model with a modest parameter count, designed to be accessible and useful for researchers with limited computing resources | 18 |
vhellendoorn/code-lms | A guide to using pre-trained large language models in source code analysis and generation | 1,789 |
yuangongnd/ltu | An audio and speech large language model implementation with pre-trained models, datasets, and inference options | 396 |
zhuiyitechnology/wobert | A Word-based Chinese BERT model trained on large-scale text data using pre-trained models as a foundation | 460 |