LERT

Linguistic Model

A pre-trained language model designed to leverage linguistic features and outperform comparable baselines on Chinese natural language understanding tasks.

LERT: A Linguistically-motivated Pre-trained Language Model (a pre-trained model enhanced with linguistic information)

GitHub

202 stars
3 watching
16 forks
Language: Python
last commit: over 1 year ago
Tags: bert, lert, nlp, plm, pre-train, pytorch, tensorflow, transformer

Related projects:

ymcui/pert (354 stars): Develops a pre-trained language model that learns semantic knowledge from permuted text without mask labels
ymcui/macbert (645 stars): Improves pre-trained Chinese language models by incorporating a correction task to alleviate inconsistencies with downstream tasks
ymcui/chinese-xlnet (1,653 stars): Provides pre-trained models for Chinese natural language processing tasks using the XLNet architecture
ieit-yuan/yuan2.0-m32 (180 stars): A high-performance language model designed to excel at tasks such as natural language understanding, mathematical computation, and code generation
ymcui/chinese-mobilebert (80 stars): A Python implementation of the MobileBERT pre-trained language model for NLP tasks
ymcui/chinese-electra (1,403 stars): Provides pre-trained Chinese language models based on the ELECTRA framework for natural language processing tasks
brightmart/xlnet_zh (230 stars): Trains a large Chinese language model on massive data and provides a pre-trained model for downstream tasks
zhuiyitechnology/pretrained-models (987 stars): A collection of pre-trained language models for natural language processing tasks
ymcui/chinese-mixtral (584 stars): Develops and releases Mixtral-based models for natural language processing, with a focus on Chinese text generation and understanding
yunwentechnology/unilm (438 stars): Provides pre-trained models for natural language understanding and generation tasks using the UniLM architecture
bilibili/index-1.9b (904 stars): A lightweight, multilingual language model with a long context length
nkcs-iclab/linglong (17 stars): A pre-trained Chinese language model with a modest parameter count, designed for researchers with limited computing resources
vhellendoorn/code-lms (1,782 stars): A guide to using pre-trained large language models for source code analysis and generation
yuangongnd/ltu (385 stars): An audio and speech large language model implementation with pre-trained models, datasets, and inference options
zhuiyitechnology/wobert (458 stars): A pre-trained Chinese language model that uses word-level embeddings to process Chinese text