WoBERT
Word-based Chinese Model
A Chinese BERT model that uses whole words, rather than single characters, as its basic tokenization unit (以词为基本单位的中文BERT), trained on large-scale Chinese text starting from an existing pre-trained model.
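The core idea is that tokenizing by words instead of characters yields shorter input sequences and tokens that carry word-level meaning. A minimal, self-contained sketch of the difference is below; the vocabulary and the greedy longest-match segmenter are illustrative stand-ins, not WoBERT's actual tokenizer (which extends a BERT vocabulary with whole words).

```python
# Illustrative sketch: character-level vs. word-level tokenization.
# The vocab here is made up; WoBERT's real vocab is much larger.

def tokenize_char_level(text, vocab):
    """Character-level tokenization: one token per Chinese character."""
    return [ch if ch in vocab else "[UNK]" for ch in text]

def tokenize_word_level(text, vocab):
    """Greedy longest-match segmentation against a word vocabulary."""
    tokens, i = [], 0
    while i < len(text):
        for j in range(len(text), i, -1):  # try the longest span first
            if text[i:j] in vocab:
                tokens.append(text[i:j])
                i = j
                break
        else:
            tokens.append("[UNK]")  # no match at all: emit unknown, advance one char
            i += 1
    return tokens

vocab = {"语言", "模型", "中文", "语", "言", "模", "型", "中", "文"}
text = "中文语言模型"
print(tokenize_char_level(text, vocab))  # ['中', '文', '语', '言', '模', '型']
print(tokenize_word_level(text, vocab))  # ['中文', '语言', '模型']
```

The word-level output is half the length of the character-level one, which is the practical benefit a word-based BERT trades against a larger vocabulary.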
460 stars
8 watching
70 forks
Language: Python
last commit: over 3 years ago

Related projects:
| Repository | Description | Stars |
| --- | --- | --- |
| | Improves pre-trained Chinese language models by incorporating a correction task to alleviate inconsistency issues with downstream tasks | 646 |
| | An upgraded version of SimBERT with integrated retrieval and generation capabilities | 441 |
| | Trains a large Chinese language model on massive data and provides a pre-trained model for downstream tasks | 230 |
| | Pre-trained language model for classical Chinese texts using RoBERTa architecture | 511 |
| | An implementation of MobileBERT, a pre-trained language model, in Python for NLP tasks | 81 |
| | Provides pre-trained language models derived from Wikipedia texts for natural language processing tasks | 34 |
| | Provides pre-trained models and tools for natural language understanding (NLU) and generation (NLG) tasks in Chinese | 439 |
| | A collection of pre-trained language models for natural language processing tasks | 989 |
| | A pre-trained BERT model designed to facilitate NLP research and development with limited Thai language resources | 6 |
| | Pre-trained Chinese text generation model trained on large-scale data | 558 |
| | A BERT-based pre-trained model for Chinese classical poetry | 146 |
| | Develops a pre-trained language model to learn semantic knowledge from permuted text without mask labels | 356 |
| | A high-performance language model designed to excel in tasks like natural language understanding, mathematical computation, and code generation | 182 |
| | Provides pre-trained models for Chinese language tasks with improved performance and smaller model sizes compared to existing models | 806 |
| | An implementation of a transformer-based NLP model utilizing gated attention units | 98 |