guwenbert
Chinese Model
Pre-trained language model for classical Chinese texts based on the RoBERTa architecture
GuwenBERT: A Pre-trained Language Model for Classical Chinese (Literary Chinese)
511 stars
6 watching
38 forks
last commit: over 3 years ago
Topics: bert, classical-chinese, guwenbert, literary-chinese, transformers
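Since the repository is tagged with transformers, the checkpoint can presumably be loaded through the Hugging Face transformers library. The following is a minimal fill-mask sketch; the model ID `ethanyt/guwenbert-base` and the BERT-style `[MASK]` token convention are assumptions, not details given on this page.

```python
# Minimal sketch, assuming the model is published on the Hugging Face Hub
# under "ethanyt/guwenbert-base" (ID not stated on this page) and uses the
# standard BERT-style "[MASK]" token.
import torch
from transformers import AutoModelForMaskedLM, AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("ethanyt/guwenbert-base")
model = AutoModelForMaskedLM.from_pretrained("ethanyt/guwenbert-base")

# Predict a masked character in a classical Chinese sentence.
text = "[MASK]太元中，武陵人捕鱼为业。"
inputs = tokenizer(text, return_tensors="pt")

with torch.no_grad():
    logits = model(**inputs).logits

# Locate the [MASK] position and decode the highest-scoring token.
mask_pos = (inputs["input_ids"] == tokenizer.mask_token_id).nonzero(as_tuple=True)[1]
predicted_id = logits[0, mask_pos].argmax(dim=-1)
print(tokenizer.decode(predicted_id))
```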
Related projects:
| Repository | Description | Stars |
| --- | --- | --- |
| | A word-based Chinese BERT model trained on large-scale text data using pre-trained models as a foundation | 460 |
| | Improves pre-trained Chinese language models by incorporating a correction task to alleviate inconsistency issues with downstream tasks | 646 |
| | Provides pre-trained language models derived from Wikipedia texts for natural language processing tasks | 34 |
| | Provides pre-trained models and tools for natural language understanding (NLU) and generation (NLG) tasks in Chinese | 439 |
| | A deep learning model that incorporates visual and phonetic features of Chinese characters to improve its understanding of Chinese language nuances | 545 |
| | Trains and evaluates a Chinese language model using adversarial training on a large corpus | 140 |
| | A pre-trained Chinese language model designed to be robust against maliciously crafted texts | 15 |
| | Provides pre-trained weights for a biomedical language representation model | 672 |
| | A high-performance language model designed to excel in tasks like natural language understanding, mathematical computation, and code generation | 182 |
| | Trains a large Chinese language model on massive data and provides a pre-trained model for downstream tasks | 230 |
| | Provides pre-trained models for Chinese language tasks with improved performance and smaller model sizes than existing models | 806 |
| | A collection of pre-trained language models for natural language processing tasks | 989 |
| | An implementation of a transformer-based NLP model utilizing gated attention units | 98 |
| | A BERT model trained on scientific text for natural language processing tasks | 1,532 |
| | An AI-powered text generation model trained on Chinese data to perform tasks such as conversation, translation, and content creation | 418 |