wikibert
Language model library
Provides pre-trained BERT models for many languages, derived from Wikipedia texts, for use in natural language processing tasks
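As a sketch of typical usage, the snippet below loads one of the WikiBERT models with the Hugging Face transformers library and runs masked-token prediction. The model ID `TurkuNLP/wikibert-base-fi-cased` is an assumption about how the Finnish checkpoint is published on the Hub; a local path to a downloaded WikiBERT checkpoint works the same way.

```python
# A minimal sketch, assuming the Finnish WikiBERT checkpoint is available
# on the Hugging Face Hub as "TurkuNLP/wikibert-base-fi-cased" (hypothetical
# ID -- substitute whichever WikiBERT checkpoint you actually have).
from transformers import pipeline

fill_mask = pipeline("fill-mask", model="TurkuNLP/wikibert-base-fi-cased")

# Predict the masked token in a Finnish sentence
# ("Helsinki is the [MASK] of Finland").
for prediction in fill_mask("Helsinki on Suomen [MASK]."):
    print(prediction["token_str"], round(prediction["score"], 3))
```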
34 stars
12 watching
1 fork
last commit: over 4 years ago
Linked from 1 awesome list
Related projects:
| Repository | Description | Stars |
| --- | --- | --- |
| | Develops lightweight yet powerful pre-trained models for natural language processing tasks | 533 |
| | Provides pre-trained language models for natural language processing tasks | 155 |
| | Provides pre-trained BERT models for Nordic languages with limited training data | 164 |
| | A repository of pre-trained language models for Chinese natural language processing tasks | 977 |
| | A word-based Chinese BERT model trained on large-scale text data, built on existing pre-trained models | 460 |
| | A collection of pre-trained language models for natural language processing tasks | 989 |
| | A pre-trained language model for classical Chinese texts based on the RoBERTa architecture | 511 |
| | A repository of pre-trained language models for various tasks and domains | 121 |
| | Provides pre-trained binary models for natural language text processing across multiple languages | 4 |
| | Pre-trained language models for biomedical natural language processing tasks | 560 |
| | A Polish BERT-based language model trained on various corpora for natural language processing tasks | 70 |
| | A collection of lightweight state-of-the-art language models designed to support multilinguality, coding, and reasoning on constrained resources | 232 |
| | Improves pre-trained Chinese language models by adding a correction task that alleviates inconsistencies with downstream tasks | 646 |
| | A language model trained on Danish Wikipedia data for named entity recognition and masked language modeling | 9 |
| | Source files and training scripts for language models | 12 |