biobert-pretrained
Language Model
Provides pre-trained weights for a biomedical language representation model (see the loading sketch below)
BioBERT: a pre-trained biomedical language representation model for biomedical text mining
672 stars
26 watching
88 forks
last commit: over 4 years ago
Linked from 2 awesome lists
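Since the repository distributes raw checkpoints, a minimal sketch of one common way to use the weights is shown below, loading a converted copy through the Hugging Face `transformers` library. The Hub id `dmis-lab/biobert-v1.1` is an assumption here: a community-hosted conversion, not a file shipped in this repository.

```python
# Minimal sketch: load converted BioBERT weights with Hugging Face
# transformers. The Hub id below is an assumed community conversion,
# not a checkpoint distributed by this repository.
from transformers import AutoModel, AutoTokenizer

MODEL_ID = "dmis-lab/biobert-v1.1"  # assumed Hub mirror of the weights

tokenizer = AutoTokenizer.from_pretrained(MODEL_ID)
model = AutoModel.from_pretrained(MODEL_ID)

# Encode a biomedical sentence and inspect the contextual embeddings.
inputs = tokenizer("BRCA1 mutations increase breast cancer risk.",
                   return_tensors="pt")
outputs = model(**inputs)
print(outputs.last_hidden_state.shape)  # (1, seq_len, 768) for the base model
```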
Related projects:
| Repository | Description | Stars |
|---|---|---|
| | Provides pre-trained language representation models for biomedical text mining tasks | 1,970 |
| | Pre-trained language models for biomedical natural language processing tasks | 560 |
| | A collection of pre-trained natural language processing models | 170 |
| | A BERT model trained on scientific text for natural language processing tasks | 1,532 |
| | Provides pre-trained language models derived from Wikipedia texts for natural language processing tasks | 34 |
| | Pre-trained word and sentence embeddings for biomedical text analysis | 578 |
| | Pre-trained language model for classical Chinese texts using the RoBERTa architecture | 511 |
| | Improves pre-trained Chinese language models by adding a correction task that alleviates inconsistency with downstream tasks | 646 |
| | Develops lightweight yet powerful pre-trained models for natural language processing tasks | 533 |
| | Provides pre-trained BERT models for Nordic languages with limited training data | 164 |
| | Provides pre-trained language models for natural language processing tasks | 155 |
| | A collection of pre-trained language models for natural language processing tasks | 989 |
| | Training data for a handwriting recognition system | 21 |
| | Develops a pre-trained language model that learns semantic knowledge from permuted text without mask labels | 356 |
| | An open-source BERT-based language model pre-trained on financial text data | 685 |