# bluebert
Pre-trained language models for biomedical natural language processing tasks
BlueBERT, pre-trained on PubMed abstracts and clinical notes (MIMIC-III).
- 560 stars, 23 watching, 79 forks
- Language: Python
- Last commit: over 1 year ago
Linked from 1 awesome list
Topics: bert, bert-model, language-model, mimic-iii, natural-language-processing, pubmed, pubmed-abstracts
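As a minimal sketch of how these weights are typically consumed, the snippet below loads BlueBERT through the Hugging Face `transformers` library and extracts contextual embeddings for a clinical sentence. The model id used here is an assumption (a commonly mirrored Hugging Face checkpoint name); the repository itself also distributes the original TensorFlow checkpoints.

```python
# Sketch: extracting BlueBERT embeddings with Hugging Face transformers.
# NOTE: the model id is an assumed Hugging Face mirror of the official
# ncbi-nlp/bluebert weights, not something stated in this listing.
from transformers import AutoModel, AutoTokenizer

MODEL_ID = "bionlp/bluebert_pubmed_mimic_uncased_L-12_H-768_A-12"  # assumed id

tokenizer = AutoTokenizer.from_pretrained(MODEL_ID)
model = AutoModel.from_pretrained(MODEL_ID)

# Encode a clinical-style sentence and run a forward pass.
inputs = tokenizer(
    "The patient was administered 5 mg of warfarin daily.",
    return_tensors="pt",
)
outputs = model(**inputs)

# One 768-dimensional vector per input token (BERT-base hidden size).
embeddings = outputs.last_hidden_state
print(embeddings.shape)
```

The same pattern applies to the related BERT-family checkpoints listed below (BioBERT, SciBERT, ClinicalBERT), substituting the appropriate model id.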
Related projects:
Repository | Description | Stars |
---|---|---|
dmis-lab/biobert | Provides pre-trained language representation models for biomedical text mining tasks | 1,970 |
ncbi-nlp/biosentvec | Pre-trained word and sentence embeddings for biomedical text analysis | 578 |
naver/biobert-pretrained | Provides pre-trained weights for a biomedical language representation model | 672 |
allenai/scibert | A BERT model trained on scientific text for natural language processing tasks | 1,532 |
dbmdz/berts | Provides pre-trained language models for natural language processing tasks | 155 |
balavenkatesh3322/nlp-pretrained-model | A collection of pre-trained natural language processing models | 170 |
emilyalsentzer/clinicalbert | Provides clinical BERT embeddings for natural language processing tasks in healthcare | 680 |
openbmb/bmlist | A curated list of large machine learning models tracked over time | 341 |
langboat/mengzi | Develops lightweight yet powerful pre-trained models for natural language processing tasks | 533 |
turkunlp/wikibert | Provides pre-trained language models derived from Wikipedia texts for natural language processing tasks | 34 |
nttcslab-nlp/doc_lm | Source files and training scripts for document-level language models | 12 |
davidnemeskey/embert | Provides pre-trained transformer-based models and tools for natural language processing tasks | 2 |
alibaba/alicemind | A collection of pre-trained encoder-decoders and related optimization techniques for natural language processing | 1,986 |
ncbi/genegpt | An LLM that leverages NCBI Web APIs to answer biomedical information questions with improved accuracy and reliability | 384 |
ermlab/politbert | Trains a language model using a RoBERTa architecture on high-quality Polish text data | 33 |