edu-bert
Education NLP model
A pre-trained language model designed to improve natural language processing tasks in education
TAL Education Group has open-sourced TAL-EduBERT, the first Chinese pre-trained model for online teaching in the education domain.
187 stars
32 watching
41 forks
Language: Python
last commit: almost 4 years ago
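Assuming the released TAL-EduBERT checkpoint follows the standard BERT format (config, vocabulary, and PyTorch weights), a minimal sketch of loading it with the Hugging Face transformers library could look like the following. The local directory path is hypothetical and stands in for wherever the downloaded files live.

```python
# Minimal sketch: loading a BERT-style Chinese checkpoint such as TAL-EduBERT
# with Hugging Face transformers. The directory path below is hypothetical and
# assumes the release ships standard BERT files (config.json, vocab.txt, weights).
import torch
from transformers import BertTokenizer, BertModel

model_dir = "./TAL-EduBERT"  # hypothetical path to the downloaded checkpoint

tokenizer = BertTokenizer.from_pretrained(model_dir)
model = BertModel.from_pretrained(model_dir)
model.eval()

# Encode a short Chinese classroom utterance and take the [CLS] vector
# as a sentence-level feature for a downstream education NLP task.
inputs = tokenizer("同学们，今天我们来学习一元二次方程。", return_tensors="pt")
with torch.no_grad():
    outputs = model(**inputs)
cls_embedding = outputs.last_hidden_state[:, 0, :]  # shape: (1, hidden_size)
print(cls_embedding.shape)
```

From there, a task-specific head can be trained on top of the sentence representation for education-domain classification tasks.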
Related projects:

Repository | Description | Stars |
---|---|---|
allenai/scibert | A BERT model trained on scientific text for natural language processing tasks | 1,521 |
balavenkatesh3322/nlp-pretrained-model | A collection of pre-trained natural language processing models | 170 |
nttcslab-nlp/doc_lm | Source files and training scripts for language models | 12 |
dbmdz/berts | Provides pre-trained language models for natural language processing tasks | 155 |
german-nlp-group/german-transformer-training | Trains German transformer models to improve language understanding | 23 |
turkunlp/wikibert | Provides pre-trained language models derived from Wikipedia texts for natural language processing tasks | 34 |
multimodal-art-projection/map-neo | A large language model designed for research and application in natural language processing tasks. | 877 |
zhuiyitechnology/pretrained-models | A collection of pre-trained language models for natural language processing tasks | 987 |
peleiden/daluke | A language model trained on Danish Wikipedia data for named entity recognition and masked language modeling | 9 |
ymcui/pert | Develops a pre-trained language model to learn semantic knowledge from permuted text without mask labels | 354 |
ymcui/lert | A pre-trained language model designed to leverage linguistic features and outperform comparable baselines on Chinese natural language understanding tasks. | 202 |
clue-ai/promptclue | A pre-trained language model for multiple natural language processing tasks with support for few-shot learning and transfer learning. | 654 |
jd-aig/nlp_baai | A collection of natural language processing models and tools for collaboration on a joint project between BAAI and JDAI. | 252 |
brightmart/xlnet_zh | Trains a large Chinese language model on massive data and provides a pre-trained model for downstream tasks | 230 |
yxuansu/tacl | Improves pre-trained language models by encouraging an isotropic and discriminative distribution of token representations. | 92 |