PromptCBLUE
Medical NLP training data
PromptCBLUE: a large-scale instruction-tuning dataset for multi-task and few-shot learning in the Chinese medical domain
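As a rough illustration of how an instruction-tuning dataset like this is typically consumed, the minimal sketch below reads a few records from a JSON-lines training file and prints the prompt/response pairs. The file name and the `input`/`target`/`task_type` field names are assumptions for illustration only, not the repository's documented schema; check the project's data card for the actual format.

```python
import json

def load_examples(path, limit=3):
    """Read the first few JSON-lines records from an instruction-tuning file."""
    examples = []
    with open(path, encoding="utf-8") as f:
        for i, line in enumerate(f):
            if i >= limit:
                break
            examples.append(json.loads(line))
    return examples

if __name__ == "__main__":
    # "train.jsonl" and the field names below are placeholders, not the
    # repository's confirmed layout.
    for ex in load_examples("train.jsonl"):
        print("Task:   ", ex.get("task_type"))
        print("Prompt: ", ex.get("input"))
        print("Target: ", ex.get("target"))
        print("-" * 40)
```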
328 stars
6 watching
34 forks
Language: Python
last commit: about 1 year ago

Related projects:
| Repository | Description | Stars |
| --- | --- | --- |
| | Develops and deploys large language models for Chinese medical consultations to improve answer accuracy | 531 |
| | A large-scale dataset for training models on multiple NLP tasks and for zero-shot learning | 473 |
| | Develops and deploys a large language model for traditional Chinese medicine applications | 316 |
| | A pre-trained language model for multiple natural language processing tasks, with support for few-shot learning and transfer learning | 656 |
| | Develops lightweight yet powerful pre-trained models for natural language processing tasks | 533 |
| | Interactive notebooks for EEG/MEG data analysis using Python | 26 |
| | A unified language service engine pre-trained on large amounts of medical-domain data | 471 |
| | Trains a large Chinese language model on massive data and provides a pre-trained model for downstream tasks | 230 |
| | Develops a large-scale dataset and benchmark for training multimodal chart-understanding models using large language models | 87 |
| | Pre-trained language models for biomedical natural language processing tasks | 560 |
| | A collection of data for evaluating Chinese machine reading comprehension systems | 419 |
| | A collection of tools and modeling code for a large multilingual natural language understanding dataset | 541 |
| | An evaluation suite for assessing language models' performance on multiple-choice questions | 93 |
| | Provides a flexible and configurable framework for training deep learning models with PyTorch | 1,196 |