PromptCLUE

NLP Model

A pre-trained language model for multiple natural language processing tasks with support for few-shot learning and transfer learning.

PromptCLUE, a zero-shot learning model covering all Chinese NLP tasks
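Because every task is cast as a text-to-text prompt, one checkpoint can handle classification, extraction, and generation without task-specific heads. Below is a minimal sketch of zero-shot inference, assuming the model is published on Hugging Face as a T5-style checkpoint under the ID ClueAI/PromptCLUE-base; the model ID and prompt wording are illustrative assumptions, not taken from this page.

```python
# Minimal sketch: zero-shot inference with a T5-style PromptCLUE checkpoint.
# Assumption: the model is available as "ClueAI/PromptCLUE-base" on Hugging Face.
from transformers import T5Tokenizer, T5ForConditionalGeneration

model_name = "ClueAI/PromptCLUE-base"  # assumed Hugging Face model ID
tokenizer = T5Tokenizer.from_pretrained(model_name)
model = T5ForConditionalGeneration.from_pretrained(model_name)

# The task is expressed as a natural-language prompt, e.g. a sentiment-analysis
# prompt: "Sentiment analysis: <sentence> Options: positive, negative Answer:"
prompt = "情感分析：\n这个产品质量很好，我很满意。\n选项：积极，消极\n答案："
inputs = tokenizer(prompt, return_tensors="pt")
outputs = model.generate(**inputs, max_new_tokens=32)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```

Swapping the prompt template is all that is needed to switch tasks (translation, summarization, classification, etc.), which is what makes few-shot and zero-shot use practical with a single model.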

GitHub

656 stars
8 watching
66 forks
Language: Jupyter Notebook
Last commit: over 1 year ago
Topics: bert, chinese, few-shot-learning, gpt-3, multitask-learning, pretrained-models, prompt-tuning, roberta, t5-model, transfer-learning, zero-shot-learning

Related projects:

| Repository | Description | Stars |
| --- | --- | --- |
| cluebenchmark/cluepretrainedmodels | Provides pre-trained models for Chinese language tasks with improved performance and smaller model sizes compared to existing models. | 806 |
| cluebenchmark/pclue | A large-scale dataset for training models to perform multiple tasks and zero-shot learning in natural language processing. | 473 |
| clue-ai/chatyuan-7b | An updated version of a large language model designed to improve performance on multiple tasks and datasets. | 13 |
| clue-ai/chatyuan | A large language model for dialogue, supporting multiple languages. | 1,903 |
| cluebenchmark/electra | Trains and evaluates a Chinese language model using adversarial training on a large corpus. | 140 |
| cluebenchmark/cluecorpus2020 | A large-scale Chinese corpus for pre-training language models. | 927 |
| balavenkatesh3322/nlp-pretrained-model | A collection of pre-trained natural language processing models. | 170 |
| 01-ai/yi | A series of large language models trained from scratch to excel in multiple NLP tasks. | 7,743 |
| zhuiyitechnology/pretrained-models | A collection of pre-trained language models for natural language processing tasks. | 989 |
| curiosity-ai/catalyst | A C# natural language processing library with pre-trained models and tools for building custom models. | 752 |
| langboat/mengzi | Develops lightweight yet powerful pre-trained models for natural language processing tasks. | 533 |
| cluebenchmark/supercluelyb | A benchmarking platform for evaluating Chinese general-purpose models through anonymous, randomized battles. | 143 |
| thunlp/openclap | A repository of pre-trained Chinese language models for natural language processing tasks. | 977 |
| brightmart/xlnet_zh | Trains a large Chinese language model on massive data and provides a pre-trained model for downstream tasks. | 230 |