OFA-Chinese

Transformer-based OFA model

Adapts the Chinese OFA model to work with the Hugging Face Transformers framework

A Chinese OFA model built on the transformers architecture
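
Because the converted checkpoints plug into the standard Transformers loading and generation APIs, image captioning comes down to a few lines. The sketch below is illustrative only: it assumes the OFA model classes (`OFATokenizer`, `OFAModel`) from OFA-Sys's transformers fork are installed, and the checkpoint path, image path, and Chinese prompt are placeholders rather than values confirmed by this repository.

```python
# Minimal image-captioning sketch for a transformers-compatible OFA checkpoint.
# Assumes the OFA classes from OFA-Sys's transformers fork; the checkpoint path,
# image path, and prompt below are illustrative placeholders.
from PIL import Image
from torchvision import transforms
from transformers import OFAModel, OFATokenizer

ckpt_dir = "path/to/converted-chinese-ofa-checkpoint"  # placeholder

# OFA's standard image preprocessing: 480x480 resize plus 0.5/0.5 normalization.
resolution = 480
patch_resize_transform = transforms.Compose([
    lambda image: image.convert("RGB"),
    transforms.Resize((resolution, resolution),
                      interpolation=transforms.InterpolationMode.BICUBIC),
    transforms.ToTensor(),
    transforms.Normalize(mean=[0.5, 0.5, 0.5], std=[0.5, 0.5, 0.5]),
])

tokenizer = OFATokenizer.from_pretrained(ckpt_dir)
model = OFAModel.from_pretrained(ckpt_dir, use_cache=False)

# OFA is prompt-driven: the text input carries the task instruction.
txt = "图片描述了什么?"  # "What does the image describe?"
inputs = tokenizer([txt], return_tensors="pt").input_ids
patch_img = patch_resize_transform(Image.open("example.jpg")).unsqueeze(0)

# Beam-search decoding of the generated caption.
gen = model.generate(inputs, patch_images=patch_img,
                     num_beams=5, no_repeat_ngram_size=3)
print(tokenizer.batch_decode(gen, skip_special_tokens=True))
```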

GitHub · 123 stars · 5 watching · 16 forks
Language: Python
Last commit: almost 2 years ago

Related projects:

| Repository | Description | Stars |
| --- | --- | --- |
| fastnlp/cpt | Pre-trained transformer for Chinese natural language understanding and generation | 481 |
| thudm/chinese-transformer-xl | Pre-trained Chinese language model based on the Transformer-XL architecture | 218 |
| zhuiyitechnology/gau-alpha | Gated Attention Unit-based transformer for natural language processing tasks | 96 |
| zhuiyitechnology/roformer | Enhanced transformer with improved relative position embeddings | 819 |
| lonepatient/nezha_chinese_pytorch | PyTorch implementation of the NEZHA Chinese language model | 262 |
| zhuiyitechnology/roformer-v2 | Faster and more effective text-processing model based on the RoFormer architecture | 149 |
| huggingface/pytorch-openai-transformer-lm | PyTorch implementation of OpenAI's transformer language model, with pre-trained weights and fine-tuning support | 1,511 |
| yunwentechnology/unilm | Pre-trained models for natural language understanding and generation using the UniLM architecture | 438 |
| wangyuxinwhy/uniem | Unified sentence embedding models for NLP tasks | 833 |
| huggingface/tflite-android-transformers | Converts popular transformer models to run on Android devices for efficient inference and generation | 392 |
| wuhaozhe/style_avatar | Generates stylized talking faces and videos using deep learning models | 278 |
| a312863063/seeprettyface-ganerator-dongman | Python implementation of a StyleGAN-based anime face generator | 151 |
| aznn/laravel-scene | Laravel library to convert models into standardized API responses | 29 |
| german-nlp-group/german-transformer-training | Trains German transformer models to improve language understanding | 23 |
| shawn-ieitsystems/yuan-1.0 | Large-scale language model with improved NLP performance through distributed training and efficient data processing | 591 |