finetune-transformer-lm
Language model trainer
Code and model for the paper "Improving Language Understanding by Generative Pre-Training": a transformer language model is generatively pre-trained on unlabelled text and then fine-tuned to improve performance on language-understanding tasks.
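The paper's recipe is two-stage: next-token (generative) pre-training of a transformer, followed by supervised fine-tuning in which a task head is added and the task loss is combined with a weighted auxiliary language-modelling loss. The sketch below illustrates that fine-tuning objective only; it is written in PyTorch purely for illustration, does not reproduce this repository's actual code, and every name in it (`TinyTransformerLM`, `fine_tune_step`, `lm_lambda`) is hypothetical.

```python
# Minimal sketch of fine-tuning with an auxiliary LM loss (task loss + lambda * LM loss).
# Illustrative only -- not the repository's implementation; all names are hypothetical.
import torch
import torch.nn as nn
import torch.nn.functional as F

class TinyTransformerLM(nn.Module):
    def __init__(self, vocab_size=1000, d_model=64, n_heads=4, n_layers=2, n_classes=2, max_len=32):
        super().__init__()
        self.tok = nn.Embedding(vocab_size, d_model)
        self.pos = nn.Embedding(max_len, d_model)
        layer = nn.TransformerEncoderLayer(d_model, n_heads, dim_feedforward=4 * d_model, batch_first=True)
        self.encoder = nn.TransformerEncoder(layer, n_layers)
        self.lm_head = nn.Linear(d_model, vocab_size)   # next-token prediction head (pre-training)
        self.clf_head = nn.Linear(d_model, n_classes)   # task head added at fine-tuning time

    def forward(self, x):
        pos = torch.arange(x.size(1), device=x.device)
        h = self.tok(x) + self.pos(pos)
        mask = nn.Transformer.generate_square_subsequent_mask(x.size(1)).to(x.device)
        h = self.encoder(h, mask=mask)                  # causal (left-to-right) attention
        return self.lm_head(h), self.clf_head(h[:, -1]) # LM logits + task logits from last position

def fine_tune_step(model, x, labels, lm_lambda=0.5):
    """Supervised task loss plus a weighted auxiliary language-modelling loss."""
    lm_logits, clf_logits = model(x)
    clf_loss = F.cross_entropy(clf_logits, labels)
    lm_loss = F.cross_entropy(lm_logits[:, :-1].reshape(-1, lm_logits.size(-1)),
                              x[:, 1:].reshape(-1))    # predict token t+1 from tokens <= t
    return clf_loss + lm_lambda * lm_loss

model = TinyTransformerLM()
x = torch.randint(0, 1000, (8, 32))   # toy token ids
labels = torch.randint(0, 2, (8,))    # toy task labels
loss = fine_tune_step(model, x, labels)
loss.backward()
```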
2k stars
74 watching
502 forks
Language: Python
last commit: about 6 years ago
Related projects:
| Repository | Description | Stars |
|---|---|---|
| | Training methods and tools for fine-tuning language models using human preferences | 1,240 |
| | Implementing OpenAI's transformer language model in PyTorch with pre-trained weights and fine-tuning capabilities | 1,511 |
| | Trains German transformer models to improve language understanding | 23 |
| | Develops a method for pre-training language understanding models by combining masked and permuted techniques, and provides code for implementation and fine-tuning | 288 |
| | Provides pre-trained language models and tools for fine-tuning and evaluation | 439 |
| | An implementation of a transformer-based NLP model utilizing gated attention units | 98 |
| | A guide to using pre-trained large language models in source code analysis and generation | 1,789 |
| | A pre-trained transformer model for natural language understanding and generation tasks in Chinese | 482 |
| | A framework for training and fine-tuning multimodal language models on various data types | 601 |
| | A repository providing tools and datasets to fine-tune language models for specific tasks | 1,484 |
| | Research tool for training large transformer language models at scale | 1,926 |
| | Provides a flexible and configurable framework for training deep learning models with PyTorch | 1,196 |
| | A collection of tools and scripts for training large transformer language models at scale | 1,342 |
| | Develops pretraining and finetuning techniques for language models using metadata-conditioned text generation | 18 |
| | An open-source implementation of a vision-language instructed large language model | 513 |