gpt-2
Language model repository
Code and models for research into language modeling and multitask learning, released alongside the paper "Language Models are Unsupervised Multitask Learners".
23k stars
633 watching
6k forks
Language: Python
last commit: 3 months ago
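The repository itself ships TensorFlow sampling scripts for the released checkpoints. As a rough, hedged illustration of what generating text from the 124M checkpoint involves, the sketch below uses the Hugging Face `transformers` port of GPT-2 rather than the repository's own code; the model name `"gpt2"` and the sampling settings are assumptions made for this example, not settings taken from this repo.

```python
# Minimal sampling sketch using the Hugging Face transformers port of GPT-2
# (an assumption for illustration -- the openai/gpt-2 repo ships its own
# TensorFlow scripts for the same task).
from transformers import GPT2LMHeadModel, GPT2Tokenizer

tokenizer = GPT2Tokenizer.from_pretrained("gpt2")   # 124M-parameter checkpoint
model = GPT2LMHeadModel.from_pretrained("gpt2")

prompt = "Language models are unsupervised multitask learners because"
input_ids = tokenizer.encode(prompt, return_tensors="pt")

# Autoregressive sampling with top-k filtering, a commonly used setting for GPT-2.
output_ids = model.generate(
    input_ids,
    max_length=60,
    do_sample=True,
    top_k=40,
    pad_token_id=tokenizer.eos_token_id,
)
print(tokenizer.decode(output_ids[0], skip_special_tokens=True))
```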
Related projects:
| Repository | Description | Stars |
| --- | --- | --- |
| minimaxir/gpt-2-simple | A tool for retraining and fine-tuning the OpenAI GPT-2 text generation model on new datasets (see the fine-tuning sketch after this table). | 3,398 |
| openai/gpt-2-output-dataset | A collection of GPT-2 model outputs for research on detection, biases, and detectability. | 1,941 |
| brexhq/prompt-engineering | Guides software developers on effectively using and building systems around large language models like GPT-4. | 8,448 |
| karpathy/mingpt | A minimal PyTorch implementation of a transformer-based language model. | 20,175 |
| eleutherai/gpt-neox | A framework for training large-scale language models on GPUs with advanced features and optimizations. | 6,941 |
| thunlp/plmpapers | Compiles and organizes key papers on pre-trained language models as a resource for developers and researchers. | 3,328 |
| openai/consistency_models | A PyTorch-based framework for training and sampling consistency models in image generation. | 6,166 |
| dair-ai/ml-papers-explained | Explanations of key concepts and advancements in machine learning. | 7,315 |
| instruction-tuning-with-gpt-4/gpt-4-llm | Generates instruction-following data with GPT-4 for fine-tuning large language models on real-world tasks. | 4,210 |
| imoneoi/openchat | Language models fine-tuned on mixed-quality data. | 5,260 |
| flagai-open/flagai | An open-source toolkit for training and deploying large-scale, multi-modal AI models on various downstream tasks. | 3,830 |
| openai/finetune-transformer-lm | Code and a model for improving language understanding through generative pre-training with a transformer architecture. | 2,160 |
| thunlp/promptpapers | A curated list of papers on prompt-based tuning for pre-trained language models. | 4,092 |
| ricklamers/gpt-code-ui | An interactive code generation and execution tool using AI models. | 3,561 |
| flagai-open/aquila2 | Pre-trained language models and tools for fine-tuning and evaluation. | 437 |
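Of the projects above, minimaxir/gpt-2-simple is the most direct companion to this repository: it wraps the released checkpoints behind a small fine-tuning and generation API. The sketch below is a rough outline of that workflow rather than a verbatim recipe; the function names follow gpt-2-simple's documented interface as best recalled here, and the training file `corpus.txt` is a hypothetical placeholder.

```python
# Hedged fine-tuning sketch with gpt-2-simple (pip install gpt-2-simple).
# Assumes a plain-text training file named corpus.txt (hypothetical placeholder).
import gpt_2_simple as gpt2

gpt2.download_gpt2(model_name="124M")       # fetch the released 124M checkpoint

sess = gpt2.start_tf_sess()                 # TensorFlow session used by the library
gpt2.finetune(
    sess,
    dataset="corpus.txt",                   # your training text
    model_name="124M",
    steps=1000,                             # number of fine-tuning steps
)

# Generate text from the fine-tuned model.
gpt2.generate(sess, prefix="Once upon a time", length=100)
```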