GPT-4-LLM
GPT-4 data generator
This project generates instruction-following data using GPT-4 to fine-tune large language models for real-world tasks.
Instruction Tuning with GPT-4
4k stars
43 watching
300 forks
Language: HTML
Last commit: over 1 year ago
Linked from 1 awesome list
Tags: alpaca, chatgpt, gpt-4, instruction-tuning, llama
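The generated data follows the Alpaca-style format: each training example is an (instruction, input, output) triple, where the output is produced by GPT-4. A minimal sketch of that data-preparation step is below; the prompt template mirrors Alpaca's, and `query_fn` is a hypothetical stand-in for the actual GPT-4 API call, which is not shown here.

```python
import json

def build_prompt(instruction: str, input_text: str = "") -> str:
    """Compose an Alpaca-style prompt from an instruction and optional input."""
    if input_text:
        return (
            "Below is an instruction that describes a task, paired with an input "
            "that provides further context. Write a response that appropriately "
            f"completes the request.\n\n### Instruction:\n{instruction}\n\n"
            f"### Input:\n{input_text}\n\n### Response:"
        )
    return (
        "Below is an instruction that describes a task. Write a response that "
        f"appropriately completes the request.\n\n"
        f"### Instruction:\n{instruction}\n\n### Response:"
    )

def make_example(instruction: str, input_text: str, query_fn) -> dict:
    """Build one training example; query_fn stands in for the GPT-4 request."""
    return {
        "instruction": instruction,
        "input": input_text,
        "output": query_fn(build_prompt(instruction, input_text)),
    }

# Usage with a dummy responder in place of GPT-4:
example = make_example("Name three primary colors.", "",
                       lambda prompt: "Red, blue, yellow.")
print(json.dumps(example, indent=2))
```

Collecting many such triples into a JSON list yields a dataset in the same shape used for instruction-tuning models like LLaMA.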
Related projects:
| Repository | Description | Stars |
| --- | --- | --- |
| | A system that uses large language and vision models to generate and process visual instructions | 20,683 |
| | A curated collection of high-quality datasets for training large language models | 2,708 |
| | Provides a unified interface for fine-tuning large language models with parameter-efficient methods and instruction collection data | 2,640 |
| | A curated list of papers on prompt-based tuning for pre-trained language models, providing insights and advancements in the field | 4,112 |
| | Compiles and organizes key papers on pre-trained language models, providing a resource for developers and researchers | 3,331 |
| | A curated list of resources to help developers navigate the landscape of large language models and their applications in NLP | 9,551 |
| | A repository providing code and models for research into language modeling and multitask learning | 22,644 |
| | An open-source project that enhances visual instruction tuning for text-rich image understanding by integrating GPT-4 models with multimodal datasets | 259 |
| | A platform for training, serving, and evaluating large language models to enable tool-use capability | 4,888 |
| | An implementation of a method for fine-tuning language models to follow instructions with high efficiency and accuracy | 5,775 |
| | Develops and pretrains a GPT-like large language model from scratch | 35,405 |
| | A tool for efficiently fine-tuning large language models across multiple architectures and methods | 36,219 |
| | Guides software developers on how to effectively use and build systems around large language models like GPT-4 | 8,487 |
| | Provides recipes and guidelines for training language models to align with human preferences and AI goals | 4,800 |
| | A general-purpose language model pre-trained with an autoregressive blank-filling objective, designed for various natural language understanding and generation tasks | 3,207 |