awesome-prompts

A curated list of ChatGPT prompts from the top-rated GPTs in the GPT Store, covering prompt engineering, prompt attacks and prompt protection, plus advanced prompt-engineering papers.


Open GPTs Prompts

💻 Professional Coder
👌 Academic Assistant Pro
✏️ All-around Writer
📗 All-around Teacher
AutoGPT (this prompt is ugly and unstable right now; let's improve it together!)

Other GPTs

Auto Literature Review
Scholar GPT Pro
Paraphraser & Proofreader
AI Detector Pro
Paper Review Pro
Auto Thesis PPT
Paper Interpreter Pro
Data Analysis
PDF Translator
AI Detector
AutoGPT
TeamGPT
GPT
AwesomeGPTs
Prompt Engineer
Paimon
Jessica
Logo Designer
Text Adventure RPG
Alina
My Boss
My Excellent Classmates
I Ching Divination


Advanced Prompt Engineering

https://ar5iv.labs.arxiv.org/html/2307.15337
https://ar5iv.labs.arxiv.org/html/2308.09687
https://ar5iv.labs.arxiv.org/html/2305.16582
https://ar5iv.labs.arxiv.org/html/2308.10379
https://ar5iv.labs.arxiv.org/html/2104.01431
https://ar5iv.labs.arxiv.org/html/2302.12822
https://ar5iv.labs.arxiv.org/html/2210.03493
https://ar5iv.labs.arxiv.org/html/2305.15408
https://ar5iv.labs.arxiv.org/html/2212.10509
https://ar5iv.labs.arxiv.org/html/2305.17812
https://ar5iv.labs.arxiv.org/html/2301.13379
https://ar5iv.labs.arxiv.org/html/2212.10001
https://ar5iv.labs.arxiv.org/html/2305.04091
https://ar5iv.labs.arxiv.org/html/2310.06692
https://ar5iv.labs.arxiv.org/html/2205.11916
Chainlit : A Python library for making chatbot interfaces
Embedchain : A Python library for managing and syncing unstructured data with LLMs
FLAML (A Fast Library for Automated Machine Learning & Tuning) : A Python library for automating selection of models, hyperparameters, and other tunable choices
GenAIScript : JavaScript-ish scripts to create and execute prompts and extract structured data, integrated into Visual Studio Code
Guardrails.ai : A Python library for validating outputs and retrying failures. Still in alpha, so expect sharp edges and bugs
Guidance : A handy-looking Python library from Microsoft that uses Handlebars templating to interleave generation, prompting, and logical control
Haystack : An open-source LLM orchestration framework for building customizable, production-ready LLM applications in Python
HoneyHive : An enterprise platform to evaluate, debug, and monitor LLM apps
LangChain : A popular Python/JavaScript library for chaining sequences of language model prompts
LiteLLM : A minimal Python library for calling LLM APIs with a consistent format
LlamaIndex : A Python library for augmenting LLM apps with data
LMQL : A programming language for LLM interaction with support for typed prompting, control flow, constraints, and tools
OpenAI Evals : An open-source library for evaluating task performance of language models and prompts
Outlines : A Python library that provides a domain-specific language to simplify prompting and constrain generation
Parea AI : A platform for debugging, testing, and monitoring LLM apps
Portkey : A platform for observability, model management, evals, and security for LLM apps
Promptify : A small Python library for using language models to perform NLP tasks
PromptPerfect : A paid product for testing and improving prompts
Prompttools : Open-source Python tools for testing and evaluating models, vector DBs, and prompts
Scale Spellbook : A paid product for building, comparing, and shipping language model apps
Semantic Kernel : A Python/C#/Java library from Microsoft that supports prompt templating, function chaining, vectorized memory, and intelligent planning
Weights & Biases : A paid product for tracking model training and prompt-engineering experiments
YiVal : An open-source GenAI-Ops tool for tuning and evaluating prompts, retrieval configurations, and model parameters using customizable datasets, evaluation methods, and evolution strategies
Brex's Prompt Engineering Guide : Brex's introduction to language models and prompt engineering
learnprompting.org : An introductory course on prompt engineering
Lil'Log Prompt Engineering : An OpenAI researcher's review of the prompt-engineering literature (as of March 2023)
OpenAI Cookbook: Techniques to improve reliability : A slightly dated (Sep 2022) review of techniques for prompting language models
promptingguide.ai : A prompt engineering guide that demonstrates many techniques
Xavi Amatriain's Prompt Engineering 101 : A basic but opinionated introduction to prompt engineering, with a follow-up collection covering many advanced methods, starting with CoT
Andrew Ng's DeepLearning.AI : A short course on prompt engineering for developers
Andrej Karpathy's Let's build GPT : A detailed dive into the machine learning underlying GPT
Prompt Engineering by DAIR.AI : A one-hour video on various prompt-engineering techniques
Scrimba course about the Assistants API : A 30-minute interactive course about the Assistants API
LinkedIn course: Introduction to Prompt Engineering: How to talk to the AIs : A short video introduction to prompt engineering
Chain-of-Thought Prompting Elicits Reasoning in Large Language Models (2022) : Using few-shot prompts to ask models to think step by step improves their reasoning. PaLM's score on math word problems (GSM8K) rises from 18% to 57%
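The technique above can be sketched as a prompt template: one worked exemplar whose answer spells out its intermediate steps, followed by the new question. The exemplar wording and the `build_cot_prompt` helper below are illustrative, not code from the paper.

```python
# A minimal few-shot chain-of-thought prompt: a worked exemplar whose
# answer shows its intermediate steps, followed by the new question.
FEW_SHOT_COT = """\
Q: Roger has 5 tennis balls. He buys 2 cans of 3 tennis balls each. \
How many tennis balls does he have now?
A: Roger started with 5 balls. 2 cans of 3 balls each is 6 balls. \
5 + 6 = 11. The answer is 11.

Q: {question}
A:"""

def build_cot_prompt(question: str) -> str:
    """Return a prompt that nudges the model to imitate step-by-step reasoning."""
    return FEW_SHOT_COT.format(question=question)
```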
Self-Consistency Improves Chain of Thought Reasoning in Language Models (2022) : Taking votes from multiple outputs improves accuracy even more. Voting across 40 outputs raises PaLM's score on math word problems further, from 57% to 74%, and raises another model's score from 60% to 78%
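Mechanically, self-consistency is just sampling several chains of thought and majority-voting over their final answers. A toy sketch, with made-up sampled answers standing in for real model completions:

```python
from collections import Counter

def self_consistency(final_answers):
    """Majority vote over the final answers extracted from several
    independently sampled chain-of-thought completions."""
    return Counter(final_answers).most_common(1)[0][0]

# 40 hypothetical samples whose chains disagree on the final answer:
sampled_answers = ["11"] * 25 + ["12"] * 10 + ["9"] * 5
```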
Tree of Thoughts: Deliberate Problem Solving with Large Language Models (2023) : Searching over trees of step-by-step reasoning helps even more than voting over chains of thought. It lifts scores on creative writing and crosswords
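The search itself can be any standard tree search. A toy beam-search sketch, where `expand` proposes candidate next thoughts and `score` is a value function (both hypothetical callables supplied by the caller, playing the roles the paper gives to the model):

```python
def tree_of_thoughts(root, expand, score, beam=2, depth=3):
    """Toy beam search over partial solutions ('thoughts'): instead of
    committing to a single chain, keep the `beam` highest-scoring partial
    solutions at every depth, then return the best one found."""
    frontier = [root]
    for _ in range(depth):
        candidates = [child for node in frontier for child in expand(node)]
        if not candidates:
            break
        frontier = sorted(candidates, key=score, reverse=True)[:beam]
    return max(frontier, key=score)
```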
Language Models are Zero-Shot Reasoners (2022) : Telling instruction-following models to think step by step improves their reasoning. It lifts scores on math word problems (GSM8K) from 13% to 41%
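The whole trick is a single trigger phrase appended after the question; the phrase is from the paper, the wrapper itself is illustrative:

```python
def zero_shot_cot(question: str) -> str:
    """Append the zero-shot chain-of-thought trigger from the paper."""
    return f"Q: {question}\nA: Let's think step by step."
```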
Large Language Models Are Human-Level Prompt Engineers (2023) : Automated searching over possible prompts found a prompt that lifts scores on math word problems (GSM8K) to 43%, 2 percentage points above the human-written prompt in Language Models are Zero-Shot Reasoners
Reprompting: Automated Chain-of-Thought Prompt Inference Through Gibbs Sampling (2023) : Automated searching over possible chain-of-thought prompts improved ChatGPT's scores on a few benchmarks by 0–20 percentage points
Faithful Reasoning Using Large Language Models (2022) : Reasoning can be improved by a system that combines: chains of thought generated by alternative selection and inference prompts, a halter model that chooses when to halt selection-inference loops, a value function to search over multiple reasoning paths, and sentence labels that help avoid hallucination
STaR: Bootstrapping Reasoning With Reasoning (2022) : Chain of thought reasoning can be baked into models via fine-tuning. For tasks with an answer key, example chains of thoughts can be generated by language models
ReAct: Synergizing Reasoning and Acting in Language Models (2023) : For tasks with tools or an environment, chain of thought works better if you prescriptively alternate between reasoning steps (thinking about what to do) and acting steps (getting information from a tool or environment)
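A ReAct transcript interleaves Thought / Action / Observation lines. A toy loop, where a hard-coded policy and a dictionary lookup are made-up stand-ins for the model and its tools:

```python
def react_loop(question, tools, max_steps=3):
    """Toy ReAct loop: alternate a Thought (decide what to do) with an
    Action (call a tool), feeding each Observation back into the transcript."""
    transcript = [f"Question: {question}"]
    for _ in range(max_steps):
        # A real agent would ask the model for the next Thought/Action pair;
        # this stand-in policy always looks the question up directly.
        transcript.append(f"Thought: I should look up '{question}'.")
        transcript.append(f"Action: lookup[{question}]")
        observation = tools["lookup"](question)
        transcript.append(f"Observation: {observation}")
        if observation is not None:
            return observation, transcript
    return None, transcript

facts = {"capital of France": "Paris"}
tools = {"lookup": facts.get}
```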
Reflexion: an autonomous agent with dynamic memory and self-reflection (2023) : Retrying tasks with memory of prior failures improves subsequent performance
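The retry-with-memory pattern is easy to sketch; `attempt`, `check`, and `reflect` are hypothetical callables standing in for the model, an evaluator, and a verbal self-critique step:

```python
def solve_with_reflexion(attempt, check, reflect, max_tries=3):
    """Reflexion-style loop: each failed attempt is critiqued, and the
    critiques accumulate in a memory that later attempts can read."""
    memory = []
    out = None
    for _ in range(max_tries):
        out = attempt(memory)
        ok, feedback = check(out)
        if ok:
            break
        memory.append(reflect(out, feedback))
    return out, memory
```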
Demonstrate-Search-Predict: Composing retrieval and language models for knowledge-intensive NLP (2023) : Models augmented with knowledge via a "retrieve-then-read" pipeline can be improved with multi-hop chains of searches
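A multi-hop chain of searches can be sketched as: retrieve passages for the current query, let a reader produce the next query (or the final answer), repeat. The tiny knowledge base and string-matching reader below are made-up stand-ins for a retriever and a language model:

```python
def multihop(question, retrieve, read, hops=2):
    """Toy retrieve-then-read with multi-hop search: each hop adds retrieved
    passages to the context, and the reader emits the next query or answer."""
    query, context = question, []
    for _ in range(hops):
        context.extend(retrieve(query))
        query = read(context)
    return query

kb = {
    "Where was the author of Book X born?": ["Book X was written by Alice."],
    "Alice": ["Alice was born in Paris."],
}

def read(context):
    # Trivial reader: prefer a birthplace fact, else hop to the author's name.
    for passage in context:
        if "born in" in passage:
            return passage.split("born in ")[1].rstrip(".")
    for passage in context:
        if "written by" in passage:
            return passage.split("written by ")[1].rstrip(".")
    return ""
```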
Improving Factuality and Reasoning in Language Models through Multiagent Debate (2023) : Generating debates between a few ChatGPT agents over a few rounds improves scores on various benchmarks. Math word problem scores rise from 77% to 85%
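A debate round can be sketched as each agent revising its answer after seeing the others', with a final majority vote; the agents here are hypothetical callables `(question, peer_answers) -> answer` standing in for separate model instances:

```python
from collections import Counter

def debate(agents, question, rounds=2):
    """Toy multi-agent debate: agents answer, then repeatedly revise after
    reading their peers' answers; settle by majority vote at the end."""
    answers = [agent(question, []) for agent in agents]
    for _ in range(rounds):
        answers = [
            agent(question, answers[:i] + answers[i + 1:])
            for i, agent in enumerate(agents)
        ]
    return Counter(answers).most_common(1)[0][0]
```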