Ernest Davis, July 3, 2022.
A haphazard collection: some papers, posts, and tweets on prompting that I have run across, for one reason or another.
Pretrain, Prompt, and Predict: A Systematic Survey of Prompting Methods in Natural Language Processing. Pengfei Liu et al.
Language Models are Few-Shot Learners. The original GPT-3 paper.
Chain-of-Thought Prompting Elicits Reasoning in Large Language Models.
Large Language Models are Zero-Shot Reasoners. "Let's think step by step".
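The two entries above describe closely related techniques: in the chain-of-thought paper, the few-shot demonstrations in the prompt spell out their intermediate reasoning; in the zero-shot paper there are no demonstrations, and the trigger phrase "Let's think step by step" is appended after the question instead. As a rough sketch of what the two prompt styles look like (the example problems and wording below are my own, and the call to the model itself is omitted):

```python
# Illustrative sketch of the two prompting styles described in the two
# entries above. The example problems and exact wording are my own
# inventions; only the structure of the prompts is the point.

# Few-shot chain-of-thought prompt: each demonstration in the prompt
# includes the intermediate reasoning, not just the final answer.
CHAIN_OF_THOUGHT_PROMPT = """\
Q: Sally has 3 boxes of crayons. Each box holds 8 crayons. She gives
5 crayons to a friend. How many crayons does she have left?
A: 3 boxes of 8 crayons is 24 crayons. After giving away 5, she has
24 - 5 = 19 crayons. The answer is 19.

Q: A train travels 60 miles per hour for 2 hours and then 30 miles
per hour for 1 hour. How far does it travel in total?
A:"""

# Zero-shot chain-of-thought prompt: no demonstrations at all; a single
# trigger phrase is appended after the question.
def zero_shot_cot(question: str) -> str:
    return f"Q: {question}\nA: Let's think step by step."

if __name__ == "__main__":
    print(CHAIN_OF_THOUGHT_PROMPT)
    print()
    print(zero_shot_cot("How many days are there in 3 weeks and 4 days?"))
```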
Tweet by Joscha Bach suggesting the prompt prefix "The following is a task protocol conducted with a SOTA AI giving the most correct answer possible."
Autoformalization with Large Language Models. Note the long specialized prompts in Appendix A; e.g., all of pp. 16-17 is a single prompt.
Solving Quantitative Reasoning Problems with Language Models. The Minerva system for solving math word problems. Note the 4-shot prompt in Appendix D.2, p. 19.
Multitask Prompted Training Enables Zero-Shot Task Generalization. This is a rather different kind of prompt engineering; the prompts are used in fine-tuning the model, not at the time of task execution. Thanks to Vijay Saraswat for pointing this out.
Improving machine reading comprehension with general reading strategies. Kai Sun, Dian Yu, Dong Yu, and Claire Cardie. NAACL-HLT 2019.
Explain yourself! Leveraging language models for commonsense reasoning. Nazneen Fatema Rajani, Bryan McCann, Caiming Xiong, and Richard Socher. ACL 2019.
Unsupervised commonsense question answering with self-talk. Vered Shwartz, Peter West, Ronan Le Bras, Chandra Bhagavatula, and Yejin Choi. EMNLP 2020.
Show your work: Scratchpads for intermediate computation with language models.
Generated knowledge prompting for commonsense reasoning. Jiacheng Liu, Alisa Liu, Ximing Lu, Sean Welleck, Peter West, Ronan Le Bras, Yejin Choi, and Hannaneh Hajishirzi. ACL 2022.
Think about it! Improving defeasible reasoning by first modeling the question scenario. Aman Madaan, Niket Tandon, Dheeraj Rajagopal, Peter Clark, Yiming Yang, and Eduard H. Hovy. EMNLP 2021.
Including explanations (like chain of thought) does not always work: The Unreliability of Explanations in Few-Shot In-Context Learning. Xi Ye and Greg Durrett.
Automatically generating prompts. AutoPrompt: Eliciting Knowledge from Language Models with Automatically Generated Prompts. Taylor Shin, Yasaman Razeghi, Robert L. Logan IV, Eric Wallace, and Sameer Singh.
Recursively prompt the model to prove its own beliefs (then use MaxSAT to resolve inconsistencies): Maieutic Prompting: Logically Consistent Reasoning with Recursive Explanations. Jaehun Jung, Lianhui Qin, Sean Welleck, Faeze Brahman, Chandra Bhagavatula, Ronan Le Bras, and Yejin Choi.
Investigating the mapping between continuous prompts and their closest representations in language: Prompt Waywardness: The Curious Case of Discretized Interpretation of Continuous Prompts. Daniel Khashabi, Shane Lyu, Sewon Min, Lianhui Qin, Kyle Richardson, Sean Welleck, Hannaneh Hajishirzi, Tushar Khot, Ashish Sabharwal, Sameer Singh, and Yejin Choi.
Tweet about getting GPT-3 to reverse words: it cannot do this task when asked directly, but if the prompt breaks the task down into explicit steps, it can.
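The trick in that tweet is the same decomposition idea as the scratchpad and chain-of-thought entries above: spell out the intermediate steps inside the prompt. A minimal sketch of what a decomposed word-reversal prompt might look like (the wording is my own illustration, not the text of the tweet, and the model call is omitted):

```python
# Sketch of the "break the task into steps" prompting trick for word
# reversal. The wording is my own illustration, not the prompt from
# the tweet; the point is that the prompt walks through the
# decomposition (split into letters, reverse, rejoin) on a worked
# example instead of asking for the reversed word directly.

def direct_prompt(word: str) -> str:
    # The kind of prompt GPT-3 tends to get wrong.
    return f'Reverse the word "{word}".'

def step_by_step_prompt(word: str) -> str:
    # Worked example, then the same decomposition started for the new word.
    return f"""\
To reverse a word, first list its letters in order, then list them in
the opposite order, then join them back together.

Word: cat
Letters in order: c, a, t
Letters in reverse order: t, a, c
Reversed word: tac

Word: {word}
Letters in order:"""

if __name__ == "__main__":
    print(direct_prompt("lollipop"))
    print()
    print(step_by_step_prompt("lollipop"))
```

The direct prompt asks for the answer in one jump; the step-by-step prompt shows the split, reverse, and rejoin stages on a worked example and leaves the model to fill them in for the new word.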