The fine art of human prompt engineering: How to talk to a person like ChatGPT
While AI assistants like ChatGPT have taken the world by storm, a growing body of research shows that it's also possible to generate useful outputs from what might be called "human language models" (HLMs), otherwise known as people. Much like large language models (LLMs) in AI, HLMs can take information you provide and transform it into meaningful responses—if you know how to craft effective instructions, called "prompts."
Human prompt engineering is an ancient art form dating back at least to Aristotle's time, and it was popularized again through books published in the modern era, well before the advent of computers. Since interacting with humans can be difficult, we've put together a guide to a few key prompting techniques that will help you get the most out of conversations with human language models. But first, let's go over some of what HLMs can do.
LLMs like those that power ChatGPT, Microsoft Copilot, Google Gemini, and Anthropic Claude all rely on an input called a "prompt," which can be a text string or an image encoded into a series of tokens (fragments of data). The goal of each AI model is to take those tokens and predict the most likely tokens to follow, based on data trained into its neural network. That prediction becomes the output of the model.
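To make that loop concrete, here's a minimal sketch of the prompt-to-tokens-to-prediction process, assuming the Hugging Face transformers library and the small GPT-2 model are available (any causal language model would work the same way; the prompt text is just an example):

```python
# Sketch: encode a prompt into tokens, then ask the model to predict
# the most likely tokens that follow. Assumes `transformers` and `torch`
# are installed and that the "gpt2" weights can be downloaded.
from transformers import AutoTokenizer, AutoModelForCausalLM

tokenizer = AutoTokenizer.from_pretrained("gpt2")
model = AutoModelForCausalLM.from_pretrained("gpt2")

prompt = "The polite way to ask a human for help is to"
inputs = tokenizer(prompt, return_tensors="pt")  # the prompt becomes a series of tokens

# Greedy decoding: repeatedly append the single most likely next token.
output_ids = model.generate(
    **inputs,
    max_new_tokens=20,
    do_sample=False,
    pad_token_id=tokenizer.eos_token_id,
)

print(tokenizer.decode(output_ids[0], skip_special_tokens=True))
```

The printed continuation is the model's "prediction" described above: the prompt in, the most likely following tokens out.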