Prompt Engineering
Learn the fundamentals of prompt engineering that can help optimize AI model responses.
What is prompt engineering?
“The art and science of asking questions is the source of all knowledge.”—Thomas Berger
In generative AI, the art of asking the right questions is known as prompt engineering, and these carefully crafted questions are called prompts.
At its core, prompt engineering is the process of designing effective prompts for generative AI models. It involves providing the AI model with clear and specific instructions on what we want it to accomplish.
Key concepts in prompt engineering
Here are some important key concepts related to prompt engineering.
Context
Context in prompt engineering is crucial for the model to understand the task and generate meaningful responses. The context includes background information that helps the model frame the question or task correctly.
For example, if we ask a language model to write a short story, the context might include the genre, characters, and setting. If we need a technical explanation, the context should specify the domain or field of interest, such as software development, cloud computing, or healthcare.
Look at the difference between these two prompts, showcasing how context can make our prompt more effective:
Prompt without context: “Explain how machine learning works.”
Prompt with context: “Explain how machine learning algorithms are used in predictive analytics for financial forecasting.”
The second prompt provides more context, helping the model understand the specific application of machine learning, resulting in a more relevant and tailored response.
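When prompts are assembled programmatically, context can be kept as a separate field and prepended to the task. Here is a minimal Python sketch of that idea (the function name and the `Context:`/`Task:` layout are illustrative choices, not a standard format):

```python
def build_prompt(task: str, context: str = "") -> str:
    """Prepend optional background context to a task instruction."""
    if context:
        return f"Context: {context}\n\nTask: {task}"
    return f"Task: {task}"

# Without context: the model must guess the intended application.
vague = build_prompt("Explain how machine learning works.")

# With context: the domain is pinned down explicitly.
specific = build_prompt(
    "Explain how machine learning algorithms are used in predictive analytics.",
    context="Financial forecasting for a retail bank",
)
print(specific)
```

Keeping context as its own parameter makes it easy to reuse the same task with different domains.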
Instruction
An instruction is a clear and direct guideline that specifies what we want the model to do. It tells the model not just what to do, but often how to do it. Clear instructions ensure that the model doesn’t wander off-topic and stays focused on the user’s intent.
Example of providing a clear instruction: “Write a 300-word essay about climate change in the style of a formal scientific paper.”
In this case, the model not only knows the topic but also understands the required style, tone, and word count.
Instructions are essential when we want the model to perform complex tasks like generating reports, summarizing text, or making decisions based on specific criteria.
Negative prompts
Negative prompts are used to tell the model what not to do. These are particularly useful when we want to avoid certain kinds of outputs or restrict the model from going off-topic.
Example of a prompt with negative instruction: “Describe the benefits of using cloud computing for businesses, but do not mention cost savings.”
By specifying what should not be included (in this case, cost savings), we help the model avoid irrelevant or undesired responses.
Negative prompts are highly effective for filtering out biases, misinformation, or irrelevant details, helping the model focus on delivering accurate and balanced information. Here are a few examples of negative prompts that can demonstrate their utility across various contexts:
Avoiding misinformation: “Explain the causes of climate change, but do not reference myths like ‘it’s caused by solar activity.’”
This prompt will filter out widely debunked claims, ensuring the model provides scientifically accurate information.
Maintaining neutrality: “Discuss the advantages and disadvantages of electric vehicles, but do not reference specific manufacturers.”
This will encourage an unbiased evaluation by avoiding brand-specific comparisons.
Enhancing focus: “Summarize the impacts of urbanization, but avoid discussing economic effects.”
This prompt will keep the response centered on ecological and social consequences rather than veering into financial analysis.
By using negative prompts in these ways, we can guide the model to provide responses that are more precise, relevant, and tailored to our specific needs.
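Negative constraints can also be attached mechanically when prompts are generated in code. A hedged sketch, assuming we simply append a “do not mention” clause to the instruction (the phrasing is one reasonable choice, not a fixed convention):

```python
def with_exclusions(instruction: str, exclusions: list[str]) -> str:
    """Append a negative constraint telling the model what to avoid."""
    if not exclusions:
        return instruction
    avoid = "; ".join(exclusions)
    return f"{instruction} Do not mention: {avoid}."

prompt = with_exclusions(
    "Describe the benefits of using cloud computing for businesses.",
    ["cost savings"],
)
print(prompt)
```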
Model latent space
The latent space of a model refers to the internal representations it builds to process and generate output. When we input a prompt into a model, the model translates that prompt into a vector in its latent space, where it internally “understands” the relationships between words, concepts, and tasks.
While this is a more technical concept, it’s important to recognize that the quality and clarity of our prompt can influence how the model navigates this latent space. The more specific and clear our prompt, the more likely the model will find the right connections in its latent space to generate the desired output.
Think of the latent space as a vast, multidimensional “map” of knowledge, where different concepts are represented at varying distances from each other based on their relationships. A well-crafted prompt helps the model find the right region of that space to generate accurate responses.
Techniques of prompting
Now that we’ve covered the foundational concepts of prompt engineering, let’s look at some practical techniques for optimizing our prompts.
Be specific and detailed
The more detailed and specific our prompt is, the better the response. If we’re asking the model to generate content, describe the context, desired format, tone, and any specific points to cover.
Vague prompt: “Write an article about space.”
Detailed prompt: “Write a 500-word article about the challenges of human colonization on Mars, including scientific, technological, and ethical considerations.”
The second prompt provides clear guidance on what the article should focus on and the length it should be.
Use few-shot or zero-shot prompting
Few-shot and zero-shot prompting are techniques where we provide the model with a small number of examples (few-shot) or none at all (zero-shot) to help it understand the task.
Few-shot prompting: We give the model a few examples of the kind of response we expect.
Example of few-shot prompting:
Translate the following English sentences into French:
How are you? -> Comment ça va?
Where is the bathroom? -> Où sont les toilettes?
Now, translate: What time is the meeting?
Zero-shot prompting: We provide no examples and simply ask the model to perform the task based on our instructions.
Example of zero-shot prompting:
Translate this sentence into French: What time is the meeting?
Both techniques are useful depending on how familiar we think the model is with the task and how much we want to guide it.
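The difference between the two techniques is easy to see when the prompt is built from data. A minimal sketch (the plain-text `->` format mirrors the example above; real APIs may prefer structured message lists):

```python
def translation_prompt(sentence: str, examples=None) -> str:
    """Build a zero-shot prompt, or a few-shot prompt if examples are given."""
    if not examples:  # zero-shot: instruction only, no demonstrations
        return f"Translate this sentence into French: {sentence}"
    shots = "\n".join(f"{en} -> {fr}" for en, fr in examples)
    return (
        "Translate the following English sentences into French:\n"
        f"{shots}\nNow, translate: {sentence}"
    )

examples = [
    ("How are you?", "Comment ça va?"),
    ("Where is the bathroom?", "Où sont les toilettes?"),
]
few_shot = translation_prompt("What time is the meeting?", examples)
zero_shot = translation_prompt("What time is the meeting?")
print(few_shot)
```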
Use chain-of-thought prompting
Chain-of-thought prompting helps the model break down complex tasks into smaller, manageable steps. By explicitly asking the model to explain its reasoning or walk through its process, we can improve the quality and accuracy of the output.
Examples of chain-of-thought prompts:
“I want to know if the Earth is the only planet with liquid water. First, list the characteristics of a planet with liquid water, then compare Earth with other planets in the solar system.”
[Image of a coral reef] + “Look at this image of a coral reef and analyze its health. First, describe visible signs of coral bleaching, then explain possible causes.”
This approach can help the model systematically reason through a complex query, making its response more logical and coherent. Chain-of-thought prompting is particularly effective in various contexts where structured reasoning is essential.
Here are a few use cases:
Mathematical problem solving: “Solve the equation 2x + 3 = 7. First, isolate the variable x, then solve for its value.”
This ensures the model provides a step-by-step solution, helping users follow the logic and verify the correctness of the answer.
Scientific explanations: “Explain how photosynthesis works. Start by describing the role of sunlight, then explain the chemical process in plants.”
By breaking the explanation into stages, the model delivers a clearer and more detailed response.
Decision-making assistance: “Help me decide whether to buy or lease a car. First, list the pros and cons of buying, then do the same for leasing, and finally compare the two options.”
This step-by-step approach provides a thorough evaluation, aiding in informed decision-making.
Historical analysis: “What led to the fall of the Roman Empire? First, describe the internal factors, then discuss the external pressures, and finally summarize their combined impact.”
This guides the model to present a comprehensive and logically organized analysis.
By using chain-of-thought prompting, we can encourage the model to approach tasks methodically, leading to responses that are not only more accurate but also easier to understand and evaluate.
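Chain-of-thought prompts like the examples above share a recognizable shape: a question followed by an ordered list of reasoning steps. A small sketch that stitches the two together (the “First, … then … and finally …” connectors are an assumption borrowed from the examples, not a required convention):

```python
def chain_of_thought(question: str, steps: list[str]) -> str:
    """Turn a question and reasoning steps into a step-by-step prompt."""
    connectors = ["First,"] + ["then"] * (len(steps) - 2) + ["and finally"]
    ordered = " ".join(f"{c} {s}." for c, s in zip(connectors, steps))
    return f"{question} {ordered}"

prompt = chain_of_thought(
    "Help me decide whether to buy or lease a car.",
    [
        "list the pros and cons of buying",
        "do the same for leasing",
        "compare the two options",
    ],
)
print(prompt)
```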
Experiment with temperature and max tokens
Temperature and max tokens are parameters that control the randomness and length of the model’s responses. Lower temperatures make the model’s responses more deterministic and focused, while higher temperatures introduce more creativity and variability.
Temperature: This parameter controls how creative or deterministic the model’s responses will be. Lower values (0.1–0.3) make responses more focused and factual, while higher values (0.7–1.0) make them more varied and creative.
Max tokens: This limits the number of tokens (small chunks of text, typically a short word or word fragment) the model can generate in a response. Setting an appropriate max token limit ensures the model’s output stays within the desired length.
Adjusting these parameters can help refine the quality of responses based on our specific needs.
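In practice, these parameters travel alongside the prompt in the API request. A hedged sketch of what such a request payload might look like (the field names `messages`, `temperature`, and `max_tokens` mirror common chat-completion APIs, but the exact schema depends on the provider):

```python
def make_request(prompt: str, temperature: float = 0.7, max_tokens: int = 256) -> dict:
    """Assemble a request payload; clamp temperature to the 0.0-1.0 range."""
    temperature = max(0.0, min(temperature, 1.0))
    return {
        "messages": [{"role": "user", "content": prompt}],
        "temperature": temperature,
        "max_tokens": max_tokens,
    }

# Low temperature: focused, near-deterministic answers (good for facts).
factual = make_request("List the planets of the solar system.", temperature=0.2)

# High temperature: more varied, creative output (good for brainstorming).
creative = make_request("Write a haiku about Mars.", temperature=0.9, max_tokens=60)
print(factual["temperature"], creative["max_tokens"])
```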
Prompt engineering is both an art and a science, enabling users to unlock the full potential of generative AI models. By understanding and applying key concepts like context, instruction, and negative prompts, and leveraging techniques such as chain-of-thought and parameter adjustments, we can craft prompts that guide models toward accurate, relevant, and creative outputs.