- What is a Prompt?
- Types of Prompt Content
- Prompt Design Strategies
- Strategies for Iterating Prompts
- Fallback Responses
- Things to Avoid
Prompt design plays a crucial role in obtaining accurate, high-quality responses from language models. It involves crafting well-structured prompts that elicit the response you want. Whether you're asking a question, providing instructions or offering examples, the way you design your prompts can significantly affect the response you receive.
In this blog post, we'll introduce you to some fundamental concepts, strategies and best practices for prompt design.
What is a Prompt?
A prompt is a natural language request submitted to a language model to receive a response. Prompts can take various forms, such as questions, instructions, contextual information, examples or partial input. Depending on the type of model being used, prompts can generate text, embeddings, code, images, videos, music and more.
Types of Prompt Content
When designing a prompt, you can include different types of content to help the model understand what you want. These content types are input, context and examples.
Input is the text in the prompt to which you want the model to respond. It's a required content type and can take the form of a question, a task, an entity or a completion input.
Question input prompts the model to answer a specific question. For example:
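An illustrative question prompt (invented here for illustration):

```
What's the tallest mountain in Africa?
```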
Task input prompts the model to perform a task or provide suggestions. For example:
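An illustrative task prompt:

```
Give me a list of three fun activities to do on a rainy day.
```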
Entity input prompts the model to perform an action on a specific entity. It can benefit from including instructions. For example:
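An illustrative entity prompt, where the instruction tells the model what to do with the listed items:

```
Classify the following items as either a fruit or a vegetable:
apple, carrot, banana, spinach
```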
Completion input prompts the model to complete or continue a given text. For example:
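An illustrative completion prompt, which the model continues from where the text leaves off:

```
Some of the best ways to stay healthy are:
1. Eat a balanced diet
2.
```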
Context in prompts can take the form of either instructions for the model's behaviour or information the model uses to generate a response. Including contextual information helps provide necessary details or limit the response boundaries. For example:
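An illustrative prompt that limits the model to information supplied in the prompt itself (the passage is invented for illustration):

```
Answer the question using only the text provided below.

Text: Marsupials carry their young in a pouch. Kangaroos, koalas
and wombats are all marsupials found in Australia.

Question: Name two marsupials found in Australia.
```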
Examples in prompts are input-output pairs that guide the model in generating an ideal response. They are effective for customising the response format. For example:
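An illustrative prompt with input-output pairs that establish the desired response format (the reviews are invented for illustration):

```
Classify the sentiment of each review as POSITIVE or NEGATIVE.

Review: The food was delicious and the staff were lovely.
Sentiment: POSITIVE

Review: We waited an hour and the soup was cold.
Sentiment: NEGATIVE

Review: Great atmosphere, will definitely come back.
Sentiment:
```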
Prompt Design Strategies
Even if you’re new to machine learning, prompt design allows you to shape the responses generated by these models with minimal effort. By carefully constructing prompts, you can guide the model to produce your desired results.
While there's no one-size-fits-all approach to prompt design, there are common strategies you can use to influence the model's responses. In the following sections, we will explore these strategies in more detail and give you the knowledge to design prompts effectively. Let's dive in and discover the key techniques so you can unlock the full potential of prompt design.
Giving Clear Instructions
One effective strategy in prompt design is to provide clear instructions to the model. By giving explicit guidance, you can customise the behaviour of the language model to suit your needs. It's important to ensure that the instructions are clear and concise for best results.
For example, you can provide a prompt that instructs the model to summarise a given text:
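An illustrative summarisation prompt (the passage is invented for illustration):

```
Summarise the following text:

Ocean currents act as a global conveyor belt, moving warm water
from the equator towards the poles and cold water back again.
This circulation redistributes heat around the planet and has a
major influence on regional climates and marine ecosystems.
```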
In the above example, the model provides a concise summary of the given text. However, if you want the summary to be understood more easily, you can ask it to write the summary in a way that a fifth grader can understand:
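An illustrative prompt with that instruction added (the passage is invented for illustration):

```
Summarise the following text in a way a fifth grader can understand:

Ocean currents act as a global conveyor belt, moving warm water
from the equator towards the poles and cold water back again.
This circulation redistributes heat around the planet and has a
major influence on regional climates and marine ecosystems.
```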
By adding the instruction to simplify the summary for a fifth grader, the model generates a response that is easier to understand.
- To customise the behaviour of the model, give clear and concise instructions
- Adding specific instructions can help you to tailor the output to your desired level of complexity
Using Examples in Prompts
Including examples in your prompts can help to guide the model's responses. Examples provide the model with a clear understanding of the expected output and help it to identify patterns and relationships. Prompts with examples are called few-shot prompts, while prompts without examples are called zero-shot prompts. Few-shot prompts are useful for regulating formatting, phrasing, scoping or general patterns of model responses.
Zero-shot vs Few-shot Prompts
A zero-shot prompt does not include any examples, so the model relies on your instruction alone. For example, you can ask the model to choose the best of two explanations:
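An illustrative zero-shot prompt of this kind (the explanations are invented for illustration):

```
Which is the better explanation of why the sky is blue?

Explanation 1: The sky appears blue because molecules in the
atmosphere scatter shorter (blue) wavelengths of sunlight more
strongly than longer (red) wavelengths, a phenomenon known as
Rayleigh scattering, which is why we see a blue sky on a clear day.

Explanation 2: Air scatters blue light more than red light, so
the sky looks blue.
```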
If you want the model to produce concise responses, you can provide examples in the prompt that prioritise shorter explanations. By including these examples, the model can be guided to choose concise responses. For instance:
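An illustrative few-shot version, where the worked examples both favour short explanations (all content is invented for illustration):

```
Below are some questions and the explanations chosen for them.

Q: Why do leaves change colour in autumn?
Chosen explanation: Chlorophyll breaks down, revealing other pigments.

Q: Why is the ocean salty?
Chosen explanation: Rivers wash dissolved minerals into the sea.

Q: Why is the sky blue?
Explanation 1: The sky appears blue because molecules in the
atmosphere scatter shorter (blue) wavelengths of sunlight more
strongly than longer (red) wavelengths, a phenomenon known as
Rayleigh scattering.
Explanation 2: Air scatters blue light more than red light, so
the sky looks blue.
Chosen explanation:
```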
In the above example, the prompt provides two examples that favour shorter explanations. As a result, the model selects the shorter explanation (Explanation 2) instead of the longer one (Explanation 1).
Finding the Optimal Number of Examples
You can experiment with the number of examples included in the prompt to achieve the desired results. More complex models may pick up on patterns using just a few examples, while simpler models may require more examples.
However, it is important to avoid including too many examples, as the model may start to overfit, attaching too much weight to the specific examples you've provided.
Additionally, it's more effective to use examples that demonstrate the desired pattern instead of examples that show what to avoid.
- Including examples in prompts helps the model learn how to respond
- Use examples to show the model the desired patterns instead of patterns to avoid
- Experiment with the number of examples based on the model's complexity. Too few examples may be ineffective, while too many can cause overfitting
Allowing the Model to Complete Partial Input
Generative language models can act as advanced autocompletion tools. When you provide partial content, the model can generate the rest or continue the content based on what it thinks should come next. If you include examples or context, the model can take them into account.
Consider the following example where a prompt includes an instruction and an entity input:
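An illustrative prompt of this shape, pairing an instruction with an entity input (the order is invented for illustration):

```
Create a JSON object from the customer order below, with one field
per menu item and the quantity ordered as its value.

Order: I'd like a cheeseburger, two orders of fries and a drink.
```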
In this case, the model follows the prompt and provides the requested JSON object with the specified fields and quantities.
However, writing out instructions in natural language can sometimes be challenging. To address this, you can provide an example and a response prefix, allowing the model to complete it:
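An illustrative prompt of this shape, where the context lists the valid fields, one worked example is given, and "Output:" acts as the response prefix for the model to complete (the menu and orders are invented for illustration):

```
Valid fields are cheeseburger, hamburger, fries and drink.

Order: Give me a cheeseburger and fries
Output:
{
  "cheeseburger": 1,
  "fries": 1
}

Order: I want two cheeseburgers, a drink and waffles
Output:
```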
In this example, the model generates the output based on the given example and response prefix. Notice that "waffles" is excluded from the output because it was not listed in the context as a valid field.
- Generative language models can complete partial input based on the provided context
- Including examples and context helps the model generate more accurate and desired responses
- When providing partial input, you can let the model complete it by giving examples and response prefixes to guide its output
Prompting the Model for Formatted Responses
The completion strategy can also be used to format the response generated by the model. For example, you can prompt the model to create an outline for an essay. The model will generate an outline structure based on the provided prompt.
Consider the following example:
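An illustrative prompt of this kind:

```
Create an outline for an essay about hummingbirds.
```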
In this example, the prompt asks the model to create an outline for an essay about hummingbirds. The model generates an outline structure with sections such as Introduction, Body, Conclusion, and Hummingbird Facts.
Sometimes, instructing the model with natural language can be challenging. To overcome this, you can provide a partial outline and let the model complete it based on the provided pattern.
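An illustrative prompt with a partial outline for the model to continue:

```
Create an outline for an essay about hummingbirds.

I. Introduction
   *
```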
In this case, the model completes the partial outline by following the provided pattern. The * tells the model to write the content as bullet points, so it fills in each section of the outline in line with the initial instructions.
- Generative language models can be prompted to format their responses in specific ways
- By providing a partial input or an example, you can guide the model to generate a response that follows a specific format or structure
- This approach can be helpful when you find that instructing the model using natural language is proving challenging
- The model can generate structured responses such as essay outlines based on the provided instructions and examples
Including Contextual Information
To help the model solve a problem or provide specific guidance, you can include relevant instructions and information in the prompt. This ensures that the model has all the necessary details and avoids making assumptions.
Consider the following example, where the prompt asks the model to give troubleshooting guidance for a disconnected WiFi issue with a Google WiFi router:
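An illustrative prompt with no added context:

```
What should I do to fix my disconnected WiFi? The light on my
Google WiFi router is yellow and blinking slowly.
```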
In this example, the response provides generic troubleshooting guidance for a disconnected WiFi issue. However, the guidance is not specific to the Google WiFi router or the status of the LED indicator lights.
To customise the response for the specific router, you can provide additional contextual information in the prompt. This can be done by including the troubleshooting guide for the router as part of the prompt. The model will then refer to this information when generating the response.
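An illustrative version of such a prompt; the guide excerpt below is invented for illustration rather than taken from the actual Google WiFi documentation:

```
Answer the question using the text below.

Text: Colour: Slowly pulsing yellow. What it means: There is a
network error. What to do: Check that the Ethernet cable is
connected to both your router and your modem, then restart both
devices. Colour: Fast blinking yellow. What it means: You are
holding down the reset button and are factory resetting the device.

Question: What should I do to fix my disconnected WiFi? The light
on my Google WiFi router is yellow and blinking slowly.
```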
In this modified example, the prompt includes the troubleshooting guide for the Google WiFi router. The model generates a response that specifically addresses the network error indicated by the yellow blinking light on the router.
- Include relevant instructions and information in the prompt to guide the model's response
- Providing context helps the model generate more specific and accurate guidance
- Including troubleshooting guides or specific details in the prompt improves the tailored response from the model
Using Prefixes for Prompt Customisation
Prefixes can be added to the prompt content to serve different purposes depending on where they’re placed. Here are the three main types of prefixes you can use:
- Input prefix: By adding a prefix to the input, you can indicate specific semantic aspects to the model. For example, using "English:" and "French:" as prefixes can distinguish between different languages.
- Output prefix: Even though the output is generated by the model, you can include an output prefix in the prompt. This prefix provides the model with information about the expected response format. For instance, using "JSON:" as an output prefix signals that the output should be in JSON format.
- Example prefix: In few-shot prompts, you can add prefixes to the examples to provide labels for the model. These labels help the model to generate appropriate responses that align with the provided examples.
Consider the following example, where "Text:" serves as the input prefix and "The answer is:" serves as the output prefix:
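An illustrative prompt using both prefixes (the categories and items are invented for illustration):

```
Classify the text as one of the following categories:
- large
- small

Text: Rhino
The answer is: large

Text: Mouse
The answer is: small

Text: Elephant
The answer is:
```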
In this example, the input prefix "Text:" indicates that the following text should be classified into specific categories. The output prefix "The answer is:" provides the model with the expected format for its response.
- Prefixes can be added to the prompt content to guide the model's behaviour
- Input prefixes convey meaning about the input
- Output prefixes specify the expected format for the model's response
- Example prefixes in few-shot prompts help the model generate responses aligned with the provided examples
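Because a prompt is ultimately just text, a prefixed few-shot prompt like the classification example described above can be assembled programmatically. A minimal sketch in Python (the helper name, prefixes and labels are illustrative assumptions, not part of any SDK):

```python
def build_classification_prompt(examples, text,
                                input_prefix="Text:",
                                output_prefix="The answer is:"):
    """Assemble a few-shot classification prompt using prefixes.

    `examples` is a list of (text, label) pairs. The prefixes label
    the examples for the model, and the trailing output prefix is
    left open so the model completes it.
    """
    lines = []
    for example_text, label in examples:
        lines.append(f"{input_prefix} {example_text}")
        lines.append(f"{output_prefix} {label}")
    lines.append(f"{input_prefix} {text}")
    lines.append(output_prefix)  # left open for the model to complete
    return "\n".join(lines)

prompt = build_classification_prompt(
    [("Rhino", "large"), ("Mouse", "small")], "Elephant")
print(prompt)
```

Keeping the prefixes as parameters makes it easy to experiment with different labels (for example "Input:"/"Category:") without rewriting the prompt by hand.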
Strategies for Iterating Prompts
When designing prompts, it's common to iterate and make adjustments to achieve the desired responses consistently. Below are some strategies to consider during prompt iteration.
Using Different Phrasing
Try using different words or phrasing in your prompts, even if they convey the same meaning. Different phrasings can often yield varied responses from the model. If you're not getting the expected results, experiment with rephrasing your prompts.
- Version 1: How do I bake a pie?
- Version 2: Suggest a recipe for a pie.
- Version 3: What's a good pie recipe?
Switching to a Similar Task
If the model doesn't follow your instructions for a specific task, try providing instructions for a similar task that achieves the same result. This can help to guide the model in the right direction.
For example, if you want the model to categorise a book but it doesn't adhere to the provided options, rephrase the prompt as a multiple-choice question instead.
Changing the Order of Prompt Content
The order of content in the prompt can impact the model's response. Experiment with changing the order of content and observe how it affects the generated response.
- Version 1: [examples] [context] [input]
- Version 2: [input] [examples] [context]
- Version 3: [examples] [input] [context]
Fallback Responses
In some cases, when the prompt or response triggers a safety filter, the model may provide a fallback response. A fallback response indicates that the model cannot provide the requested information or assistance. An example of a fallback response is "I'm not able to help with that, as I'm only a language model."
Things to Avoid
Here are some important considerations and pitfalls to avoid when working with language models:
- Avoid relying solely on models to generate factual information, as they may not always provide accurate results
- Exercise caution when using models for maths and logic problems, as they may not consistently produce correct answers
- Be mindful of token limits for input content. Too much content can negatively impact the model's performance
- Note that, after a certain amount of generated content, the results may become repetitive or nonsensical
- Ensure that anything produced by a language model goes through human review and quality checks. Be aware that simply copying and pasting the output verbatim is plagiarism and you will be penalised for this
Remember to iterate and experiment with your prompts to refine the output and achieve the desired responses from the model.
Prompt design is a crucial aspect of interacting with language models, allowing you to shape the responses generated by these models. By understanding the different types of prompt content and adhering to best practices, you can get more accurate and relevant responses.
Whether you're asking questions, providing instructions or using examples, thoughtful prompt design empowers you to harness the full potential of language models. Remember to iterate and experiment with your prompts as these trials are the key to evolving your model’s output over time.