Prompt Engineering

Last modified: November 19, 2024

Introduction

Prompt engineering is the skillful structuring of instructions that guide generative Artificial Intelligence (AI) models toward the desired outcome, blending technical precision with creativity. Today’s more advanced models can interpret user intent from minimal input, making them more user-friendly; in exchange, users have adapted their language so that Large Language Models (LLMs) understand them better. The image below shows an example of this.

These prompts typically contain input from the end-user, or are generated by the app, and are enriched with instructions from the developer or administrator. A prompt typically contains at least one of the following:

  • instructions on what the model should do
  • context and information that the model needs to follow the instructions
  • the relevant input data (from the end-user or passed from a microflow)
  • the requested output structure (e.g. tone of voice or a JSON format)

Prompts are key components in the interaction with GenAI. When implementing patterns like RAG and ReAct, you can influence the system’s behavior as a developer by modifying the prompt. You need to explain to the system how to use the knowledge and functions that are provided; otherwise, it might ignore them, act differently, or start hallucinating.

Prompt Types

To enhance the understanding of prompt engineering, it is crucial to distinguish between the different types of prompts. As illustrated in the image above, both the system and the user prompt are sent to the language model via API calls. These prompts serve different roles in guiding the AI’s responses.

System Prompt

The system prompt represents the desired behavior and guidelines of the AI model. It gives the model, for example, a role, tone, ethical regulations, and subject specification. This is traditionally set by a prompt engineer. An example of this type of prompt is: You are a helpful assistant who provides only information about Mendix – providing direct guidelines on the role, content, and limitations.

User Prompt

A user prompt is another fundamental type. It is the user’s input, question, or request sent to the LLM, as portrayed in the image above. This is where end-users can write whatever they want to ask, such as Give me five ideas to create a cool app in the Mendix platform.

Context Prompt

Depending on the project or use case, adding contextual information to the model may be necessary. Normally, this information, called the context prompt or conversation history, is sent in the same interaction as the system and user prompt. It captures the historical information of the conversation so the model stays coherent and context-aware across the interaction with the end-user. In the Mendix app chatbot setup, developers configure this within their application, and it is included in the request sent to the LLM using the Chat Completions (with history) operation.

To understand this concept, imagine a user interacting with a chatbot and asking, How should I start? If the user asked about Mendix in previous interactions, the LLM will understand that the question refers to Mendix apps. In cases where context is not needed, such as command-based interactions like Turn on the lights, where the LLM does not need any conversation history, developers can use operations like Chat Completions (without history).
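The difference between the two operations comes down to how the request payload is assembled. The sketch below uses the common "system"/"user"/"assistant" message convention that most chat APIs share; the role names and the helper function are illustrative, not a specific Mendix or provider API.

```python
# Sketch: assembling the message list for a chat completion call.
# With history, the model can resolve a follow-up like "How should I start?"
# in the context of earlier turns; without history, only the system and
# user prompts are sent.

def build_messages(system_prompt, user_prompt, history=None):
    """Combine the system prompt, optional conversation history, and the
    new user prompt into a single ordered message list."""
    messages = [{"role": "system", "content": system_prompt}]
    messages.extend(history or [])  # e.g. Chat Completions (with history)
    messages.append({"role": "user", "content": user_prompt})
    return messages

history = [
    {"role": "user", "content": "What is Mendix?"},
    {"role": "assistant", "content": "Mendix is a low-code platform..."},
]
messages = build_messages(
    "You are a helpful assistant who provides only information about Mendix.",
    "How should I start?",
    history,
)
```

Passing `history=None` corresponds to the command-based case, where the request contains only the system and user prompts.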

Typical Components of a Prompt

A prompt typically consists of four main components that work together to guide the AI’s responses: instructions, output style, context, and input data. The system prompt usually carries the foundational instructions and the preferred output style, the context prompt provides relevant context and additional information, and the user prompt contains the specific question or input data from the end-user.

Instructions

Explain what the model should do. The model follows the instructions more easily if you:

  • have a specific task in mind
  • break it down into clear steps and create instructions that can be followed
  • explain what persona or role the model should fulfill
  • provide details and limitations

When the input text is coming directly from the end-user, also include what not to do.

Output Style

You can instruct the model to format the output in a specific way. For example:

  • tell the model to specify its reasoning steps, or just give the answer
  • give examples of the output style you want (for example, a JSON structure) if you want a structured response to generate data, or to get structured information about the intermediate steps taken or decisions made in producing the final response
  • request that responses be in a particular tone of voice, target a specific audience, or have a specified content length
  • request the use (or not) of Markdown formatting
  • ask the model to skip or include a preamble
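Output-style instructions pay off most when the app parses the response instead of displaying it verbatim. Below is a minimal Python sketch, assuming a hypothetical sentiment-classification schema; the `OUTPUT_INSTRUCTIONS` text would be appended to the system prompt, and `parse_response` would validate the model’s reply.

```python
import json

# Sketch: asking for a structured JSON response so the app can process
# the output. The schema below is a hypothetical example for a
# sentiment-classification task.
OUTPUT_INSTRUCTIONS = """\
Respond only with JSON matching this structure, and skip the preamble:
{"sentiment": "positive" | "negative" | "neutral", "confidence": <0.0-1.0>}
"""

def parse_response(raw):
    """Parse the model's JSON reply, rejecting unexpected values."""
    data = json.loads(raw)
    if data.get("sentiment") not in {"positive", "negative", "neutral"}:
        raise ValueError("unexpected sentiment value")
    return data
```

Validating the parsed structure is a defensive statement in code: if the model drifts from the requested format, the app fails loudly instead of acting on malformed data.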

Context and Additional Information

After telling the model what to do, you can include additional context. This can include:

  • information about the end-user of your application – for example, their language, role, department, or specific database records
  • context information – for example, all data related to an object the end-user is looking at
  • knowledge coming from Retrieval Augmented Generation (RAG)

Tip: you can provide information in a JSON or XML structure to ensure the information is presented in a consistent way. From Mendix apps, you can use Export Mappings to create JSON structures and Export XML Documents to create XML structures.
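The tip above can be sketched in Python: serialize the end-user context into JSON and wrap it in a tag so the model can tell it apart from the instructions. The field names are hypothetical; in a Mendix app, an Export Mapping would produce the JSON string instead.

```python
import json

# Sketch: serializing end-user context into a consistent JSON structure
# before embedding it in the prompt. The fields below are illustrative.
user_context = {
    "language": "en",
    "role": "support engineer",
    "department": "IT",
}
context_block = "<context>" + json.dumps(user_context) + "</context>"
```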

Input Data

The actual input provided by the end-user, either the exact message as typed by the user (for example, in a chatbot interface) or some specific set of data the user entered (for example, in a custom app). This is typically the main, if not the only, component in the user prompt.

Prompt Techniques

Different prompt techniques can be used to guide AI models in performing specific tasks. Each technique is described below, along with a use case and an example prompt instruction:

Interview Pattern Approach
Description: An interview-style approach allows the model to ask the end-user follow-up questions in order to provide a better-fitting response.
Use case: Movie Recommendation Engine.
Prompt instruction: You will act as a movie recommender expert. You will ask the user a series of detailed questions, one at a time, to understand their preferences in movies.

Instruction Prompt
Description: The instruction gives the AI model directions on how to perform a task. It can be guidance on the type of output, such as summarization or translation, style, format, and more.
Use case: Mendix ML Kit Python Script Generator.
Prompt instruction: You will act as an expert Python developer specializing in the Mendix ML Kit. The output/response should be given as a Python script with annotations for the Mendix ML Kit.

Few-shot Prompt
Description: Helps the model learn a task or pattern dynamically by providing examples. It can also be part of the system prompt.
Use case: English-Spanish Translator.
Prompt instruction: You are a kind assistant who helps translate English texts to Spanish. For example, “Good Evening” in Spanish: “Buenas Noches”.

Chain-of-Thought
Description: Simplifies complex tasks by turning them into discrete steps that happen in a certain order.
Use case: Medical Diagnosis for interns.
Prompt instruction: You are a diagnosis assistant designed to help trainee doctors ask a series of questions for patients’ initial evaluation. Your goal is to identify the patient’s symptoms, health history, and other relevant variables to reach an accurate evaluation that, depending on the result, will be forwarded to nurses or doctors. Start by asking the patient about their primary symptoms and the reason for their visit. Then, …

Tree-of-Thought
Description: Similar to a decision tree, it includes several lines of thought to allow the model to evaluate options and find its path to the correct outcome.
Use case: Support Assistant Bot.
Prompt instruction: You are a helpful assistant supporting the IT department with employees’ requests, such as support tickets, licenses, or hardware inquiries. Follow the instructions below according to the type of request. If the user asks about … If the request is vague or incomplete, … If the request is about licenses or hardware, first … then … If the user wants to know about their support tickets, …

Use an Iterative Approach

Mendix recommends that you test your prompt against different scenarios. In that sense, writing a prompt is similar to modeling a microflow.

You should do the following:

  1. Set a goal: what should the model do?
  2. Think about your test and edge cases: what should the model do in a particular situation?
  3. Write a draft: write a first version of the prompt.
  4. Test, and test again: test your prompt against your test cases.
  5. Refine the prompt: tweak your variables and write defensive statements against undesired behavior.
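The testing step above can be automated with a small harness. The sketch below stubs out the model call (a real implementation would call your LLM provider); the test cases, including an off-topic edge case, are checked against simple expectations on the response.

```python
# Sketch: running a draft prompt against a small set of test cases.
# `call_model` is a hypothetical stand-in for the actual LLM call and is
# stubbed here so the loop itself can be demonstrated.

def call_model(system_prompt, user_prompt):
    # Stub: a real implementation would send both prompts to the model.
    if "coffee" not in user_prompt.lower():
        return "Sorry, as a barista I cannot help you with that."
    return "Here is a brewing tip..."

TEST_CASES = [
    ("How do I brew coffee?", lambda r: "brewing" in r),
    ("What's the weather?", lambda r: r.startswith("Sorry")),  # edge case
]

def run_tests(system_prompt):
    """Return (question, passed) pairs for every test case."""
    return [
        (question, check(call_model(system_prompt, question)))
        for question, check in TEST_CASES
    ]
```

Rerunning the same harness after switching models is a cheap way to catch behavioral differences between model versions.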

Models behave differently. For example, newer models might interpret instructions slightly differently, or be more elaborate. You should therefore retest your prompt when you switch models (for example, after moving from GPT-3.5 to GPT-4o).

Tips for Better Prompting

Some techniques have been found to produce better responses from GenAI models. The following examples focus on system prompts. If you would like to see more examples, visit our Prompt Library.

Be Clear

Specificity and clarity are important. Like humans, large language models (LLMs) require specific instructions and cannot guess what you want. There is a difference between:

You are a helpful assistant who helps users with their questions and requests.

and

You are a helpful assistant who provides information about Mendix. 
If the user has a technical question, check the Mendix Documentation and include the link. 
If the user is struggling with a bug, check Mendix Forum or Documentation for a solution. 
Please provide the source of the information in your response. 
Lastly, if you are not sure about the response, do not try to create one but rather inform the user that you do not know the answer.

Tip: if you are unsure about whether a prompt is clear enough, ask a co-worker to interpret the prompt and see if they would follow the prompt and reach your desired outcome.

Explicitly Teach the Model to Solve the Problem

Instead of relying on the model to come up with the best strategy to solve a problem, break the larger problem down into smaller steps.

Provide the model with examples of the steps to solve the problem. This encourages the model to follow those patterns. As a result, the quality of the output will be higher compared to asking the LLM to come up with the answer right away.

When you want the model to respond in a specific manner or syntax that is hard to describe, it can be particularly useful to provide examples. This technique is known as One-Shot Prompting (one example) or Few-Shot Prompting (multiple examples).

You are a classification assistant.
Your job is to classify user reviews based on their sentiment.

<examples>
User prompt: I love the product!
Response: positive

User prompt: It didn't meet my expectations
Response: negative

User prompt: It's the best thing I ever bought
Response: positive
</examples>

Allow the Model to Say “I don’t know”

A model will always try to follow the instructions and can therefore come up with a response that might not be what you expect or, worse, is made up. This is known as hallucination.

Your prompt becomes more effective if it includes instructions that allow the LLM to ask for more information, or to respond that it does not know something.

Example instructions are:

If you are unsure how to respond, say “Sorry, I didn’t get that. Could you rephrase the question or provide more details?”
You are a barista who only talks about coffee.
If a user asks something about other topics, say:
    “Sorry, as a barista I cannot help you with that. Would you like some recommendations on how to brew coffee?”

Or, when using RAG:

You are a helpful assistant who tries to answer user questions based on chunks of topic-specific data.
If you cannot answer a question based on the provided information alone, you respond that you do not know.
For the current question, please base the answer on the following pieces of information:
<information>
...
</information>
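The RAG template above can be filled programmatically with the retrieved chunks. The chunk texts in this sketch are placeholders; in a real app they would come from a retrieval step such as a vector database query.

```python
# Sketch: inserting retrieved chunks into the RAG prompt template above.
RAG_TEMPLATE = """\
You are a helpful assistant who tries to answer user questions based on \
chunks of topic-specific data.
If you cannot answer a question based on the provided information alone, \
you respond that you do not know.
For the current question, please base the answer on the following pieces \
of information:
<information>
{chunks}
</information>"""

def build_rag_prompt(chunks):
    """Join retrieved chunks and place them inside the template."""
    return RAG_TEMPLATE.format(chunks="\n".join(chunks))
```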

Let the Model Assume a Role

You can prime the model by explaining what it does. This biases the model towards specific reasoning and increases the quality of its answers based on what you expect from the (stereotypical) persona.

Examples are:

You are a helpdesk assistant.
You are a writer who specializes in marketing content.

Tell the Model How to Use Provided Tools

When using features like function calling, give the functions a descriptive name. Also, instruct the model on what functions can do and how they should be used. This will guide the LLM to call the functions at the right moment and use the response correctly.

For example, say you have a tool called GetTicketInformationForIdentifier which retrieves information from a specific support ticket in a database; you could add the following to the prompt:

Do not make assumptions about the Ticket Identifier.
Ask for clarification if you do not know this.
Only use the ticket information from the GetTicketInformationForIdentifier function for answering questions on ticket information.
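A descriptive name and description can also be attached to the tool definition itself. The sketch below uses the JSON-schema style that several providers accept for function calling; the `GetTicketInformationForIdentifier` tool and its parameter are the hypothetical example from the text above, and the exact wrapper structure varies per provider.

```python
# Sketch: a descriptive function-calling definition. The description
# repeats the usage rules from the prompt so the model knows when to
# call the tool and when to ask for clarification instead.
ticket_tool = {
    "name": "GetTicketInformationForIdentifier",
    "description": (
        "Retrieve information about one support ticket. "
        "Only call this when the user has supplied a Ticket Identifier; "
        "ask for clarification otherwise."
    ),
    "parameters": {
        "type": "object",
        "properties": {
            "ticket_identifier": {
                "type": "string",
                "description": "The exact identifier of the support ticket.",
            }
        },
        "required": ["ticket_identifier"],
    },
}
```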

Provide Structure

When the prompt becomes longer, it can help to use XML-like tags to give it more structure. This helps the model interpret the different sections and their roles in the prompt.

For example, you could use something like:

<instructions>
Answer the question from the user.
Base the answer on the articles provided.
Provide a reference to the articles where relevant.
</instructions>
<article>{article 1}</article>
<article>{article 2}</article>
<input>{user input}</input>
<output_formatting>
Write in a lively tone of voice.
Do not exceed 200 words.
Skip the preamble.
</output_formatting>
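In an app, a tagged prompt like the one above is usually assembled from its parts at runtime. Below is a minimal sketch; the helper function and its inputs are illustrative.

```python
# Sketch: assembling an XML-tagged prompt from instructions, a variable
# number of articles, and the user input.
def build_structured_prompt(articles, user_input):
    article_tags = "\n".join(f"<article>{a}</article>" for a in articles)
    return (
        "<instructions>\n"
        "Answer the question from the user.\n"
        "Base the answer on the articles provided.\n"
        "Provide a reference to the articles where relevant.\n"
        "</instructions>\n"
        f"{article_tags}\n"
        f"<input>{user_input}</input>"
    )
```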

Learn More

Prompt Library

Check out our Prompt Library for examples you can apply or use as inspiration for the prompts in your apps.

Showcases

Check out the GenAI showcase app in the Marketplace to see how you can apply prompt engineering in practice to let a model perform specific tasks from the Mendix app.

Bedrock and Anthropic Claude

OpenAI