Agent Commons

Last modified: May 9, 2025

Introduction

The Agent Commons module enables users to develop, test, and optimize their GenAI use cases by creating effective agents that interact with large language models (LLMs). With the Agent Commons module, you can use the Agent Builder interface within your app to define agents at runtime and manage multiple versions over time.

You can wire up prompts, microflows (as tools), knowledge bases, and large language models to build agentic patterns that support your business logic. The Agent Builder also allows you to define variables that act as placeholders for data from the app session context, which are replaced with actual values when the end user interacts with the app.

The Agent Commons module includes the necessary data model, pages, and snippets to seamlessly integrate the agent builder interface into your app and start using agents within your app logic.

Typical Use Cases

Typical use cases for Agent Commons include:

  • Incorporating one or more agentic patterns in the app that involve interactions with an LLM. These patterns may also include microflows as tools, knowledge bases, and guardrails.

  • Enabling prompt updates or improvements without modifying the underlying LLM integration code or low-code application logic. This allows non-developers, such as data scientists, to change prompts and iterate on agent configurations.

  • Supporting rapid iteration on prompts, microflows, knowledge bases, models, and variable placeholders in a playground setup, separate from core app logic.

Features

The Agent Commons module offers the following features:

  • Agent Builder UI components and data model for managing, storing, and rapidly iterating on agent versions at runtime. No app deployment is required to update an agent.

  • Drag-and-drop operations for calling both single-call and conversational agents from microflows and workflows.

  • Prompt placeholders, allowing dynamic insertion of values based on user or context objects at runtime.

  • Logic to define and run tests individually or in bulk, with result comparisons.

  • Export/import functionality for transporting agents across different app environments (for example, local, acceptance, production).

  • The ability to manage the active agent version used by the app logic per environment, eliminating the need for redeployment.

Dependencies

The Agent Commons module requires Mendix Studio Pro version 10.21.0 or above.

In addition, install the module's required dependencies, as described in the Installation section below.

Installation

If you are starting from a blank app or adding agent-building functionality to an existing project, you need to manually install the Agent Commons module from the Mendix Marketplace. Before proceeding, ensure your project includes the latest versions of the required dependencies. Follow the instructions in How to Use Marketplace Content to install the Agent Commons module.

Configuration

To use the Agent Commons functionalities in your app, you must perform the following tasks in Studio Pro:

  1. Assign the relevant module roles to the applicable user roles in the project Security.
  2. Add the Agent Builder UI to your app by using the pages and snippets as a basis.
  3. Ensure that a deployed model is configured.
  4. Define the prompts, add functions, knowledge bases, and test the agent.
  5. Add the agent to the app logic of your specific use case.
  6. Improve and iterate on agent versions.

Configuring the Roles

In the project Security of your app, assign the AgentCommons.AgentAdmin module role to user roles responsible for defining and refining agents, as well as selecting the active agent version used in the running app environment.

Adding the Agent Builder UI to Your App

The module includes a set of reusable pages, layouts, and snippets, allowing you to add the agent builder to your app.

Pages and Layouts

To define the agents at runtime, add the Agent_Overview page (USE_ME > Agent Builder) to your app Navigation, or include the Snippet_Agent_Overview in a page that is already part of your navigation.

From the overview, users can access the Version_Details page to edit prompts and run tests. For more customization, you can refer to the contents of Snippet_Agent_Details.

If you need to adjust the layout or apply other customizations, it is recommended to copy the relevant page into your own module and modify it to match your app styling or use case.

For example, download and run the Agent Builder Starter App to see the pages in action.

Configuring Deployed Models

To interact with LLMs using Agent Commons, you need at least one GenAI connector that adheres to the GenAI Commons principles. To test agent behavior, you must configure at least one Deployed Model for your chosen connector. Refer to the specific connector’s documentation for detailed instructions on setting up the Deployed Model.

Defining the Agent

When the app is running, a user with the AgentAdmin role can set up agents, write prompts, link microflows as tools, and provide access to knowledge bases. Once an agent is associated with a deployed model, it can be tested in an isolated environment, separate from the rest of the app’s logic, to validate its behavior effectively.

Users can create two types of agents:

  • Conversational Agent: Intended for scenarios where the end user interacts through a chat interface, or where the agent is called conversationally by another agent.

  • Single-Call Agent: Designed for isolated agentic patterns such as background processes, subagents in an Agent-as-Tool setup, or any use case that doesn’t require a conversational interface with historical context.

Defining Context Entity

If your agent’s prompt includes variables, your app must define an entity with attributes that match the variable names. An object of this entity serves as the context object, which holds the context data passed when the Call Agent operation is triggered. For more details, see the Using the Agent in the App Logic section below.

This object contains the actual values that will be inserted into the prompt texts where the variables were defined. To link the context entity to the agent, select it in the Agent Commons UI. If you have created a new entity, run the app locally first to ensure it appears in the selection list.

The AgentAdmin will see warnings on the Agent Version Details page if:

  • The entity has not been selected

  • The entity’s attributes do not match the defined variables

  • The attribute length is insufficient to hold the actual values when logic is executed in the running app
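The warning conditions above can be sketched as a small validation routine. This is an illustrative Python sketch, not Agent Commons code; the function name and the dict-based representation of the context entity are assumptions:

```python
import re

def validate_context(prompt_text, attributes, values=None):
    """Sketch of the checks behind the Agent Version Details warnings.
    `attributes` maps attribute name -> maximum string length; `values`
    (optional) holds the runtime values to length-check."""
    warnings = []
    variables = set(re.findall(r"\{\{(\w+)\}\}", prompt_text))
    # Warning: a prompt variable has no matching entity attribute
    for var in sorted(variables - attributes.keys()):
        warnings.append(f"No attribute matches variable '{var}'")
    # Warning: the attribute is too short for the actual runtime value
    for var in sorted(variables & attributes.keys()):
        if values and len(str(values.get(var, ""))) > attributes[var]:
            warnings.append(f"Attribute '{var}' is too short for its value")
    return warnings
```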

Adding Microflows as Tools

To allow your agent to act dynamically and autonomously, or to access specific data based on input it determines, you can add microflows as tools. When the agent is invoked, it uses the function calling pattern to execute the required microflows, using the input specified in the model’s response.

For more technical details, see the Function Calling documentation.
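As a rough illustration of the function calling pattern (not the module's actual implementation), the loop below keeps calling the model until it stops requesting tools. The response shape and the tool registry are assumptions for the sketch; in Mendix, each tool would be a microflow:

```python
def run_agent(call_llm, tools, messages):
    """Minimal sketch of the function-calling loop. `call_llm` returns a
    response dict with either 'content' (the final answer) or 'tool_call'
    ({'name', 'arguments'}); `tools` maps tool names to callables."""
    while True:
        response = call_llm(messages)
        tool_call = response.get("tool_call")
        if tool_call is None:
            return response["content"]  # final assistant message
        # Execute the requested tool (a microflow, in Mendix terms) with
        # the arguments the model specified, and feed the result back.
        result = tools[tool_call["name"]](**tool_call["arguments"])
        messages.append({"role": "tool", "name": tool_call["name"],
                         "content": str(result)})
```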

Adding Knowledge Bases

You can connect supported knowledge bases registered in your app to agents to enable autonomous retrieval. To set this up, refer to the documentation of the connector provided by your chosen knowledge base provider and follow the instructions for establishing a connection from your app.

To allow the agent to perform semantic searches, add the knowledge base to the agent definition and configure the retrieval parameters, such as metadata filters, the number of chunks to retrieve, and the similarity threshold.
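Conceptually, those retrieval parameters interact as in this hypothetical sketch (the chunk shape and function name are assumptions; the actual retrieval is performed by your knowledge base connector):

```python
def retrieve_chunks(chunks, metadata_filter, max_chunks, min_similarity):
    """Illustrative retrieval: apply metadata filters, drop chunks below
    the similarity threshold, and return the top `max_chunks` by score.
    Each chunk is a dict with 'text', 'similarity', and 'metadata'."""
    matches = [
        c for c in chunks
        if c["similarity"] >= min_similarity
        and all(c["metadata"].get(k) == v for k, v in metadata_filter.items())
    ]
    # Highest-similarity chunks first, truncated to the configured count
    matches.sort(key=lambda c: c["similarity"], reverse=True)
    return matches[:max_chunks]
```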

Testing and Refining the Agent

While writing the system prompt (for both conversational and single-call types) or the user prompt (only for the single-call type), the prompt engineer can include variables by enclosing them in double braces, for example, {{variable}}. The actual values of these placeholders are typically known at runtime based on the user’s page context. To test the behavior of the prompts, a test can be executed. The prompt engineer must provide test values for all variables defined in the prompts. Additionally, multiple sets of test values for the variables can be defined and run in bulk. Based on the test results, the prompt engineer can add, remove, or rephrase certain parts of the prompt.
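The placeholder replacement works conceptually like the sketch below (the real substitution happens inside Agent Commons at runtime; the function name is an assumption). Raising an error for an unknown variable mirrors how a test run surfaces missing test values early:

```python
import re

def fill_prompt(template, context):
    """Replace each {{variable}} with the matching value from the context
    (here a plain dict standing in for the context object)."""
    def substitute(match):
        name = match.group(1)
        if name not in context:
            raise KeyError(f"no value for variable '{name}'")
        return str(context[name])
    return re.sub(r"\{\{(\w+)\}\}", substitute, template)
```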

Using the Agent in the App Logic

After a few quick iterations, the first version of the agent is typically ready to be saved and integrated into the application logic for end-user testing. To do this, you can add one of the available operations from the Agent Commons module into your app logic.

Creating a Version

New agents are created in Draft status by default, meaning they are still being worked on and can only be tested within the Agent Commons module. Once an agent is ready to be integrated into the app logic (that is, logic triggered by end users), it must be saved as a version. This stores a snapshot of the prompt texts and the configured microflows as tools and knowledge bases. To select the active version for the agent, use the three-dot menu on the agent overview and click Select Version in use.

Calling the Agent from a Microflow

For most use cases, the Call Agent microflow activity can be used. You can find this operation in the Studio Pro Toolbox, under the Agents Kit category, while editing a microflow.

To use it:

  1. Create a Request object using either the GenAI Commons operation or the Default Preprocessing from ConversationalUI.
  2. Ensure the Agent object is in scope, for example, retrieve it from the database by name.
  3. Pass both the Request and Agent objects to the Call Agent activity.

This action calls the agent using the specified Request and executes a Chat Completions (With History) operation as defined by the agent version. It uses all defined settings, including the selected model, system prompt, tools, knowledge bases, and model parameters. The operation returns a Response object containing the assistant’s final message, in the same fashion as the chat completions operations from GenAI Commons.
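The Request-plus-Agent flow above can be sketched conceptually as follows. The dict shapes and the callable standing in for the deployed model are assumptions for illustration, not the module's API:

```python
def call_agent(agent, request):
    """Sketch of the Call Agent flow: the agent version supplies the
    system prompt and model; the request carries the conversation so far.
    Returns the assistant's final message, analogous to the Response."""
    messages = [{"role": "system", "content": agent["system_prompt"]}]
    messages += request["messages"]
    reply = agent["model"](messages)  # stand-in for the deployed model call
    return {"role": "assistant", "content": reply}
```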

For more specific use cases, where a context object is required for variable replacement, use the Get Prompt for Context Object operation. You can find it in the Studio Pro Toolbox, under the Agents Kit category, while editing a microflow.

This operation returns both the system prompt and user prompt as string attributes within a combined PromptToUse object. These prompt strings can then be passed to a Chat Completions operation.

To use this setup:

  1. Retrieve the relevant Agent (for example, by name) and pass it with your custom context object to the operation.

  2. In a similar way to the Call Agent activity, use the Request_AddAgentCapabilities microflow to apply the agent’s properties to the request.

  3. Finally, place the required Chat Completions operation (with or without history) after this step to invoke the agent.

For a conversational agent, the chat context can be created based on the agent in one convenient operation. Use the New Chat for Agent operation from the Toolbox under the Agents Kit category. Retrieve the agent (for example, by name) and pass it with your custom context object to the operation. Note that this sets the system prompt for the chat context, making it applicable to the entire (future) conversation. As with other chat context operations, an action microflow must be selected for this microflow activity.

Transporting the Agent to Other Environments

With the above microflow logic, the agent version is ready to be tested within the end-user flow, either in a local or test environment. Additionally, the agent can be exported and imported for transport to other environments when needed.

To export the agent, use the export button on the page where the agent is edited, or use the export and import buttons available on the overview page.

If context objects or functions have been modified, ensure that the correct version of the project is deployed before importing the new agent definition. This ensures that the domain model and microflows are aligned with the new agent version.

Improving the Agent

When an agent version is saved, a button is available to create a new draft version. You can use the new draft as a starting point for small changes or improvements based on feedback, either from testing or from new scenarios that emerge after the agent has been live for some time.

Creating Multiple Versions

The new draft version will initially have the same prompt texts, tools, and linked knowledge bases as the latest version. You can then modify the prompt texts to cover additional scenarios, and update the tools and knowledge bases by adding, removing, or editing them as needed. Once the improved agent is ready, it can be saved as a new version.

Managing In-Use Version per Environment

Each time a new version of the agent is created, a decision must be made regarding which version to use in the end-user logic. Mendix recommends evaluating the active version as part of the testing and release process.

When importing new agents into other environments, selecting the in-use version is always a manual step, requiring a conscious decision. The user will be prompted to choose the version to be used as part of the import user flow. Later, you can manage the active version directly from the Agent Overview.

Technical Reference

The module includes technical reference documentation for the available entities, enumerations, activities, and other items that you can use in your application. You can view the information about each object in context by using the Documentation pane in Studio Pro.

The Documentation pane displays the documentation for the currently selected element. To view it, perform the following steps:

  1. In the View menu of Studio Pro, select Documentation.

  2. Click the element for which you want to view the documentation.