GenAI Commons

Last modified: November 21, 2025

Introduction

The GenAI Commons module combines common generative AI patterns found across various models on the market. Platform-supported GenAI connectors use the underlying data structures and their operations. This makes it easier to develop vendor-agnostic AI-enhanced apps with Mendix, for example by using one of the connectors or the Conversational UI module.

If two different connectors both adhere to the GenAI Commons module, they can be easily swapped, which reduces dependency on the model providers. In addition, the initial implementation of AI capabilities using the connectors becomes a drag-and-drop experience, so that developers can quickly get started. The module exposes useful operations which developers can use to build a request to a large language model (LLM) and to handle the response.
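The connector-swapping idea can be illustrated with a hypothetical Python sketch (this is not Mendix code; the class and attribute names only mirror the GenAI Commons entities). Because both connectors implement the same request/response contract, the calling code does not change when the provider does.

```python
from dataclasses import dataclass
from typing import Protocol

# Simplified stand-ins for the GenAI Commons Request and Response entities.
@dataclass
class Request:
    system_prompt: str
    user_prompt: str

@dataclass
class Response:
    response_text: str

# The shared contract that every connector adheres to.
class ChatConnector(Protocol):
    def chat_completions(self, request: Request) -> Response: ...

class FakeOpenAIConnector:
    def chat_completions(self, request: Request) -> Response:
        return Response(response_text=f"[openai] {request.user_prompt}")

class FakeBedrockConnector:
    def chat_completions(self, request: Request) -> Response:
        return Response(response_text=f"[bedrock] {request.user_prompt}")

def ask(connector: ChatConnector, prompt: str) -> str:
    # Application logic depends only on the shared contract,
    # so connectors can be swapped without changing this code.
    return connector.chat_completions(Request("You are helpful.", prompt)).response_text
```

Swapping `FakeOpenAIConnector` for `FakeBedrockConnector` requires no change to `ask`, which is the reduced provider dependency the module aims for.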

Developers who want to connect to another LLM provider or their own service are advised to use the GenAI Commons module as well. This speeds up the development and ensures that common principles are taken into account. Lastly, other developers or consumers of the connector can adapt to it more quickly.

Limitations

The current scope of the module is focused on text and image generation, as well as embeddings and knowledge base use cases.

Dependencies

The GenAI Commons module requires Mendix Studio Pro version 10.24.0 or above.

You must also download the Community Commons module.

Installation

If you are starting from the Blank GenAI app, or the AI Bot Starter App, the GenAI Commons module is already included and does not need to be downloaded manually.

If you start from a blank app, or have an existing project where you want to include a connector for which the GenAI Commons module is required, you must install GenAI Commons manually. First, install the Community Commons module, and then follow the instructions in How to Use Marketplace Content to install the GenAI Commons module.

Implementation

GenAI Commons is the foundation of large language model implementations within the Mendix Cloud GenAI Connector, the OpenAI connector, and the Amazon Bedrock connector. It may also be used to build other GenAI service implementations on top of it by reusing the provided domain model and exposed actions.

Although GenAI Commons technically defines additional capabilities typically found in chat completion APIs, such as image processing (vision) and tools (function calling), whether these are actually implemented and supported by the LLM depends on the connector module of choice. To learn which additional capabilities a connector supports and for which models they can be used, refer to the documentation of that connector.

Token Usage

GenAI Commons can help store usage data, allowing admins to understand token usage. Usage data is persisted only if the constant StoreUsageMetrics is set to true (exception in version 5.3.0 and above: if StoreTraces is set to true, usage data is stored as well). In general, this is only supported for chat completions and embedding operations.
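The persistence rule above can be summarized in a small sketch. This is an assumption-laden illustration, not module code: the constant names mirror GenAI Commons, but the version comparison is only a plain tuple check.

```python
# Hypothetical predicate for the rule described above: usage data is stored
# when StoreUsageMetrics is true, and (from version 5.3.0 onward) also when
# StoreTraces is true.
def should_store_usage(store_usage_metrics: bool,
                       store_traces: bool,
                       module_version: tuple) -> bool:
    if store_usage_metrics:
        return True
    # The StoreTraces exception only applies in version 5.3.0 and above.
    return module_version >= (5, 3, 0) and store_traces
```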

To clean up usage data in a deployed app, you can enable the daily scheduled event ScE_Usage_Cleanup in the Mendix Cloud Portal. Use the Usage_CleanUpAfterDays constant to control for how long token usage data should be persisted.

Lastly, the Conversational UI module provides pages, snippets, and logic to display and export token usage information. For this to work, the module role UsageMonitoring from both Conversational UI and GenAI Commons must be assigned to the applicable project roles.

Traceability

Traceability was introduced in version 5.3.0 of the GenAI Commons module.

By default, the chat completions operations of GenAI Commons store data in your application's database for traceability reasons. This makes it easier to understand the usage of GenAI in your app and why the model behaved in a certain way, for example, by reviewing tool usage. Trace data is only persisted if the constant StoreTraces is set to true.

As traces may contain sensitive and personally identifiable information, you should determine, on a case-by-case basis, whether storing this data is compliant. To enable read-access to a user (typically an admin user), grant the module role TraceMonitoring to the applicable project roles.

To clean up trace data in a deployed app, you can enable the daily scheduled event ScE_Trace_Cleanup in the Mendix Cloud Portal. Use the Trace_CleanUpAfterDays constant to control the retention period of the trace data.

Technical Reference

The technical purpose of the GenAI Commons module is to define a common domain model for generative AI use cases in Mendix applications. To help you work with the GenAI Commons module, the following sections list the available entities, enumerations, and microflows to use in your application.

Domain Model

The domain model in Mendix is a data model that describes the information in your application domain in an abstract way. For more general information, see the Data in the Domain Model documentation. To learn about where the entities from the domain model are used and relevant during implementation, see the Microflows section below.

DeployedModel

The DeployedModel represents a GenAI model that can be invoked by the Mendix app. It contains a display name and a technical name/identifier. It also contains the name of the microflow to be executed for the specified model and other information relevant to connect to a model. The creation of Deployed Models is handled by the connectors themselves (see their specializations) where admins can configure those at runtime.

The DeployedModel entity replaces the capabilities that were covered by the Connection entity for model invocations in earlier versions of GenAI Commons. For knowledge base interactions, the DeployedKnowledgeBase entity is used.

DisplayName: The display name of the deployed model.
Architecture: The architecture of the deployed model, for example, OpenAI or Amazon Bedrock.
Model: The model identifier of the LLM provider.
OutputModality: The type of information the model returns.
Microflow: The microflow to execute for the specified model and modality.
SupportsSystemPrompt: An enum to specify if the model supports system prompts.
SupportsConversationsWithHistory: An enum to specify if the model supports conversations with history.
SupportsFunctionCalling: An enum to specify if the model supports function calling.
IsActive: A boolean to specify if the model is active/usable with the current authentication settings and user preference.

DeployedKnowledgeBase

The DeployedKnowledgeBase represents a GenAI knowledge base that can be added to the request when calling an LLM. It contains a display name, a technical name (or identifier), the name of the microflow to be executed for the specified knowledge base specialization, and other relevant information to connect to the knowledge base. These objects are created by the connectors themselves (see their specializations), allowing admins to configure them at runtime.

The DeployedKnowledgeBase entity replaces the capabilities covered by the Connection entity for knowledge base interaction in earlier versions of GenAI Commons.

DisplayName: The display name of the deployed knowledge base.
Name: The name of the deployed knowledge base.
Architecture: The architecture of the deployed knowledge base, for example, Mendix Cloud or Amazon Bedrock.
Microflow: The microflow to execute to retrieve information for the specified knowledge base.
IsActive: A boolean to specify if the knowledge base is active/usable with the current authentication settings and user preference.

InputModality

Accepted input modality of the associated deployed model.

ModelModality: The type of information the model accepts as input.

Usage

This entity represents usage statistics of a call to an LLM. It refers to a complete LLM interaction; in case there are several iterations (for example, recursive processing of function calls), everything should be aggregated into one Usage record.

Following the principles of GenAI Commons, it must be stored, based on the response, for every successful call to an LLM provider's system. This is only applicable to text and file operations and embedding operations.

The data stored in this entity is to be used later on for token consumption monitoring.

Architecture: The architecture of the used deployed model, for example, OpenAI or Amazon Bedrock.
DeployedModelDisplayName: The DisplayName of the DeployedModel.
InputTokens: The amount of tokens consumed by an LLM call that is related to the input.
OutputTokens: The amount of tokens consumed by an LLM call that is related to the output.
TotalTokens: The total amount of tokens consumed by an LLM call.
DurationMilliseconds: The duration in milliseconds of the technical part of the call to the system of the LLM provider. This excludes custom pre- and postprocessing but corresponds to a complete LLM interaction.
_DeploymentIdentifier: Internal object used to identify the DeployedModel used.

Trace

A trace represents the whole LLM interaction from the first user message until the final assistant's response was returned, including tool calls. The data stored in this entity is to be used later on for traceability use cases.

Trace was introduced in version 5.3.0.

TraceId: The trace ID is set internally to identify a trace.
StartTime: The start time of the initial model invocation.
EndTime: The end time after the final model invocation is completed.
DurationMilliseconds: The duration between the start and end of the whole model invocation.
Input: The initial input of the model invocation (usually a user prompt).
Output: The response of the final message sent by the model (usually an assistant message).
SystemPrompt: The system prompt that was used for the model invocation.
HasError: Indicates if any span call has failed.
_AgentVersionId: The ID of the agent version (if applicable) as sent via the request.
_ConversationId: The ID of the conversation (if applicable) as sent via the request. This is usually created by the model provider.

Span

A span is created for each interaction between Mendix and the LLM (such as chat completions, tool calling, etc.). The generalized object is typically not used; instead, its specializations are used.

SpanId: The span ID is set internally to identify a span.
StartTime: The start time of the model invocation.
EndTime: The end time after the model invocation is completed.
DurationMilliseconds: The duration between the start and end of the whole model invocation.
Input: The input of the span.
Output: The output of the span.
IsError: Indicates if the call failed. If so, the span's output will contain the error message that was also logged.

Span was introduced in version 5.3.0.

ModelSpan

A model span is created for each interaction between Mendix and the LLM where content is generated (sent as the assistant's message). Typically, this is a request for text generation. In addition to the Span's attributes, it also contains the following:

InputTokens: Number of tokens in the request.
OutputTokens: Number of tokens in the generated response.
_DeploymentIdentifier: Internal object used to identify the DeployedModel that was used.

ModelSpan was introduced in version 5.3.0.

ToolSpan

A tool span is created for each tool call requested by the LLM. The tool call is processed in GenAI Commons, and the result is sent back to the model. In addition to the Span's attributes, it also contains the following:

ToolName: The name of the tool that was called.
ToolDescription: The description of the tool.
_ToolCallId: The ID of the tool call, used by the model to map an assistant message containing a tool call with the output of the tool call (tool message).

ToolSpan was introduced in version 5.3.0.

KnowledgeBaseSpan

A knowledge base span is created for each knowledge base retrieval tool call requested by the LLM. The tool call is processed in GenAI Commons, and the result is sent back to the model. In addition to the ToolSpan's attributes, it also contains the following:

Architecture: The architecture of the knowledge base, defined by the DeployedKnowledgeBase specialization.
MinimumSimilarity: The minimum similarity score that was specified during the retrieval (usually 0.0 to 1.0).
MaxNumberOfResults: The maximum number of results that was specified during the retrieval.
KBDisplayName: The display name of the deployed knowledge base that was specified during the retrieval.

KnowledgeBaseSpan was introduced in version 5.3.0.

MCPSpan

An MCP span is created for each tool invocation over the Model Context Protocol via the MCP Client module. The tool call is processed on the MCP server, usually outside of this application, and the result is sent back to the model. In addition to the ToolSpan's attributes, it also contains the following:

ServerName: The name of the server where the tool resides.

MCPSpan was introduced in version 5.4.0.

Request

The Request is an input object for the chat completions operations defined in the platform-supported GenAI connectors and contains all content-related input needed for an LLM to generate a response for the given chat conversation.

_Id: The Id attribute describes the unique identifier of the session. Reuse the same value to continue the same session.
SystemPrompt: A SystemPrompt provides the model with context, instructions, or guidelines.
MaxTokens: Maximum number of tokens per request.
Temperature: Temperature controls the randomness of the model response. Low values generate a more predictable output, while higher values allow creativity and diversity. It is recommended to steer either the Temperature or TopP, but not both.
TopP: TopP is an alternative to Temperature for controlling the randomness of the model response. TopP defines a probability threshold so that only words with probabilities greater than or equal to the threshold will be included in the response. It is recommended to steer either the Temperature or TopP, but not both.
ToolChoice: Controls which (if any) tool is called by the model. For more information, see the ENUM_ToolChoice section containing a description of the possible values.
_AgentVersionId: The AgentVersionId is set if the execution of the request was called from an Agent.
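The guidance to steer either Temperature or TopP, but not both, can be made concrete in a hypothetical sketch (not Mendix code; the attribute names only mirror the Request entity):

```python
from typing import Optional

# Illustrative request builder that enforces the recommendation above:
# set either temperature or top_p on a request, never both at once.
def build_request(system_prompt: Optional[str] = None,
                  max_tokens: Optional[int] = None,
                  temperature: Optional[float] = None,
                  top_p: Optional[float] = None) -> dict:
    if temperature is not None and top_p is not None:
        raise ValueError("Steer either Temperature or TopP, not both")
    request = {"SystemPrompt": system_prompt, "MaxTokens": max_tokens}
    if temperature is not None:
        request["Temperature"] = temperature
    if top_p is not None:
        request["TopP"] = top_p
    return request
```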

Message

A message that is part of the request or the response. Each instance contains data (text, file collection) that needs to be taken into account by the model when processing the completion request.

Role: The role of the message's author. For more information, see the ENUM_Role section.
Content: The text content of the message.
MessageType: The type of the message can be either text or file, where file means that the associated FileCollection should be taken into account. For more information, see the ENUM_MessageType section.
ToolCallId: The ID of the tool call proposed by the model that this message is responding to. This attribute is only applicable for messages with the role tool.

FileCollection

This is an optional collection of files that is part of a Message. It is used for patterns like vision, where image files are sent along with the user message for the model to process. It functions as a wrapper entity for files and has no attributes.

FileContent

This is a file in a collection of files that belongs to a message. Each instance represents a single file. Currently, only files of the type image and document are supported.

FileContent: Depending on the ContentType, this is either a URL or the base64-encoded file data.
ContentType: This describes the type of file data. Supported content types are either URL or base64-encoded file data. For more information, see the ENUM_ContentType section.
FileType: Currently, only images and documents are supported file types. In general, not all file types might be supported by all AI providers or models. For more information, see the ENUM_FileType section.
TextContent: An optional text content describing the file content.
FileExtension: The extension of the file, for example, png or pdf. This attribute is only filled if the ContentType is Base64, and it can be empty.
FileName: If a FileDocument is added, the file name is extracted automatically.

ToolCollection

This is an optional collection of tools to be sent along with the Request. Using tool call capabilities (also known as function calling) might not be supported by certain AI providers or models. This entity functions as a wrapper entity for tools and has no attributes.

Tool

A tool in the tool collection. This is sent along with the request to expose a list of available tools. In the response, the model can suggest calling a certain tool (or multiple tools in parallel) to retrieve additional data or perform certain actions.

Name: The name of the tool to call. This is used by the model in the response to identify which function needs to be called.
Description: An optional description of the tool, used by the model in addition to the name attribute to choose when and how to call the tool.
ToolType: The type of the tool. Refer to the documentation supplied by your AI provider for information about the supported types.
Microflow: The name (string) of the microflow that this tool represents.
MCPServerName: The name of the MCP server (only applicable for MCP tools).

Function

A tool of the type function. This is a specialization of Tool and represents a microflow in the same Mendix application. The return value of this microflow when executed as a function is sent to the model in the next iteration and hence must be of type String.
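The function-calling round trip described above can be sketched in hypothetical Python (not Mendix code; the role names and ToolCallId mirror the entities here, and the placeholder ID is an assumption): the model proposes a tool by name, the app runs the registered function, and the string result is sent back as a tool message for the next iteration.

```python
# Registry maps tool names to callables, standing in for microflows.
def run_tool_call(tools: dict, tool_name: str, arguments: dict) -> dict:
    if tool_name not in tools:
        raise KeyError(f"Unknown tool: {tool_name}")
    result = tools[tool_name](**arguments)
    # A function tool must return a string, which becomes the
    # content of the tool message in the next model iteration.
    assert isinstance(result, str), "A function tool must return a string"
    # The tool message echoes the tool call ID ("call_1" is a placeholder)
    # so the model can match it to the assistant message that proposed it.
    return {"Role": "tool", "Content": result, "ToolCallId": "call_1"}

def get_weather(city: str) -> str:
    return f"Sunny in {city}"

tool_message = run_tool_call({"get_weather": get_weather},
                             "get_weather", {"city": "Rotterdam"})
```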

KnowledgeBaseRetrieval

A tool of the type function. This is a specialization of Tool and represents a microflow in the same Mendix application. It is typically used internally inside of connector operations to enable the model with a knowledge base retrieval.

MinimumSimilarity: Specifies the minimum similarity score (usually 0 to 1) between the passed chunk and the knowledge chunks in the knowledge base.
MaxNumberOfResults: Specifies the maximum number of results that should be retrieved from the knowledge base.

ArgumentInput

For tools which are not executed in the same Mendix application, but still registered with the request and called from the application, ArgumentInput objects are added to the Tool. When the tool is called, the arguments are not passed directly to the microflow, but can be extracted from the Argument of the ToolCall.

Name: Name of the argument.
_Type: Data type of the argument, for example, string, number, boolean, enum.
Required: Indicates if the argument is required for calling the tool.

EnumValue

The EnumValue specifies available keys for "enum" ArgumentInput data types, so that the model is restricted to use valid input.

Key: Key of an enumeration.

StopSequence

For many models, StopSequence objects can be used to pass a list of character sequences (for example, a word) along with the request. The model stops generating content when a sequence from that list occurs next.

Sequence: A sequence of characters that would prevent the model from generating further content.
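Providers apply stop sequences server-side while generating; the effect can be mimicked client-side in an illustrative sketch (an assumption for demonstration, not how the module or the providers implement it):

```python
# Truncate text at the earliest occurrence of any configured stop sequence,
# approximating the behavior of server-side stop sequences.
def apply_stop_sequences(text: str, stop_sequences: list) -> str:
    cut = len(text)
    for seq in stop_sequences:
        idx = text.find(seq)
        if idx != -1:
            cut = min(cut, idx)
    return text[:cut]
```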

Response

The response returned by the model contains usage metrics and a response message.

_Id: The Id attribute describes the unique identifier of the session. Reuse the same value to continue the same session. If no ID was set by the LLM connector, an internal ID is created.
RequestTokens: Number of tokens in the request.
ResponseTokens: Number of tokens in the generated response.
TotalTokens: Total number of tokens (request + response).
DurationMilliseconds: Duration in milliseconds for the call to the LLM to be finished.
StopReason: The reason why the model stopped generating further content. See the AI provider documentation for possible values.
ResponseText: The text content of the response message.

ToolCall

A tool call object may be generated by the model in certain scenarios, such as a function call pattern. This entity is only applicable for messages with role assistant.

Name: The name of the tool to call. This refers to the Name attribute of one of the Tools in the Request.
ToolType: The type of the tool. See the AI provider documentation for supported types.
ToolCallId: This is a model-generated ID of the proposed tool call. It is used by the model to map an assistant message containing a tool call with the output of the tool call (tool message).

Argument

The arguments are generated by the model in JSON format and used to call the tool. Note that the model does not always generate valid JSON and may hallucinate parameters that are not defined by your tool's schema. Mendix recommends validating the arguments in the code before calling the tool. One argument is generated for each primitive input parameter of the selected microflow.

Key: The name of the input parameter as given in the microflow.
Value: The value that is passed to the input parameter.
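The recommended validation can be sketched as follows. This is a hypothetical example, not module code: the schema dictionaries only mirror the ArgumentInput and EnumValue entities described earlier.

```python
# Check model-generated arguments against the declared schema before calling
# the tool: required arguments must be present, unexpected arguments are
# rejected, and enum arguments must use one of the allowed keys.
def validate_arguments(arguments: dict, schema: list) -> list:
    errors = []
    declared = {spec["Name"]: spec for spec in schema}
    for name, spec in declared.items():
        if spec.get("Required") and name not in arguments:
            errors.append(f"Missing required argument: {name}")
    for name, value in arguments.items():
        spec = declared.get(name)
        if spec is None:
            errors.append(f"Unexpected argument: {name}")
        elif spec["Type"] == "enum" and value not in spec.get("EnumValues", []):
            errors.append(f"Invalid enum value for {name}: {value}")
    return errors

schema = [{"Name": "city", "Type": "string", "Required": True},
          {"Name": "unit", "Type": "enum", "Required": False,
           "EnumValues": ["celsius", "fahrenheit"]}]
```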

Reference

An optional reference for a response message.

Title: The title of the reference.
Content: The content of the reference.
Source: The source of the reference, for example, a URL.
SourceType: The type of the source. For more information, see ENUM_SourceType.
Index: Used to make references identifiable and sortable.

Citation

An optional citation. This entity can visualize the link between a part of the generated text and the actual text in the source on which the generated text was based.

StartIndex: An index that marks the beginning of a citation in a larger document.
EndIndex: An index that marks the end of a citation in a larger document.
Text: The part of the generated text that contains a citation.
Quote: Contains the cited text from the reference.
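How the indices relate to the generated text can be shown with a small sketch. The end-exclusive index semantics here are an assumption for illustration, and the example values are made up:

```python
# StartIndex and EndIndex delimit the cited span (the Text attribute)
# inside the generated response text; Quote holds the source wording.
def cited_span(response_text: str, start_index: int, end_index: int) -> str:
    return response_text[start_index:end_index]

response_text = "The Eiffel Tower is 330 m tall."
citation = {"StartIndex": 20, "EndIndex": 25,
            "Text": "330 m", "Quote": "330 metres in height"}
```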

ChunkCollection

This entity represents a collection of chunks. It is a wrapper entity for Chunk objects or specialization(s) to pass it to operations that execute embedding calculations or knowledge base interaction.

Chunk

A piece of information (InputText) and the corresponding embeddings vector retrieved from an Embeddings API. This is the relevant entity if you need to generate embedding vectors but do not need to store them in a knowledge base.

InputText: The input text to create the embedding for.
EmbeddingVector: The corresponding embedding vector of the input text.
_Index: Internal attribute. Do not use.

KnowledgeBaseChunk

This entity represents a discrete piece of knowledge that can be used for embedding and storage operations. As a specialization of Chunk, it is the appropriate entity to use when both generating embedding vectors and storing them in a knowledge base.

ChunkID: This is a system-generated UUID for the chunk in the knowledge base.
HumanReadableID: This is a front-end reference to the KnowledgeBaseChunk so that users know what it refers to (for example, a URL, document location, or human-readable record ID).
MxObjectID: If the KnowledgeBaseChunk was based on a Mendix object during creation, this contains the GUID of that object at the time of creation.
MxEntity: If the KnowledgeBaseChunk was based on a Mendix object during creation, this contains its full entity name at the time of creation.
Similarity: In case the chunk was retrieved from the knowledge base as part of a similarity search (for example, nearest neighbors retrieval), this contains the cosine similarity to the input vector for the retrieval that was executed.
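A similarity search of this kind can be sketched in plain Python (an illustration of the concept, not the connector's implementation): score each chunk by cosine similarity to the query vector, keep those at or above the minimum similarity, and return at most the requested number of results.

```python
import math

def cosine_similarity(a: list, b: list) -> float:
    dot = sum(x * y for x, y in zip(a, b))
    return dot / (math.hypot(*a) * math.hypot(*b))

def retrieve(query: list, chunks: dict,
             minimum_similarity: float, max_results: int) -> list:
    # Score every chunk, filter by MinimumSimilarity, sort best-first,
    # and cap the result list at MaxNumberOfResults.
    scored = [(text, cosine_similarity(query, vec)) for text, vec in chunks.items()]
    scored = [item for item in scored if item[1] >= minimum_similarity]
    scored.sort(key=lambda item: item[1], reverse=True)
    return scored[:max_results]

results = retrieve([1.0, 0.0],
                   {"a": [1.0, 0.0], "b": [0.0, 1.0], "c": [1.0, 1.0]},
                   minimum_similarity=0.5, max_results=2)
```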

MetadataCollection

An optional collection of metadata. This is a wrapper entity for one or more Metadata objects for a KnowledgeBaseChunk.

Metadata

This entity represents additional information to be stored with the KnowledgeBaseChunk in the knowledge base. At the insertion stage, you can link multiple metadata objects to a KnowledgeBaseChunk as needed. These metadata objects consist of key-value pairs used for custom filtering during retrieval. Retrieval operates on an exact string-match basis for each key-value pair, returning records only if they match all metadata records specified in the search criteria.

Key: This is the name of the metadata and typically tells how the value should be interpreted.
Value: The value of the metadata that provides additional information about the chunk in the context of the given key.
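The exact string-match semantics described above can be captured in a short sketch (illustrative only; the data shapes are assumptions): a chunk is returned only if it matches every key-value pair in the search criteria.

```python
# A chunk matches only when every criteria key-value pair is an exact
# string match against the chunk's metadata.
def matches(chunk_metadata: dict, criteria: dict) -> bool:
    return all(chunk_metadata.get(key) == value for key, value in criteria.items())

chunks = [
    {"text": "Q3 report", "metadata": {"year": "2024", "type": "report"}},
    {"text": "Q3 memo", "metadata": {"year": "2024", "type": "memo"}},
]
hits = [c["text"] for c in chunks
        if matches(c["metadata"], {"year": "2024", "type": "report"})]
```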

EmbeddingsOptions

An optional input object for the embedding operations to set optional request attributes.

Dimensions: The number of dimensions the resulting output embeddings should have.

EmbeddingsResponse

The response returned by the model contains token usage metrics. Not all connectors or models might support token usage metrics.

PromptTokens: Number of tokens in the prompt.
TotalTokens: Total number of tokens used in the request.
DurationMilliseconds: Duration in milliseconds for the call to be finished.

ImageOptions

An optional input object for the image generation operations to set optional request attributes.

Height: This determines the height of the image.
Width: This determines the width of the image.
NumberOfImages: This determines the number of images to be generated.
Seed: This can be used to influence the randomness of the generation. It ensures the reproducibility and consistency of the generated images by controlling the initial state of the random number generator.
CfgScale: This can be used to influence the randomness of the generation. It adjusts the balance between adherence to the prompt and creative randomness in the image generation process.
ImageGenerationType: This describes the type of image generation. Currently, only text to image is supported. For more information, see ENUM_ImageGenerationType.

Microflow Activities

Use the exposed microflows and Java Actions to map the required information for GenAI operations from your custom app implementation to the GenAI model and vice versa.

GenAI (Generate)

Chat completions, embeddings, and image generation operations can be used by passing a DeployedModel object of the desired connector. The action calls the internally assigned microflow of the connector and returns the response. Operations from different connectors can be exchanged with little additional development effort.

It is recommended that you adapt to the same interface when developing custom chat completions or image generation operations, such as integration with different AI providers. The generic interfaces are described below. For more detailed information, refer to the documentation of the connector that you want to use, since it may expect specializations of the generic GenAI common entities as an input.

Chat Completions (with history)

The Chat Completions (with history) operation supports more complex use cases where a list of (historical) messages (for example, comprising the conversation or context so far) is sent as part of the request to the LLM.

Input Parameters
DeployedModel (DeployedModel, mandatory): The DeployedModel entity replaces the Connection entity. It contains the name of the microflow to be executed for the specified model and other information relevant to connecting to a model. The OutputModality of the DeployedModel needs to be Text.
Request (Request, mandatory): This is an object that contains messages, optional attributes, and an optional ToolCollection.
Return Value
Response (Response): A Response object that contains the assistant's response.
Chat Completions (without history)

The Chat Completions (without history) operation supports scenarios where there is no need to send a list of (historic) messages comprising the conversation so far as part of the request.

Input Parameters
UserPrompt (String, mandatory): A user message is the input from a user.
DeployedModel (DeployedModel, mandatory): The DeployedModel entity replaces the Connection entity. It contains the name of the microflow to be executed for the specified model and other information relevant to connecting to a model. The OutputModality of the DeployedModel needs to be Text.
OptionalRequest (Request, optional): This is an optional object that contains optional attributes and an optional ToolCollection. If no Request is passed, one will be created.
OptionalFileCollection (FileCollection, optional): This is an optional collection of files to be sent along with the request to use vision or document chat.
Return Value
Response (Response): A Response object that contains the assistant's response.
Generate Embeddings (Chunk Collection)

The Generate Embeddings (Chunk Collection) operation allows the invocation of an embeddings API with a ChunkCollection and returns an EmbeddingsResponse object with token usage statistics, if applicable. The response object is associated with the original ChunkCollection used as an input, and the Chunk (or KnowledgeBaseChunk) objects will be updated with their corresponding embedding vector retrieved from the Embeddings API within this microflow.

Input Parameters
ChunkCollection (ChunkCollection, mandatory): A ChunkCollection with Chunks for which an embedding vector should be generated. Use operations from GenAI Commons to create a ChunkCollection and add Chunks or KnowledgeBaseChunks to it.
DeployedModel (DeployedModel, mandatory): The DeployedModel entity replaces the Connection entity. It contains the name of the microflow to be executed for the specified model and other information relevant to connecting to a model. The OutputModality needs to be Embeddings.
EmbeddingOptions (EmbeddingsOptions, optional): Can be used to pass optional request attributes.
Return Value
EmbeddingsResponse (EmbeddingsResponse): A response object that contains the token usage statistics and the corresponding embedding vectors as part of a ChunkCollection.
Generate Embeddings (String)

The Generate Embeddings (String) operation allows the invocation of the embeddings API with a String input and returns an EmbeddingsResponse object with token usage statistics, if applicable. The EmbeddingsResponse_GetFirstVector microflow from GenAI Commons can be used to retrieve the corresponding embedding vector in a String representation. This operation supports scenarios where the vector embedding of a single string must be generated, e.g. to perform a nearest neighbor search across an existing knowledge base.

Input Parameters
InputText (String, mandatory): Input text to create the embedding vector.
DeployedModel (DeployedModel, mandatory): The DeployedModel entity replaces the Connection entity. It contains the name of the microflow to be executed for the specified model and other information relevant to connecting to a model. The OutputModality needs to be Embeddings.
EmbeddingOptions (EmbeddingsOptions, optional): Can be used to pass optional request attributes.
Return Value
EmbeddingsResponse (EmbeddingsResponse): A response object that contains the token usage statistics and the corresponding embedding vector as part of a ChunkCollection.
Generate Image

The Generate Image operation supports the generation of images based on a UserPrompt passed as a string. The returned Response contains a FileContent via FileCollection and Message. See the microflows in the Handle Response folder to retrieve the image (or list of images).

Input Parameters
DeployedModel (DeployedModel, mandatory): The DeployedModel entity replaces the Connection entity. It contains the name of the microflow to be executed for the specified model and other information relevant to connecting to a model. The OutputModality needs to be Image.
UserPrompt (String, mandatory): This is the description the image will be based on.
ImageOptions (ImageOptions, optional): This can be used to pass optional request attributes.
Return Value
Response (Response): A Response object that contains the assistant's response, including a FileContent which needs to be used in Get Generated Image (Single) or Get Generated Images (List).

GenAI (Request Building)

The following microflows help you construct the input request structures for the operations defined in the GenAI Commons.

Add Message to Request

This microflow can add a new Message to the Request object. A message represents the conversation text content and optionally has a collection of files attached that need to be taken into account when generating the response (such as images for vision). Make sure to add messages chronologically so that the most recent message is added last.
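The chronological-ordering rule can be illustrated with a small sketch in plain Python (not Mendix code; the Message and Request shapes below are simplified stand-ins for the module's entities):

```python
from dataclasses import dataclass, field

@dataclass
class Message:
    role: str      # mirrors ENUM_MessageRole: "user", "assistant", "system", or "tool"
    content: str

@dataclass
class Request:
    messages: list = field(default_factory=list)

def add_message_to_request(request, role, content):
    # Mirrors Add Message to Request: messages are appended, so adding them
    # in conversation order keeps the most recent message last.
    request.messages.append(Message(role, content))

request = Request()
add_message_to_request(request, "user", "What is the capital of France?")
add_message_to_request(request, "assistant", "Paris.")
add_message_to_request(request, "user", "And of Germany?")
print(request.messages[-1].content)  # the most recent message is last
```

If messages were added out of order, the model would see a scrambled conversation history, which is why the chronological rule matters.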

Input Parameters
Name | Type | Notes | Description
Request | Request | mandatory | This is the request object that contains the functional input for the model to generate a response.
ENUM_MessageRole | ENUM_MessageRole | mandatory | The role of the message author.
FileCollection | FileCollection | optional | This is an optional collection of files that are part of the message.
ContentString | String | mandatory | This is the textual content of the message.
Return Value

This microflow does not have a return value.

Create Request

This microflow can be used to create a request for a chat completion operation. This is the request object that contains the top-level functional input for the language model to generate a response.

Input Parameters
Name | Type | Notes | Description
SystemPrompt | String | optional | A system message can specify the assistant persona or give the model more guidance, context, or instructions.
Temperature | Decimal | optional | This is the sampling temperature. Higher values will make the output more random, while lower values make it more focused and deterministic.
MaxTokens | Integer/Long | Depends on AI provider or model | This is the maximum number of tokens to generate in the chat completion. The total length of input tokens and generated tokens is limited by the model's context length. This attribute is optional.
TopP | Decimal | optional | This is an alternative to sampling with temperature, called nucleus sampling, where the model considers the results of the tokens with Top_p probability mass. Mendix generally recommends altering Top_p or Temperature, but not both.
Return Value
Name | Type | Description
Request | Request | This is the created request object.
Files: Add File to Collection

Use this microflow to add a file to an existing FileCollection. The File Collection is an optional part of a Message.

Input Parameters
Name | Type | Notes | Description
FileCollection | FileCollection | mandatory | The wrapper object for Files. The FileCollection is an optional part of a Message.
URL | String | Either URL or FileDocument is required. | This is the URL of the file.
FileDocument | System.FileDocument | Either URL or FileDocument is required. | The file whose contents are part of a message.
ENUM_FileType | ENUM_FileType | mandatory | This is the type of the file.
TextContent | String | optional | An optional text content describing the file content or giving it a specific name.
Return Value

This microflow does not have a return value.

Files: Initialize Collection with File

To include files within a message, you must provide them in the form of a file collection. This helper microflow creates the file collection and adds the first file. The File Collection is an optional part of a Message object.

Input Parameters
Name | Type | Notes | Description
URL | String | Either URL or FileDocument is required. | This is the URL of the file.
FileDocument | System.FileDocument | Either URL or FileDocument is required. | The file whose contents are part of a message.
ENUM_FileType | ENUM_FileType | mandatory | This is the type of the file.
TextContent | String | optional | An optional text content describing the file content or giving it a specific name.
Return Value
Name | Type | Description
FileCollection | FileCollection | This is the created file collection with the new file associated with it.
Tools: Add Function to Request

Adds a new Function to a ToolCollection that is part of a Request. Use this action to expose microflows as tools to the LLM via function calling. If supported by the LLM connector, the chat completion operation calls the right functions based on the LLM response and continues the process until the assistant's final response is returned.

Input Parameters
Name | Type | Notes | Description
Request | Request | mandatory | The request to add the function to.
ToolName | String | mandatory | The name of the tool to use or call.
ToolDescription | String | optional | An optional description of what the tool does, used by the model to choose when and how to call the tool.
FunctionMicroflow | Microflow | mandatory | The microflow that is called within this function.
Return Value
Name | Type | Description
Function | Function | This is the function object that was added to the ToolCollection that is part of the request. This object can optionally be used as input for controlling the tool choice of the Request; see Tools: Set Tool Choice.
Tools: Set Tool Choice

Use this microflow to control how the model should determine which function to leverage (typically to gather additional information). The microflow sets the ToolChoice within a Request. This controls which (if any) function is called by the model. If the ENUM_ToolChoice equals tool, the Tool input is required which will become the tool choice. This will force the model to call that particular tool.
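The rule can be sketched in plain Python (illustrative only; the dictionary shape and value names below are stand-ins, not the module's actual data structures):

```python
def set_tool_choice(request, choice, tool=None):
    # Mirrors the rule described above: the tool argument is only required,
    # and only used, when the choice is "tool"; for "auto", "none", and
    # "any" it is ignored.
    if choice == "tool":
        if tool is None:
            raise ValueError("Tool is required when the tool choice is 'tool'.")
        request["tool_choice"] = {"type": "tool", "name": tool}
    else:
        request["tool_choice"] = {"type": choice}

request = {}
set_tool_choice(request, "tool", tool="get_weather")
print(request["tool_choice"])  # forces the model to call get_weather
```

Forcing a particular tool is useful when the application already knows which lookup must happen before the model can answer.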

Input Parameters
Name | Type | Notes | Description
Request | Request | mandatory | The request for which to set a tool choice.
Tool | Tool | Required if ENUM_ToolChoice equals tool. | Specifies the tool to be used. Required if ENUM_ToolChoice equals tool; ignored for all other enumeration values.
ENUM_ToolChoice | ENUM_ToolChoice | mandatory | Determines the tool choice. For a list of the available values, see the ENUM_ToolChoice section.
Return Value

This microflow does not have a return value.

Tools: Add Knowledge Base

This microflow adds a function that performs a retrieval from a knowledge base to a ToolCollection that is part of a Request. Use this microflow when you have knowledge bases in your application that may be called to retrieve the required information as part of a GenAI interaction. If you want the model to be aware of these knowledge bases, you can use this operation to add them as functions to the request. If supported by the LLM connector, the chat completion operation calls the appropriate knowledge base function based on the LLM response and continues the process until the assistant's final response is returned.

DeployedKnowledgeBase objects have provider-specific specializations, for example, Collection for Mendix Cloud.

Input Parameters
Name | Type | Notes | Description
Request | Request | mandatory | The request to which the knowledge base should be added.
Name | String | mandatory | The name of the knowledge base to use or call. Technically, this is the name of the tool that is passed to the LLM. This needs to be unique per request (if multiple tools/knowledge base retrievals are added).
Description | String | optional | A description of the knowledge base's purpose, used by the model to determine when and how to invoke it.
DeployedKnowledgeBase | Object | mandatory | The knowledge base that is called within this tool. This object includes a microflow, which is executed when the knowledge base is invoked.
MaxNumberOfResults | Integer | optional | This can be used to limit the number of results that should be retrieved.
MinimumSimilarity | Decimal | optional | Filters the results to retrieve only chunks with a similarity score greater than or equal to the specified value. The score ranges from 0 (no similarity) to 1.0 (identical vectors).
MetadataCollection | Object | optional | This contains a list for additional filtering in the retrieval. Only chunks that comply with the metadata labels will be returned.
Return Value

This microflow returns a KnowledgeBaseRetrieval object.
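How MinimumSimilarity and metadata filtering narrow a retrieval can be sketched in plain Python (illustrative only; the chunk records and field names below are made up, not the module's actual data structures):

```python
# Pretend retrieval candidates, each with a precomputed similarity score
# and metadata key-value pairs.
chunks = [
    {"text": "Returns are accepted within 30 days.", "similarity": 0.91,
     "metadata": {"language": "en"}},
    {"text": "Retouren binnen 30 dagen.", "similarity": 0.88,
     "metadata": {"language": "nl"}},
    {"text": "Unrelated note.", "similarity": 0.40,
     "metadata": {"language": "en"}},
]

def retrieve(chunks, minimum_similarity, metadata, max_results):
    # Keep only chunks above the similarity threshold that match ALL
    # requested metadata labels, then return the top results.
    matches = [c for c in chunks
               if c["similarity"] >= minimum_similarity
               and all(c["metadata"].get(k) == v for k, v in metadata.items())]
    matches.sort(key=lambda c: c["similarity"], reverse=True)
    return matches[:max_results]

results = retrieve(chunks, minimum_similarity=0.5,
                   metadata={"language": "en"}, max_results=3)
print([c["text"] for c in results])  # only the high-similarity English chunk
```

The actual filtering is performed by the knowledge base provider; the sketch only shows the semantics of the two optional parameters.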

GenAI (Response Handling)

The following microflows handle the response processing.

Get Generated Image (List)

This operation processes a response that was created by an image generation operation. A return entity can be specified using ResponseImageEntity (needs to be of type System.Image or its specialization). A list of images of that type will be created and returned.

Input Parameters
Name | Type | Notes | Description
ResponseImageEntity | Entity | mandatory | This specifies the entity of the returned images. Must be System.Image or one of its specializations.
Response | Response | mandatory | This is the response that was returned by an image generation operation. It points to a message with the FileContent to create the image.
Return Value
Name | Type | Description
GeneratedImageList | List of type determined by ResponseImageEntity | The list of generated images.
Get Generated Image (Single)

This operation processes a response that was created by an image generation operation. A return entity can be specified using ResponseImageEntity (needs to be of type System.Image or its specialization). An image of that type will be created and returned.

Input Parameters
Name | Type | Notes | Description
ResponseImageEntity | Entity | mandatory | This specifies the entity of the returned image. Must be System.Image or one of its specializations.
Response | Response | mandatory | This is the response that was returned by an image generation operation. It points to a message with the FileContent to create the image.
Return Value
Name | Type | Description
GeneratedImage | Object of type determined by ResponseImageEntity | The generated image.
Get References

Use this microflow to get the list of references that may be included in the model response. These can be used to display source information, content, and citations on which the model response text was based according to the language model. References are only available if they were specifically requested from the LLM and mapped from the LLM response into the GenAI Commons domain model.

Input Parameters
Name | Type | Notes | Description
Response | Response | mandatory | The response object.
Return Value
Name | Type | Description
ReferenceList | List of Reference | The references, with optional citations, that were part of the response message.
Get Response Text

This microflow retrieves the Content of the latest assistant message over the Response_Message association. Use it to get the response text from the latest assistant response message. In many cases, this is the main value needed for further logic after the operation, or it is displayed to the end user. Note that the content can also be read directly from the Response's ResponseText attribute.

Input Parameters
Name | Type | Notes | Description
Response | Response | mandatory | The response object.
Return Value
Name | Type | Description
ResponseText | String | This is the string Content of the message with role assistant that was generated by the model as a response to a user message.

GenAI (Request Building, Expert)

Configure Stop Sequence

This microflow can be used to add an optional StopSequence to the request. It can be used after the request has been created. If available for the connector and model of choice, stop sequences let models know when to stop generating text.

Input Parameters
Name | Type | Notes | Description
Request | Request | mandatory | This is the request object that contains the functional input for the model to generate a response.
StopSequence | String | mandatory | This is the stop sequence string, which is used to make the model stop generating tokens at a desired point.
Return Value

This microflow does not have a return value.

Image Generation: Create ImageOptions

This microflow creates new ImageOptions.

Input Parameters
Name | Type | Notes | Description
Height | Integer/Long | optional | Sets the Height of the generated image.
Width | Integer/Long | optional | Sets the Width of the generated image.
NumberOfImages | Integer/Long | optional | Sets the NumberOfImages to create.
Return Value
Name | Type | Description
ImageOptions | ImageOptions | The newly created ImageOptions object.

GenAI Knowledge Base (Content)

The following microflows and Java actions help you construct the input structures for the operations for knowledge bases and embeddings as defined in GenAI Commons.

Chunks: Add Chunk to ChunkCollection

This microflow adds a new Chunk to the ChunkCollection.

Input Parameters
Name | Type | Notes | Description
InputText | String | mandatory | Input text to generate an embedding vector.
ChunkCollection | ChunkCollection | mandatory | The ChunkCollection to add the new Chunk to.
Return Value
Name | Type | Description
Chunk | Chunk | The added Chunk object.
Chunks: Add KnowledgeBaseChunk to ChunkCollection

This Java action adds a new KnowledgeBaseChunk to the ChunkCollection to create the input for embeddings or knowledge base operations. Optionally, a MetadataCollection can be added for more advanced filtering. Use Initialize MetadataCollection with Metadata to instantiate a MetadataCollection first, if needed.

Input Parameters
Name | Type | Notes | Description
ChunkCollection | ChunkCollection | mandatory | This is the ChunkCollection to which the KnowledgeBaseChunk will be added. This ChunkCollection is the input for other operations.
InputText | String | mandatory | Input text to generate an embedding vector.
HumanReadableID | String | mandatory | This is a front-end identifier that can be used for showing or retrieving sources in a custom way. If it is not relevant, "empty" must be passed explicitly here.
MxObject | Type parameter | optional | This parameter is used to capture the Mendix object to which the chunk refers. This can be used for finding the record in the Mendix database later on, after the retrieval step.
MetadataCollection | MetadataCollection | optional | This is an optional MetadataCollection that contains extra information about the KnowledgeBaseChunk. Any key-value pairs can be stored. In the retrieval operations, it is possible to filter on one or multiple metadata key-value pairs.
Return Value
Name | Type | Description
KnowledgeBaseChunk | KnowledgeBaseChunk | The added KnowledgeBaseChunk object.
Chunks: Initialize ChunkCollection

This microflow creates a new ChunkCollection and returns it.

Input Parameters

This microflow has no input parameters.

Return Value
Name | Type | Description
ChunkCollection | ChunkCollection | The newly created ChunkCollection object.
Embeddings: Create EmbeddingsOptions

This microflow creates new EmbeddingsOptions.

Input Parameters
Name | Type | Notes | Description
Dimensions | Integer/Long | optional | The number of dimensions the resulting output embedding vectors should have. See the connector documentation for supported values and models.
Return Value
Name | Type | Description
EmbeddingsOptions | EmbeddingsOptions | The newly created EmbeddingsOptions object.
Embeddings: Get First Vector from Response

This microflow gets the first embedding vector from the response of an embedding operation.

Input Parameters
Name | Type | Notes | Description
EmbeddingsResponse | EmbeddingsResponse | mandatory | The response object that is returned by the embeddings operations.
Return Value
Name | Type | Description
Vector | String | The first vector from the response.
Knowledge Base: Add Metadata to MetadataCollection

This microflow adds a new Metadata object to a given MetadataCollection. Use Initialize MetadataCollection with Metadata to instantiate a MetadataCollection first, if needed.

Input Parameters
Name | Type | Notes | Description
Key | String | mandatory | This is the name of the metadata and typically tells how the value should be interpreted.
Value | String | mandatory | This is the value of the metadata that provides additional information about the chunk in the context of the given key.
MetadataCollection | MetadataCollection | mandatory | The MetadataCollection to which the new Metadata object will be added.
Return Value

This microflow does not have a return value.

Knowledge Base: Initialize MetadataCollection with Metadata

This microflow creates a new MetadataCollection and adds a new Metadata. The MetadataCollection will be returned. To add additional Metadata, use Add Metadata to MetadataCollection.

Input Parameters
Name | Type | Notes | Description
Key | String | mandatory | This is the name of the metadata and typically tells how the value should be interpreted.
Value | String | mandatory | This is the value of the metadata that provides additional information about the chunk in the context of the given key.
Return Value
Name | Type | Description
MetadataCollection | MetadataCollection | The newly created MetadataCollection object.

Enumerations

ENUM_MessageRole

ENUM_MessageRole provides a list of message author roles.

Name | Caption | Description
user | User | A user message is the input from an end-user.
assistant | Assistant | An assistant message was generated by the model as a response to a user message.
system | System | A system message can be used to specify the assistant persona or give the model more guidance and context. This is typically specified by the developer to steer the model response.
tool | Tool | A tool message contains the return value of a tool call as its content. Additionally, a tool message has a ToolCallId that is used to map it to the corresponding previous assistant response, which provides the tool call input.

ENUM_MessageType

ENUM_MessageType provides a list of ways of interpreting a message object.

Name | Caption | Description
Text | Text | The message represents a normal message and contains text content in the Content attribute.
File | File | The message contains file data, and the files in the associated FileCollection should be taken into account.

ENUM_ContentType

ENUM_ContentType provides a list of possible file content types, which describe how the file data is encoded in the FileContent attribute on the FileContent object that is part of the Message.

Name | Caption | Description
URL | Url | The content of the file can be found at a (publicly available) URL, which is provided in the FileContent attribute.
Base64 | Base64 | The content of the file can be found as a base64-encoded string in the FileContent attribute.
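The Base64 case can be illustrated with standard-library Python (this is generic base64 handling, not module-specific code; the byte string below is a made-up stand-in for real file contents):

```python
import base64

# Pretend these are the raw bytes of an image file.
file_bytes = b"\x89PNG..."

# Encoding: raw bytes become an ASCII-safe string, as it would be carried
# in a FileContent attribute with content type Base64.
encoded = base64.b64encode(file_bytes).decode("ascii")

# Decoding: the receiver recovers the original bytes from the string.
decoded = base64.b64decode(encoded)
assert decoded == file_bytes
```

Base64 inflates the payload by roughly a third, which is why large files are often passed by URL instead.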

ENUM_FileType

ENUM_FileType provides a list of file types. Currently, only image and document are supported file types. Not all file types might be supported by all AI providers or models.

Name | Caption | Description
image | Image | The file represents an image (e.g. a .png file).
document | Document | The file represents a document (e.g. a .pdf file).

ENUM_ToolChoice

ENUM_ToolChoice provides a list of ways to control which (if any) tool is called by the model. Not all tool choices might be supported by all AI providers or models.

Name | Caption | Description
auto | Auto | The model can pick between generating a message or calling a function.
none | None | The model does not call a function and instead generates a message.
any | Any | Any function will be called. Not available for all providers and might be changed to auto.
tool | Tool | A particular tool needs to be called, namely the one specified over the ToolCollection_ToolChoice association.

ENUM_SourceType

ENUM_SourceType provides a list of source types, which describes how the pointer to the Source attribute on the Reference object should be interpreted to get the source location. Currently, only Url is supported.

Name | Caption | Description
Url | Url | The Source attribute contains the URL to the source on the internet.

ENUM_ImageGenerationType

ENUM_ImageGenerationType describes how the image generation operation is to be used. Currently, only text to image is supported.

Name | Caption | Description
TEXT_TO_IMAGE | TEXT_TO_IMAGE | The LLM will generate an image (or multiple images) based on a text description.

ENUM_ModelModality

ENUM_ModelModality describes the modalities that the model supports as input or output.

Name | Caption | Description
Text | Text | The model supports text.
Embeddings | Embeddings | The model supports embeddings.
Image | Image | The model supports images.
Document | Document | The model supports documents.
Audio | Audio | The model supports audio.
Video | Video | The model supports video.
Other | Other | The model supports another modality.

ENUM_ModelSupport

ENUM_ModelSupport describes whether the model supports certain functionality.

Name | Caption | Description
_True | True | The model supports the functionality.
_False | False | The model does not support the functionality.
Unknown | Unknown | The support is currently unknown.

Troubleshooting

This section lists possible solutions to known issues.

Outdated JDK Version Causing Errors while Calling a REST API

The Java Development Kit (JDK) is a framework needed by Mendix Studio Pro to deploy and run applications. For more information, see Studio Pro System Requirements. Usually, the correct JDK version is installed during the installation of Studio Pro, but in some cases, it may be outdated. An outdated version can cause exceptions when calling REST-based services with large data volumes, such as embeddings operations or chat completions with vision.

Mendix has seen the following two exceptions when using JDK versions below jdk-11.0.5.0-hotspot: java.net.SocketException - Connection reset or javax.net.ssl.SSLException - Received fatal alert: record_overflow.

To check your JDK version and update it if necessary, follow these steps:

  1. Check your JDK version – In Studio Pro, go to Edit > Preferences > Deployment > JDK directory. If the path points to a version below jdk-11.0.5.0-hotspot, you need to update the JDK by following the next steps.
  2. Go to Eclipse Temurin JDK 11 and download the .msi file of the latest release of JDK 11.
  3. Open the downloaded file and follow the installation steps. Remember the installation path. Usually, this should be something like C:/Program Files/Eclipse Adoptium/jdk-11.0.22.7-hotspot.
  4. After the installation has finished, restart your computer if prompted.
  5. Open Studio Pro and go to Edit > Preferences > Deployment > JDK directory. Click Browse and select the folder with the new JDK version you just installed. This should be the folder containing the bin folder. Save your settings by clicking OK.
  6. Run the project and execute the action that threw the above-mentioned exception earlier.
    1. You might get an error saying FAILURE: Build failed with an exception. The supplied javaHome seems to be invalid. I cannot find the java executable. In this case, verify that you have selected the correct JDK directory containing the updated JDK version.
    2. You may also need to update Gradle. To do this, go to Edit > Preferences > Deployment > Gradle directory. Click Browse and select the appropriate Gradle version from the Mendix folder. For Mendix 10.10 and above, use Gradle 8.5. For Mendix 10 versions below 10.10, use Gradle 7.6.3. Then save your settings by clicking OK.
    3. Rerun the project.
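The version comparison in step 1 can be sketched as follows (a hypothetical helper, not part of Studio Pro; it handles plain dotted numeric versions only, so a suffix like -hotspot would have to be stripped first):

```python
def jdk_needs_update(version, minimum="11.0.5"):
    # Compare dotted version strings numerically, part by part, so that
    # e.g. 11.0.22 correctly ranks above 11.0.5 (string comparison would not).
    parts = [int(p) for p in version.split(".")]
    min_parts = [int(p) for p in minimum.split(".")]
    return parts < min_parts

print(jdk_needs_update("11.0.2"))     # older than 11.0.5 -> True
print(jdk_needs_update("11.0.22.7"))  # newer -> False
```

The key point is that version components must be compared as numbers; a plain lexical comparison would wrongly treat 11.0.22 as older than 11.0.5.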

Migration from Add-On module to App module

With version 3.0.0, the module changed from an add-on module to an app module. If you are updating the module from the Marketplace, a migration is needed for it to work properly with your application.

The process may look like this:

  1. Back up your data, either as a database backup or individually:
    • Incoming associations to the protected module's entities will be deleted.
    • Usage data will be lost, but can be exported in the ConversationalUI module via the Token Consumption Monitor snippets.
  2. Delete the add-on module GenAICommons.
  3. Download the module from the Marketplace; note that the module is now located under the "Marketplace modules" category in the App Explorer.
  4. Test your application locally and verify that everything works as before.
  5. Restore the lost data on deployed environments. Usually, incoming associations to the protected modules need to be reset.

Conflicted Lib Error After Module Import

If you encounter an error caused by conflicting Java libraries, such as java.lang.NoSuchMethodError: 'com.fasterxml.jackson.annotation.OptBoolean com.fasterxml.jackson.annotation.JsonProperty.isRequired()', try synchronizing all dependencies (App > Synchronize dependencies) and then restart your application.