Mendix Cloud GenAI Connector
Introduction
The Mendix Cloud GenAI connector (delivered as part of GenAI for Mendix) lets you utilize Mendix Cloud GenAI resource packs directly within your Mendix application. It allows you to integrate generative AI by dragging and dropping common operations from its toolbox.
Typical Use Cases
The Mendix Cloud GenAI Connector is commonly used for text generation, embeddings, and knowledge bases. These use cases are described in more detail below:
Text Generation
- Develop interactive AI chatbots and virtual assistants that can carry out conversations naturally and engagingly.
- Use state-of-the-art large language models (LLMs) by providers like Anthropic for text comprehension and analysis use cases such as summarization, synthesis, and answering questions about large amounts of text.
- By using text generation models, you can build applications with features such as:
- Draft documents
- Write computer code
- Answer questions about a knowledge base
- Analyze texts
- Give the software a natural language interface
- Tutor in a range of subjects
- Translate languages
- Simulate characters for games
- Image to text
Knowledge Base
The module enables tailoring generated responses to specific contexts by grounding them in data inside a collection belonging to a Mendix Cloud GenAI knowledge base resource. This allows for the secure use of private company data or other non-public information when interacting with GenAI models within the Mendix app. It provides a low-code solution to store discrete data (commonly called chunks) in the knowledge base and to retrieve relevant information for end-user actions or application processes.
Knowledge bases are often used for:
- Retrieval Augmented Generation (RAG) retrieves relevant knowledge from the knowledge base, incorporates it into a prompt, and sends it to the model to generate a response (see the sketch after this list).
- Semantic search enables advanced search capabilities by considering the semantic meaning of the text, going beyond exact and approximate matching. It allows the knowledge base to be searched for similar chunks effectively.
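To make the Retrieval Augmented Generation flow above more concrete, the following is a minimal sketch in Python. The embed, search, and generate callables are hypothetical placeholders for the embedding, knowledge base retrieval, and chat completion steps; the connector's own microflow actions implement this flow for you.

```python
from typing import Callable, List

# Conceptual RAG sketch (not connector code). The callables are hypothetical
# stand-ins for the embedding, knowledge base retrieval, and generation steps.
def retrieve_and_generate(
    question: str,
    embed: Callable[[str], List[float]],
    search: Callable[[List[float], int], List[str]],
    generate: Callable[[str, str], str],
    top_k: int = 5,
) -> str:
    query_vector = embed(question)            # 1. embed the user prompt
    chunks = search(query_vector, top_k)      # 2. find the most similar chunks
    context = "\n\n".join(chunks)
    system_prompt = (
        "Answer using only the context below and cite your sources.\n\n" + context
    )
    return generate(system_prompt, question)  # 3. generate a grounded response
```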
Embeddings
Convert strings into vector embeddings for various purposes based on the relatedness of texts.
Embeddings are commonly used for the following:
- Search
- Clustering
- Recommendations
- Anomaly detection
- Diversity measurement
- Classification
You can combine embeddings with text generation capabilities and leverage specific sources of information to create a smart chat functionality tailored to your knowledge base.
Features
In the current version, Mendix supports text generation (including function/tool calling, chat with images, and chat with documents), vector embedding generation, knowledge base storage, and retrieval of knowledge base chunks.
Prerequisites
To use this connector, you need configuration keys to authenticate to the Mendix Cloud GenAI services. You can generate keys in the developer portal, or ask someone with access to either generate them for you or add you to the team so that you can generate keys yourself.
Dependencies
- Mendix Studio Pro version 9.24.2 or above
- Encryption
- Community Commons
Installation
Add the Dependencies listed above from the Marketplace. On the Marketplace, the Mendix Cloud GenAI connector is bundled inside GenAI for Mendix, which also contains GenAI Commons operations and logic. To import this module into your app, follow the instructions in Use Marketplace Content.
Configuration
After installing the Mendix Cloud GenAI connector, you can find it in the App Explorer, in the Add-ons section. The connector includes a domain model and several activities to help integrate your app with the Mendix Cloud GenAI service. To implement the connector, use its actions in a microflow; you can find the Mendix GenAI actions in the microflow toolbox. Note that the module is protected, meaning it cannot be modified and its microflow logic is not visible. For details about each exposed operation, see the Operations section below or refer to the documentation provided within the module. For more information on Add-on modules, see Consuming Add-on Modules and Solutions.
Follow the steps below to get started:
- Make sure to configure the Encryption module before you connect your app to Mendix Cloud GenAI.
- Add the module role MxGenAIConnector.Administrator to your Administrator user role in the Security settings of your app.
- Add the NAV_ConfigurationOverview_Open microflow (USE_ME > Configuration) to your Navigation, or register your key using the Configuration_RegisterByString microflow.
- Complete the runtime setup of the Mendix Cloud GenAI configuration by navigating to the page through the microflow mentioned above. Import a key generated in the portal or provided to you and click Test Key to validate its functionality.
Operations
Configuration keys are stored persistently after they are imported (either via the UI or the exposed microflow). There are three different types of configurations that reflect the use cases this service supports. The specific operations are described below.
To use the operations, either a DeployedModel (text, embeddings) or a MxKnowledgebaseConnection must always be passed as input. The DeployedModel will be created automatically when importing keys at runtime and needs to be retrieved from the database. To initialize a knowledge base operation, use the Connection: Get toolbox action to create the MxKnowledgebaseConnection object. It requires a CollectionName (string) for the right collection inside the knowledge base resource to be used.
Chat Completions Operation
After following the general setup above, you are ready to use the chat completions microflows in the GenAICommons and MxGenAIConnector modules. You can find Chat Completions (without history) and Chat Completions (with history) in the Text & Files folder of GenAICommons, and Retrieve and Generate (MxCloud, without history) inside the USE_ME > RetrieveAndGenerate folder of the MxGenAIConnector module. The chat completions microflows are also exposed as microflow actions under the GenAI (Generate) category in the Toolbox.
These microflows expect a DeployedModel as input to determine the connection details.
In chat completions, system prompts and user prompts are two key components that help guide the language model in generating relevant and contextually appropriate responses. For more information on prompt engineering, see the Read More section. Different exposed microflow activities may require different prompts and logic for how the prompts must be passed, as described in the following sections. For more information on message roles, see the ENUM_MessageRole enumeration in GenAI Commons.
Apart from Retrieve and Generate (MxCloud, without history), the chat completions operations support Function Calling, Vision, and Document Chat.
For more inspiration or guidance on how to use the above-mentioned microflows in your logic, Mendix recommends downloading the GenAI Showcase App, which demonstrates a variety of examples.
Chat Completions (without History)
The microflow activity Chat Completions (without history) supports scenarios where there is no need to send a list of (historic) messages comprising the conversation so far as part of the request.
Chat Completions (with History)
The microflow activity Chat completions (with history) supports more complex use cases where a list of (historical) messages (for example, the conversation or context so far) is sent as part of the request to the LLM.
Chat Completions (Retrieve & Generate)
The microflow activity Retrieve and Generate (MxCloud, without history) simplifies Retrieve and Generate use cases without history. When you provide a user prompt, the knowledge base is searched for similar knowledge chunks, which are then passed to the model. The model is instructed to base its response on the retrieved knowledge while referring to the source used to generate the response. This operation requires a Request which is associated to a RetrieveAndGenerateRequest_Extension pointing to a MxKnowledgebaseConnection object. Use the flow shown below as orientation when setting up your logic to make sure that everything is implemented as required:
A SystemPrompt can be provided through the Request, and other filter options can be set when initializing the RetrieveAndGenerateRequest_Extension (for example, through metadata).
The returned Response includes References if the model used them to generate its response. In some cases, a knowledge chunk consists of two texts: one for the semantic search step and another for the generation step. For example, when solving a problem based on historical solutions, the semantic search identifies similar problems using their descriptions, while the generation step produces a solution based on the corresponding historical solutions. In those cases, you can add MetaData with the key knowledge to the chunks during the insertion stage, allowing the model to base its response on the specified metadata rather than the input text.
Additionally, to utilize the Source attribute of the references, you can include MetaData with the key sourceUrl. Finally, the HumanReadableId of a chunk is used to display the reference’s title in the response.
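Purely as an illustration of the fields discussed above, the sketch below shows how a chunk, its metadata keys, and its HumanReadableId relate to each other. The dictionary layout and key names other than knowledge, sourceUrl, and HumanReadableId are assumptions; in the connector you set these values with the GenAI Commons chunk and metadata operations, not with Python dictionaries.

```python
# Illustrative only: a plain-Python view of the chunk fields described above.
chunk = {
    "HumanReadableId": "TICKET-1042",  # used as the reference title in the response
    "input_text": "Printer shows error E05 after firmware update.",  # text used for the semantic search step
    "MetaData": {
        "knowledge": "Resolved by rolling back to firmware 2.3 and clearing the print queue.",  # used for the generation step
        "sourceUrl": "https://example.com/tickets/1042",  # fills the Source attribute of the reference
    },
}
```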
Function Calling
Function calling enables LLMs to connect with external tools to gather information, execute actions, convert natural language into structured data, and much more. Function calling thus enables the model to intelligently decide when to let the Mendix app call one or more predefined function microflows to gather additional information to include in the assistant’s response.
The model does not call the function itself but rather returns a tool call as a JSON structure that is used to build the input of the function (or functions) so that they can be executed as part of the chat completions operation. Functions in Mendix are essentially microflows that can be registered within the request to the LLM. The connector takes care of handling the tool call response and executing the function microflows until the API returns the assistant’s final response.
Function microflows take a single input parameter of type string or no input parameter and must return a string. Currently, adding a ToolChoice for function calling is not supported by the Mendix Cloud GenAI Connector.
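The connector runs the tool-call loop for you, but a conceptual sketch may help clarify what happens under the hood. In this hypothetical Python version, call_model and the entries in functions are stand-ins for the LLM request and the registered function microflows (which accept at most one string parameter and return a string); the message format is illustrative only.

```python
from typing import Callable, Dict, Optional

# Conceptual sketch of the tool-call loop that the connector handles internally.
def chat_with_tools(
    user_prompt: str,
    call_model: Callable[[list], dict],
    functions: Dict[str, Callable[[Optional[str]], str]],
) -> str:
    messages = [{"role": "user", "content": user_prompt}]
    while True:
        reply = call_model(messages)                      # model decides: answer or tool call
        if reply.get("tool_call") is None:
            return reply["content"]                       # final assistant response
        name = reply["tool_call"]["name"]
        argument = reply["tool_call"].get("argument")     # single string argument (or none)
        result = functions[name](argument)                # run the registered function
        messages.append({"role": "tool", "name": name, "content": result})
```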
Function calling is a highly effective capability and should be used with caution. Function microflows run in the context of the current user, without enforcing entity access. You can use $currentUser in XPath queries to ensure that you retrieve and return only information that the end-user is allowed to view; otherwise, confidential information may become visible to the current end-user in the assistant’s response.
Mendix also strongly advises that you build user confirmation logic into function microflows that have a potential impact on the world on behalf of the end-user. Some examples of such microflows include sending an email, posting online, or making a purchase.
You can use function calling in all chat completions operations by adding a ToolCollection with a Function via the Tools: Add Function to Request operation.
For more information, see Function Calling.
Vision
Vision enables the model to interpret and analyze images, allowing it to answer questions and perform tasks related to visual content. This integration of computer vision and language processing enhances the model’s comprehension and makes it valuable for tasks involving visual information. To use vision with the connector, an optional FileCollection containing one or more images must be sent with a single message.
For Chat Completions (without history), OptionalFileCollection is an optional input parameter. For Chat Completions (with history), a FileCollection can optionally be added to individual user messages using Add Message to Request.
In the entire conversation, you can pass up to 20 images, each smaller than 3.75 MB and with a maximum height and width of 8000 pixels. The following types are accepted: PNG, JPEG, JPG, GIF, and WebP.
Document Chat
Document chat enables the model to interpret and analyze documents, such as PDFs or Excel files, allowing it to answer questions and perform tasks related to their content. To use document chat, an optional FileCollection containing one or more documents must be sent along with a single message.
For Chat Completions (without history), OptionalFileCollection is an optional input parameter. For Chat Completions (with history), a FileCollection can optionally be added to individual user messages using Add Message to Request.
In the entire conversation, you can pass up to five documents that are smaller than 4.5 MB each. The following file types are accepted: PDF, CSV, DOC, DOCX, XLS, XLSX, HTML, TXT, and MD.
When adding a document to the FileCollection, you can optionally use the TextContent parameter to pass the file name. Ensure the file name excludes its extension before passing it to the file collection.
Note that the model uses the file name when analyzing documents, which could make it vulnerable to prompt injection. Depending on your use case, you may choose to modify the string or not pass it at all.
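As a small, hypothetical illustration of the two notes above (stripping the extension and optionally sanitizing the name against prompt injection), the Python sketch below shows one possible cleaning step. The cleaning rules are assumptions to adapt to your use case; in a Mendix app you would typically do this with string handling in a microflow instead.

```python
import re

# Illustrative only: drop the extension and keep only harmless characters
# before passing the file name as TextContent.
def clean_file_name(file_name: str) -> str:
    name = file_name.rsplit(".", 1)[0]      # "report.pdf" -> "report"
    return re.sub(r"[^\w\s-]", "", name)    # remove characters that could carry injected instructions
```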
Knowledge Base Operations
To implement knowledge base logic into your Mendix application, you can use the actions in the USE_ME > Knowledge Base folder or under the GenAI Knowledge Base (Content) or Mendix Cloud Knowledge Base categories in the Toolbox. These actions require a specialized Connection of type MxKnowledgeBaseConnection that determines the model and endpoint to use. Additionally, the collection name must be passed when creating the object, and it must be associated with a Configuration object. Please note that for Mendix Cloud, a knowledge base resource may contain several collections (tables).
Dealing with knowledge bases involves two main stages: inserting knowledge into the knowledge base and retrieving relevant knowledge from it, as described in the sections below.
You do not need to manually add embeddings to a chunk, as the connector handles this internally. To see all existing knowledge bases for a configuration, go to the Knowledge Base tab on the Mendix Cloud GenAI Configuration page and refresh the view on the right. Alternatively, use the Get Collections action to retrieve a synchronized list of the collections inside your knowledge base resource to include in your module. Lastly, you can delete a collection using the Delete Collection action.
Knowledge Base Insertion
Data Chunks
To add data to the knowledge base, divide your information into discrete pieces and create a knowledge base chunk for each one. Use the GenAICommons operations to first initialize a ChunkCollection object, and then add a KnowledgeBaseChunk object to it for each piece of information. Both can be found in the Toolbox inside the GenAI Knowledge Base (Content) category.
Chunking Strategy
Dividing data into chunks is crucial for model accuracy, as it helps optimize the relevance of the content. The best chunking strategy is to keep a balance between reducing noise by keeping chunks small and retaining enough content within a chunk to get relevant results. Creating overlapping chunks can help preserve more context while maintaining a fixed chunk size. It is recommended to experiment with different chunking strategies to decide the best strategy for your data. In general, if chunks are logical and meaningful to humans, they will also make sense to the model. A chunk size of approximately 1500 characters with overlapping chunks has been proven to be effective for longer texts in the past.
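As a rough sketch of the overlapping chunking strategy described above, the following Python function splits a text into fixed-size chunks with overlap. The sizes are assumptions based on the ~1500-character guideline and should be tuned for your own data; in practice you may also want to split on sentence or paragraph boundaries.

```python
from typing import List

# Minimal overlapping chunker: consecutive chunks share `overlap` characters.
def chunk_text(text: str, chunk_size: int = 1500, overlap: int = 200) -> List[str]:
    chunks = []
    step = chunk_size - overlap
    for start in range(0, len(text), step):
        chunk = text[start:start + chunk_size]
        if chunk:
            chunks.append(chunk)
    return chunks
```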
The chunk collection can then be stored in the knowledge base using one of the following operations:
Add Data Chunks to Your Knowledge Base
Use the following toolbox actions inside the Mendix Cloud Knowledge Base toolbox category to populate knowledge data into the knowledge base:
- Embed & Insert embeds a list of chunks (passed via a ChunkCollection) and inserts them into the knowledge base.
- Embed & repopulate KB is similar to Embed & Insert, but deletes all existing chunks from the knowledge base before inserting the new chunks.
- Embed & Replace replaces existing chunks in the knowledge base that match the associated Mendix object which was passed via the Add KnowledgeBaseChunk to ChunkCollection action at the insertion stage.
Additionally, use the following toolbox actions to delete chunks:
- Delete for Object deletes from the collection all chunks (and related metadata) that were associated with a passed Mendix object at the insertion stage.
- Delete for List is similar to Delete for Object, but a list of Mendix objects is passed instead.
When data in your Mendix app that is relevant to the knowledge base changes, it is usually necessary to keep the knowledge base chunks in sync. Whenever a Mendix object changes, the affected chunks must be updated. Depending on your use case, the Embed & Replace and Delete for Object operations can be conveniently used in event handler microflows.
The example below shows how to repopulate a knowledge base using a list of Mendix objects:
Knowledge Base Retrieval
The following toolbox actions can be used to retrieve knowledge data from the knowledge base (and associate it with your Mendix data):
- Retrieve retrieves knowledge base chunks from the knowledge base. You can use pagination via the Offset and MaxNumberOfResults parameters or apply filtering via a MetadataCollection or MxObject. (Scroll down to see all available input parameters of this operation.)
- Retrieve & Associate is similar to Retrieve, but associates the returned chunks with a Mendix object if they were linked at the insertion stage. You must define your own entity specialized from KnowledgeBaseChunk, which is associated with the entity that was used to pass a Mendix object during the insertion stage.
- Embed & Retrieve Nearest Neighbors retrieves a list of type KnowledgeBaseChunk from the knowledge base that are most similar to a given Content by calculating the cosine similarity of their vectors (see the sketch after this list).
- Embed & Retrieve Nearest Neighbors & Associate combines the actions Retrieve & Associate and Embed & Retrieve Nearest Neighbors described above.
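The sketch below illustrates the cosine similarity measure behind the nearest-neighbor actions listed above. It is a conceptual Python example only; the knowledge base performs this search for you, and the chunk representation used here is an assumption.

```python
import math
from typing import List

# Cosine similarity between two embedding vectors.
def cosine_similarity(a: List[float], b: List[float]) -> float:
    dot = sum(x * y for x, y in zip(a, b))
    norm_a = math.sqrt(sum(x * x for x in a))
    norm_b = math.sqrt(sum(y * y for y in b))
    return dot / (norm_a * norm_b) if norm_a and norm_b else 0.0

# Return the k chunks whose vectors are most similar to the query vector.
def nearest_neighbors(query: List[float], chunks: List[dict], k: int = 5) -> List[dict]:
    return sorted(chunks, key=lambda c: cosine_similarity(query, c["vector"]), reverse=True)[:k]
```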
Embedding Operations
If you are working directly with embedding vectors for specific use cases that do not involve knowledge base interaction (for example, clustering or classification), the operations below are relevant. For practical examples and guidance, consider referring to the GenAI Showcase App to see how these embedding-only operations can be used.
To implement embeddings into your Mendix application, you can use the microflows in the Knowledge Bases & Embeddings folder inside of the GenAICommons module. Both microflows for embeddings are exposed as microflow actions under the GenAI (Generate) category in the Toolbox in Mendix Studio Pro.
These microflows require a DeployedModel that determines the model and endpoint to use. Depending on the selected operation, an InputText string or a ChunkCollection needs to be provided.
Embeddings (String)
The microflow activity Generate Embeddings (String) supports scenarios where the vector embedding of a single string must be generated. This input string can be passed directly as the TextInput parameter of this microflow. Note that the parameter EmbeddingsOptions is optional. Use the exposed microflow Embeddings: Get First Vector from Response to retrieve the generated embeddings vector.
Embeddings (ChunkCollection)
The microflow activity Generate Embeddings (ChunkCollection) supports the more complex scenario where a collection of Chunk objects is vectorized in a single API call, such as when converting a collection of text strings (chunks) from a private knowledge base into embeddings. Instead of calling the API for each string, executing a single call for a list of strings can significantly reduce HTTP overhead. The embedding vectors returned after a successful API call will be stored as an EmbeddingVector attribute in the same Chunk object. Use the exposed GenAI Commons microflows Chunks: Initialize ChunkCollection, Chunks: Add Chunk to ChunkCollection, or Chunks: Add KnowledgeBaseChunk to ChunkCollection to construct the input.
To create embeddings, it does not matter whether the ChunkCollection contains Chunks or its specialization KnowledgeBaseChunks. Note that the knowledge base operations handle the embedding generation themselves internally.
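The following sketch illustrates why the batched operation is preferable when many chunks must be embedded: a single call replaces many HTTP round trips. The embed_batch callable and the chunk dictionaries are hypothetical placeholders, not connector APIs.

```python
from typing import Callable, List

# One batched request for the whole collection instead of len(chunks) requests.
def embed_chunks(
    chunks: List[dict],
    embed_batch: Callable[[List[str]], List[List[float]]],
) -> None:
    vectors = embed_batch([chunk["text"] for chunk in chunks])
    for chunk, vector in zip(chunks, vectors):
        chunk["vector"] = vector  # analogous to the EmbeddingVector attribute on Chunk
```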
Technical Reference
The module includes technical reference documentation for the available entities, enumerations, activities, and other items you can use in your application. You can view the information about each object in context by using the Documentation pane in Studio Pro.
The Documentation pane displays the documentation for the currently selected element. To view it, perform the following steps:
- In the View menu of Studio Pro, select Documentation.
- Click the element for which you want to view the documentation.
Implementing GenAI with the Showcase App
For more inspiration or guidance on how to use microflows in your logic, Mendix recommends downloading the GenAI Showcase App, which demonstrates a variety of example use cases and applies almost all of the Mendix Cloud GenAI operations. The starter apps in the Mendix Components list can also be used as inspiration or simply adapted for a specific use case.
Read More
For Anthropic Claude-specific documentation, refer to: