# Tuning JVM Memory Settings for a Mendix App Running in Mendix on Kubernetes

## Introduction
The Mendix Runtime, and therefore any Mendix app, runs in a Java Virtual Machine (JVM). Most JVMs (including the one used in Mendix on Kubernetes) are based on the HotSpot Java VM. The HotSpot VM uses a garbage collector to free unused memory. However, running the garbage collector has a negative performance impact, so the HotSpot VM tries to run garbage collection only when necessary (for example, when memory usage reaches a certain threshold).
## Memory Tuning
In most cases, Mendix on Kubernetes does not need advanced memory tuning. Starting with Operator version 2.26, the default memory settings have been aligned with Mendix Cloud and are based on typical Mendix app usage patterns.
### Types of Memory

In the HotSpot VM, memory is split into two regions: Heap and Non-Heap.
- Heap memory is used to store temporary data, including microflow variables, entities, and internal states of the Mendix Runtime and third-party libraries.
- Non-Heap memory stores code, internal Java VM data and data from non-Java components, such as the BAPI Connector or machine learning modules from ML Kit.
There is a fixed boundary between Heap and Non-Heap memory. Any memory reserved for the heap cannot be used for any other purpose. All remaining memory is assigned to the Non-Heap region.

It is not possible to dynamically adjust the sizes of the Heap and Non-Heap regions of a running app. These sizes must be specified before the app starts.
### Default Memory Allocation
Starting with Operator version 2.26, the default memory settings use the following rules:
| Container memory limit | JVM non-heap size |
|---|---|
| Less than 2 GB | Half of the limit (but at least 300 MB) |
| Between 2 and 4 GB | 1 GB |
| Between 4 and 8 GB | 1.5 GB |
| Between 8 and 16 GB | 2 GB |
| Between 16 and 32 GB | 3 GB |
| 32 GB or more | 4 GB |
All other memory is allocated to the heap.
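The rules in the table above can be sketched as a small calculation. This is an illustrative sketch, not actual Operator code; the function name, the MB-based arithmetic, and the handling of values exactly on a boundary (2, 4, 8, 16, or 32 GB) are assumptions:

```python
# Illustrative sketch of the default memory allocation rules (not Operator code).
# Boundary values (exactly 2/4/8/16/32 GB) are assumed to fall into the row
# that lists them as a lower bound.
GB = 1024  # all sizes in MB

def default_jvm_split(container_limit_mb: int) -> tuple[int, int]:
    """Return (non_heap_mb, heap_mb) for a given container memory limit."""
    if container_limit_mb < 2 * GB:
        non_heap = max(container_limit_mb // 2, 300)
    elif container_limit_mb < 4 * GB:
        non_heap = 1 * GB
    elif container_limit_mb < 8 * GB:
        non_heap = int(1.5 * GB)
    elif container_limit_mb < 16 * GB:
        non_heap = 2 * GB
    elif container_limit_mb < 32 * GB:
        non_heap = 3 * GB
    else:
        non_heap = 4 * GB
    # All remaining memory is allocated to the heap
    return non_heap, container_limit_mb - non_heap
```

For example, under these rules a 1 GB container gets 512 MB non-heap and 512 MB heap, while a 512 MB container gets 300 MB non-heap (the minimum) and 212 MB heap.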
### Metrics-Based Tuning
When viewing standard Kubernetes metrics (for example, from `kubectl top pods` or data collected by kube-state-metrics), the total memory usage does not reflect internal details. For example, the JVM Heap might be underutilized, but it still shows as used memory in the container memory usage. To get a clear picture of how memory is allocated and used, use Prometheus metrics. For examples of how to read JVM memory usage graphs, see Java Memory Usage.
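As a sketch, assuming the app exposes Micrometer-style JVM metrics to Prometheus (the exact metric and label names, including the `container="mendix"` selector, depend on your metrics setup and may differ):

```promql
# Heap actually used by the JVM (assumed Micrometer metric/label names):
sum(jvm_memory_used_bytes{area="heap"}) by (pod)

# Configured heap ceiling, for comparison:
sum(jvm_memory_max_bytes{area="heap"}) by (pod)

# Container-level usage as Kubernetes sees it (cAdvisor metric):
container_memory_working_set_bytes{container="mendix"}
```

Comparing the first query against the third shows how much of the container's apparent memory usage is heap that the JVM has reserved but may not actually need.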
## Adjusting Memory in Mendix on Kubernetes
In a Standalone environment, the JVM heap size can be adjusted in the jvmMemorySettings section of the MendixApp CR.
For example:

```yaml
apiVersion: privatecloud.mendix.com/v1alpha1
kind: MendixApp
metadata:
  # ...
  # omitted lines for brevity
  # ...
spec:
  # ...
  # omitted lines for brevity
  # ...
  # Add or update this section:
  jvmMemorySettings:
    heapLimit: 1700Mi # example: set heap limit to 1700 megabytes
```

Removing the `jvmMemorySettings` section will switch back to the default memory allocation settings.
Starting with Operator version 2.26, the JVM heap size should be adjusted as documented in this section. If you have an environment using any of the following custom JVM options, they will not be applied to the app configuration and will instead be removed:
- `-Xms` - minimal heap size
- `-Xmx` - heap size limit
- `-XX:MinRAMPercentage` - heap size limit, as a percentage, for environments with less than 250 MB of container memory
- `-XX:MaxRAMPercentage` - heap size limit, as a percentage, for environments with more than 250 MB of container memory
- `-XX:InitialRAMPercentage` - minimal heap size, as a percentage
## Addressing Memory Issues
On a typical desktop system, and in many server environments, reaching or exceeding the memory limit is not a problem: the operating system moves some of the memory contents to a storage device such as an SSD or hard drive. However, this swapping causes a major performance penalty, and it is disabled by default in Kubernetes.
When an app tries to use more memory than its container memory limit, it is terminated by Kubernetes (OOMKilled) and effectively crashes. To prevent performance degradation and crashes, the app's memory settings might need to be adjusted.
### Addressing Out Of Memory Crashes
An OOMKilled (Out Of Memory-killed) event happens when an app attempts to use more memory than specified in its container limit.

This shows as a container restarting with an OOMKilled exit code.

To address this issue, choose one of the following options:
- Recommended fix: increase the core resource memory (requests and limits).
- Alternatively, if the Prometheus metrics show that the heap memory usage is low, decrease the heap memory size.
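For the recommended fix, the container memory is raised in the environment's resource configuration. As a sketch only, the field path below is an assumption; check the MendixApp CR reference for your Operator version:

```yaml
# Sketch: the exact field path for container resources is an assumption.
spec:
  resources:
    requests:
      memory: 2Gi
    limits:
      memory: 2Gi
```

Keeping the memory request equal to the limit gives the pod a Guaranteed-style memory allocation, which avoids eviction pressure from overcommitted nodes.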
### Addressing java.lang.OutOfMemoryError Exceptions
A `java.lang.OutOfMemoryError` exception happens when the JVM heap is full and does not have enough available memory to perform an action (such as running a microflow or a Java action).

This shows as a `java.lang.OutOfMemoryError: Java heap space` error message in the logs.
To address this issue, choose one of the following options:
- Recommended fix: increase the core resource memory (requests and limits).
- Alternatively, if the container has a lot of free, unused memory, increase the heap memory size.