To allow you to manage the deployment of your apps to Red Hat OpenShift and Kubernetes, you first need to create a cluster and add at least one namespace in the Mendix Portal. This will provide you with the information you need to deploy the Mendix Operator and Mendix Gateway Agent in your OpenShift or Kubernetes context and create a link to the Environments pages of your Mendix app through the Interactor.
This document explains how to set up the cluster in Mendix.
Once you have created your namespace, you can invite additional team members who can then create or view environments in which their apps are deployed, depending on the rights you give them. For more information on the relationship between Mendix environments, Kubernetes namespaces, and Kubernetes clusters, see Containerized Mendix App Architecture, below.
To create a cluster in your OpenShift context, you need the following:
A supported Kubernetes platform; for more information, see Supported Versions
An administration account for your OpenShift or Kubernetes platform
OpenShift CLI installed (see Getting started with the CLI on the Red Hat OpenShift website for more information) if you are creating clusters on OpenShift
Kubectl installed if you are deploying to another Kubernetes platform (see Install and Set Up kubectl on the Kubernetes website for more information)
A command line terminal that supports the console API and mouse interactions. In Windows, this could be PowerShell or the Windows Command Prompt. See Terminal limitations, below, for a more detailed explanation.
Connected Environments
If you are using a connected environment, safelist the following URLs in your cluster's operating system, as these URLs point to services or resources required by the Connected Environments infrastructure.
All services listed in the table below use the HTTPS protocol (port 443).
Select Mendix for Private Cloud from the top menu bar in the Mendix Portal.
Click Register Cluster.
Enter the following information:
Installation Type – Choose Global Installation if you want a single Operator namespace to manage multiple namespaces, or Namespace Installation if you want the Operator to only manage one namespace. For more information, see Global Operator.
Cluster Name – The name that you want to give the cluster which you are creating.
Cluster Type – Choose the correct type for your cluster. For more information, see Supported Providers.
Description – An optional description of the cluster which will be displayed under the cluster name in the Cluster Manager.
Click Create.
Adding a Namespace for Connected Cluster
You now need to add a namespace to your cluster. Your cluster can contain several namespaces; for more information, see Containerized Mendix App Architecture, below.
To add a namespace, do the following:
Click Details in the top right of the page.
Click Add Namespace:
Enter the following details:
Namespace – this is the namespace in your platform; this must conform to the namespace naming conventions of the cluster: all lower-case with hyphens allowed within the name
Installation type – if you want to create environments and deploy your app from the Mendix Portal, choose Connected, but if you only want to control your deployments through the Mendix Operator using the CLI, choose Standalone
If you would like to add a namespace to a Standalone cluster, do the following:
Click Details in the top right of the page.
Click Add Namespace.
Enter the following details:
Namespace – This is the namespace in your platform; this must conform to the namespace naming conventions of the cluster: all lower-case with hyphens allowed within the name.
Installation type – Choose Standalone.
Click Next.
After you click Next, you are redirected to the Installation pop-up page, where you can download the mxpc-cli and copy the command to install the namespace in the cluster.
For existing namespaces, if you would like to download the mxpc-cli executables, you can go here.
On that page, the JSON format provides the download links for the different available versions of mxpc-cli.
Installing and Configuring the Mendix Operator
You can install and run the Mendix Operator in either Global or Standard mode. In Global mode, the Operator is installed once for all available namespaces, whereas in Standard mode, it is installed separately for each namespace where a Mendix app is deployed. For more information, see:
Licensing the Application with Private Cloud License Manager
You can license the Operator and Runtime of your application by configuring the Operator configuration with License Manager details. To start using Private Cloud License Manager, you first need to download the PCLM executable, which is available on the Installation page. For more information, see Private Cloud License Manager.
In order to configure PCLM, make sure that the Operator version is 2.11.0 or above.
In the context of the Global Operator, it is necessary to configure both the managed namespace and the Global Operator namespace with the License Manager details. For more information, see Private Cloud License Manager.
Advanced Operator Configuration
Before updating the Operator with the advanced configurations, make sure to go through the Introduction to Operators which explains how Operators work in Mendix on Kubernetes.
For Global Operator scenarios, if the Operator configuration in the managed namespace differs from the configuration in the Global Operator namespace, the configuration from the managed namespace will always take precedence.
Some advanced configuration options of the Mendix Operator are not yet available in the Configuration Tool.
These options can be changed by editing the OperatorConfiguration custom resource directly in Kubernetes.
Look at Supported Providers to ensure that your planned configuration is supported by Mendix on Kubernetes.
To start editing the OperatorConfiguration, use the following commands (replace {namespace} with the namespace where the operator is installed):
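A minimal sketch of those commands, assuming the OperatorConfiguration resource is named mendix-operator-configuration (as in the manifest section later in this document):

For OpenShift:
oc -n {namespace} edit operatorconfiguration mendix-operator-configuration
For Kubernetes:
kubectl -n {namespace} edit operatorconfiguration mendix-operator-configuration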
Changing options which are not documented here can cause the Mendix Operator to configure environments incorrectly. Mendix recommends making a backup before applying any changes.
Runtime Base Image
Starting from version 2.15.0, the OperatorConfiguration allows you to specify the base OS image tag template.
The Operator will parse the MDA file metadata and use this metadata to fill in the JavaVersion field.
apiVersion: privatecloud.mendix.com/v1alpha1
kind: OperatorConfiguration
# ...
# omitted lines for brevity
# ...
spec:
  baseOSImageTagTemplate: 'ubi9-1-jre{{.JavaVersion}}-entrypoint'
At the moment, the baseOSImageTagTemplate can be set to one of the following values:
ubi8-1-jre{{.JavaVersion}}-entrypoint - to use Red Hat UBI 8 Micro images; this option can be used for some cases where backward compatibility is needed.
ubi9-1-jre{{.JavaVersion}}-entrypoint - to use Red Hat UBI 9 Micro images; this is the default option.
Future Studio Pro releases will have an option to use alternative (newer) LTS versions of Java, such as Java 17 or Java 21.
If an app's MDA was built using a newer Java version, Mendix Operator 2.15.0 (and newer versions) will detect this and use a base image with the same major Java version that was used to build the MDA. Because of that, Java 17 or Java 21-based applications should use the Operator in version 2.15.0 or above.
Endpoint (network) Configuration
The OperatorConfiguration contains the following user-editable options for network configuration:
When using Ingress for network endpoints:
apiVersion: privatecloud.mendix.com/v1alpha1
kind: OperatorConfiguration
# ...
# omitted lines for brevity
# ...
spec:
  # Endpoint (Network) configuration
  endpoint:
    # Endpoint type: ingress, openshiftRoute or service
    type: ingress
    # Optional, can be omitted: Service annotations
    serviceAnnotations:
      # example: custom AWS CLB configuration
      service.beta.kubernetes.io/aws-load-balancer-backend-protocol: tcp
      service.beta.kubernetes.io/aws-load-balancer-ssl-cert: arn:aws:acm:eu-west-1:account:certificate/id
      service.beta.kubernetes.io/aws-load-balancer-ssl-ports: "443"
    # Ingress configuration: used only when type is set to ingress
    ingress:
      # Optional, can be omitted: annotations which should be applied to all Ingress Resources
      annotations:
        # default annotation: allow uploads of files up to 500 MB in the NGINX Ingress Controller
        nginx.ingress.kubernetes.io/proxy-body-size: 500m
        # example: use the specified cert-manager ClusterIssuer to generate TLS certificates with Let's Encrypt
        cert-manager.io/cluster-issuer: staging-issuer
        # example: deny access to /rest-doc
        nginx.ingress.kubernetes.io/configuration-snippet: |
          location /rest-doc {
            deny all;
            return 403;
          }
      # App URLs will be generated for subdomains of this domain, unless an app is using a custom appURL
      domain: mendix.example.com
      # Enable or disable TLS
      enableTLS: true
      # Optional: name of a kubernetes.io/tls secret containing the TLS certificate
      # This example is a template which lets cert-manager generate a unique certificate for each app
      tlsSecretName: '{{.Name}}-tls'
      # Optional: specify the Ingress class name
      ingressClassName: alb
      # Optional, can be omitted: specify the Ingress path
      path: "/"
      # Optional, can be omitted: specify the Ingress pathType
      pathType: ImplementationSpecific
# ...
# omitted lines for brevity
# ...
When using OpenShift Routes for network endpoints:
apiVersion: privatecloud.mendix.com/v1alpha1
kind: OperatorConfiguration
spec:
  # Endpoint (Network) configuration
  endpoint:
    # Endpoint type: ingress, openshiftRoute, or service
    type: openshiftRoute
    # OpenShift Route configuration: used only when type is set to openshiftRoute
    openshiftRoute:
      # Optional, can be omitted: annotations which should be applied to all Route Resources
      annotations:
        # example: use HSTS headers
        haproxy.router.openshift.io/hsts_header: max-age=31536000;includeSubDomains;preload
      # Optional: App URLs will be generated for subdomains of this domain, unless an app is using a custom appURL
      domain: mendix.example.com
      # Enable or disable TLS
      enableTLS: true
      # Optional: name of a kubernetes.io/tls secret containing the TLS certificate
      # This example is the name of an existing secret, which should be a wildcard matching subdomains of the domain name
      tlsSecretName: 'mendixapps-tls'
When using Services for network endpoints (without an Ingress or OpenShift route):
apiVersion: privatecloud.mendix.com/v1alpha1
kind: OperatorConfiguration
spec:
  # Endpoint (Network) configuration
  endpoint:
    # Endpoint type: ingress, openshiftRoute, or service
    type: service
    # Optional, can be omitted: the Service type
    serviceType: LoadBalancer
    # Optional, can be omitted: Service annotations
    serviceAnnotations:
      # example: annotations required for AWS NLB
      service.beta.kubernetes.io/aws-load-balancer-type: external
      service.beta.kubernetes.io/aws-load-balancer-nlb-target-type: ip
      service.beta.kubernetes.io/aws-load-balancer-scheme: internet-facing
    # Optional, can be omitted: Service ports
    servicePorts:
      - 80
      - 443
You can change the following options:
type: – select the Endpoint type, possible options are ingress, openshiftRoute and service; this parameter is also configured through the Configuration Tool
ingress: - specify the Ingress configuration, required when type is set to ingress
openshiftRoute: - specify the OpenShift Route configuration, required when type is set to openshiftRoute
annotations: - optional, can be used to specify the Ingress or OpenShift Route annotations, can be a template: {{.Name}} will be replaced with the name of the CR for the Mendix app, and {{.Domain}} will be replaced with the application's domain name
serviceAnnotations: - optional, can be used to specify the Service annotations, can be a template: {{.Name}} will be replaced with the name of the CR for the Mendix app, and {{.Domain}} will be replaced with the application's domain name
ingressClassName: - optional, can be used to specify the Ingress Class name
path: - optional, can be used to specify the Ingress path; default value is /
pathType: - optional, can be used to specify the Ingress pathType; if not set, no pathType will be specified in Ingress objects
domain: - optional for openshiftRoute, required for ingress, used to generate the app domain in case no app URL is specified; if left empty when using OpenShift Routes, the default OpenShift apps domain will be used; this parameter is also configured through the Configuration Tool
enableTLS: - allows you to enable or disable TLS for the Mendix App's Ingress or OpenShift Route
tlsSecretName: - optional name of a kubernetes.io/tls secret containing the TLS certificate, can be a template: {{.Name}} will be replaced with the name of the CR for the Mendix app; if left empty, the default TLS certificate from the Ingress Controller or OpenShift Router will be used
serviceType: - can be used to specify the Service type, possible options are ClusterIP and LoadBalancer; if not specified, Services will be created with the ClusterIP type
servicePorts: - can be used to specify a list of custom ports for the Service; if not specified, Services will be created with port 8080
When switching between Ingress and OpenShift Routes, you need to restart the Mendix Operator for the changes to be fully applied.
Mendix App Deployment settings
The OperatorConfiguration contains the following user-editable options for configuring Mendix app Deployments (Pods):
apiVersion: privatecloud.mendix.com/v1alpha1
kind: OperatorConfiguration
spec:
  # Optional: allow Mendix app Pods to get a Kubernetes Service Account token
  runtimeAutomountServiceAccountToken: true
  # Optional: annotations for Mendix app Pods
  runtimeDeploymentPodAnnotations:
    # example: inject the Linkerd proxy sidecar
    linkerd.io/inject: enabled
    # example: enable Prometheus metrics scraping
    prometheus.io/path: /metrics
    prometheus.io/port: '8900'
    prometheus.io/scrape: 'true'
You can change the following options:
runtimeAutomountServiceAccountToken: – specify if Mendix app Pods should get a Kubernetes Service Account token; defaults to false; should be set to true when using Linkerd Automatic Proxy Injection
runtimeDeploymentPodAnnotations: – specify default annotations for Mendix app Pods
Mendix App Resource Customization
The Deployment object that controls the pod of a given Mendix application contains user-editable options for fine-tuning the application's runtime resources.
The Deployment object has a name in the following format:
<internal environment name>-master
Below is an example of the Deployment definition of an app. In this example, the Deployment definition is called b8nn6lq5-master:
apiVersion: apps/v1
kind: Deployment
# ...
# omitted lines for brevity
# ...
spec:
  progressDeadlineSeconds: 600
  replicas: 1
  revisionHistoryLimit: 0
  # ...
  # omitted lines for brevity
  # ...
  template:
    metadata:
      # ...
      # omitted lines for brevity
      # ...
      creationTimestamp: null
      labels:
        app: b8nn6lq5
        component: mendix-app
        node-type: master
    spec:
      automountServiceAccountToken: false
      containers:
        - env:
            - name: M2EE_ADMIN_LISTEN_ADDRESSES
              value: 127.0.0.1
            - name: M2EE_ADMIN_PORT
              value: "9000"
            - name: M2EE_ADMIN_PASS
              valueFrom:
                secretKeyRef:
                  key: adminpassword
                  name: b8nn6lq5-m2ee
          image: image-registry.openshift-image-registry.svc:5000/test-app/b8nn6lq5
          imagePullPolicy: Always
          ports:
            - containerPort: 8080
              name: mendix-app
              protocol: TCP
          name: mendix
          livenessProbe:
            failureThreshold: 3
            httpGet:
              path: /m2ee-sidecar/v1/livez
              port: 8800
              scheme: HTTP
            initialDelaySeconds: 60
            periodSeconds: 15
            successThreshold: 1
            timeoutSeconds: 3
          readinessProbe:
            failureThreshold: 3
            httpGet:
              path: /m2ee-sidecar/v1/readyz
              port: 8800
              scheme: HTTP
            initialDelaySeconds: 5
            periodSeconds: 1
            successThreshold: 1
            timeoutSeconds: 1
          resources:
            limits:
              cpu: 1
              memory: 512Mi
              ephemeral-storage: 4Mi
            requests:
              cpu: 100m
              memory: 512Mi
              ephemeral-storage: 4Mi
      terminationGracePeriodSeconds: 300
# ...
# omitted lines for brevity
# ...
Resource Definition via Operator Configuration Manifest
For a given namespace, all the resource information is aggregated in the mendix-operator-configuration manifest. This centralizes and overrides all the configuration explained above. For an example of the Operator configuration manifest, see below. Note that this configuration is for reference purposes only.
apiVersion: privatecloud.mendix.com/v1alpha1
kind: OperatorConfiguration
# ...
# omitted lines for brevity
# ...
spec:
  sidecarResources:
    limits:
      cpu: 250m
      memory: 32Mi
      ephemeral-storage: 4Mi
    requests:
      cpu: 100m
      memory: 16Mi
      ephemeral-storage: 4Mi
  metricsSidecarResources:
    limits:
      cpu: 100m
      memory: 32Mi
      ephemeral-storage: 4Mi
    requests:
      cpu: 100m
      memory: 16Mi
      ephemeral-storage: 4Mi
  buildResources:
    limits:
      cpu: '1'
      memory: 256Mi
      ephemeral-storage: 2Gi
    requests:
      cpu: 250m
      memory: 64Mi
      ephemeral-storage: 2Gi
  runtimeResources:
    limits:
      cpu: 1000m
      memory: 512Mi
      ephemeral-storage: 256Mi
    requests:
      cpu: 100m
      memory: 512Mi
      ephemeral-storage: 256Mi
  runtimeLivenessProbe:
    initialDelaySeconds: 60
    periodSeconds: 15
  runtimeReadinessProbe:
    initialDelaySeconds: 5
    periodSeconds: 1
  # startup probes are deprecated in Mendix Operator 2.15.0
  runtimeStartupProbe:
    failureThreshold: 30
    periodSeconds: 10
  runtimeTerminationGracePeriodSeconds: 300
The following fields can be configured:
liveness, readiness, and terminationGracePeriodSeconds – these are used for all Mendix app deployments in the namespace; any changes made in the deployments will be discarded and overwritten with values from the OperatorConfiguration resource
sidecarResources – this is used for all m2ee-sidecar containers in the namespace
metricsSidecarResources – this is used for all m2ee-metrics containers in the namespace
runtimeResources – this is used for mendix-runtime containers in the namespace (but this is overwritten if the Mendix app CRD has a resources block)
buildResources – this is used for the main container in *-build pods
Mendix Operator 2.15.0 uses an improved liveness probe that runs a healthcheck of the Mendix Runtime.
As soon as the Mendix Runtime begins the startup process, the liveness check will return a valid response - as long as the Mendix Runtime is starting and passes its internal healthchecks.
The liveness probe will begin returning valid responses just a few seconds after the Runtime container starts, and this removes the need to use startup probes.
Starting from Mendix Operator 2.15.0, startup probes are no longer used, and changing their settings will have no effect.
Starting from Mendix Operator 2.17.0, the liveness probe health check depends on the runtime_status command to check if an app is running (starting or started) and in a healthy state.
If a check_health microflow is configured, its status will also be validated.
An app will return a successful health check status if all of these conditions are true:
The Runtime replies to ping calls (any reply is accepted).
Mendix Operator versions 2.15 and 2.16 treated an invalid ping reply as an error and failed the liveness probe. The Mendix Runtime returns a fail reply to ping calls if at any point a message was logged with a critical log level (which is reserved for errors that require immediate attention); this caused some apps to restart when running in Operator 2.15 or 2.16. Mendix Operator 2.17 ignores the ping reply and will no longer restart apps that have logged a critical log message.
The Runtime's status is created, starting, running or stopping.
If the Runtime is running, and a healthcheck microflow is configured, the healthcheck microflow needs to return a healthy state. If there is no check_health microflow configured, or the Runtime's state is not running, this condition is ignored.
Starting from Mendix Operator 2.23.0, environments running in leaderless mode use the Mendix Runtime's built-in liveness and readiness checks.
When another runtimeLeaderSelection mode is used (default, unspecified assigned mode, or none), the healthcheck microflow is used, as described above.
Customize Liveness Probe to Resolve Crash Loopback Scenarios
The liveness probe informs the cluster whether the pod is dead or alive. If the pod fails to respond to the liveness probe, the pod will be restarted (this is called a crash loopback).
The readiness probe, on the other hand, is designed to check if the cluster is allowed to send network traffic to the pod. If the pod fails this probe, requests will no longer be sent to the pod.
The configuration of the readiness probe does not help to resolve crash loopback scenarios. In fact, increasing its parameters might degrade the performance of your app, since any malfunction or error recovery will take longer to be acknowledged by the cluster.
Let us now analyze the liveness probe section from the application deployment example, above:
initialDelaySeconds – the number of seconds after the container has started that the probe is initiated. Minimum value is 0.
periodSeconds – how often (in seconds) to perform the probe. Default is 10 seconds. Minimum value is 1.
timeoutSeconds – the number of seconds after which the probe times out. Default is 3 seconds. Minimum value is 1.
successThreshold – the number of consecutive successes required before the probe is considered successful after having failed. Defaults to 1. Must be 1 for liveness and startup Probes. Minimum value is 1.
failureThreshold – the number of times Kubernetes will retry when a probe fails before giving up. Giving up in case of a liveness probe means restarting the container. Defaults to 3. Minimum value is 1.
If we are deploying a large application that takes much longer to start than the defined 60 seconds, we will observe it restarting multiple times. To solve this, increase the initialDelaySeconds field of the liveness probe to a substantially larger value.
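As a minimal sketch, assuming an app that needs around five minutes to start and an Operator version where the liveness probe delay still applies, the namespace-wide liveness probe settings could be raised in the OperatorConfiguration (the value 300 is a hypothetical example):

apiVersion: privatecloud.mendix.com/v1alpha1
kind: OperatorConfiguration
spec:
  runtimeLivenessProbe:
    # hypothetical value: give a slow-starting app 300 seconds before the first liveness check
    initialDelaySeconds: 300
    periodSeconds: 15

Editing the OperatorConfiguration (rather than the Deployment) is what takes effect here, because probe settings in individual Deployments are overwritten with the values from the OperatorConfiguration resource, as described above.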
Mendix Operator 2.15.0 uses an improved liveness probe that runs a healthcheck of the Mendix Runtime.
The default settings for the liveness probe should work for almost every use case, and should not be modified unless instructed by Mendix Support.
Customize Startup Probes for Slow Starting Applications
If you want to wait before executing a liveness probe you should use initialDelaySeconds or a startup probe.
A startup probe should be used when the application in your container could take a significant amount of time to reach its normal operating state. Applications that would crash or throw an error if they handled a liveness or readiness probe during startup need to be protected by a startup probe. This ensures the container doesn't continually restart due to failing health checks before it has finished launching. Using a startup probe is much better than increasing initialDelaySeconds on readiness or liveness probes. Startup probes defer the execution of liveness and readiness probes until a container indicates it is able to handle them because Kubernetes doesn't direct the other probe types to the container if it has a startup probe that hasn't yet succeeded.
You can see an example of a startup probe configuration below:
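The sketch below uses the runtimeStartupProbe values shown in the OperatorConfiguration manifest above; it only applies to Operator versions older than 2.15.0, where startup probes are still used:

apiVersion: privatecloud.mendix.com/v1alpha1
kind: OperatorConfiguration
spec:
  runtimeStartupProbe:
    # 30 retries x 10 seconds = up to 300 seconds for the app to finish starting
    failureThreshold: 30
    periodSeconds: 10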
In this example, the application will have a maximum of 5 minutes (30 * 10 = 300s) to finish its startup. Once the startup probe has succeeded once, the liveness probe takes over to provide a fast response to container deadlocks. If the startup probe never succeeds, the container is killed after 300s and subject to the pod's restartPolicy.
If you misconfigure a startup probe, for example you don't allow enough time for the startup probe to succeed, the kubelet might restart the container prematurely, causing your container to continually restart.
Startup probes are available in the Mendix on Kubernetes Operator version 2.6.0 and above.
In Kubernetes version 1.19, startup probes are still a beta feature.
Mendix Operator 2.15.0 uses an improved liveness probe that runs a healthcheck of the Mendix Runtime.
Startup probes are no longer used, and changing the startupProbe settings will have no effect.
Customize terminationGracePeriodSeconds for Gracefully Shutting Down the Application Pod
Using terminationGracePeriodSeconds, the application is given a certain amount of time to terminate. The default value is 300 seconds. If your pod usually takes longer than 300 seconds to shut down, you can increase the grace period by setting the terminationGracePeriodSeconds key in the pod's spec.
terminationGracePeriodSeconds: 300
The terminationGracePeriodSeconds setting is available in the Mendix on Kubernetes Operator version 2.6.0 and above.
Customize Container Resources: Memory and CPU
The following section shows an example configuration of the resources section from the example application deployment, above. Note that this configuration is for reference purposes only.
This section allows the configuration of the lower and upper resource boundaries, the requests and limits respectively.
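For convenience, this is the resources block as it appears in the Deployment example above:

resources:
  limits:
    cpu: 1
    memory: 512Mi
    ephemeral-storage: 4Mi
  requests:
    cpu: 100m
    memory: 512Mi
    ephemeral-storage: 4Mi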
The settings in the example above mean that:
The container will always receive at least the resources set in requests.
If the server node where a pod is running has enough of a given resource available, the container can be granted more of that resource than its requests.
A container will never be granted more than its resource limits.
Meaning of CPU
Limits and requests for CPU resources are measured in cpu units. One CPU, in this context, is equivalent to 1 vCPU/Core for cloud providers and 1 hyperthread on bare-metal Intel processors.
Fractional requests are allowed. For instance, in this example, we are requesting 100m, which can be read as one hundred millicpu, and limiting to a maximum of 1 CPU (1000m).
A precision finer than 1m is not allowed.
Meaning of Memory
Limits and requests for memory are measured in bytes. You can express memory as a plain integer or as a fixed-point number using one of these suffixes: E, P, T, G, M, K. You can also use the power-of-two equivalents: Ei, Pi, Ti, Gi, Mi, Ki. For example, the following represent roughly the same value: 128974848, 129e6, 129M, 123Mi
For instance, in the example above, we are requesting and limiting memory usage to roughly 512MiB.
Modifying the resource configuration should be performed carefully as that might have direct implications on the performance of your application, and the resource usage of the server node.
Customize Runtime Metrics
Mendix on Kubernetes provides a Prometheus API, which can be used to collect metrics from Mendix apps.
runtimeMetricsConfiguration allows you to specify the default metrics configuration for a namespace.
Any configuration values from runtimeMetricsConfiguration can be overridden for an environment using the MendixApp CR (see Generating Metrics for more details).
An example of the runtimeMetricsConfiguration in the operator configuration manifest is given below.
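A minimal sketch of such a configuration, assuming runtimeMetricsConfiguration sits under spec in the OperatorConfiguration and using the field names described below (the agent configuration values are placeholders):

apiVersion: privatecloud.mendix.com/v1alpha1
kind: OperatorConfiguration
spec:
  runtimeMetricsConfiguration:
    # native mode requires Mendix 9.7 or above
    mode: native
    # Prometheus scrape interval, ISO 8601 duration format
    interval: PT1M
    # placeholder: leave empty to disable the Java instrumentation agent
    mxAgentConfig: ''
    # placeholder: leave empty to use the default instrumentation config
    mxAgentInstrumentationConfig: ''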
You can set the following metrics configuration values:
mode: metrics mode, native or compatibility. native mode is only available for Mendix 9.7 and above. See Metrics Generation Modes in Monitoring Environments in Mendix on Kubernetes for more information.
interval: Interval between Prometheus scrapes specified in ISO 8601 duration format (for example, 'PT1M' would be an interval of one minute). This should be aligned with your Prometheus configuration. If left empty it defaults to 1 minute (matching the default Prometheus scrape interval). This attribute is only applicable when mode is native.
mxAgentConfig: configuration for the Java instrumentation agent; collects additional metrics such as microflow execution times; can be left empty to disable the instrumentation agent. This attribute is only applicable when mode is native.
mxAgentInstrumentationConfig: instrumentation configuration for the Java instrumentation agent; collects additional metrics such as microflow execution times; can be left empty to use the default instrumentation config. This attribute is only applicable when mode is native, and mxAgentConfig is not empty.
The Mendix environment can be configured to use a specific Kubernetes ServiceAccount instead of the default ServiceAccount.
To achieve this, you need to add the annotation privatecloud.mendix.com/environment-account: true (for security reasons, any account matching an environment name but without this annotation cannot be attached to environments).
The service account can be customized in Mendix on Kubernetes Operator version 2.7.0 and above.
If required, you can use additional annotations. For example, in order to authenticate with AWS services instead of with static credentials, you can attach an AWS IAM role to an environment and use IRSA.
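A minimal sketch of such a ServiceAccount, assuming the environment's internal name is b8nn6lq5 (as in the Deployment example above) and using a hypothetical IAM role ARN for the IRSA annotation:

apiVersion: v1
kind: ServiceAccount
metadata:
  # the ServiceAccount name matches the environment it belongs to
  name: b8nn6lq5
  annotations:
    # required: without this annotation the account cannot be attached to an environment
    privatecloud.mendix.com/environment-account: "true"
    # example (hypothetical role ARN): attach an AWS IAM role via IRSA
    eks.amazonaws.com/role-arn: arn:aws:iam::123456789012:role/example-app-role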
Autoscaling
Mendix on Kubernetes is compatible with multiple types of Kubernetes autoscalers.
To optimize resource utilization, autoscaling can terminate running instances of an app.
When autoscaling scales down an app or Kubernetes node, microflows in affected pods will be terminated, and the terminating pod will no longer accept new HTTP connections.
Cluster Autoscaling
The Kubernetes cluster autoscaler monitors resource usage and automatically adjusts the size of the cluster based on its resource needs.
Mendix on Kubernetes is compatible with cluster autoscaling. To install and enable cluster autoscaling, follow your cluster vendor's recommended way of configuring the cluster autoscaler.
Horizontal Pod Autoscaling
You need to have the Mendix Operator version 2.4.0 or above installed in your namespace to use horizontal pod autoscaling.
Horizontal pod autoscaling is a standard Kubernetes feature
and can automatically add or remove pods based on metrics, such as average CPU usage.
Enabling horizontal pod autoscaling allows you to increase processing capacity during peak loads and reduce resource usage during periods of low activity.
Horizontal pod autoscaling can be combined with cluster autoscaling, so that the cluster and environment are automatically optimized for the current workload.
To enable horizontal pod autoscaling for an environment, run the following command:
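A sketch of the command, assuming the app Deployment is named {envname}-master (see the naming format above); the CPU target and replica counts are example values:

kubectl -n {namespace} autoscale deployment {envname}-master --cpu-percent=70 --min=1 --max=10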
Replace {namespace} with the namespace name, and {envname} with the MendixApp CR name (the environment internal name).
Use --cpu-percent to specify the target CPU usage, and --min and --max to specify the minimum and maximum number of replicas.
To configure additional horizontal pod autoscaling options, run the following command:
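A sketch of the command, assuming the HorizontalPodAutoscaler created by the autoscale command above is named after the app Deployment:

kubectl -n {namespace} edit hpa {envname}-master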
Replace {namespace} with the namespace name, and {envname} with the MendixApp CR name (the environment internal name).
The Kubernetes Horizontal pod autoscaling documentation explains additional available autoscaling options.
The Mendix Runtime is based on Java, which pre-allocates memory and typically never releases it.
Memory-based metrics should not be used for autoscaling.
When an environment is scaled (manually or automatically), it will not be restarted. Adjusting the number of replicas will not cause downtime - as long as the number of replicas is greater than zero.
Scaling an environment up (increasing the number of replicas) adds more pods - without restarting any already running pods; once the additional pods become available, they will start receiving HTTP (or HTTPS) requests.
Scaling an environment down (decreasing the number of replicas) removes some of the running pods - without restarting remaining pods; all HTTP (or HTTPS) traffic will be routed to the remaining pods.
Vertical Pod Autoscaling
Vertical pod autoscaling can automatically configure CPU and memory resources and requirements for a pod.
The Mendix Runtime is based on Java, which pre-allocates memory and typically never releases it.
Memory-based metrics should not be used for autoscaling.
Mendix Operator version 2.4.0 or above has the APIs required by the vertical pod autoscaler.
Vertical pod autoscaling is still an experimental, optional Kubernetes add-on.
Mendix recommends using horizontal pod autoscaling to adjust environments to meet demand.
Vertical pod autoscaling cannot be combined with horizontal pod autoscaling.
Log Format
Runtime Log Format
Mendix Operator version 2.11.0 or above allows you to specify the log format used by Mendix apps.
To specify the log format, add a runtimeLogFormatType entry to OperatorConfiguration:
apiVersion: privatecloud.mendix.com/v1alpha1
kind: OperatorConfiguration
spec:
  # ...
  # Other configuration options values
  # Optional: log format type
  runtimeLogFormatType: json
You can set runtimeLogFormatType to one of the following values:
plain: – default option, produces plaintext logs in the following format:
2023-03-21 14:36:14.607 INFO - M2EE: Added admin request handler '/prometheus' with servlet class 'com.mendix.metrics.prometheus.PrometheusServlet'
json: – produces JSON logs in the following format:
{"node":"M2EE","level":"INFO","message":"Added admin request handler '/prometheus' with servlet class 'com.mendix.metrics.prometheus.PrometheusServlet'","timestamp":1679409374607}
In the json format, newline characters will be sent as \n (as specified in the JSON spec). You might need to configure your log viewer tool to display \n as line breaks.
For example, to correctly display newline characters in Grafana, use the Escape newlines button.
Log Levels
Mendix Operator version 2.19.0 and above allows you to configure the log levels for your Operator pods.
The following log levels can be configured:
L0 : Fatal Log Level
L1 : Error Log Level
L2 : Warn Log Level
L3 : Info Log Level
L4 : Debug Log Level
L5 : Trace Log Level
The log level can be set in the mendix-operator deployment yaml:
kind: Deployment
apiVersion: apps/v1
spec:
  # ...
  # Other configuration options values
  template:
    spec:
      containers:
        - name: mendix-operator
          resources:
            # ...
            # Other configuration options values
          command:
            - mendix-operator
          env:
            # ...
            # Other configuration options values
            - name: LOG_LEVEL
              value: L1
By default, the log level is set to L1 for Operator pods.
Pod Labels
General Pod Labels
Mendix Operator version 2.13.0 or above allows you to specify default pod labels for app-related pods: task pods (build and storage provisioners) and runtime (app) pods.
To specify default pod labels for a namespace, specify them in customPodLabels.general in OperatorConfiguration:
apiVersion: privatecloud.mendix.com/v1alpha1
kind: OperatorConfiguration
spec:
  # ...
  # Other configuration options values
  # Optional: custom pod labels
  customPodLabels:
    # Optional: general pod labels (applied to all app-related pods)
    general:
      # Example: enable Azure Workload Identity
      azure.workload.identity/use: "true"
Alternatively, for Standalone clusters, pod labels can be specified in the MendixApp CR for a specific app.
The Mendix Operator uses some labels for internal use. To avoid conflicts with these internal pod labels, please avoid using labels starting with the privatecloud.mendix.com/ prefix.
Delaying App Shutdown
In some situations, shutting down a replica immediately can cause issues. For example, the Azure Gateway Ingress Controller needs up to 90 seconds to remove a pod from its routing table. If an app pod were stopped immediately, traffic would still be sent to the pod for a few minutes, causing random 502 errors to appear in the client web browser.
You can add or change the timeout by adding a runtimeTerminationDelaySeconds value to the OperatorConfiguration CR:
apiVersion: privatecloud.mendix.com/v1alpha1
kind: OperatorConfiguration
# ...
# omitted lines for brevity
# ...
spec:
  runtimeTerminationDelaySeconds: 90
For example, if you set runtimeTerminationDelaySeconds to 90, the app continues to run for 90 seconds after a pod receives a shutdown signal.
In most cases, this option is only needed when an app is partially scaled down (for example, by a Horizontal pod autoscaler), and is still running.
Some container runtimes or network configurations prevent a terminating pod from receiving traffic or opening new connections. The Mendix Runtime can still use its existing database connections from the connection pool and keep processing any running microflows and requests, but uploading files or calling external REST services may fail.
Read-only RootFS
Mendix app container images are locked down by default - they run as a non-root user, cannot request elevated permissions, and file ownership and permissions prevent modification of system and critical paths. Kubernetes allows you to lock down containers even further, by mounting the container filesystem as read-only if the container's security context specifies readOnlyRootFilesystem: true. With this option enabled, any files and paths from the container image cannot be modified by any user.
Starting from Mendix Operator version 2.21.0, all system containers and pods use readOnlyRootFilesystem by default. It is possible to specify if an environment's app container should also have a read-only filesystem. For Mendix apps, the readOnlyRootFilesystem option is off by default, as some Java actions in marketplace modules might expect some paths to be writable.
If you enable the runtimeReadOnlyRootFilesystem option in the MendixApp CRD (for standalone clusters) or in the Mendix on Kubernetes Portal, the Mendix app container also uses a read-only root filesystem. As Mendix apps need certain paths to be writable, an emptyDir is used for writable paths. Each path is mounted as a separate subPath to keep data separated. The emptyDir size is set to the ephemeral-storage resource limit.
In addition to internal Mendix Runtime paths, /tmp is mounted for any temporary files that might be created through Java actions. For Java actions to work correctly, ensure that they only create files in /tmp, for example, by using the File.createTempFile or Files.createTempDirectory Java methods.
If your app works without issues when read-only root filesystem is enabled, it is best to enable it wherever possible. We recommend using a non-production environment to validate that your app keeps working correctly with a read-only RootFS.
Enabling the runtimeReadOnlyRootFilesystem option causes the model/resources directory to be empty. If your app (or a Marketplace module such as SAML) uses the model/resources directory for resources such as configuration data, consider moving those resources to another location (for example, model/userlib) or loading them from FileDocument entities.
GKE Autopilot Workarounds
In GKE Autopilot, one of the key features is its ability to automatically adjust resource settings based on the observed resource utilization of the containers. GKE Autopilot verifies the resource allocations and limits for all containers, and adjusts deployments when the resources do not match its requirements.
As a result, there can be a continuous back-and-forth interaction between Mx4PC and GKE Autopilot, where both entities engage in a loop, attempting to counteract each other's modifications to deployments and pods.
To address this issue, you can configure the Mendix Operator to align with GKE's requirements. This involves setting the resources (specifically, the CPU, memory, and ephemeral storage) to be equal to the limits defined in the OperatorConfiguration for both the sidecar and metrics-sidecar containers. Along with this, you must ensure that the resource limits for the CPU, memory, and ephemeral storage are equal to the resource requests in the Mendix on Kubernetes Portal. For more information on setting the core resources on the Portal, see Custom Core Resource Plan.
You must also create a patch file for configuring the core resources in the OperatorConfiguration, as in the following example:
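A minimal sketch of such a patch, assuming the resource values from the OperatorConfiguration example above; the key point is that, for the sidecar and metrics-sidecar containers, requests are set equal to limits so that GKE Autopilot no longer adjusts them:

spec:
  sidecarResources:
    limits:
      cpu: 250m
      memory: 32Mi
      ephemeral-storage: 4Mi
    requests:
      # requests set equal to limits for GKE Autopilot
      cpu: 250m
      memory: 32Mi
      ephemeral-storage: 4Mi
  metricsSidecarResources:
    limits:
      cpu: 100m
      memory: 32Mi
      ephemeral-storage: 4Mi
    requests:
      # requests set equal to limits for GKE Autopilot
      cpu: 100m
      memory: 32Mi
      ephemeral-storage: 4Mi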
Google Kubernetes Engine (GKE) requires a balanced allocation of CPU and memory resources. If a container requests a substantial amount of memory, it should also correspondingly request more CPU cores. For detailed information on resource requests, you can refer to the Resource Requests in Autopilot documentation provided by Google Kubernetes Engine.
Cluster and Namespace Management
Once it is configured, you can manage your cluster and namespaces through the Mendix Portal.
Cluster Overview
Go to the Cluster Manager page by clicking Cluster Manager in the top menu of the Clouds page of the Mendix Portal.
From this page you can see a summary of your clusters with all their namespaces and an indication of the namespace status and how long it has been running (runtime).
Managing the Cluster
Here you can perform the following actions on the entire cluster:
Delete the cluster by clicking Delete
Rename the cluster or edit its description by clicking Edit
Invite another cluster manager
You can also see the activities logged for all your clusters by clicking Activity in the cluster overview page. This shows the following:
When a cluster has been added
When a cluster description is added
When the name of the cluster is changed
When the cluster description is changed
If you prefer the individual to join as a cluster manager automatically, without requiring them to manually accept the invitation, you can enable the Automatically accept invites option.
The Automatically accept invites option is applicable only when the invited users have the same email domain as yours.
When you add a cluster manager, the user will have most of the access which the original cluster manager had, such as the abilities to add a namespace, add a member, change the permissions of the cluster member, and delete another cluster manager.
The only limitations are that:
An added cluster manager will not be able to operate on or manage the environments created in the namespaces which are already in the cluster — they need to be added as a member of the application if they want to manage existing environments in the namespaces.
Cluster managers who are added to the cluster cannot remove the cluster manager who created the cluster.
When you delete a cluster, this removes the cluster from the Mendix Portal. However, it will not remove the associated namespace from your platform. You will need to explicitly delete the namespace using the tools provided by your platform.
Managing Roles and Permissions
It is possible to manage the roles and permissions for namespace members by clicking Roles and Permissions in the left navigation pane.
Below are the predefined roles with default permissions; these roles are built-in and cannot be edited:
Administrator - This role gives the cluster manager full access to the namespace, the permissions for which are shown in the figure below.
Developer - This role gives the developer the permissions which are shown in the figure below.
In addition to the predefined roles, you can create customized roles with the permissions that you want to assign to namespace members. Cluster managers can create a role once, and then reuse it across multiple namespaces.
To create a role, click Create Role in the top right.
This option allows the cluster manager to create, edit, and delete roles and permissions.
Once a role is created, you can assign it to a namespace member by clicking Invite Member under the Members section on the Namespace Overview page. You can select the role from the dropdown.
Once the role is assigned, it cannot be deleted until the role is removed from the assigned members.
Existing namespace members who have been given custom permissions will continue to use those custom permissions. However, those custom permissions will no longer be editable. To update a permission, reassign an existing role or create a custom role on the Roles and Permissions page.
Namespace Management
If you are a member of a namespace, you can also manage a namespace in the cluster.
Click the Details button for the namespace you want to manage.
On the namespace management page, there are a number of tabs which allow you to manage aspects of your namespace:
Apps
Members
Operate
Plans
Installation
Additional information
Customization
PCLM Statistics
See the sections below for more information.
You can also delete your namespace from the cluster manager by clicking Delete Namespace in the top right.
If there are any environments associated with the namespace, you cannot delete the namespace until the environments associated with it are deleted.
When you delete a namespace, this removes the namespace from the cluster in the Mendix Portal. However, it will not remove the namespace from your platform. You will need to explicitly delete the namespace using the tools provided by your platform.
In the case of a Global Operator managed namespace, the managed namespace will not be deleted from the cluster. You must delete it from the cluster manually. Additionally, you also need to remove the managed namespace from the list of managed namespaces in the Operator configuration of the main namespace.
In order to delete the namespace from the cluster, perform the following steps:
Ensure that all the environments under this namespace are removed. You can check the list of environments under this namespace using the following command:
For OpenShift:
oc -n {namespace} get mendixapp
For Kubernetes:
kubectl -n {namespace} get mendixapp
If any Mendix apps still exist in the namespace, you can delete them by using the following command, where internalId is the ID of the environment:
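A sketch of the delete command, following the same pattern as the get commands above:

For OpenShift:
oc -n {namespace} delete mendixapp {internalId}
For Kubernetes:
kubectl -n {namespace} delete mendixapp {internalId}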
Wait until the storage provisioner completes the process of deleting the storage instance related to the environment. You can check if there are any existing storage instances by running the following command:
For OpenShift:
oc -n {namespace} get storageinstance
For Kubernetes:
kubectl -n {namespace} get storageinstance
If there are any failed storage instances, you can check their logs by running the following command:
For OpenShift:
oc -n {namespace} log {storageinstance-name}
For Kubernetes:
kubectl -n {namespace} log {storageinstance-name}
If there are any remaining storage instances, you can delete them by using the following command:
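A sketch of the delete command, following the same pattern as the storage instance commands above:

For OpenShift:
oc -n {namespace} delete storageinstance {storageinstance-name}
For Kubernetes:
kubectl -n {namespace} delete storageinstance {storageinstance-name}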
After manually removing the storage instance, manually clean up any resources associated with it, such as the database, S3 bucket or associated AWS IAM account in the cluster.
Once all the storage instances are deleted successfully, you can delete the namespace from the cluster by using the following command:
For OpenShift:
oc delete ns {namespace}
For Kubernetes:
kubectl delete ns {namespace}
You can also see an activity log containing the following information for all namespaces within the cluster:
When a namespace has been added
When a namespace has been deleted
When a cluster manager has been added
When a cluster manager invitation is removed
When a cluster manager accepts the invitation
When a cluster manager is removed from the cluster
When a new database plan is added in a namespace
When a database plan is deactivated
When a new storage plan is added in a namespace
When a storage plan is deactivated
When Metrics/Alerts/Logs/Backups URLs are added in the namespace
When Metrics/Alerts/Logs/Backups URLs are removed from the namespace
When Metrics/Alerts/Logs/Backups URLs are changed in the namespace
When a user is invited as a namespace member
When a user invitation for namespace member is removed
When a user accepts the invitation as a namespace member
When a user is removed as a namespace member
When a user's permission is changed in the namespace
When environment configurations are added, updated, or removed
When Runtime Metrics configurations are added, updated, or deleted
When developer mode is enabled in the namespace
When developer mode is disabled in the namespace
When deployment strategy is enabled for an environment in a namespace
When deployment strategy is disabled for an environment in a namespace
Apps
The Apps tab of namespace details in the cluster manager page lists all the app environments which are deployed to this namespace.
If you are a team member of the app, click Details to go to the Environment Details page for that app.
You can only see the environment details of an app if you are a member of the team with the appropriate authorization.
If you are a cluster administrator, you can also click Configure to configure the environment by adding annotations for pods, ingress, and service.
Configure Environment
You can add, edit, and delete annotations for your environment.
You need to have the Mendix Operator version 1.12.0 or above installed in your namespace to configure all the available annotations. You need version 1.11.0 to use pod annotations.
To add a new annotation, do the following.
Click Add.
Choose the Annotation type from the dropdown.
Enter the Key and the Value for the annotation.
Click Save.
You can also Edit or Delete an existing annotation by selecting it and clicking the appropriate button.
The new value for the annotation will only be applied when the application is restarted.
Mendix Operator versions 2.14.0 and older do not remove ingress or service annotations when an annotation is removed from the Mendix on Kubernetes Portal or in the MendixApp CR.
This is addressed in Mendix Operator version 2.15.0; if you need to remove an ingress or service annotation, please upgrade to the latest Mendix Operator version first.
You can configure the runtime metrics for the environment in the Runtime section. For more information, see Customize Runtime Metrics.
You can also configure the pod labels for the environment in the Labels section. For more information, see App Pod Labels.
Starting from Operator 2.20.0, it is also possible to set the deployment strategy for an environment. This allows you to update an app with reduced downtime by performing a rolling update. To use this feature, you must enable the Reduced App Downtime Strategy option. For more information, see Deployment Strategy.
Members
By default, the cluster manager, who created the cluster in Mendix, and anyone added as a cluster manager has full administration rights to the cluster and its namespaces. These cluster managers will also need to be given the appropriate permissions on the Kubernetes or OpenShift Cluster. The administration rights are:
Add and delete namespaces
Add, activate, or deactivate plans
Invite and manage users
The following rights are available to the cluster creator, and members of a namespace with appropriate authorization:
Set up operating URLs for the namespace
View all environments in the namespace
Manage own environments – user can create and manage an environment in any namespace in the cluster
The following actions require the appropriate access to the namespace and access to the app environment as a team member with appropriate authorization:
Manage environment – user can navigate to the environment details section and edit the environment name and core resources
Deploy App – user can deploy a new app to the environment
Scale App – user can change the number of replicas
Start App
Stop App
Modify MxAdmin Password
Edit App Constants
Manage App Scheduled Events
View App Logs
View App Alerts
View App Metrics
Manage App Backups
Manage Debugger
Manage TLS configurations
Manage Custom Runtime Settings
Manage Log levels
Manage Client Certificates
Manage Custom Environment Variables and JVM options
Manage Runtime Metrics Configuration
The Members tab allows you to manage the list of members of the namespace and control what rights they have.
Adding Members
You can invite additional members to the namespace, and configure their role depending on what they should be allowed to do.
The Members tab displays a list of current members (if any).
Click Invite Member.
Enter the Email of the person you want to invite.
If you prefer the individual to join as a namespace member automatically, without requiring them to manually accept the invitation, you can enable the Automatically accept invites option.
The Automatically accept invites option is applicable only when the invited users have the same email domain as yours.
Give them the rights they need. This can be:
Developer – a standard set of rights needed by a developer; these are listed on the screen
Administrator – a standard set of rights needed by an administrator; these are listed on the screen
Custom – This option is now deprecated.
If a custom permission needs to be edited, a role with the appropriate permissions must be assigned instead. See Roles and Permissions for more information.
If an application is in the Stopped state, the scaling does not come into effect until the application is Started. This means that you have to click Start application in order for the changes to be sent to the cluster.
In addition, the permission for modifying the MxAdmin password has been decoupled from the permission for managing environments.
Click Send Invite to send an invite to this person.
If you have not enabled the Automatically accept invites option, the user will receive an email and will be required to follow a link to confirm that they want to join this namespace. They will need to be logged in to Mendix when they follow the confirmation link.
Editing and Removing Members
You can change the access rights for, or completely remove, existing members.
Click Edit next to the member you want to change.
Either:
Make changes and click Save.
Click Remove member to remove this member completely. You will be asked to confirm this action.
Operate
The Operate tab allows you to add a set of links which are used when users request an operations page for their app in Apps.
The following pages can be configured:
Metrics
Alerts
Logs
Backups
The specification of these pages is optional.
Open the Operate tab, enter the URLs relevant to your namespace, and click Save for each one.
Plans
The Plans tab shows you the database and storage plans which are currently configured for your namespace.
Deactivating a Plan
Disable the toggle button next to the name of the plan you wish to deactivate. You cannot remove plans from within the cluster manager, but you can deactivate them to ensure that developers cannot create environments using the plan. Any environments currently using the plan will not be affected by this setting.
Activating a Plan
Enable the toggle button next to the name of the plan you wish to activate. The plan can then be used by developers when they create an environment to deploy their apps.
Deleting a Plan
You can only delete storage or database plans if they are not used in any of your environments, regardless of whether they are active or inactive.
After you delete a plan, the action cannot be reverted or undone through the portal. Deleting the plan does not remove it from the cluster. To delete the plan from the cluster, a separate action is required, which can be accomplished by executing the following command:
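As an illustration only (the exact resource type is an assumption and may differ in your cluster), plans are defined as custom resources in the namespace and could be removed with a command such as the following, replacing {namespace} and {plan-name} with your values:

For OpenShift:
oc -n {namespace} delete storageplan {plan-name}
For Kubernetes:
kubectl -n {namespace} delete storageplan {plan-name}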
Here, you can create customized plans for your core resources.
Click Add New Plan.
Provide a name for the plan under Plan Name.
Provide the required CPU Limits, CPU Request, Memory Limit, Memory Request, Ephemeral Storage Request and Ephemeral Storage Limit based on your choice.
Click OK to save the customized resource plan.
In order to make the customized plan available to the customer, make sure to enable the toggle button next to Use custom core resources plans.
Ephemeral storage is temporary storage tied to the lifecycle of a pod. When the pod is deleted, the data stored in the ephemeral storage is also lost.
Once you enable the Use custom core resources plans button, you cannot switch back to the default core plans until you delete all the environments using the custom core plans and disable Use custom core resources plans button. A warning message with the same information is displayed when trying to enable this feature.
Installation
The Installation tab shows you the Configuration Tool which you used to create the namespace, together with the parameters which are used to configure the agent. You can use the Configuration Tool again to change the configuration of your namespace by pasting the command into a command line terminal as described in Running the Configuration Tool, above. You can also download the Configuration Tool again, if you wish.
In the case of a Global Operator managed namespace, you will see the Configuration tab instead of the Installation tab. For more information, see Global Operator Namespace.
Additional Information
This tab shows information on the versions of the various components installed in your namespace.
Customization
This tab allows the cluster manager to customize the enablement of the secret store, developer mode for the developers, and product type for the PCLM Runtime License.
Enabling the External Secrets Store option allows users to retrieve the following secrets from an external secrets store:
Database plan
Storage plan
MxAdmin password
Custom runtime settings
MxApp constants
If you want to use the secret store for custom runtime settings or MxApp constants, the Mendix Operator must be in version 2.10.0 or later. Database plan, storage plan, and MxAdmin password are available from version 2.9.0 onwards.
Enabling the Development DTAP Mode option allows users to change the type of an environment to Development. By default, the DTAP mode is set to Production mode. If this option is enabled, the type of an environment can be changed to Development mode on the Environment Details page.
If PCLM is configured, the default product type for Runtime licenses is set to standard. However, if the product type for PCLM Runtime licenses in the license server differs from Standard, you can customize it here. To check the product type of the Runtime license, navigate to the PCLM Statistics page, and then select Runtime in the Select type field.
The product type value to be entered is case-sensitive. Ensure that the value matches exactly with the product type of the Runtime license on the PCLM Statistics page.
The selected product type will be applied to all environments within this namespace, and associated environments will adopt the license of this specific product type.
PCLM Statistics
This tab shows information about claimed licenses, operator licenses and runtime licenses.
Select Claim to view a list of licenses from the license bundle which are claimed in the namespace.
Select Operator to view a list of all the Operator licenses in the bundle.
Select Runtime to view a list of all the Runtime licenses in the bundle.
Select Export in Excel to export the above lists.
If you would like to see the license payload, click Show License Payload.
If you want to use the Private Cloud License Manager, the Mendix Operator must be in version 2.11.0 or later.
If Global Operator is configured with Private Cloud License Manager, you can view the Runtime and Operator list of licenses for the main namespace, and only the list of claims for the managed namespace.
Current Limitations
Storage Provisioning
If the Operator fails to provision or deprovision storage (a database or file storage), it will not retry the operation. If there is a failed *-database or *-file pod, you'll need to do the following:
Check the failed pod logs for the error message.
Troubleshoot and fix the cause of this error.
Delete the failed pod to retry the process.
Restart Required When Switching Between Ingress and OpenShift Route
Starting with Mendix Operator version 1.5.0, the operator will monitor only one network resource type: Ingress or OpenShift route.
If you switch between Ingress and OpenShift Route, you will need to restart the Mendix Operator so that it can monitor the right network resource (replace {namespace} with the namespace where the Operator is installed). This can be done as follows:
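One way to do this (a sketch, assuming the Operator runs as the mendix-operator Deployment mentioned earlier in this document) is a rollout restart:

For OpenShift:
oc -n {namespace} rollout restart deployment mendix-operator
For Kubernetes:
kubectl -n {namespace} rollout restart deployment mendix-operator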
The Windows version of the Configuration Tool must be run in a terminal that supports the Windows console API and has mouse support. PowerShell and the Windows Command Prompt are supported.
When running PowerShell or the Windows Command Prompt from the new Windows Terminal, mouse clicks are not supported.
Run PowerShell or the Windows Command Prompt terminal as a standalone app.
Some previously released versions of Mendix on Kubernetes required using Git Bash in Windows.
Starting from Mendix Operator version 1.9.0, Git Bash is no longer required.
Linux and macOS
When running the installation tool over SSH, make sure that the SSH client supports terminal emulation and has mouse support enabled.
ssh.exe in Windows doesn't support mouse click forwarding and another SSH client should be used instead, such as MobaXterm or PuTTY.
Configuration Tool - Known Issues
When restoring a previously saved session, some UI elements (such as drop-downs and checkboxes) will not use the saved session and will revert to their default values.
For example, the Authentication dropdown for Postgres will always switch to static authentication.
Selecting the correct value from those drop-downs will restore the state of the form and any fields which might not be visible.
Troubleshooting
Status Reporting
This section covers an issue which can arise where Mendix cannot recover automatically and manual intervention may be required.
Under some circumstances changes in the status of the cluster, namespaces, and environments will not be updated automatically. To ensure you are seeing the current status, you may need to click the Refresh button on the screen (not the browser page refresh button).
Windows PowerShell
This section covers how to troubleshoot an issue you may find when running the installation tool with Windows PowerShell Terminal.
Enable Copy and Paste in Windows PowerShell
If you are unable to copy and paste in the installation tool, you may need to enable it from the Windows PowerShell Properties. Open the Properties menu by right clicking the header or by pressing Alt + Space.
Select the Options tab and enable Use Ctrl+Shift+C/V as Copy/Paste.
You can now copy and paste with Ctrl + Shift + C and Ctrl + Shift + V in the terminal.
Unable to Click a Button
If you highlight a button instead of clicking the button, you may need to disable the Quick Edit Mode from the Windows PowerShell Properties.
After disabling the option you need to apply the new settings. You can do this by navigating to another page with a shortcut key, or by closing the tool with Ctrl + C and reopening it with the installation command.
Containerized Mendix App Architecture
Within your cluster you can run one, or several, Mendix apps. Each app runs in an environment, and each environment is in a namespace. You can see the relationship between the Mendix environments and the Kubernetes namespaces in the image below.
To ensure that every app deployed to a namespace has a unique name, an Environment UUID is added to the environment name when it is deployed, making it unique in the project. This also ensures the app cannot have the same name as the Mendix tools used to deploy the app. See Deploying a Mendix App to a Mendix on Kubernetes Cluster for more information.