Creating a Private Cloud Cluster

Last modified: February 26, 2024

1 Introduction

To allow you to manage the deployment of your apps to Red Hat OpenShift and Kubernetes, you first need to create a cluster and add at least one namespace in the Mendix Developer Portal. This will provide you with the information you need to deploy the Mendix Operator and Mendix Gateway Agent in your OpenShift or Kubernetes context and create a link to the Environments pages of your Mendix app through the Interactor.

This document explains how to set up the cluster in Mendix.

Once you have created your namespace, you can invite additional team members who can then create or view environments in which their apps are deployed, depending on the rights you give them. For more information on the relationship between Mendix environments, Kubernetes namespaces, and Kubernetes clusters, see Containerized Mendix App Architecture, below.

2 Prerequisites for Creating a Cluster

To create a cluster in your OpenShift context, you need the following:

  • A supported Kubernetes platform; for more information, see Supported Versions
  • An administration account for your OpenShift or Kubernetes platform
  • OpenShift CLI installed (see Getting started with the CLI on the Red Hat OpenShift website for more information) if you are creating clusters on OpenShift
  • Kubectl installed if you are deploying to another Kubernetes platform (see Install and Set Up kubectl on the Kubernetes website for more information)
  • A command line terminal that supports the console API and mouse interactions. In Windows, this could be PowerShell or the Windows Command Prompt. See Terminal limitations, below, for a more detailed explanation.
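
A quick way to confirm that the CLI tools are installed is to check their versions. This is a minimal sketch, assuming the binaries are already on your PATH:

For OpenShift:

oc version

For Kubernetes:

kubectl version --client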

2.1 Connected Environments

If you plan to use a connected environment, the following URLs should be safelisted in your cluster’s operating system, as they point to services or resources required by the Connected Environments infrastructure.

  • https://interactor-bridge.private-cloud.api.mendix.com – WebSocket-based main communication API
  • https://package-store-prod-2.s3-accelerate.amazonaws.com/ – Registry for downloading MDA artifacts
  • https://private-cloud.registry.mendix.com – Docker registry for downloading Runtime base images
  • https://subscription-api.mendix.com – Service used to verify the call-home license
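
If you want to verify that these endpoints are reachable from inside the cluster, one option is to run a temporary pod with curl. This is only a sketch; the curlimages/curl image and the target URL are examples you can substitute:

kubectl -n {namespace} run connectivity-check --rm -it --restart=Never --image=curlimages/curl --command -- curl -sI https://private-cloud.registry.mendix.com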

3 Creating a Cluster and Namespace

3.1 Creating a Cluster

  1. Click Cloud Settings on the Settings page of your Mendix app.

  2. Click Mendix for Private Cloud.

  3. Click Set up Mendix for Private Cloud.

  4. Open the Global Navigation menu and select Deployment.

  5. Select Mendix for Private Cloud from the top menu bar in the Developer Portal.

  6. Click Register Cluster.

  7. Enter the following information:

    1. Installation Type – Choose Global Installation if you want a single operator namespace to manage multiple namespaces, or just a single namespace. For more information, see Global Operator.

    2. Cluster Name – The name that you want to give the cluster which you are creating.

    3. Cluster Type – Choose the correct type for your cluster. For more information, see Supported Providers.

    4. Description – An optional description of the cluster which will be displayed under the cluster name in the Cluster Manager.

  8. Click Create.

3.2 Adding a Namespace

You now need to add a namespace to your cluster. Your cluster can contain several namespaces; see Containerized Mendix App Architecture, below, for more information.

To add a namespace, do the following:

  1. Click Details on the top right of the page:

  2. Click Add Namespace:

  3. Enter the following details:

    • Namespace – this is the namespace in your platform; this must conform to the namespace naming conventions of the cluster: all lower-case with hyphens allowed within the name
    • Installation type – if you want to create environments and deploy your app from the Mendix Developer Portal, choose Connected, but if you only want to control your deployments through the Mendix Operator using the CLI, choose Standalone
  4. Click Done to create the namespace.

4 Installing and Configuring the Mendix Operator

You can install and run the Mendix Operator in either Global or Standard mode. In Global mode, the Operator is installed once and manages all available namespaces, whereas in Standard mode it is installed separately in each namespace where a Mendix app is deployed. For more information, see Global Operator.

5 Licensing the Application with Private Cloud License Manager

You can license the Operator and Runtime of your application by configuring the Operator configuration with the License Manager details. To start using Private Cloud License Manager, you first need to download the PCLM executable, which is available on the Installation page. For more information, see Private Cloud License Manager.

6 Advanced Operator Configuration

Some advanced configuration options of the Mendix Operator are not yet available in the Configuration Tool. These options can be changed by editing the OperatorConfiguration custom resource directly in Kubernetes.

Look at Supported Providers to ensure that your planned configuration is supported by Mendix for Private Cloud.

To start editing the OperatorConfiguration, use the following commands (replace {namespace} with the namespace where the operator is installed):

For OpenShift:

oc -n {namespace} edit operatorconfiguration mendix-operator-configuration

For Kubernetes:

kubectl -n {namespace} edit operatorconfiguration mendix-operator-configuration
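
If you only want to inspect the current configuration without opening an editor, you can print it instead (shown here with kubectl; the oc equivalent differs only in the binary name):

kubectl -n {namespace} get operatorconfiguration mendix-operator-configuration -o yaml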

6.1 Runtime Base Image

Starting from version 2.15.0, the OperatorConfiguration allows you to specify the base OS image tag template.

The Operator will parse the MDA file metadata and use this metadata to fill in the JavaVersion field.

apiVersion: privatecloud.mendix.com/v1alpha1
kind: OperatorConfiguration
# ...
# omitted lines for brevity
# ...
spec:
  baseOSImageTagTemplate: 'ubi8-1-jre{{.JavaVersion}}-entrypoint'

At the moment, the baseOSImageTagTemplate can be set to one of the following values:

  • ubi8-1-jre{{.JavaVersion}}-entrypoint - to use Red Hat UBI 8 Micro images; this is the default option.
  • ubi9-1-jre{{.JavaVersion}}-entrypoint - to use Red Hat UBI 9 Micro images; this option provides a newer OS and can improve security scores (a sketch of switching to it is shown below).
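
As a sketch, you can switch a namespace to the UBI 9 images with a single merge patch instead of opening the editor (replace {namespace} as above; use oc instead of kubectl on OpenShift):

kubectl -n {namespace} patch operatorconfiguration mendix-operator-configuration --type merge -p '{"spec":{"baseOSImageTagTemplate":"ubi9-1-jre{{.JavaVersion}}-entrypoint"}}'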

6.2 Endpoint (network) Configuration

The OperatorConfiguration contains the following user-editable options for network configuration:

When using Ingress for network endpoints:

apiVersion: privatecloud.mendix.com/v1alpha1
kind: OperatorConfiguration
# ...
# omitted lines for brevity
# ...
spec:
  # Endpoint (Network) configuration
  endpoint:
    # Endpoint type: ingress, openshiftRoute or service
    type: ingress
    # Optional, can be omitted: Service annotations
    serviceAnnotations:
      # example: custom AWS CLB configuration
      service.beta.kubernetes.io/aws-load-balancer-backend-protocol: tcp
      service.beta.kubernetes.io/aws-load-balancer-ssl-cert: arn:aws:acm:eu-west-1:account:certificate/id
      service.beta.kubernetes.io/aws-load-balancer-ssl-ports: "443"
    # Ingress configuration: used only when type is set to ingress
    ingress:
      # Optional, can be omitted: annotations which should be applied to all Ingress Resources
      annotations:
        # default annotation: allow uploads of files up to 500 MB in the NGINX Ingress Controller
        nginx.ingress.kubernetes.io/proxy-body-size: 500m
        # example: use the specified cert-manager ClusterIssuer to generate TLS certificates with Let's Encrypt
        cert-manager.io/cluster-issuer: staging-issuer
        # example: deny access to /rest-doc
        nginx.ingress.kubernetes.io/configuration-snippet: |
          location /rest-doc {
            deny all;
            return 403;
          }          
      # App URLs will be generated for subdomains of this domain, unless an app is using a custom appURL
      domain: mendix.example.com
      # Enable or disable TLS
      enableTLS: true
      # Optional: name of a kubernetes.io/tls secret containing the TLS certificate
      # This example is a template which lets cert-manager generate a unique certificate for each app
      tlsSecretName: '{{.Name}}-tls'
      # Optional: specify the Ingress class name
      ingressClassName: alb
      # Optional, can be omitted : specify the Ingress path
      path: "/"
      # Optional, can be omitted : specify the Ingress pathType
      pathType: ImplementationSpecific
# ...
# omitted lines for brevity
# ...

When using OpenShift Routes for network endpoints:

apiVersion: privatecloud.mendix.com/v1alpha1
kind: OperatorConfiguration
spec:
  # Endpoint (Network) configuration
  endpoint:
    # Endpoint type: ingress, openshiftRoute, or service
    type: openshiftRoute
    # OpenShift Route configuration: used only when type is set to openshiftRoute
    openshiftRoute:
      # Optional, can be omitted: annotations which should be applied to all Ingress Resources
      annotations:
        # example: use HSTS headers
        haproxy.router.openshift.io/hsts_header: max-age=31536000;includeSubDomains;preload
      # Optional: App URLs will be generated for subdomains of this domain, unless an app is using a custom appURL
      domain: mendix.example.com
      # Enable or disable TLS
      enableTLS: true
      # Optional: name of a kubernetes.io/tls secret containing the TLS certificate
      # This example is the name of an existing secret, which should be a wildcard matching subdomains of the domain name
      tlsSecretName: 'mendixapps-tls'

When using Services for network endpoints (without an Ingress or OpenShift route):

apiVersion: privatecloud.mendix.com/v1alpha1
kind: OperatorConfiguration
spec:
  # Endpoint (Network) configuration
  endpoint:
    # Endpoint type: ingress, openshiftRoute, or service
    type: service
    # Optional, can be omitted: the Service type
    serviceType: LoadBalancer
    # Optional, can be omitted: Service annotations
    serviceAnnotations:
      # example: annotations required for AWS NLB
      service.beta.kubernetes.io/aws-load-balancer-type: external
      service.beta.kubernetes.io/aws-load-balancer-nlb-target-type: ip
      service.beta.kubernetes.io/aws-load-balancer-scheme: internet-facing
    # Optional, can be omitted: Service ports
    servicePorts:
      - 80
      - 443

You can change the following options:

  • type: – select the Endpoint type, possible options are ingress, openshiftRoute and service; this parameter is also configured through the Configuration Tool
  • ingress: - specify the Ingress configuration, required when type is set to ingress
  • openshiftRoute: - specify the OpenShift Route configuration, required when type is set to openshiftRoute
  • annotations: - optional, can be used to specify the Ingress or OpenShift Route annotations, can be a template: {{.Name}} will be replaced with the name of the CR for the Mendix app, and {{.Domain}} will be replaced with the application’s domain name
  • serviceAnnotations: - optional, can be used to specify the Service annotations, can be a template: {{.Name}} will be replaced with the name of the CR for the Mendix app, and {{.Domain}} will be replaced with the application’s domain name
  • ingressClassName: - optional, can be used to specify the Ingress Class name
  • path: - optional, can be used to specify the Ingress path; default value is /
  • pathType: - optional, can be used to specify the Ingress pathType; if not set, no pathType will be specified in Ingress objects
  • domain: - optional for openshiftRoute, required for ingress, used to generate the app domain in case no app URL is specified; if left empty when using OpenShift Routes, the default OpenShift apps domain will be used; this parameter is also configured through the Configuration Tool
  • enableTLS: - allows you to enable or disable TLS for the Mendix App’s Ingress or OpenShift Route
  • tlsSecretName: - optional name of a kubernetes.io/tls secret containing the TLS certificate, can be a template: {{.Name}} will be replaced with the name of the CR for the Mendix app; if left empty, the default TLS certificate from the Ingress Controller or OpenShift Router will be used
  • serviceType: - can be used to specify the Service type, possible options are ClusterIP and LoadBalancer; if not specified, Services will be created with the ClusterIP type
  • servicePorts: - can be used to specify a list of custom ports for the Service; if not specified, Services will be created with port 8080

6.3 Mendix App Deployment settings

The OperatorConfiguration contains the following user-editable options for configuring Mendix app Deployments (Pods):

apiVersion: privatecloud.mendix.com/v1alpha1
kind: OperatorConfiguration
spec:
  # Optional: allow Mendix app Pods to get a Kubernetes Service Account token
  runtimeAutomountServiceAccountToken: true
  # Optional: annotations for Mendix app Pods
  runtimeDeploymentPodAnnotations:
    # example: inject the Linkerd proxy sidecar
    linkerd.io/inject: enabled
    # example: enable Prometheus metrics scraping
    prometheus.io/path: /metrics
    prometheus.io/port: '8900'
    prometheus.io/scrape: 'true'

You can change the following options:

  • runtimeAutomountServiceAccountToken: – specify if Mendix app Pods should get a Kubernetes Service Account token; defaults to false; should be set to true when using Linkerd Automatic Proxy Injection
  • runtimeDeploymentPodAnnotations: – specify default annotations for Mendix app Pods

6.4 Mendix App Resource Customization

The Deployment object that controls the pod of a given Mendix application contains user-editable options for fine-tuning the application’s runtime resources.

The Deployment object has a name in the following format:

<internal environment name>-master

Below is an example of the Deployment definition of an app. In this example, the Deployment definition is called b8nn6lq5-master:

apiVersion: apps/v1
kind: Deployment
# ...
# omitted lines for brevity
# ...
spec:
  progressDeadlineSeconds: 600
  replicas: 1
  revisionHistoryLimit: 0
  # ...
  # omitted lines for brevity
  # ...
  template:
    metadata:
      # ...
      # omitted lines for brevity
      # ...
      creationTimestamp: null
      labels:
        app: b8nn6lq5
        component: mendix-app
        node-type: master
    spec:
      automountServiceAccountToken: false
      containers:
      - env:
        - name: M2EE_ADMIN_LISTEN_ADDRESSES
          value: 127.0.0.1
        - name: M2EE_ADMIN_PORT
          value: "9000"
        - name: M2EE_ADMIN_PASS
          valueFrom:
            secretKeyRef:
              key: adminpassword
              name: b8nn6lq5-m2ee
        image: image-registry.openshift-image-registry.svc:5000/test-app/b8nn6lq5
        imagePullPolicy: Always
        ports:
          - containerPort: 8080
            name: mendix-app
            protocol: TCP
        name: mendix
        livenessProbe:
          failureThreshold: 3
          httpGet:
            path: /m2ee-sidecar/v1/healthz
            port: 8800
            scheme: HTTP
          initialDelaySeconds: 60
          periodSeconds: 15
          successThreshold: 1
          timeoutSeconds: 3
        readinessProbe:
          failureThreshold: 3
          httpGet:
            path: /
            port: mendix-app
            scheme: HTTP
          initialDelaySeconds: 5
          periodSeconds: 1
          successThreshold: 1
          timeoutSeconds: 1
        terminationGracePeriodSeconds: 300
        resources:
          limits:
            cpu: 1
            memory: 512Mi
          requests:
            cpu: 100m
            memory: 512Mi
# ...
# omitted lines for brevity
# ...

6.4.1 Resource Definition via Operator Configuration Manifest

For a given namespace, all the resource information is aggregated in the mendix-operator-configuration manifest. This centralizes and overrides all the configuration explained above. An example of the Operator configuration manifest is shown below; note that it is for reference purposes only.

apiVersion: privatecloud.mendix.com/v1alpha1
kind: OperatorConfiguration
# ...
# omitted lines for brevity
# ...
spec:
  sidecarResources:
    limits:
      cpu: 250m
      memory: 32Mi
    requests:
      cpu: 100m
      memory: 16Mi
  metricsSidecarResources:
    limits:
      cpu: 100m
      memory: 32Mi
    requests:
      cpu: 100m
      memory: 16Mi
  buildResources:
    limits:
      cpu: '1'
      memory: 256Mi
    requests:
      cpu: 250m
      memory: 64Mi
  runtimeResources:
    limits:
      cpu: 1000m
      memory: 512Mi
    requests:
      cpu: 100m
      memory: 512Mi
  runtimeLivenessProbe:
    initialDelaySeconds: 60
    periodSeconds: 15
  runtimeReadinessProbe:
    initialDelaySeconds: 5
    periodSeconds: 1
  # startup probes are deprecated in Mendix Operator 2.15.0
  runtimeStartupProbe:
    failureThreshold: 30
    periodSeconds: 10
  runtimeTerminationGracePeriodSeconds: 300

The following fields can be configured:

  • liveness, readiness, and terminationGracePeriodSeconds – these are used for all Mendix app deployments in the namespace; any changes made in the deployments will be discarded and overwritten with values from the OperatorConfiguration resource
  • sidecarResources – this is used for all m2ee-sidecar containers in the namespace
  • metricsSidecarResources – this is used for all m2ee-metrics containers in the namespace
  • runtimeResources – this is used for mendix-runtime containers in the namespace (but this is overwritten if the Mendix app CRD has a resources block)
  • buildResources – this is used for the main container in *-build pods

6.4.2 Customize Liveness Probe to Resolve Crash Loopback Scenarios

The liveness probe informs the cluster whether the pod is dead or alive. If the pod fails to respond to the liveness probe, the pod is restarted; repeated failures put the pod into a crash loop (CrashLoopBackOff).

The readiness probe, on the other hand, is designed to check if the cluster is allowed to send network traffic to the pod. If the pod fails this probe, requests will no longer be sent to the pod.

Let us now analyze the liveness probe section from the application deployment example, above:

livenessProbe:
  failureThreshold: 3
  httpGet:
    path: /m2ee-sidecar/v1/healthz
    port: 8800
    scheme: HTTP
  initialDelaySeconds: 60
  periodSeconds: 15
  successThreshold: 1
  timeoutSeconds: 1

The following fields can be configured:

  • initialDelaySeconds – the number of seconds after the container has started that the probe is initiated. Minimum value is 0.
  • periodSeconds – how often (in seconds) to perform the probe. Default is 10 seconds. Minimum value is 1.
  • timeoutSeconds – the number of seconds after which the probe times out. Default is 3 seconds. Minimum value is 1.
  • successThreshold – the number of consecutive successes required before the probe is considered successful after having failed. Defaults to 1. Must be 1 for liveness and startup Probes. Minimum value is 1.
  • failureThreshold – the number of times Kubernetes will retry when a probe fails before giving up. Giving up in case of a liveness probe means restarting the container. Defaults to 3. Minimum value is 1.

6.4.3 Customize Startup Probes for Slow Starting Applications

If you want to wait before executing a liveness probe, use initialDelaySeconds or a startup probe.

A startup probe should be used when the application in your container could take a significant amount of time to reach its normal operating state. Applications that would crash or throw an error if they handled a liveness or readiness probe during startup need to be protected by a startup probe. This ensures the container doesn’t continually restart due to failing health checks before it has finished launching. Using a startup probe is much better than increasing initialDelaySeconds on readiness or liveness probes. Startup probes defer the execution of liveness and readiness probes until a container indicates it is able to handle them because Kubernetes doesn’t direct the other probe types to the container if it has a startup probe that hasn’t yet succeeded.

You can see an example of a startup probe configuration below:

startupProbe:
  httpGet:
    path: /
    port: mendix-app
    scheme: HTTP
  failureThreshold: 30
  periodSeconds: 10

In this example, the application will have a maximum of 5 minutes (30 * 10 = 300s) to finish its startup. Once the startup probe has succeeded once, the liveness probe takes over to provide a fast response to container deadlocks. If the startup probe never succeeds, the container is killed after 300s and subject to the pod’s restartPolicy.

6.4.4 Customize terminationGracePeriodSeconds for Gracefully Shutting Down the Application Pod

Using terminationGracePeriodSeconds, the application is given a certain amount of time to terminate. The default value is 300 seconds. If your pod usually takes longer than 300 seconds to shut down, you can increase the grace period by setting the terminationGracePeriodSeconds key in the pod’s spec, as in the following example.

terminationGracePeriodSeconds: 300
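
Because values edited directly in the Deployment are overwritten by the Operator (see Resource Definition via Operator Configuration Manifest, above), the namespace-wide value is best changed through the runtimeTerminationGracePeriodSeconds field of the OperatorConfiguration. A minimal sketch, using an assumed value of 600 seconds:

kubectl -n {namespace} patch operatorconfiguration mendix-operator-configuration --type merge -p '{"spec":{"runtimeTerminationGracePeriodSeconds":600}}'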

6.4.5 Customize Container Resources: Memory and CPU

The following shows an example configuration of the resources section from the example application deployment, above. Note that the configuration is for reference purposes only.

resources:
  limits:
    cpu: 1
    memory: 512Mi
  requests:
    cpu: 100m
    memory: 512Mi

This section allows the configuration of the lower and upper resource boundaries, the requests and limits respectively.

The settings in the example above mean that:

  • the container will always receive at least the resources set in requests
  • if the server node where a pod is running has enough of a given resource available, the container can be granted more of that resource than its requests
  • a container will never be granted more than its resource limits

6.4.5.1 Meaning of CPU

Limits and requests for CPU resources are measured in cpu units. One CPU, in this context, is equivalent to 1 vCPU/Core for cloud providers and 1 hyperthread on bare-metal Intel processors.

Fractional requests are allowed. For instance, in this example, we are requesting 100m, which can be read as one hundred millicpu, and limiting to a maximum of 1 CPU (1000m).

A precision finer than 1m is not allowed.

6.4.5.2 Meaning of Memory

Limits and requests for memory are measured in bytes. You can express memory as a plain integer or as a fixed-point number using one of these suffixes: E, P, T, G, M, K. You can also use the power-of-two equivalents: Ei, Pi, Ti, Gi, Mi, Ki. For example, the following represent roughly the same value: 128974848, 129e6, 129M, 123Mi

For instance, in the example above, we are requesting and limiting memory usage to roughly 512MiB.

6.5 Customize Runtime Metrics

Mendix for Private Cloud provides a Prometheus API, which can be used to collect metrics from Mendix apps.

runtimeMetricsConfiguration allows you to specify the default metrics configuration for a namespace. Any configuration values from runtimeMetricsConfiguration can be overridden for an environment using the MendixApp CR (see Generating Metrics for more details).

An example of the runtimeMetricsConfiguration in the operator configuration manifest is given below.

apiVersion: privatecloud.mendix.com/v1alpha1
kind: OperatorConfiguration
    # …
spec:
  runtimeMetricsConfiguration:
    mode: native
    interval: "PT1M"
    mxAgentConfig: |-
      {
        "requestHandlers": [
          {
            "name": "*"
          }
        ],
        "microflows": [
          {
            "name": "*"
          }
        ],
        "activities": [
          {
            "name": "*"
          }
        ]
      }      
    mxAgentInstrumentationConfig: |-
      {
      }      
  # …

You can set the following metrics configuration values:

  • mode: metrics mode, native or compatibility. native mode is only available for Mendix 9.7 and above. See Metrics Generation Modes in Monitoring Environments in Mendix for Private Cloud for more information.
  • interval: Interval between Prometheus scrapes specified in ISO 8601 duration format (for example, ‘PT1M’ would be an interval of one minute). This should be aligned with your Prometheus configuration. If left empty it defaults to 1 minute (matching the default Prometheus scrape interval). This attribute is only applicable when mode is native.
  • mxAgentConfig: configuration for the Java instrumentation agent; collects additional metrics such as microflow execution times; can be left empty to disable the instrumentation agent. This attribute is only applicable when mode is native.
  • mxAgentInstrumentationConfig: instrumentation configuration for the Java instrumentation agent; collects additional metrics such as microflow execution times; can be left empty to use the default instrumentation config. This attribute is only applicable when mode is native, and mxAgentConfig is not empty.

For more information about collecting metrics in Mendix for Private Cloud, see Monitoring Environments in Mendix for Private Cloud.

6.6 Customize Service Account

The Mendix environment can be configured to use a specific Kubernetes ServiceAccount instead of the default ServiceAccount.

To achieve this, you need to add the annotation privatecloud.mendix.com/environment-account: true to the ServiceAccount (for security reasons, any account matching an environment name but without this annotation cannot be attached to environments).

If required, you can use additional annotations. For example, in order to authenticate with AWS services instead of with static credentials, you can attach an AWS IAM role to an environment and use IRSA.
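
A minimal sketch of such a ServiceAccount is shown below. The name is assumed to match the environment’s internal name (the b8nn6lq5 example from above is reused), and the eks.amazonaws.com/role-arn annotation with its placeholder ARN is only relevant when using IRSA:

apiVersion: v1
kind: ServiceAccount
metadata:
  name: b8nn6lq5
  namespace: {namespace}
  annotations:
    # required: allows this account to be attached to the matching environment
    privatecloud.mendix.com/environment-account: "true"
    # example only (IRSA): attach an AWS IAM role to the environment
    eks.amazonaws.com/role-arn: arn:aws:iam::123456789012:role/example-app-role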

6.7 Autoscaling

Mendix for Private Cloud is compatible with multiple types of Kubernetes autoscalers.

6.7.1 Cluster Autoscaling

The Kubernetes cluster autoscaler monitors resource usage and automatically adjusts the size of the cluster based on its resource needs.

Mendix for Private Cloud is compatible with cluster autoscaling. To install and enable cluster autoscaling, follow your cluster vendor’s recommended way of configuring the cluster autoscaler.

6.7.2 Horizontal Pod Autoscaling

Horizontal pod autoscaling is a standard Kubernetes feature and can automatically add or remove pods based on metrics, such as average CPU usage.

Enabling horizontal pod autoscaling allows you to increase processing capacity during peak loads and reduce resource usage during periods of low activity. Horizontal pod autoscaling can be combined with cluster autoscaling, so that the cluster and environment are automatically optimized for the current workload.

To enable horizontal pod autoscaling for an environment, run the following command:

kubectl -n {namespace} autoscale mendixapp {envname} --cpu-percent=50 --min=1 --max=10

Replace {namespace} with the namespace name, and {envname} with the MendixApp CR name (the environment internal name). Use --cpu-percent to specify the target CPU usage, and --min and --max to specify the minimum and maximum number of replicas.

To configure additional horizontal pod autoscaling, run the following command:

kubectl -n {namespace} edit horizontalpodautoscaler {envname}

Replace {namespace} with the namespace name, and {envname} with the MendixApp CR name (the environment internal name). The Kubernetes Horizontal pod autoscaling documentation explains additional available autoscaling options.
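
For reference, the autoscale command above produces a HorizontalPodAutoscaler roughly equivalent to the following sketch. This assumes the MendixApp CR exposes the scale subresource under the privatecloud.mendix.com API group, as the command implies; the names remain placeholders:

apiVersion: autoscaling/v2
kind: HorizontalPodAutoscaler
metadata:
  name: {envname}
  namespace: {namespace}
spec:
  scaleTargetRef:
    apiVersion: privatecloud.mendix.com/v1alpha1
    kind: MendixApp
    name: {envname}
  minReplicas: 1
  maxReplicas: 10
  metrics:
    - type: Resource
      resource:
        name: cpu
        target:
          type: Utilization
          averageUtilization: 50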

When an environment is scaled (manually or automatically), it will not be restarted. Adjusting the number of replicas will not cause downtime - as long as the number of replicas is greater than zero. Scaling an environment up (increasing the number of replicas) adds more pods - without restarting any already running pods; once the additional pods become available, they will start receiving HTTP (or HTTPS) requests. Scaling an environment down (decreasing the number of replicas) removes some of the running pods - without restarting remaining pods; all HTTP (or HTTPS) traffic will be routed to the remaining pods.

6.7.3 Vertical Pod Autoscaling

Vertical pod autoscaling can automatically configure CPU and memory resources and requirements for a pod.

Mendix Operator version 2.4.0 or above has the APIs required by the vertical pod autoscaler.
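
As a sketch, a VerticalPodAutoscaler could target the app Deployment directly. This assumes the vertical pod autoscaler components are installed in your cluster and reuses the example Deployment name from above; updateMode "Off" limits the autoscaler to producing recommendations only:

apiVersion: autoscaling.k8s.io/v1
kind: VerticalPodAutoscaler
metadata:
  name: b8nn6lq5-vpa
  namespace: {namespace}
spec:
  targetRef:
    apiVersion: apps/v1
    kind: Deployment
    name: b8nn6lq5-master
  updatePolicy:
    updateMode: "Off"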

6.8 Log format

6.8.1 Runtime log format

Mendix Operator version 2.11.0 or above allows you to specify the log format used by Mendix apps.

To specify the log format, add a runtimeLogFormatType entry to OperatorConfiguration:

apiVersion: privatecloud.mendix.com/v1alpha1
kind: OperatorConfiguration
spec:
  # ...
  # Other configuration options values
  # Optional: log format type
  runtimeLogFormatType: json

You can set runtimeLogFormatType to one of the following values:

  • plain: – default option, produces plaintext logs in the following format:

    2023-03-21 14:36:14.607 INFO - M2EE: Added admin request handler '/prometheus' with servlet class 'com.mendix.metrics.prometheus.PrometheusServlet'
    
  • json: – produces JSON logs in the following format:

    {"node":"M2EE","level":"INFO","message":"Added admin request handler '/prometheus' with servlet class 'com.mendix.metrics.prometheus.PrometheusServlet'","timestamp":1679409374607}
    

6.9 Pod labels

6.9.1 General pod labels

Mendix Operator version 2.13.0 or above allows you to specify default pod labels for app-related pods: task pods (build and storage provisioners) and runtime (app) pods.

To specify default pod labels for a namespace, specify them in customPodLabels.general in OperatorConfiguration:

apiVersion: privatecloud.mendix.com/v1alpha1
kind: OperatorConfiguration
spec:
  # ...
  # Other configuration options values
  # Optional: custom pod labels
  customPodLabels:
    # Optional: general pod labels (applied to all app-related pods)
    general:
      # Example: enable Azure Workload Identity
      azure.workload.identity/use: "true"

Alternatively, for Standalone clusters, pod labels can be specified in the MendixApp CR for a specific app.

6.10 GKE Autopilot Workarounds

One of the key features of GKE Autopilot is its ability to automatically adjust resource settings based on the observed resource utilization of the containers. GKE Autopilot verifies the resource allocations and limits for all containers, and adjusts deployments when the resources do not match its requirements.

As a result, there can be a continuous back-and-forth interaction between Mx4PC and GKE Autopilot, where both entities engage in a loop, attempting to counteract each other’s modifications to deployments and pods.

To address this issue, you can configure the Mendix Operator to align with GKE’s requirements. This involves setting the resources (specifically, the CPU, memory, and ephemeral storage) to be equal to the limits defined in the OperatorConfiguration for both the sidecar and metrics-sidecar containers. Along with this, you must ensure that the resource limits for the CPU, memory, and ephemeral storage are equal to the resource requests in the Private Cloud Portal. For more information on setting the core resources on the Portal, see Custom Core Resource Plan.

You must also create a patch file for configuring the core resources in the OperatorConfiguration, as in the following example:

spec:
  buildResources:
    limits:
      cpu: "1"
      memory: 256Mi
    requests:
      cpu: "1"
      memory: 256Mi
  metricsSidecarResources:
    limits:
      cpu: 100m
      memory: 32Mi
    requests:
      cpu: 100m
      memory: 32Mi

Run the following command in order to update the core resources in the OperatorConfiguration:

kubectl -n {namespace} patch OperatorConfiguration mendix-operator-configuration --type merge -p "$(cat <patchedFile>)"

7 Cluster and Namespace Management

Once it is configured, you can manage your cluster and namespaces through the Developer Portal.

7.1 Cluster Overview

Go to the Cluster Manager page by clicking Cluster Manager in the top menu of the Clouds page of the Developer Portal.

From this page you can see a summary of your clusters with all their namespaces and an indication of the namespace status and how long it has been running (runtime).

7.1.1 Managing the Cluster

Here you can perform the following actions on the entire cluster:

  • Delete the cluster by clicking Delete
  • Rename the cluster or edit its description by clicking Edit
  • Invite another cluster manager

You can also see the activities logged for all your clusters by clicking Activity in the cluster overview page. This shows the following:

  • When a cluster has been added
  • When a cluster description is added
  • When the name of the cluster is changed
  • When the cluster description is changed

7.2 Namespace Management

If you are a member of a namespace, you can also manage a namespace in the cluster.

Click the Details button for the namespace you want to manage.

On the namespace management page, there are a number of tabs which allow you to manage aspects of your namespace:

  • Apps
  • Members
  • Operate
  • Plans
  • Installation
  • Additional information
  • Customization
  • PCLM Statistics

See the sections below for more information.

You can also delete your namespace from the cluster manager by clicking Delete Namespace in the top right.

If there are any environments associated with the namespace, you cannot delete the namespace until the environments associated with it are deleted.

When you delete a namespace, this removes the namespace from the cluster in the Developer Portal. However, it will not remove the namespace from your platform. You will need to explicitly delete the namespace using the tools provided by your platform.

In order to delete the namespace from the cluster, perform the following steps:

  1. Ensure that all the environments under this namespace are removed. You can check the list of environments under this namespace using the following command:

    For OpenShift:

    oc -n {namespace} get mendixapp
    

    For Kubernetes:

    kubectl -n {namespace} get mendixapp
    
  2. If any Mendix apps still exist in the namespace, you can delete them by using the following command, where internalId is the ID of the environment:

    For OpenShift:

    oc -n {namespace} delete mendixapp {internalId}
    

    For Kubernetes:

    kubectl -n {namespace} delete mendixapp {internalId}
    
  3. Wait until the storage provisioner completes the process of deleting the storage instance related to the environment. You can check if there are any existing storage instances by running the following command:

    For OpenShift:

    oc -n {namespace} get storageinstance
    

    For Kubernetes:

    kubectl -n {namespace} get storageinstance
    
  4. If there are any failed storage instances, you can check their logs by running the following command:

    For OpenShift:

    oc -n {namespace} log {storageinstance-name}
    

    For Kubernetes:

    kubectl -n {namespace} log {storageinstance-name}
    
  5. If there are any remaining storage instances, you can delete them by using the following command:

    For OpenShift:

    oc patch -n {namespace} storageinstance {name} --type json -p='[{"op": "remove", "path": "/metadata/finalizers"}]'
    

    For Kubernetes:

    kubectl patch -n {namespace} storageinstance {name} --type json -p='[{"op": "remove", "path": "/metadata/finalizers"}]'
    
  6. After manually removing the storage instance, manually clean up any resources associated with it, such as the database, S3 bucket or associated AWS IAM account in the cluster.

  7. Once all the storage instances are deleted successfully, you can delete the namespace from the cluster by using the following command:

    For OpenShift:

    oc delete ns {namespace}
    

    For Kubernetes:

    kubectl delete ns {namespace}
    

You can also see an activity log containing the following information for all namespaces within the cluster:

  • When a namespace has been added
  • When a namespace has been deleted
  • When a cluster manager has been added
  • When a cluster manager invitation is removed
  • When a cluster manager accepts the invitation
  • When a cluster manager is removed from the cluster
  • When a new database plan is added in a namespace
  • When a database plan is deactivated
  • When a new storage plan is added in a namespace
  • When a storage plan is deactivated
  • When Metrics/Alerts/Logs/Backups URLs are added in the namespace
  • When Metrics/Alerts/Logs/Backups URLs are removed from the namespace
  • When Metrics/Alerts/Logs/Backups URLs are changed in the namespace
  • When a user is invited as a namespace member
  • When a user invitation for namespace member is removed
  • When a user accepts the invitation as a namespace member
  • When a user is removed as a namespace member
  • When a user’s permission is changed in the namespace
  • When environment configurations are added, updated, or removed
  • When Runtime Metrics configurations are added, updated, or deleted
  • When developer mode is enabled in the namespace
  • When developer mode is disabled in the namespace

7.2.1 Apps

The Apps tab of namespace details in the cluster manager page lists all the app environments which are deployed to this namespace.

If you are a team member of the app, click Details to go to the Environment Details page for that app.

If you are a cluster administrator, you can also click Configure to configure the environment by adding annotations for pods, ingress, and service.

7.2.1.1 Configure Environment

You can add, edit, and delete annotations for your environment.

To add a new annotation, do the following.

  1. Click Add.
  2. Choose the Annotation type from the dropdown.
  3. Enter the Key and the Value for the annotation.
  4. Click Save.

You can also Edit or Delete an existing annotation by selecting it and clicking the appropriate button.

You can configure the runtime metrics for the environment in the Runtime section. For more information, see Customize Runtime Metrics.

You can also configure the pod labels for the environment in the Labels section. For more information, see App Pod Labels.

7.2.2 Members

By default, the cluster manager who created the cluster in Mendix, and anyone added as a cluster manager, have full administration rights to the cluster and its namespaces. These cluster managers will also need to be given the appropriate permissions on the Kubernetes or OpenShift cluster. The administration rights are:

  • Add and delete namespaces
  • Add, activate, or deactivate plans
  • Invite and manage users

The following rights are available to the cluster creator, and members of a namespace with appropriate authorization:

  • Set up operating URLs for the namespace
  • View all environments in the namespace
  • Manage own environments – user can create and manage an environment in any namespace in the cluster

The following actions require the appropriate access to the namespace and access to the app environment as a team member with appropriate authorization:

  • Manage environment – user can navigate to the environment details section and edit the environment name and core resources
  • Deploy App – user can deploy a new app to the environment
  • Scale App – user can change the number of replicas
  • Start App
  • Stop App
  • Modify MxAdmin Password
  • Edit App Constants
  • Manage App Scheduled Events
  • View App Logs
  • View App Alerts
  • View App Metrics
  • Manage App Backups
  • Manage Debugger
  • Manage TLS configurations
  • Manage Custom Runtime Settings
  • Manage Log levels
  • Manage Client Certificates
  • Manage Custom Environment Variables and JVM options
  • Manage Runtime Metrics Configuration

The Members tab allows you to manage the list of members of the namespace and control what rights they have.

7.2.2.1 Adding Members

You can invite additional members to the namespace, and configure their role depending on what they should be allowed to do.

  1. The Members tab displays a list of current members (if any).

  2. Click Invite Member.

  3. Enter the Email of the person you want to invite.

  4. Give them the rights they need. This can be:

    1. Developer – a standard set of rights needed by a developer; these are listed on the screen
    2. Administrator – a standard set of rights needed by an administrator; these are listed on the screen
    3. Custom – you can select a custom set of rights by checking the box next to each role you want to give to this person

    With custom permissions, we have now decoupled the permissions for Scale, Start and Stop operations. If an application is in the Stopped state, the scaling does not come into effect until the application is Started. This means that you have to click Start application in order for the changes to be sent to the cluster. Along with this, we have also decoupled the permission for modifying the MxAdmin password and managing environments.

  5. Click Send Invite to send an invite to this person.

  6. The user will receive an email and will be required to follow a link to confirm that they want to join this namespace. They will need to be logged in to Mendix when they follow the confirmation link.

7.2.2.2 Editing and Removing Members

You can change the access rights for, or completely remove, existing members.

  1. Click Edit next to the member you want to change.

  2. Either:

    1. Make changes and click Save.

    2. Click Remove member to remove this member completely. You will be asked to confirm this action.

7.2.3 Operate

The Operate tab allows you to add a set of links which are used when users request an operations page for their app in the Developer Portal. The following pages can be configured:

  • Metrics
  • Alerts
  • Logs
  • Backups

The specification of these pages is optional.

Open the Operate tab, enter the URLs relevant to your namespace, and click Save for each one.

7.2.4 Plans

The Plans tab shows you the database and storage plans which are currently configured for your namespace.

7.2.4.1 Deactivating a Plan

Disable the toggle button next to the name of the plan you wish to deactivate. You cannot remove plans from within the cluster manager, but you can deactivate them to ensure that developers cannot create environments using the plan. Any environments currently using the plan will not be affected by this setting.

7.2.4.2 Activating a Plan

Enable the toggle button next to the name of the plan you wish to activate. The plan can then be used by developers when they create an environment to deploy their apps.

7.2.4.3 Deleting a Plan

You can only delete storage or database plans if they are not used in any of your environments, regardless of whether they are active or inactive.

7.2.5 Custom Core Resource Plan

Here, you can create a customized plan for your core resources.

  1. Click Add New Plan.

  2. Provide a name to the plan under Plan Name.

  3. Provide the required CPU Limits, CPU Request, Memory Limit, Memory Request, Ephemeral Storage Request, and Ephemeral Storage Limit values.

  4. Click OK to save the customized resource plan.

  5. In order to make the customized plan available to the customer, make sure to enable the toggle button next to Use custom core resources plans.

7.2.6 Installation

The Installation tab shows you the Configuration Tool which you used to create the namespace, together with the parameters which are used to configure the agent. You can use the Configuration Tool again to change the configuration of your namespace by pasting the command into a command line terminal as described in Running the Configuration Tool, above. You can also download the Configuration Tool again, if you wish.

7.2.7 Additional Information

This tab shows information on the versions of the various components installed in your namespace.

7.2.8 Customization

This tab allows the cluster manager to customize the enablement of the secret store, developer mode for the developers, and product type for the PCLM Runtime License.

Enabling the External Secrets Store option allows users to retrieve the following secrets from an external secrets store:

  • Database plan
  • Storage plan
  • MxAdmin password
  • Custom runtime settings
  • MxApp constants

Enabling the Development Mode option will allow users to change the type of an environment to Development.

If PCLM is configured, the default product type for Runtime licenses is set to standard. However, if the product type for PCLM Runtime licenses in the license server differs from Standard, you can customize it here. To check the product type of the Runtime license, navigate to the PCLM Statistics page, and then select Runtime in the Select type field.

The selected product type will be applied to all environments within this namespace, and associated environments will adopt the license of this specific product type.

7.2.9 PCLM Statistics

This tab shows information about claimed licenses, operator licenses and runtime licenses.

Select Claim to view a list of licenses from the license bundle which are claimed in the namespace.

Select Operator to view a list of all the Operator licenses in the bundle.

Select Runtime to view a list of all the Runtime licenses in the bundle.

Select Export in Excel to export the above lists.

If you would like to see the license payload, click Show License Payload.

For more information, see Private Cloud License Manager.

8 Current Limitations

8.1 Storage Provisioning

If the Operator fails to provision or deprovision storage (a database or file storage), it will not retry the operation. If there is a failed *-database or *-file pod, you’ll need to do the following:

  1. Check the failed pod logs for the error message.
  2. Troubleshoot and fix the cause of this error.
  3. Delete the failed pod to retry the process (see the example commands below).
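
For example, you could inspect and retry a failed provisioning pod as follows (a sketch only; the pod name is a placeholder, and oc can be used instead of kubectl on OpenShift):

kubectl -n {namespace} logs {failed-pod-name}
kubectl -n {namespace} delete pod {failed-pod-name}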

8.2 Restart Required When Switching Between Ingress and OpenShift Route

Starting with Mendix Operator version 1.5.0, the operator will monitor only one network resource type: Ingress or OpenShift route.

If you switch between Ingress and OpenShift Route, you will need to restart the Mendix Operator so that it can monitor the right network resource (replace {namespace} with the namespace where the Operator is installed). This can be done as follows:

For OpenShift:

oc -n {namespace} scale deployment mendix-operator --replicas=0
oc -n {namespace} scale deployment mendix-operator --replicas=1

For Kubernetes:

kubectl -n {namespace} scale deployment mendix-operator --replicas=0
kubectl -n {namespace} scale deployment mendix-operator --replicas=1

8.3 Terminal limitations

8.3.1 Windows

The Windows version of the Configuration Tool must be run in a terminal that supports the Windows console API and has mouse support. PowerShell and the Windows Command Prompt are supported.

8.3.2 Linux and macOS

When running the installation tool over SSH, make sure that the SSH client supports terminal emulation and has mouse support enabled.

ssh.exe in Windows doesn’t support mouse click forwarding, so another SSH client, such as MobaXterm or PuTTY, should be used instead.

9 Troubleshooting

9.1 Status Reporting

This section covers an issue which can arise where Mendix cannot recover automatically and manual intervention may be required.

Under some circumstances changes in the status of the cluster, namespaces, and environments will not be updated automatically. To ensure you are seeing the current status, you may need to click the Refresh button on the screen (not the browser page refresh button).

9.2 Windows PowerShell

This section covers how to troubleshoot an issue you may find when running the installation tool with Windows PowerShell Terminal.

9.2.1 Enable Copy and Paste in Windows PowerShell

If you are unable to copy and paste in the installation tool, you may need to enable it from the Windows PowerShell Properties. Open the Properties menu by right clicking the header or by pressing Alt + Space.

Select the Options tab and enable Use Ctrl+Shift+C/V as Copy/Paste

You can now copy and paste with Ctrl+Shift+C and Ctrl+Shift+V in the terminal.

9.2.2 Unable to Click a Button

If you highlight a button instead of clicking the button, you may need to disable the Quick Edit Mode from the Windows PowerShell Properties.

After disabling the option, you need to apply the new settings. You can do this by navigating to another page with a shortcut key, or by closing the tool with Ctrl+C and reopening it with the installation command.

10 Containerized Mendix App Architecture

Within your cluster you can run one, or several, Mendix apps. Each app runs in an environment, and each environment is in a namespace. You can see the relationship between the Mendix environments and the Kubernetes namespaces in the image below.

To ensure that every app deployed to a namespace has a unique name, an Environment UUID is added to the environment name when the app is deployed. This also ensures the app cannot have the same name as the Mendix tools used to deploy the app. See Deploying a Mendix App to a Private Cloud Cluster for more information.