Monitoring Environments in Mendix for Private Cloud

Last modified: November 29, 2023

1 Introduction

Mendix for Private Cloud provides a Prometheus API that can be scraped by a local Prometheus server. This API can also be used by other monitoring solutions that support scraping the Prometheus API.

The metrics API can only be accessed inside the Kubernetes cluster, and metrics are never sent to the Mendix Private Cloud Portal. To collect, store, and display metrics, you will need to install a local monitoring solution.

Mendix for Private Cloud writes all logs to the standard output (stdout and stderr). Any Kubernetes log processing solution should be able to read and collect them. This document shows an example of how to use Grafana Loki and Promtail to collect those logs.

This document will help you quickly set up a solution for monitoring Mendix for Private Cloud environments. You can customize this solution to match the requirements of your team or organization.

1.1 Metrics Generation Modes

Mendix Operator v2.4.0 and above offers several modes for collecting and generating metrics.

Mode                        Native              Compatibility
Mendix Operator version     v2.4.0 and above    v2.1.0 and above
Supported Mendix versions   9.7 and above       7.23 and above
Metrics activities          Yes                 No
Microflow execution times   Yes                 No
Custom metrics              Yes                 No
Rigid format                No                  Yes
Metrics generated by        Mendix Runtime      m2ee-metrics sidecar

Mendix 9.7 and above can generate Prometheus metrics directly in the Runtime, which allows the generation of custom or app-specific metrics. When the metrics generation mode of a Mendix for Private Cloud environment is set to native, Prometheus metrics are collected directly from the Mendix Runtime. Depending on the specific Mendix Runtime version used, there might be small differences in metric names and labels.

Mendix 9.6 and below cannot generate Prometheus metrics; instead, these versions provide a fixed set of metrics through the admin port JSON API. When metrics generation is set to compatibility mode, Mendix for Private Cloud adds an additional m2ee-metrics sidecar to the environment’s pods. This sidecar acts as an adapter: it listens for Prometheus scrape requests, collects metrics from the Mendix Runtime admin port, and converts those metrics into the Prometheus format. The metric names and labels generated by the m2ee-metrics sidecar are rigid and do not change between Mendix versions.

For backwards compatibility reasons, native and compatibility metrics use different labels and metric names. Each mode requires a separate dashboard.

2 Installing Monitoring Tools

If you have already installed Prometheus, Loki, and Grafana in your cluster, you can skip this section and go directly to Enable Metrics Scraping.

This section contains a quick start guide on how to install Grafana and its dependencies in a cluster by using the Loki Helm chart. In addition, this section explains how to install and configure a logging solution based on Loki.

2.1 Prerequisites

Before installing Grafana, make sure you have installed Helm and can access your Kubernetes cluster.

Download the latest version of the Grafana Helm chart using the following commands:

helm repo add grafana https://grafana.github.io/helm-charts
helm repo update

2.2 Installation in Kubernetes

This section documents how to install Grafana and Prometheus into a Kubernetes cluster. For installation in OpenShift, use the Installation in OpenShift instructions.

2.2.1 Preparations

Create a new namespace (replace {namespace} with the namespace name, for example grafana):

kubectl create namespace {namespace}

Use the following command to create a secret containing the Grafana admin credentials (replace {namespace} with the namespace name, for example grafana; {username} with the admin username, for example admin; and {password} with the admin password):

kubectl --namespace {namespace} create secret generic grafana-admin --from-literal=admin-user={username} --from-literal=admin-password={password}

This username and password can be used later to log into Grafana.

2.2.2 Install the Grafana Loki Stack

Run the following commands in a Bash console (replace {namespace} with the namespace name, for example grafana):

NAMESPACE={namespace}
helm upgrade --install loki grafana/loki-stack --version='^2.8.0' --namespace=${NAMESPACE} --set grafana.enabled=true,grafana.persistence.enabled=true,grafana.persistence.size=1Gi,grafana.initChownData.enabled=false,grafana.admin.existingSecret=grafana-admin \
--set prometheus.enabled=true,prometheus.server.persistentVolume.enabled=true,prometheus.server.persistentVolume.size=50Gi,prometheus.server.retention=7d \
--set loki.persistence.enabled=true,loki.persistence.size=10Gi,loki.config.chunk_store_config.max_look_back_period=168h,loki.config.table_manager.retention_deletes_enabled=true,loki.config.table_manager.retention_period=168h \
--set promtail.enabled=true,promtail.containerSecurityContext.privileged=true,promtail.containerSecurityContext.allowPrivilegeEscalation=true \
--set prometheus.nodeExporter.enabled=false,prometheus.alertmanager.enabled=false,prometheus.pushgateway.enabled=false

This Helm chart will install and configure Grafana, Prometheus, Loki, and their dependencies.

You might need to adjust some parameters to match the scale and requirements of your environment:

  • grafana.persistence.size – specifies the volume size used by Grafana to store its configuration;
  • prometheus.server.persistentVolume.size – specifies the volume size used by Prometheus to store metrics;
  • prometheus.server.retention – specifies how long metrics are kept by Prometheus before they are discarded;
  • loki.persistence.size – specifies the volume size used by Loki to store logs;
  • loki.config.chunk_store_config.max_look_back_period – specifies the maximum retention period for storing chunks (compressed log entries);
  • loki.config.table_manager.retention_period – specifies the maximum retention period for storing logs in indexed tables;
  • promtail.enabled – specifies whether the Promtail component should be installed (required for collecting Mendix app environment logs).

For more details see the Loki installation guide.

If your Kubernetes cluster requires a StorageClass to be specified, add the following arguments to the helm upgrade command (replace {class} with a storage class name, for example, gp2):

--set grafana.persistence.storageClassName={class},loki.persistence.storageClassName={class},prometheus.server.persistentVolume.storageClass={class}

2.2.3 Expose the Grafana Web UI

Create an Ingress object to access Grafana from your web browser (replace {namespace} with the namespace name, for example grafana; and {domain} with the domain name, for example grafana.mendix.example.com):

kubectl --namespace={namespace} create ingress loki-grafana \
--rule="{domain}/*=loki-grafana:80,tls" \
--default-backend="loki-grafana:80"

2.3 Installation in OpenShift

This section documents how to install Grafana and Prometheus into an OpenShift 4 cluster. These instructions have not been validated with OpenShift 3. For all other cluster types, use Installation in Kubernetes instructions.

The Prometheus and Grafana instances included with OpenShift can only be used to monitor the OpenShift cluster itself and cannot be used to display Mendix app metrics.

To monitor Mendix app environments, you will need to install a separate copy of Grafana and Prometheus.

2.3.1 Preparations

Use the following command to create a new project (replace {project} with the project name, for example grafana):

oc new-project {project}

Use the following command to create a secret containing the Grafana admin credentials (replace {project} with the project name, for example grafana; {username} with the admin username, for example admin; and {password} with the admin password):

oc --namespace {project} create secret generic grafana-admin --from-literal=admin-user={username} --from-literal=admin-password={password}

This username and password can be used later to log into Grafana.

By default, OpenShift restricts UIDs and group IDs that can be used by containers in a project.

To get a valid UID range, run the following command to show the project annotations (replace {project} with the project name, for example grafana):

oc describe project {project}

Note the value of the openshift.io/sa.scc.uid-range annotation. This annotation specifies the starting UID and range of UIDs allowed to be used in the project; for example, openshift.io/sa.scc.uid-range=1001280000/10000 means that the project accepts UIDs from 1001280000 to 1001289999.

Choose a UID from the allowed range, for example 1001280000.
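If it helps to double-check the arithmetic, the start/count format of the annotation can be expanded into the first and last allowed UID with a short shell calculation (a sketch only; the range value below is the example from above, substitute the annotation value from your own project):

```shell
# Expand an openshift.io/sa.scc.uid-range value of the form <start>/<count>
# into the first and last allowed UID. The value below is the example range
# from above; substitute the annotation value from your own project.
UID_RANGE='1001280000/10000'
UID_START="${UID_RANGE%/*}"
UID_COUNT="${UID_RANGE#*/}"
UID_END=$((UID_START + UID_COUNT - 1))
echo "Allowed UIDs: ${UID_START}-${UID_END}"
```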

2.3.2 Add Permissions to Collect Container Logs

To read logs from Pods (including logs from Mendix app environments), the Loki stack uses Promtail.

Promtail runs a pod on every Kubernetes node, and this pod reads local container logs from the host system. Promtail pods require elevated permissions to read those logs.

To allow Promtail to read container logs in OpenShift, run the following command (replace {project} with the project name, for example grafana):

PROJECT={project}
cat <<EOF | oc apply -f -
apiVersion: security.openshift.io/v1
kind: SecurityContextConstraints
metadata:
  name: loki-promtail
allowHostDirVolumePlugin: true
allowHostIPC: false
allowHostNetwork: false
allowHostPID: false
allowHostPorts: false
allowPrivilegeEscalation: true
allowPrivilegedContainer: true
allowedCapabilities: []
defaultAddCapabilities: []
fsGroup:
  type: RunAsAny
groups: []
priority: null
readOnlyRootFilesystem: true
requiredDropCapabilities: 
- ALL
runAsUser:
  type: RunAsAny
seLinuxContext:
  type: MustRunAs
supplementalGroups:
  type: RunAsAny
forbiddenSysctls:
- '*'
users:
- system:serviceaccount:${PROJECT}:loki-promtail
volumes:
- 'configMap'
- 'secret'
- 'hostPath'
EOF

2.3.3 Install the Grafana Loki Stack

Run the following commands in a Bash console (replace {uid} with the UID chosen in the previous step, for example 1001280000; and {project} with the project name, for example grafana):

PROJECT={project}
GRAFANA_UID={uid}
helm upgrade --install loki grafana/loki-stack --version='^2.8.0' --namespace=${PROJECT} --set grafana.enabled=true,grafana.persistence.enabled=true,grafana.persistence.size=1Gi,grafana.initChownData.enabled=false,grafana.admin.existingSecret=grafana-admin \
--set prometheus.enabled=true,prometheus.server.persistentVolume.enabled=true,prometheus.server.persistentVolume.size=50Gi,prometheus.server.retention=7d \
--set loki.persistence.enabled=true,loki.persistence.size=10Gi,loki.config.chunk_store_config.max_look_back_period=168h,loki.config.table_manager.retention_deletes_enabled=true,loki.config.table_manager.retention_period=168h \
--set promtail.enabled=true,promtail.containerSecurityContext.privileged=true,promtail.containerSecurityContext.allowPrivilegeEscalation=true \
--set prometheus.nodeExporter.enabled=false,prometheus.alertmanager.enabled=false,prometheus.pushgateway.enabled=false \
--set grafana.securityContext.runAsUser=${GRAFANA_UID},grafana.securityContext.runAsGroup=0,grafana.securityContext.fsGroup=${GRAFANA_UID} \
--set prometheus.server.securityContext.runAsUser=${GRAFANA_UID},prometheus.server.securityContext.runAsGroup=0,prometheus.server.securityContext.fsGroup=${GRAFANA_UID} \
--set prometheus.kube-state-metrics.securityContext.runAsUser=${GRAFANA_UID},prometheus.kube-state-metrics.securityContext.runAsGroup=0,prometheus.kube-state-metrics.securityContext.fsGroup=${GRAFANA_UID} \
--set loki.securityContext.runAsUser=${GRAFANA_UID},loki.securityContext.runAsGroup=0,loki.securityContext.fsGroup=${GRAFANA_UID}

This Helm chart will install and configure Grafana, Prometheus, Loki, and their dependencies.

You might need to adjust some parameters to match the scale and requirements of your environment:

  • grafana.persistence.size – specifies the volume size used by Grafana to store its configuration;
  • prometheus.server.persistentVolume.size – specifies the volume size used by Prometheus to store metrics;
  • prometheus.server.retention – specifies how long metrics are kept by Prometheus before they are discarded;
  • loki.persistence.size – specifies the volume size used by Loki to store logs;
  • loki.config.chunk_store_config.max_look_back_period – specifies the maximum retention period for storing chunks (compressed log entries);
  • loki.config.table_manager.retention_period – specifies the maximum retention period for storing logs in indexed tables;
  • promtail.enabled – specifies whether the Promtail component should be installed (required for collecting Mendix app environment logs).

For more details see the Loki installation guide.

2.3.4 Expose the Grafana Web UI

Use the following command to create an OpenShift Route object to access Grafana from your web browser (replace {project} with the project name, for example grafana):

oc --namespace {project} create route edge loki-grafana --service=loki-grafana --insecure-policy=Redirect

To get the Grafana web UI URL (domain), run the following command (replace {project} with the project name, for example grafana):

oc --namespace {project} get route loki-grafana -o jsonpath="{.status.ingress[*].host}"

3 Enable Metrics Scraping

To collect Mendix app environment metrics for a specific environment, Prometheus needs to discover and scrape pods with the following annotations:

  • privatecloud.mendix.com/component: mendix-app
  • privatecloud.mendix.com/app: Environment internal name

Each Mendix app pod listens on port 8900 and provides a /metrics path that can be called by Prometheus to get metrics from a specific app Pod.

Prometheus supports multiple ways to set up metrics scraping. The easiest way is to use pod annotations. It is possible to specify annotations for all Mendix app environments in the namespace, or set annotations only for specific environments.
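To illustrate how annotation-based discovery works, the following sketch composes the scrape URL that Prometheus builds for each pod (POD_IP is a made-up placeholder; in reality Prometheus obtains the pod IP through Kubernetes service discovery, while the port and path come from the prometheus.io annotations):

```shell
# Sketch: the scrape target Prometheus derives from the pod annotations.
# POD_IP is a hypothetical placeholder value for illustration only.
POD_IP='10.0.0.15'
PROM_PORT='8900'      # value of the prometheus.io/port annotation
PROM_PATH='/metrics'  # value of the prometheus.io/path annotation
echo "scrape target: http://${POD_IP}:${PROM_PORT}${PROM_PATH}"
```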

3.1 Enable Scraping for Entire Namespace

To enable scraping annotations for all environments in a namespace, add the following runtimeDeploymentPodAnnotations to the OperatorConfiguration CR:

apiVersion: privatecloud.mendix.com/v1alpha1
kind: OperatorConfiguration
spec:
  # Existing configuration
  # ...
  runtimeDeploymentPodAnnotations:
    # Existing annotations
    # ...
    # Add these new annotations:
    prometheus.io/path: /metrics
    prometheus.io/port: '8900'
    prometheus.io/scrape: 'true'

Then restart the Mendix Operator.

3.2 Enable Scraping for a Specific Environment

If you would like to enable Prometheus scraping only for a specific environment, you can add the Prometheus scraping annotations just for that environment.

3.2.1 Enable Scraping in Connected Mode

  1. Go to the Cluster Manager page by clicking Cluster Manager in the top menu of the Clouds page of the Developer Portal.

  2. Click Details next to the namespace where your environment is deployed.

  3. Click Configure next to the environment name where Prometheus scraping should be enabled.

  4. Click Quick setup within Pod annotations.

  5. Check the Prometheus Metrics checkbox and click Close.

  6. Click Apply Changes.

3.2.2 Enable Scraping in Standalone Mode

Open an environment’s MendixApp CR for editing and add the following pod annotations:

apiVersion: privatecloud.mendix.com/v1alpha1
kind: MendixApp
metadata:
  name: example-mendixapp
spec:
  # Existing configuration
  # ...
  runtimeDeploymentPodAnnotations:
    # Existing annotations
    # ...
    # Add these new annotations:
    prometheus.io/path: /metrics
    prometheus.io/port: '8900'
    prometheus.io/scrape: 'true'

Save and apply the changes.

4 Setting up a Grafana Dashboard

Mendix for Private Cloud offers a reference dashboard that looks similar to Mendix Cloud Metrics.

In addition, this dashboard will display Mendix app and Runtime logs.

4.1 Import the Dashboard

To install the reference dashboard, download the dashboard JSON to a local file using the links below. There are two dashboards available at the moment; if necessary, you can install both at the same time.

Import the downloaded JSON into Grafana:

  1. Open Grafana in a web browser using the domain name, admin username, and password from Section 2.

  2. Click Create, then Import.

  3. Then click Upload JSON file and select the dashboard JSON you downloaded earlier.

  4. Select Prometheus from the Prometheus data source dropdown, and Loki from the Loki data source dropdown. If necessary, rename the dashboard and change its uid. Press Import to import the dashboard into Grafana.

4.2 Using the Dashboard

Click Dashboards, then Manage, and click Mendix app dashboard (native) or Mendix app dashboard (compatibility mode) to open the dashboard.

Select the Namespace, Environment internal name, and Pod name from the dropdowns to see the metrics and logs for a specific Pod.

Metrics are displayed per pod and are not aggregated at the namespace or environment level. Every time an app is restarted or scaled, new pods are added or existing pods are replaced. You will need to select the currently running pod from the dropdown to monitor its metrics and logs.

To provide Mendix app developers with quick access to the dashboard, you can set the Metrics and Logs links in the namespace configuration.

The Developer Portal supports placeholder (template) variables in Metrics and Logs links:

  • {namespace} will be replaced with the environment namespace;
  • {environment_name} will be replaced with the environment internal name.

For example, if you have imported the reference dashboard JSON with default parameters, set Metrics and Logs links to the following:

https://grafana.mendix.example.com/d/4csBnmWnk/mendix-app-dashboard?var-namespace={namespace}&var-environment_id={environment_name}

(replace grafana.mendix.example.com with the Grafana domain name used in your cluster).

When a Mendix app developer clicks a Metrics or Logs link in the Developer Portal, the {namespace} and {environment_name} placeholders will be replaced with that environment’s namespace and name, and the Mendix app developer will just need to select a Pod name in the Grafana dashboard dropdown.
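The substitution described above can be sketched in shell; TEMPLATE matches the example link given earlier, while the NAMESPACE and ENV_NAME values are hypothetical examples, not real environment names:

```shell
# Sketch of the placeholder substitution the Developer Portal performs on
# a Metrics or Logs link. NAMESPACE and ENV_NAME are hypothetical values.
TEMPLATE='https://grafana.mendix.example.com/d/4csBnmWnk/mendix-app-dashboard?var-namespace={namespace}&var-environment_id={environment_name}'
NAMESPACE='mendix-apps'
ENV_NAME='myapp-dev'
URL=$(printf '%s' "$TEMPLATE" | sed -e "s/{namespace}/$NAMESPACE/" -e "s/{environment_name}/$ENV_NAME/")
echo "$URL"
```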

To set the Metrics and Logs links:

  1. Go to the Cluster Manager page by clicking Cluster Manager in the top menu of the Clouds page of the Developer Portal.

  2. Click Details next to the namespace where your environment is deployed.

  3. Open the Operate tab, enter the dashboard URL for the Metrics and Logs links, and click Save for each one.

5 Generating Metrics

It is possible to specify a default metrics configuration for the namespace in Advanced Operator Configuration. For each attribute, an environment can provide a custom value; this value will be used instead of the namespace default value.

Here is an example metrics configuration. This block can be added to the OperatorConfiguration CR (default configuration for the namespace) or MendixApp CR (customized configuration for a specific environment):

spec:

  # Metrics configuration
  runtimeMetricsConfiguration:
    mode: native
    interval: "PT1M"
    mxAgentConfig: |-
      {
        "requestHandlers": [
          {
            "name": "*"
          }
        ],
        "microflows": [
          {
            "name": "*"
          }
        ],
        "activities": [
          {
            "name": "*"
          }
        ]
      }      
    mxAgentInstrumentationConfig: |-
      {
      }      
  # …

5.1 Compatibility Metrics Mode

To enable compatibility metrics mode, set the mode attribute to compatibility. In this mode, all other runtimeMetricsConfiguration attributes are ignored.

5.1.1 Enable Compatibility Metrics in Connected Mode

  1. Open the Environments page for your app in the Developer Portal and click Details next to the environment where compatibility mode should be used.

  2. Click the Runtime tab.

  3. Click Enable next to the Custom Configuration of Runtime Metrics Configuration, then click Save.

  4. Click Edit next to Mode.

  5. Set Mode to compatibility and click Save and Apply.

5.1.2 Enable Compatibility Metrics in Standalone Mode

Open an environment’s MendixApp CR for editing and set the mode attribute in runtimeMetricsConfiguration to compatibility:

apiVersion: privatecloud.mendix.com/v1alpha1
kind: MendixApp
metadata:
  name: example-mendixapp
spec:
  # Existing configuration
  # ...
  # Metrics configuration
  runtimeMetricsConfiguration:
    # Set mode to compatibility
    mode: compatibility

Save and apply the changes.

5.2 Disable All Metrics Collection

To completely disable metrics collection, delete the runtimeMetricsConfiguration block from the OperatorConfiguration CR, and update the environment to use the default metrics configuration.

5.2.1 Disable Metrics in Connected Mode

  1. Open the Environments page for your app in the Developer Portal and click Details next to the environment where metrics should be disabled.

  2. Click the Runtime tab.

  3. Click Enable next to the Custom Configuration of Runtime Metrics Configuration, then click Save.

  4. Click Edit next to Mode.

  5. Set Mode to default and click Save and Apply.

5.2.2 Disable Metrics in Standalone Mode

Open the environment’s MendixApp CR for editing and delete the runtimeMetricsConfiguration block:

apiVersion: privatecloud.mendix.com/v1alpha1
kind: MendixApp
metadata:
  name: example-mendixapp
spec:
  # Existing configuration
  # ...
  # Delete this runtimeMetricsConfiguration block
  runtimeMetricsConfiguration:
    ...

Save and apply the changes.

5.3 Native Metrics Mode

To enable native metrics mode, set the mode attribute to native.

If your Prometheus has a custom scrape interval (default is 1 minute), you should specify it in interval to ensure the correct time window is used for max and average metrics. The interval field should be specified in ISO 8601 Duration format (for example, ‘PT1M’). If interval is empty (not specified), the default value of 1 minute will be used.
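To sanity-check an interval value against your Prometheus scrape interval, the simple duration forms can be converted to seconds with a short shell sketch (an illustration only, not a full ISO 8601 parser):

```shell
# Minimal sketch: convert the simple PT<n>M / PT<n>S ISO 8601 duration
# forms into seconds, to compare against a Prometheus scrape interval.
# Combined forms such as PT1M30S are deliberately not handled.
iso_to_seconds() {
  case "$1" in
    PT*M) n="${1#PT}"; echo $(( ${n%M} * 60 )) ;;
    PT*S) n="${1#PT}"; echo "${n%S}" ;;
    *)    echo "unsupported" ;;
  esac
}
iso_to_seconds "PT1M"   # prints 60
iso_to_seconds "PT30S"  # prints 30
```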

Native metrics are generated by the Mendix Runtime’s Micrometer component.

The Metrics.Registries configuration key will be generated automatically by the Mendix Operator. If an environment has a manually assigned Metrics.Registries key, it will be used instead of the automatically generated key.

It is also possible to add extra tags (Prometheus labels) by specifying them in the Metrics.ApplicationTags custom setting.

5.3.1 Enable Native Metrics in Connected Mode

  1. Open the Environments page for your app in the Developer Portal and click Details next to the environment where native mode should be used.

  2. Click the Runtime tab.

  3. Click Enable next to the Custom Configuration of Runtime Metrics Configuration, then click Save.

  4. Click Edit next to Mode.

  5. Set Mode to native, then click Save.

  6. Set a custom value for MxAgent Config.

    This parameter is optional and can be left empty. For more information about MxAgent see Configuring the Java Instrumentation Agent, below.

  7. Click Apply Changes.

5.3.2 Configure Additional Native Metrics Options in Connected Mode

After an environment is switched into native metrics mode, it is possible to configure additional options for that environment.

  1. Go to the Cluster Manager page by clicking Cluster Manager in the top menu of the Clouds page of the Developer Portal.

  2. Click Details next to the namespace where your environment is deployed.

  3. Click Configure next to the environment name where the native metrics mode should be used.

  4. Click the Runtime tab.

  5. Set custom values for Interval and MxAgent Instrumentation Config by clicking the Edit button.

    These parameters are optional and can be left empty. For more information about MxAgent see Configuring the Java Instrumentation Agent, below.

  6. Click Apply Changes.

5.3.3 Enable Native Metrics in Standalone Mode

Open an environment’s MendixApp CR for editing and set the mode attribute to native:

apiVersion: privatecloud.mendix.com/v1alpha1
kind: MendixApp
metadata:
  name: example-mendixapp
spec:
  # Existing configuration
  # ...
  # Metrics configuration
  runtimeMetricsConfiguration:
    # Set mode to native
    mode: native
    # Optional: set the scrape interval
    interval: "PT1M"
    # Optional: set the agent config
    mxAgentConfig: |-
      {
      }      
    # Optional: set the agent instrumentation config
    mxAgentInstrumentationConfig: |-
      {
      }      
  # …

If your Prometheus setup is using a custom scrape interval, specify the interval in the interval attribute in ISO 8601 Duration format (for example, ‘PT1M’).

If you would like to collect additional metrics, specify a non-empty configuration for mxAgentConfig; see Configuring the Java Instrumentation Agent, below, for more details.

Save and apply the changes.

5.3.4 Configuring the Java Instrumentation Agent

By specifying a value for mxAgentConfig, you can enable the Mendix Java instrumentation agent and collect additional metrics such as the execution times of microflows, OData/SOAP/REST endpoints, and client activity.

You can specify which request handlers, microflows, and activities are reported to Prometheus using a JSON configuration with the following format (note that this is the syntax and not an example of this custom setting):

{
  "requestHandlers": [
    {
      "name": "*" | "<requesthandler>"
    }
  ],
  "microflows": [
    {
      "name": "*" | "<microflow>"
    }
  ],
  "activities": [
    {
      "name": "*" | "<activity>"
    }
  ]
}
Value                        What Is Sent                             Note
"name": "*"                  All                                      Default
"name": "<requesthandler>"   All request handler calls of this type   See Request Handlers below for the list of options
"name": "<microflow>"        Each time this microflow is run          The format is <module>.<microflow>, for example, TrainingManagement.ACT_CancelScheduledCourse
"name": "<activity>"         All activities of this type              See Activities below for the list of options

Request Handlers

The following Mendix request handler calls will be passed to Prometheus:

Request Handler            Call Type                                        Namespace
WebserviceRequestHandler   SOAP requests                                    mx.soap.time
ServiceRequestHandler      OData requests                                   mx.odata.time
RestRequestHandler         REST requests                                    mx.rest.time
ProcessorRequestHandler    REST, OData, and SOAP doc requests               mx.client.time
ClientRequestHandler       /xas requests (general queries for data in data grids, sending changes to the server, and triggering the execution of microflows)   mx.client.time
FileRequestHandler         File upload/download requests                    mx.client.time
PageUrlRequestHandler      /p requests                                      mx.client.time

You can find help in analyzing some of these values in Metrics.

Activities

The following Mendix activities can be passed to Prometheus:

  • CastObject
  • ChangeObject
  • CommitObject
  • CreateObject
  • DeleteObject
  • RetrieveObject
  • RollbackObject
  • AggregateList
  • ChangeList
  • ListOperation
  • JavaAction
  • Microflow
  • CallRestService
  • CallWebService
  • ImportWithMapping
  • ExportWithMapping

Example

The following example will send metrics for:

  • All request handlers
  • The microflow After_Startup in the module Administration
  • The CreateObject and DeleteObject activities
{
  "requestHandlers": [
    {
      "name": "*"
    }
  ],
  "microflows": [
    {
      "name": "Administration.After_Startup"
    }
  ],
  "activities": [
    {
      "name": "CreateObject"
    },
    {
      "name": "DeleteObject"
    }
  ]
}

Advanced instrumentation configuration can be specified through mxAgentInstrumentationConfig. If this attribute is not supplied, the default instrumentation configuration will be used.