Monitoring Environments in Mendix for Private Cloud
Introduction
Mendix for Private Cloud provides a Prometheus API that can be scraped by a local Prometheus server. This API can also be used by other monitoring solutions that support scraping the Prometheus API.
The metrics API can only be accessed inside the Kubernetes cluster, and metrics are never sent to the Mendix Private Cloud Portal. To collect, store, and display metrics, you will need to install a local monitoring solution.
Mendix for Private Cloud writes all logs to the standard output and error streams (`stdout` and `stderr`).
Any Kubernetes log processing solution should be able to read and collect those logs.
This document shows an example of how to use Grafana Loki and Promtail to collect those logs.
This document will help you quickly set up a solution for monitoring Mendix for Private Cloud environments. You can customize this solution to match the requirements of your team or organization.
Metrics Generation Modes
Mendix Operator v2.4.0 and above offers several modes for collecting and generating metrics.
| Mode | Native | Compatibility |
| --- | --- | --- |
| Mendix Operator version | v2.4.0 and above | v2.1.0 and above |
| Supported Mendix versions | 9.7 and above | 7.23 and above |
| Metrics activities | Yes | No |
| Microflow execution times | Yes | No |
| Custom metrics | Yes | No |
| Rigid format | No | Yes |
| Metrics generated by | Mendix Runtime | `m2ee-metrics` sidecar |
Mendix 9.7 and above can generate Prometheus metrics directly in the Runtime, which allows the generation of custom or app-specific metrics.
Setting the metrics generation mode of a Mendix for Private Cloud environment to `native` will collect Prometheus metrics directly from the Mendix Runtime.
Depending on the specific Mendix Runtime version used, there might be small differences in metric names and labels.
Mendix 9.6 and below cannot generate Prometheus metrics; instead, they provide a fixed set of metrics through the admin port JSON API.
When metrics generation is set to `compatibility` mode, Mendix for Private Cloud adds an additional `m2ee-metrics` sidecar to the environment’s pods.
This sidecar acts as an adapter: it listens for Prometheus scrape requests, collects metrics from the Mendix Runtime admin port, and converts those metrics into the Prometheus format.
The metric names and labels generated by the `m2ee-metrics` sidecar are rigid and will not change between Mendix versions.
For backwards compatibility reasons, `native` and `compatibility` metrics use different labels and metric names.
Each mode requires a separate dashboard.
Installing Monitoring Tools
If you have already installed Prometheus, Loki, and Grafana in your cluster, you can skip this section and go directly to Enable Metrics Scraping.
This section contains a quick start guide on how to install Grafana and its dependencies in a cluster by using the Loki Helm chart. In addition, this section explains how to install and configure a logging solution based on Loki.
These instructions have been simplified to make the installation process as easy as possible.
Before installing Prometheus, Loki, and Grafana in a production environment, consult with your cluster administrator and IT security teams to ensure that this logging/monitoring solution is compliant with your organization’s security policies.
Prerequisites
Before installing Grafana, make sure you have installed Helm and can access your Kubernetes cluster.
Download the latest version of the Grafana Helm chart using the following commands:
helm repo add grafana https://grafana.github.io/helm-charts
helm repo update
Installation in Kubernetes
This section documents how to install Grafana and Prometheus into a Kubernetes cluster. For installation in OpenShift, use the Installation in OpenShift instructions.
Preparations
Create a new namespace (replace `{namespace}` with the namespace name, for example `grafana`):
kubectl create namespace {namespace}
Use the following command to create a secret containing the Grafana admin credentials. Replace `{namespace}` with the namespace name (for example `grafana`), `{username}` with the admin username (for example `admin`), and `{password}` with the admin password:
kubectl --namespace {namespace} create secret generic grafana-admin --from-literal=admin-user={username} --from-literal=admin-password={password}
This username and password can be used later to log into Grafana.
Install the Grafana Loki Stack
Run the following commands in a Bash console (replace `{namespace}` with the namespace name, for example `grafana`):
NAMESPACE={namespace}
helm upgrade --install loki grafana/loki-stack --version='^2.8.0' --namespace=${NAMESPACE} --set grafana.enabled=true,grafana.persistence.enabled=true,grafana.persistence.size=1Gi,grafana.initChownData.enabled=false,grafana.admin.existingSecret=grafana-admin \
--set prometheus.enabled=true,prometheus.server.persistentVolume.enabled=true,prometheus.server.persistentVolume.size=50Gi,prometheus.server.retention=7d \
--set loki.persistence.enabled=true,loki.persistence.size=10Gi,loki.config.chunk_store_config.max_look_back_period=168h,loki.config.table_manager.retention_deletes_enabled=true,loki.config.table_manager.retention_period=168h \
--set promtail.enabled=true,promtail.containerSecurityContext.privileged=true,promtail.containerSecurityContext.allowPrivilegeEscalation=true \
--set prometheus.nodeExporter.enabled=false,prometheus.alertmanager.enabled=false,prometheus.pushgateway.enabled=false
This Helm chart will install and configure Grafana, Prometheus, Loki, and their dependencies.
You might need to adjust some parameters to match the scale and requirements of your environment:
- grafana.persistence.size – specifies the volume size used by Grafana to store its configuration;
- prometheus.server.persistentVolume.size – specifies the volume size used by Prometheus to store metrics;
- prometheus.server.retention – specifies how long metrics are kept by Prometheus before they are discarded;
- loki.persistence.size – specifies the volume size used by Loki to store logs;
- loki.config.chunk_store_config.max_look_back_period – specifies the maximum retention period for storing chunks (compressed log entries);
- loki.config.table_manager.retention_period – specifies the maximum retention period for storing logs in indexed tables;
- promtail.enabled – specifies whether the Promtail component should be installed (required for collecting Mendix app environment logs).
For more details see the Loki installation guide.
If your Kubernetes cluster requires a StorageClass to be specified, add the following arguments to the `helm upgrade` command (replace `{class}` with a storage class name, for example `gp2`):
--set grafana.persistence.storageClassName={class},loki.persistence.storageClassName={class},prometheus.server.persistentVolume.storageClass={class}
Expose the Grafana Web UI
Create an Ingress object to access Grafana from your web browser. Replace `{namespace}` with the namespace name (for example `grafana`) and `{domain}` with the domain name (for example `grafana.mendix.example.com`):
kubectl --namespace={namespace} create ingress loki-grafana \
--rule="{domain}/*=loki-grafana:80,tls" \
--default-backend="loki-grafana:80"
The Ingress object configuration depends on how the Ingress Controller is set up in your cluster.
You might need to adjust additional Ingress parameters, for example specify the ingress class, annotations, or TLS configuration.
The domain name needs to be configured so that it resolves to the Ingress Controller’s IP address.
You can use the same wildcard domain name as other Mendix apps. For example, if you’re using mendix.example.com as the Mendix for Private Cloud domain name, you can use `grafana.mendix.example.com` as the domain name for Grafana.
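For reference, the imperative `kubectl create ingress` command above corresponds roughly to the following declarative manifest. This is a minimal sketch; the namespace, TLS details, and ingress class are assumptions you should adapt to your cluster:

```yaml
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: loki-grafana
  namespace: grafana                     # replace with your namespace
spec:
  tls:
    - hosts:
        - grafana.mendix.example.com     # replace with your Grafana domain
  rules:
    - host: grafana.mendix.example.com   # replace with your Grafana domain
      http:
        paths:
          - path: /
            pathType: Prefix
            backend:
              service:
                name: loki-grafana       # service created by the Loki Helm chart
                port:
                  number: 80
```

Using a manifest instead of the shorthand makes it easier to add annotations, an `ingressClassName`, or a TLS secret reference later.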
Installation in OpenShift
This section documents how to install Grafana and Prometheus into an OpenShift 4 cluster. These instructions have not been validated with OpenShift 3. For all other cluster types, use Installation in Kubernetes instructions.
The Prometheus and Grafana instances included with OpenShift can only be used to monitor the OpenShift cluster itself and cannot be used to display Mendix app metrics.
To monitor Mendix app environments, you will need to install a separate copy of Grafana and Prometheus.
Preparations
Use the following command to create a new project (replace `{project}` with the project name, for example `grafana`):
oc new-project {project}
Use the following command to create a secret containing the Grafana admin credentials. Replace `{project}` with the project name (for example `grafana`), `{username}` with the admin username (for example `admin`), and `{password}` with the admin password:
oc --namespace {project} create secret generic grafana-admin --from-literal=admin-user={username} --from-literal=admin-password={password}
This username and password can be used later to log into Grafana.
By default, OpenShift restricts UIDs and group IDs that can be used by containers in a project.
To get a valid UID range, run the following command to get the project annotations (replace `{project}` with the project name, for example `grafana`):
oc describe project {project}
and note the value of the `openshift.io/sa.scc.uid-range` annotation.
This annotation specifies the starting UID and the range of UIDs allowed in the project. For example, `openshift.io/sa.scc.uid-range=1001280000/10000` means that the project accepts UIDs from 1001280000 to 1001289999.
Choose a UID from the allowed range, for example 1001280000.
Add Permissions to Collect Container Logs
To read logs from Pods (including logs from Mendix app environments), the Loki stack uses Promtail.
Promtail runs a pod on every Kubernetes node, and this pod reads local container logs from the host system. Promtail pods require elevated permissions to read those logs.
Promtail can be replaced with other similar components, for example Fluentd, Fluent Bit, Filebeat, or Azure Container Insights.
All of these use the same mechanism for reading logs, and replacing Promtail with an alternative will still require logs to be collected using a privileged container.
To allow Promtail to read container logs in OpenShift, run the following command (replace `{project}` with the project name, for example `grafana`):
PROJECT={project}
cat <<EOF | oc apply -f -
apiVersion: security.openshift.io/v1
kind: SecurityContextConstraints
metadata:
  name: loki-promtail
allowHostDirVolumePlugin: true
allowHostIPC: false
allowHostNetwork: false
allowHostPID: false
allowHostPorts: false
allowPrivilegeEscalation: true
allowPrivilegedContainer: true
allowedCapabilities: []
defaultAddCapabilities: []
fsGroup:
  type: RunAsAny
groups: []
priority: null
readOnlyRootFilesystem: true
requiredDropCapabilities:
  - ALL
runAsUser:
  type: RunAsAny
seLinuxContext:
  type: MustRunAs
supplementalGroups:
  type: RunAsAny
forbiddenSysctls:
  - '*'
users:
  - system:serviceaccount:${PROJECT}:loki-promtail
volumes:
  - 'configMap'
  - 'secret'
  - 'hostPath'
EOF
Install the Grafana Loki Stack
Run the following commands in a Bash console. Replace `{uid}` with the UID chosen in the previous step (for example 1001280000), and `{project}` with the project name (for example `grafana`):
PROJECT={project}
GRAFANA_UID={uid}
helm upgrade --install loki grafana/loki-stack --version='^2.8.0' --namespace=${PROJECT} --set grafana.enabled=true,grafana.persistence.enabled=true,grafana.persistence.size=1Gi,grafana.initChownData.enabled=false,grafana.admin.existingSecret=grafana-admin \
--set prometheus.enabled=true,prometheus.server.persistentVolume.enabled=true,prometheus.server.persistentVolume.size=50Gi,prometheus.server.retention=7d \
--set loki.persistence.enabled=true,loki.persistence.size=10Gi,loki.config.chunk_store_config.max_look_back_period=168h,loki.config.table_manager.retention_deletes_enabled=true,loki.config.table_manager.retention_period=168h \
--set promtail.enabled=true,promtail.containerSecurityContext.privileged=true,promtail.containerSecurityContext.allowPrivilegeEscalation=true \
--set prometheus.nodeExporter.enabled=false,prometheus.alertmanager.enabled=false,prometheus.pushgateway.enabled=false \
--set grafana.securityContext.runAsUser=${GRAFANA_UID},grafana.securityContext.runAsGroup=0,grafana.securityContext.fsGroup=${GRAFANA_UID} \
--set prometheus.server.securityContext.runAsUser=${GRAFANA_UID},prometheus.server.securityContext.runAsGroup=0,prometheus.server.securityContext.fsGroup=${GRAFANA_UID} \
--set prometheus.kube-state-metrics.securityContext.runAsUser=${GRAFANA_UID},prometheus.kube-state-metrics.securityContext.runAsGroup=0,prometheus.kube-state-metrics.securityContext.fsGroup=${GRAFANA_UID} \
--set loki.securityContext.runAsUser=${GRAFANA_UID},loki.securityContext.runAsGroup=0,loki.securityContext.fsGroup=${GRAFANA_UID}
This Helm chart will install and configure Grafana, Prometheus, Loki, and their dependencies.
You might need to adjust some parameters to match the scale and requirements of your environment:
- grafana.persistence.size – specifies the volume size used by Grafana to store its configuration;
- prometheus.server.persistentVolume.size – specifies the volume size used by Prometheus to store metrics;
- prometheus.server.retention – specifies how long metrics are kept by Prometheus before they are discarded;
- loki.persistence.size – specifies the volume size used by Loki to store logs;
- loki.config.chunk_store_config.max_look_back_period – specifies the maximum retention period for storing chunks (compressed log entries);
- loki.config.table_manager.retention_period – specifies the maximum retention period for storing logs in indexed tables;
- promtail.enabled – specifies whether the Promtail component should be installed (required for collecting Mendix app environment logs).
For more details see the Loki installation guide.
Expose the Grafana Web UI
Use the following command to create an OpenShift Route object to access Grafana from your web browser (replace `{project}` with the project name, for example `grafana`):
oc --namespace {project} create route edge loki-grafana --service=loki-grafana --insecure-policy=Redirect
To get the Grafana web UI URL (domain), run the following command (replace `{project}` with the project name, for example `grafana`):
oc --namespace {project} get route loki-grafana -o jsonpath="{.status.ingress[*].host}"
Enable Metrics Scraping
To collect Mendix app environment metrics for a specific environment, Prometheus needs to discover and scrape pods with the following annotations:
- `privatecloud.mendix.com/component`: `mendix-app`
- `privatecloud.mendix.com/app`: the environment internal name

Each Mendix app pod listens on port `8900` and provides a `/metrics` path that can be called by Prometheus to get metrics from a specific app pod.
Prometheus supports multiple ways to set up metrics scraping. The easiest way is to use pod annotations. It is possible to specify annotations for all Mendix app environments in the namespace, or set annotations only for specific environments.
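As an illustration of annotation-based discovery, a Prometheus `scrape_configs` entry along the following lines picks up pods carrying these annotations. The job name is an arbitrary example, and the Prometheus server installed by the Loki stack already ships with a similar rule by default:

```yaml
scrape_configs:
  - job_name: mendix-apps        # example name; choose your own
    kubernetes_sd_configs:
      - role: pod
    relabel_configs:
      # Only scrape pods that opted in via prometheus.io/scrape: 'true'
      - source_labels: [__meta_kubernetes_pod_annotation_prometheus_io_scrape]
        action: keep
        regex: "true"
      # Use the path from the prometheus.io/path annotation (/metrics)
      - source_labels: [__meta_kubernetes_pod_annotation_prometheus_io_path]
        action: replace
        target_label: __metrics_path__
        regex: (.+)
      # Use the port from the prometheus.io/port annotation (8900)
      - source_labels: [__address__, __meta_kubernetes_pod_annotation_prometheus_io_port]
        action: replace
        regex: ([^:]+)(?::\d+)?;(\d+)
        replacement: $1:$2
        target_label: __address__
```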
Enable Scraping for Entire Namespace
To enable scraping annotations for all environments in a namespace, add the following `runtimeDeploymentPodAnnotations` to the Mendix app deployment settings in the `OperatorConfiguration` CR:
apiVersion: privatecloud.mendix.com/v1alpha1
kind: OperatorConfiguration
spec:
  # Existing configuration
  # ...
  runtimeDeploymentPodAnnotations:
    # Existing annotations
    # ...
    # Add these new annotations:
    prometheus.io/path: /metrics
    prometheus.io/port: '8900'
    prometheus.io/scrape: 'true'
Then restart the Mendix Operator.
Enable Scraping for a Specific Environment
If you would like to enable Prometheus scraping only for a specific environment, you can add the Prometheus scraping annotations just for that environment.
Enable Scraping in Connected Mode
1. Go to the Cluster Manager page by clicking Cluster Manager in the top menu of the Clouds page of the Mendix Portal.
2. Click Details next to the namespace where your environment is deployed.
3. Click Configure next to the environment name where Prometheus scraping should be enabled.
4. Click Quick setup within Pod annotations.
5. Check the Prometheus Metrics checkbox and click Close.
6. Click Apply Changes.
Enable Scraping in Standalone Mode
Open an environment’s `MendixApp` CR for editing and add the following pod annotations:
apiVersion: privatecloud.mendix.com/v1alpha1
kind: MendixApp
metadata:
  name: example-mendixapp
spec:
  # Existing configuration
  # ...
  runtimeDeploymentPodAnnotations:
    # Existing annotations
    # ...
    # Add these new annotations:
    prometheus.io/path: /metrics
    prometheus.io/port: '8900'
    prometheus.io/scrape: 'true'
Save and apply the changes.
Setting up a Grafana Dashboard
Mendix for Private Cloud offers a reference dashboard that looks similar to Mendix Cloud Metrics.
In addition, this dashboard will display Mendix app and Runtime logs.
Import the Dashboard
To install the reference dashboard, download the dashboard JSON to a local file using the links below. There are two dashboards available at the moment. If necessary, you can install both at the same time:
- compatibility mode dashboard for metrics generated in compatibility mode
- native dashboard for metrics generated in native mode
Import the downloaded JSON into Grafana:
1. Open Grafana in a web browser using the domain name, admin username, and password configured in the Installing Monitoring Tools section above.
2. Click Create, then Import.
3. Click Upload JSON file and select the dashboard JSON file you downloaded earlier.
4. Select Prometheus from the Prometheus data source dropdown, and Loki from the Loki data source dropdown. If necessary, rename the dashboard and change its uid. Press Import to import the dashboard into Grafana.
Using the Dashboard
Click Dashboards, then Manage, and click Mendix app dashboard (native) or Mendix app dashboard (compatibility mode) to open the dashboard.
Select the Namespace, Environment internal name, and Pod name from the dropdowns to see the metrics and logs for a specific Pod.
Metrics are displayed per pod and not aggregated on a namespace or environment level. Every time an app is restarted or scaled up, this will add new pods or replace existing pods with new ones. You will need to select the currently running pod from the dropdown to monitor its metrics and logs.
Configuring Metrics Links
To provide Mendix app developers with quick access to the dashboard, you can set the Metrics and Logs links in the namespace configuration.
The Mendix Portal supports placeholder (template) variables in Metrics and Logs links:
- `{namespace}` will be replaced with the environment namespace;
- `{environment_name}` will be replaced with the environment internal name.
For example, if you have imported the reference dashboard JSON with default parameters, set Metrics and Logs links to the following:
https://grafana.mendix.example.com/d/4csBnmWnk/mendix-app-dashboard?var-namespace={namespace}&var-environment_id={environment_name}
(replace `grafana.mendix.example.com` with the Grafana domain name used in your cluster).
When a Mendix app developer clicks a Metrics or Logs link in the Mendix Portal, the `{namespace}` and `{environment_name}` placeholders will be replaced with that environment’s namespace and name; the developer will just need to select a Pod name in the Grafana dashboard dropdown.
To set the Metrics and Logs links:
1. Go to the Cluster Manager page by clicking Cluster Manager in the top menu of the Clouds page of the Mendix Portal.
2. Click Details next to the namespace where your environment is deployed.
3. Open the Operate tab, enter the dashboard URL for the Metrics and Logs links, and click Save for each one.
Generating Metrics
It is possible to specify a default metrics configuration for the namespace in Advanced Operator Configuration. For each attribute, an environment can provide a custom value; this value will be used instead of the namespace default value.
Here is an example metrics configuration. This block can be added to the `OperatorConfiguration` CR (default configuration for the namespace) or the `MendixApp` CR (customized configuration for a specific environment):
spec:
  …
  # Metrics configuration
  runtimeMetricsConfiguration:
    mode: native
    interval: "PT1M"
    mxAgentConfig: |-
      {
        "requestHandlers": [
          {
            "name": "*"
          }
        ],
        "microflows": [
          {
            "name": "*"
          }
        ],
        "activities": [
          {
            "name": "*"
          }
        ]
      }
    mxAgentInstrumentationConfig: |-
      {
        …
      }
  # …
New installations of Mendix Operator v2.4.0 and above use `native` metrics by default.
However, if Mendix Operator v2.3.* or below is upgraded to v2.4.0 or above, the upgrade process will set the default metrics mode to `compatibility`.
This way, upgrading an older Mendix Operator will not change the way it generates metrics.
Compatibility Metrics Mode
To enable `compatibility` metrics mode, set the `mode` attribute to `compatibility`.
In this mode, all other `runtimeMetricsConfiguration` attributes are ignored.
Enable Compatibility Metrics in Connected Mode
1. Open your app in Apps.
2. Go to the Environments page.
3. Click Details next to the environment where compatibility mode should be used.
4. Click the Runtime tab.
5. Click Enable next to the Custom Configuration of Runtime Metrics Configuration, then click Save.
6. Click Edit next to Mode.
7. Set Mode to compatibility and click Save and Apply.
Enable Compatibility Metrics in Standalone Mode
Open an environment’s `MendixApp` CR for editing and set the `mode` attribute in `runtimeMetricsConfiguration` to `compatibility`:
apiVersion: privatecloud.mendix.com/v1alpha1
kind: MendixApp
metadata:
  name: example-mendixapp
spec:
  # Existing configuration
  # ...
  # Metrics configuration
  runtimeMetricsConfiguration:
    # Set mode to compatibility
    mode: compatibility
Save and apply the changes.
Disable All Metrics Collection
To completely disable metrics collection, delete the `runtimeMetricsConfiguration` block from the `OperatorConfiguration` CR, and update the environment to use the default metrics configuration.
Disable Metrics in Connected Mode
1. Open your app in Apps.
2. Go to the Environments page.
3. Click Details next to the environment where metrics should be disabled.
4. Click the Runtime tab.
5. Click Enable next to the Custom Configuration of Runtime Metrics Configuration, then click Save.
6. Click Edit next to Mode.
7. Set Mode to default and click Save and Apply.
Disable Metrics in Standalone Mode
Open the environment’s `MendixApp` CR for editing and delete the `runtimeMetricsConfiguration` block:
apiVersion: privatecloud.mendix.com/v1alpha1
kind: MendixApp
metadata:
  name: example-mendixapp
spec:
  # Existing configuration
  # ...
  # Delete this runtimeMetricsConfiguration block
  runtimeMetricsConfiguration:
    ...
Save and apply the changes.
Native Metrics Mode
To enable `native` metrics mode, set the `mode` attribute to `native`.
If your Prometheus has a custom scrape interval (the default is 1 minute), you should specify it in `interval` to ensure the correct time window is used for max and average metrics.
The `interval` field should be specified in ISO 8601 duration format (for example, `PT1M`). If `interval` is empty (not specified), the default value of 1 minute will be used.
Native metrics are generated by the Mendix Runtime’s Micrometer component.
The `Metrics.Registries` configuration key will be generated automatically by the Mendix Operator. If an environment has a manually assigned `Metrics.Registries` key, it will be used instead of the automatically generated one.
It is also possible to add extra tags (Prometheus labels) by specifying them in the `Metrics.ApplicationTags` custom setting.
Enable Native Metrics in Connected Mode
1. Open your app in Apps.
2. Go to the Environments page.
3. Click Details next to the environment where native metrics mode should be used.
4. Click the Runtime tab.
5. Click Enable next to the Custom Configuration of Runtime Metrics Configuration, then click Save.
6. Click Edit next to Mode.
7. Set Mode to default, then click Save.
8. Set a custom value for MxAgent Config. This parameter is optional and can be left empty. For more information about MxAgent, see Configuring the Java Instrumentation Agent, below.
9. Click Apply Changes.
Configure Additional Native Metrics Options in Connected Mode
After an environment is switched into native metrics mode, it is possible to configure additional options for that environment.
1. Go to the Cluster Manager page by clicking Cluster Manager in the top menu of the Clouds page of the Mendix Portal.
2. Click Details next to the namespace where your environment is deployed.
3. Click Configure next to the environment name where the native metrics mode should be used.
4. Click the Runtime tab.
5. Set custom values for Interval and MxAgent Instrumentation Config by clicking the Edit button. These parameters are optional and can be left empty. For more information about MxAgent, see Configuring the Java Instrumentation Agent, below.
6. Click Apply Changes.
Enable Native Metrics in Standalone Mode
Open an environment’s `MendixApp` CR for editing and set the `mode` attribute to `native`:
apiVersion: privatecloud.mendix.com/v1alpha1
kind: MendixApp
metadata:
  name: example-mendixapp
spec:
  # Existing configuration
  # ...
  # Metrics configuration
  runtimeMetricsConfiguration:
    # Set mode to native
    mode: native
    # Optional: set the scrape interval
    interval: "PT1M"
    # Optional: set the agent config
    mxAgentConfig: |-
      {
        …
      }
    # Optional: set the agent instrumentation config
    mxAgentInstrumentationConfig: |-
      {
        …
      }
  # …
If your Prometheus setup uses a custom scrape interval, specify it in the `interval` attribute in ISO 8601 duration format (for example, `PT1M`).
If you would like to collect additional metrics, specify a non-empty configuration for `mxAgentConfig`; see Configuring the Java Instrumentation Agent, below, for more details.
Save and apply the changes.
Configuring the Java Instrumentation Agent
By specifying a value for `mxAgentConfig`, you can enable the Mendix Java instrumentation agent and collect additional metrics such as the execution times of microflows, OData/SOAP/REST endpoints, and client activity.
You can specify which request handlers, microflows, and activities are reported to Prometheus using a JSON configuration with the following format (note that this is the syntax and not an example of this custom setting):
{
  "requestHandlers": [
    {
      "name": "*" | "<requesthandler>"
    }
  ],
  "microflows": [
    {
      "name": "*" | "<microflow>"
    }
  ],
  "activities": [
    {
      "name": "*" | "<activity>"
    }
  ]
}
| Value | What Is Sent | Note |
| --- | --- | --- |
| `"name": "*"` | All | Default |
| `"name": "<requesthandler>"` | All request handler calls of this type | See the Request Handlers list [1] below for the options |
| `"name": "<microflow>"` | Each time this microflow is run | The format is `<module>.<microflow>`, for example `TrainingManagement.ACT_CancelScheduledCourse` |
| `"name": "<activity>"` | All activities of this type | See the Activities list [2] below for the options |
[1] Request Handlers
The following Mendix request handler calls will be passed to Prometheus:
| Request Handler | Call Type | Namespace |
| --- | --- | --- |
| `WebserviceRequestHandler` | SOAP requests | `mx.soap.time` |
| `ServiceRequestHandler` | OData requests | `mx.odata.time` |
| `RestRequestHandler` | REST requests | `mx.rest.time` |
| `ProcessorRequestHandler` | REST, OData, SOAP doc requests | `mx.client.time` |
| `ClientRequestHandler` | `/xas` requests (general queries for data in data grids, sending changes to the server, and triggering the execution of microflows) | `mx.client.time` |
| `FileRequestHandler` | File upload/download requests | `mx.client.time` |
| `PageUrlRequestHandler` | `/p` requests | `mx.client.time` |
You can find help in analyzing some of these values in Metrics.
[2] Activities
The following Mendix activities can be passed to Prometheus:
- `CastObject`
- `ChangeObject`
- `CommitObject`
- `CreateObject`
- `DeleteObject`
- `RetrieveObject`
- `RollbackObject`
- `AggregateList`
- `ChangeList`
- `ListOperation`
- `JavaAction`
- `Microflow`
- `CallRestService`
- `CallWebService`
- `ImportWithMapping`
- `ExportWithMapping`
Example
The following example will send metrics for:

- All request handlers
- The microflow `After_Startup` in the module `Administration`
- The `CreateObject` and `DeleteObject` activities
{
  "requestHandlers": [
    {
      "name": "*"
    }
  ],
  "microflows": [
    {
      "name": "Administration.After_Startup"
    }
  ],
  "activities": [
    {
      "name": "CreateObject"
    },
    {
      "name": "DeleteObject"
    }
  ]
}
`mxAgentConfig` is identical to the APM `METRICS_AGENT_CONFIG` custom environment variable in Mendix Cloud.
Advanced instrumentation configuration can be specified through mxAgentInstrumentationConfig
. If this attribute is not supplied, the default instrumentation configuration will be used.