Logging

Last modified: November 3, 2025

Introduction

Below we describe what the various log levels of the runtime will show as output. During development, these log levels can be set in the console (advanced -> set log levels). When your app is deployed on a server, please refer to the Deployment pages for how to set them.

You can also set log levels to provide more or less information when testing locally using the console in Studio Pro. See Configuring Log Levels Within Studio Pro in How To Set Log Levels for more information.

Log Levels

Critical

Critical is reserved for rare cases where the application may no longer be able to function reliably. This should normally not occur; if it does, you should take immediate action. The Mendix Cloud (v3.0) treats these messages as alerts and will notify you on the cloud dashboard.

Error

Error is used to log all unhandled exceptions. These are unexpected events that should not occur, but are not critical. The application should be able to function normally afterwards.

Warning

Warning is often used for handled 'exceptions' or other important log events. For example, if your application requires a configuration setting but has a default in case the setting is missing, then the Warning level should be used to log the missing configuration setting.
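As an illustration, a minimal sketch of this pattern in a custom Java action follows. It uses Core.getLogger and ILogNode from the Mendix Runtime Java API; the class, setting, default value, and log node name are hypothetical and only serve to show the level choice:

```java
import com.mendix.core.Core;
import com.mendix.logging.ILogNode;

public class TimeoutSettings {

    // Hypothetical log node name and default value, used only for illustration.
    private static final ILogNode LOG = Core.getLogger("MyFirstModule");
    private static final int DEFAULT_TIMEOUT_SECONDS = 30;

    public static int resolveTimeout(String configuredValue) {
        if (configuredValue == null || configuredValue.isEmpty()) {
            // The application can continue with a sensible default,
            // so Warning (rather than Error) is the appropriate level.
            LOG.warn("No timeout configured; falling back to the default of "
                    + DEFAULT_TIMEOUT_SECONDS + " seconds.");
            return DEFAULT_TIMEOUT_SECONDS;
        }
        return Integer.parseInt(configuredValue);
    }
}
```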

Information

The Information level is typically used to output information that is useful to the running and management of your system. Information would also be the level used to log entry and exit points in key areas of your application. However, you may choose to add more entry and exit points at Debug level for more granularity during development and testing.

Debug

This should be used for debugging systems during development, but never in a production system. It can be used to pinpoint problems and to follow the general flow of your application.

Trace

This is the most verbose logging level, and can be used if you want even more fine-grained logging than Debug.
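Because Debug and Trace messages can be expensive to build, a common pattern is to check whether the level is enabled before constructing the message. Below is a minimal sketch, assuming ILogNode from the Mendix Runtime Java API exposes isTraceEnabled(); the class, node name, and messages are hypothetical:

```java
import java.util.List;

import com.mendix.core.Core;
import com.mendix.logging.ILogNode;

public class OrderProcessor {

    private static final ILogNode LOG = Core.getLogger("MyFirstModule");

    public static void process(List<String> orderIds) {
        LOG.info("Processing " + orderIds.size() + " orders.");

        // Only build the potentially large Trace message when that level is enabled.
        if (LOG.isTraceEnabled()) {
            LOG.trace("Order identifiers: " + String.join(", ", orderIds));
        }

        // ... actual processing would go here ...

        LOG.debug("Finished processing the batch.");
    }
}
```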

Log Nodes

This section provides some details on specific log nodes used by Mendix. If you write your own log messages, it is recommended that you use your own log node names to avoid confusion with the Mendix log messages.
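For example, a custom Java action could log to a node named after its own module rather than reusing a Mendix node such as Core or Connector. A minimal sketch, using Core.getLogger from the Mendix Runtime Java API (the class and the node name MyFirstModule.InvoiceImport are hypothetical):

```java
import com.mendix.core.Core;
import com.mendix.logging.ILogNode;

public class InvoiceImport {

    // A module-specific node name keeps these messages easy to tell apart
    // from the default Mendix log nodes listed below.
    private static final ILogNode LOG = Core.getLogger("MyFirstModule.InvoiceImport");

    public static void run() {
        LOG.info("Starting invoice import.");
        try {
            // ... import logic would go here ...
        } catch (Exception e) {
            LOG.error("Invoice import failed.", e);
        }
    }
}
```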

Default Mendix Log Nodes

The following log nodes are used by Mendix when writing log messages.

| Log Node | Description |
| --- | --- |
| ActionManager | Logs messages related to action scheduling (for example, scheduled events) and action execution (for example, running microflows). |
| Client | Logs from the Mendix client. |
| Client_* | Logs from specific parts of the Mendix client, for example Client_NanoflowDebugger. |
| Configuration | Logging related to the configuration of the Mendix app that is read in at startup. |
| ConnectionBus | General logging related to database startup, synchronization, and connection management for Mendix. |
| ConnectionBus_Mapping | Information relating to the translation of XPath queries and OQL text queries to OQL queries. |
| ConnectionBus_Queries | If LogMinDurationQuery has been set, queries that take longer than LogMinDurationQuery milliseconds will be logged here. |
| ConnectionBus_Retrieve | All information related to the retrieval of data, such as incoming requests from the application and the executed statements. Also logs issues encountered while processing the received data. |
| ConnectionBus_Security | Information regarding access rights needed to access the database. |
| ConnectionBus_Synchronize | ⚠ Deprecated: This is a legacy node. |
| ConnectionBus_Update | All information related to the update of data in the database: incoming storage requests, the executed statements, and issues encountered during storage. |
| ConnectionBus_Validation | Information related to modification of the existing database and database migration. |
| Connector | Logs when standard or custom request handlers (added through Core#addRequestHandler) are registered, or when a path is called that does not have a registered request handler. |
| Core | Logs messages from the core runtime, such as runtime startup, the runtime version, the license being used, and issues related to interpreting the model. |
| DataStorage_QueryHandling | Logs messages related to the queries that are being executed. |
| DataStorage_QueryPlan | Query execution plan information for installations (currently only supported for PostgreSQL databases). |
| DocumentExporter | Logs messages related to the templating engine that generates documents. |
| FileDocumentSizesPopulateJob | Logs messages for a background job that populates the file-size field in the database for documents that do not have that field filled (used during legacy migration). |
| InvalidRequestLimiter | Logs messages related to responses being throttled due to invalid requests. |
| IDResolution | Information on retrieval queries and runtime operations that are being executed. |
| I18NProcessor | Logs messages related to translation of the app. |
| Integration API | Logs messages related to the documentation of integration APIs. |
| JSON | JSON messages from the Mendix Client to the Runtime Server. See the JSON section below for more information. |
| JSON Export | Logs messages related to export mappings to JSON. |
| JSON Import | Logs messages related to import mappings from JSON. |
| Jetty | Logs messages from the internal Jetty web server that handles HTTP requests between the runtime and the outside world. |
| LicenseService | Logs messages related to the licensing of the app. |
| Logging | Logs messages related to the logging framework used by Mendix. |
| M2EE | Logs messages from the administration interface of the Mendix Runtime. |
| Metrics | Logs messages related to the runtime metrics reporting infrastructure. |
| MicroflowDebugger | Logs messages related to the status of the microflow debugger (for example, connection status, incoming and outgoing requests). |
| MicroflowEngine | Logs messages related to microflow execution (for example, which microflow or microflow action is being executed and errors that occur during the execution). |
| MicroflowStructureOptimizer | Logs messages related to microflow structure optimization performed during startup. |
| ML Engine | Logs messages produced by ML Kit activities. |
| ModelStore | Logs debug messages related to synchronizing User Role and language information to the system tables. |
| Module | Logs messages for modules that are loaded on demand in the core runtime, such as the microflow engine. |
| ObjectManagement | Logs errors relating to attempts to make associations to non-existent objects. |
| ODataConsume | Logs messages related to consumed OData services. |
| OData Publish | Logs messages related to published OData/GraphQL services. |
| OrphanFileCleaner | Logs messages related to the orphan file cleaning background task. |
| QueryParser | Logs messages related to the parsing or interpretation of XPath and OQL queries. |
| REST Consume | Logs messages related to the Call REST service activity. |
| REST Publish | Logs messages related to published REST services. |
| RequestStatistics | Logs messages if the thresholds related to state defined in the Client Runtime Settings have been exceeded. |
| SchemeManager | Logs messages related to model loading that is performed during startup. |
| Services | Logs messages related to web services. |
| StorageAzure | Logs messages related to file handling if you are using Azure storage as your file store. |
| StorageLocal | Logs messages related to file handling if you are using the local file system as your file store. |
| StorageS3 | Logs messages related to file handling if you are using Amazon S3 as your file store. |
| TaskQueue | All actions related to task queues. |
| WebServices | Traces SOAP call request and response contents. |
| WebUI | Logs messages if the thresholds related to feedback size defined in the Client Runtime Settings have been exceeded, or if creating a valid session has failed. |
| Workflow Engine | Logs messages related to workflow executions, for example lifecycle events (such as the start or end of a workflow), execution of workflow actions, and errors that occur during the execution. |
| XML Export | Logs messages related to export mappings to XML. |
| XML Import | Logs messages related to import mappings from XML. |

JSON

The JSON log node has only one relevant level: Debug.

Setting this log node to Debug will show you all the JSON requests and responses exchanged between the Mendix Client and the Runtime Server. This may degrade performance, as this output is normally streamed. It can also be used to gain insight into what users are doing in a production environment; if you use it there, make sure you have enough disk space available for your log files.