Storage Plans

Last modified: July 9, 2024

1 Introduction

Every Mendix app environment needs a database to store entities, and a blob file storage bucket to store the contents of System.FileDocument entities. When an app developer creates a new environment, the Mendix Operator will automatically create (provision) a database and blob file storage bucket for that environment. In this way, an app developer does not need to install or configure a database - the Mendix Operator automatically prepares the database and blob file storage bucket, and links them to the new environment. Creating a new environment can be completely automated, and can be done by an app developer without assistance from the infrastructure team.

The Mendix Operator has a modular architecture and offers many options for how to create and attach databases and blob file storage buckets to environments. A storage plan specifies a configuration blueprint for providing a database or blob file storage bucket to a new environment. For example, if you create a postgres-prod storage plan and configure it to use a specific Postgres instance, an app developer will be able to choose postgres-prod from the database plan dropdown when creating a new environment. This plan will create a new Postgres role (user) and a new database (schema and tenant) inside the existing Postgres instance (for example, an RDS instance). The new Mendix app environment will only be able to access its own database and tenant, and will be isolated from other apps running on the same Postgres instance.

The Mendix Operator supports two storage management operations:

  • Create - Creates a new tenant, generates credentials, and attaches them to the new environment.
  • Delete - Deletes the environment’s tenant, and optionally deletes that environment’s data (database or files).

1.1 Classification of Storage Plans

There are multiple ways to categorize storage options.

1.1.1 Automated Storage Options

Automated provisioners can communicate with an API to create an isolated tenant for an environment. For example, the minio provisioner will automatically create a bucket, user and policy for every new environment. In this way, each environment gets its own user and bucket, and the Mendix Operator can isolate app environments from one another.

In most cases, automated provisioners require some prerequisites to create or delete tenants. This usually means an existing service (such as a Postgres or MinIO server) and admin credentials.

1.1.2 Basic Storage Options

Basic provisioners do not communicate with any APIs. Instead, they attach existing, pre-created credentials to a new environment. For example, a basic provisioner like Ceph provides the same credentials to every app environment (with an option to let each environment use its own bucket prefix).

Basic provisioners do not provide isolation between environments, but in some cases can provide more control over how storage is managed. For example, this option can be used to attach a pre-created S3 bucket or on-premise SQL Server database to a new environment.

1.1.3 On-Demand Options

On-demand storage plans can be used by any number of environments. These provisioners can provide a database and bucket on demand to any new environment.

1.1.4 Dedicated Options

Dedicated storage plans can only be used by one environment at a time. If a storage plan is marked as dedicated and is already in use by an environment, new environments cannot use it.

Most provisioners require some prerequisites to be created manually. Typically, this would be a server (database or blob file storage) and credentials to access or manage it.

1.2 Creating and Testing a Storage Plan

As a best practice, test your new storage plan by creating a new environment and confirming that it is working as expected. In some cases, even though the Mendix Operator was able to create a database and bucket, an environment may fail to connect because of firewalls, Kubernetes security policies, or other reasons.

To create a new storage plan, do the following steps:

  1. Run the mxpc-cli configuration tool and fill in all the necessary details for the storage plan or plans.
  2. Apply the changes but keep the mxpc-cli configuration tool open.
  3. Try to create a new test environment using the new storage plan. If the environment is successfully created and able to start, the storage plan is ready to use.
  4. If the environment cannot be successfully created or started, check the error message displayed in the Cloud Portal and the logs from the {environment-name}-database and {environment-name}-file pods.
  5. If necessary, update the storage plan configuration in mxpc-cli by switching to the Database Plan or Storage Plan tabs, and apply the configuration.
  6. Delete the failed {environment-name}-database or {environment-name}-file pod, and then test the storage plan again (see the example below).
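
For example, assuming the namespace is mendix-apps and the environment is named myapp-test (substitute your own names), steps 4 and 6 could look like this:

    # Step 4: inspect the provisioner logs to find the root cause
    kubectl -n mendix-apps logs myapp-test-database
    kubectl -n mendix-apps logs myapp-test-file

    # Step 6: delete the failed provisioner pod before retrying
    kubectl -n mendix-apps delete pod myapp-test-database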

1.3 Known Limitations

The following sections outline some limitations to keep in mind when using storage plans, as well as potential ways to mitigate those limitations.

1.3.1 Updating a Plan Does Not Update Existing Environments

Updating a storage plan does not update any already existing environments. For example, if you migrate a database to a new URL, updating the storage plan will not update the database URL in any already existing environments. In addition, any significant changes to the storage plan configuration (such as replacing Postgres with SQL Server) will not migrate the data in already existing environments.

To apply significant changes to your environments, do the following steps:

  1. Create a new storage plan.
  2. Create new versions of the existing environments using the new storage plan.
  3. Migrate data from existing environments to their new versions and verify that the migration was successful.
  4. Delete previous environments and disable the previous storage plan.

1.3.2 Rotating Credentials Requires Manual Updates

To rotate credentials of an environment, you must manually update the credentials in the environment’s Kubernetes secret. If your security policy requires a regular rotation of credentials, consider using Secrets Storage instead.
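
As an illustrative sketch only (the secret name and data key shown here are hypothetical - the actual names depend on your environment and storage plan), rotating a database password could look like this:

    # Update the password key in the environment's database credentials secret
    kubectl -n mendix-apps patch secret myapp-test-database \
        --type merge -p '{"stringData":{"password":"<new-password>"}}'

After updating the secret, you will typically also need to restart the environment’s pods so that they pick up the new value.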

1.3.3 Provisioner Pods Do Not Automatically Retry after Failing

If a provisioner pod fails, it will attempt to roll back any changes it made, but will not automatically retry.

To retry a failed provisioner pod, do the following steps:

  1. Check the logs of the failed {environment-name}-database or {environment-name}-file pod to find the root cause of the problem.
  2. Resolve the cause of the problem.
  3. Delete the failed {environment-name}-database or {environment-name}-file pod to retry.

1.3.4 The Configuration of an Existing Storage Plan Cannot Be Read

It is not currently possible to read the configuration of an existing storage plan. The only way to update the configuration of a storage plan is by overwriting it with an updated version. If you have created a storage plan in the past and would like to update it, for example, to change the admin credentials, you must create a new storage plan and give it the same name as the currently existing storage plan. This new configuration will overwrite and replace the existing plan.

1.3.5 You Are Responsible for Backing up and Restoring Files

The Mendix Operator only provisions databases and blob file storage for environments; it does not back up their contents. Backing up an environment’s data, and restoring it when needed, is your responsibility.

1.3.6 The Mxpc-cli Configuration Tool Creates One Storage Plan at a Time

You can create at most one database plan and one blob file storage plan each time you run the mxpc-cli configuration tool. Run the configuration tool multiple times to create additional database and blob file storage plans.

1.3.7 Some UI Elements May Be Hidden When Not in Fullscreen Mode

If the screen or terminal cannot fit all the elements, some UI elements may be hidden. As a best practice, open the mxpc-cli Configuration Tool in fullscreen mode, or increase the terminal window size to at least 180x60 characters.

1.3.8 Deleting an Environment Must Be Verified Manually

If you delete an environment, make sure that it is completely deleted by running the following commands:

  • kubectl -n {namespace} get storageinstance {environment-name}-file
  • kubectl -n {namespace} get storageinstance {environment-name}-database

If the commands return a not found response, your environment database and blob file storage have been fully removed. If either the database or the blob file storage were not deleted, you must find and troubleshoot the reason, and then do a manual cleanup if necessary. Until the cleanup is done, you should not create a new environment that uses the same name as the environment that is still being deleted.

2 Database Plans

Every Mendix app needs a database to store its persistent entities. A database plan tells the Mendix Operator how to provide a database to a new Mendix app environment.

2.1 Creating a Database Plan

To create a new database plan, do the following steps:

  1. Give your plan a Name and choose the Database Type. See the information below for more help in setting up plans for the different types of database supported by Mendix for Private Cloud.
  2. Apply two validation checks by clicking the Validate and Connection Validation buttons:
    • Validate – Checks that you have provided all the required values and that they are in the correct format.
    • Connection validation – Checks whether a connection can be established using the provided configuration details. This does not guarantee that the storage instance will be created successfully when the configuration is applied, so to fully test a database plan, you will need to test it by creating a temporary test environment.
Figure: Database Plan Configuration

2.2 Supported Database Types

The following sections describe the supported database types.

2.3 Postgres

Postgres databases can be used with static authentication.

If the Postgres instance is an AWS RDS database, you can use IAM authentication for additional security.

If the Postgres instance is an Azure Postgres (Flexible Server) database, you can use managed identity authentication for additional security.

2.3.1 Postgres (static credentials)

The Postgres database is an automated, on-demand database. The Postgres plan offers a good balance between automation, ease of use, and security. It is the most versatile and portable option for production-grade databases. If you would like to have more control over database configuration, consider using the JDBC plan instead. If your provider is AWS, Postgres IAM authentication can be used instead to increase security. If your provider is Azure, Postgres managed identity authentication can be used instead to increase security.

2.3.1.1 Prerequisites
  • A Postgres server - for example, an RDS instance, or a Postgres server installed from a Helm chart
  • A superuser account and its login database - in most cases, this would be the default postgres user and postgres database.
2.3.1.2 Environment Isolation
  • Unique user (Postgres role) for every environment.
  • Unique database for every environment.
  • Environment has full access only to its own database, cannot access data from other environments.
2.3.1.3 Limitations
  • Passwords can only be rotated manually.
  • The Postgres server will be shared between environments, which could affect scalability.
2.3.1.4 Create Workflow

When a new environment is created, the Mendix Operator performs the following actions:

  • Generate a database name, username (Postgres role) and password for the new environment.
  • Create a new database in the provided Postgres server. This will be the environment’s dedicated database.
  • Create a new user (role) for the new environment, and allow this user to access only the new environment’s database. This will be the environment’s user.
  • Create a Kubernetes secret to provide connection details to the new app environment - to automatically configure the new environment.
2.3.1.5 Delete Workflow

When an existing environment is deleted, the Mendix Operator performs the following actions:

  • Delete that environment’s user (role).
  • Delete that environment’s database.
  • Delete that environment’s Kubernetes database credentials secret.
2.3.1.6 Configuring a Postgres Plan

In the Postgres plan configuration, enter the following details:

  • Host - Postgres server hostname, for example postgres-shared-postgresql.privatecloud-storage.svc.cluster.local.

  • Port - Postgres server port number; in most cases this should be set to 5432.

  • Strict TLS - specifies whether TLS should always be validated.

    • Enabling this option will enable full TLS certificate validation and require encryption when connecting to the PostgreSQL server. If the PostgreSQL server has a self-signed certificate, you will also need to configure custom TLS so that the self-signed certificate is accepted.
    • Disabling this option will attempt to connect with TLS, but skip certificate validation. If TLS is not supported, it will fall back to an unencrypted connection.
  • Database name - login database for the admin/superuser; in most cases this is set to postgres.

  • Authentication - select static from the dropdown.

  • Username - username of the admin or superuser, used by the Mendix Operator to create or delete tenants for app environments; typically, this is set to postgres.

  • Password - password of the admin or superuser, used by the Mendix Operator to create or delete tenants for app environments.
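
Before applying the plan, you can verify the admin credentials yourself with psql, using the same values entered above (the hostname is illustrative; psql will prompt for the password):

    psql "host=postgres-shared-postgresql.privatecloud-storage.svc.cluster.local port=5432 dbname=postgres user=postgres sslmode=require"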

2.3.2 Postgres (IAM authentication)

The Postgres database is an automated, on-demand database. The Postgres plan offers a good balance between automation, ease of use, and security. IRSA authentication removes static passwords and instead uses IAM roles for authentication.

2.3.2.1 Prerequisites
  • An RDS Postgres server with IAM authentication enabled (see the sketch after this list)

  • A superuser account and its login database - in most cases, this would be the default postgres user and postgres database.

  • A Postgres Admin IAM role with permissions to access the database, with the following inline policy (replace <aws_region> with the database’s region, <account_id> with your AWS account number, <database_id> with the RDS database instance identifier and <database_user> with the Postgres superuser account name):

    {
        "Version": "2012-10-17",
        "Statement": [
            {
                "Sid": "AllowCreateTenants",
                "Effect": "Allow",
                "Action": [
                    "rds-db:connect"
                ],
                "Resource": [
                    "arn:aws:rds-db:<aws_region>:<account_id>:dbuser:db-<database_id>/<database_user>"
                ]
            }
        ]
    }

  • An IAM-based S3 blob storage plan.
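
As a sketch of the first prerequisite (the instance identifier and hostname are placeholders), IAM database authentication can be enabled on an existing RDS instance with the AWS CLI, and the superuser must then be allowed to log in with IAM tokens from inside Postgres:

    # Enable IAM database authentication on the RDS instance
    aws rds modify-db-instance \
        --db-instance-identifier <database_id> \
        --enable-iam-database-authentication \
        --apply-immediately

    # Grant the IAM login role to the admin account (run as a superuser)
    psql -h <postgres-host> -U postgres -d postgres -c 'GRANT rds_iam TO postgres;'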

2.3.2.2 Environment Isolation
  • Unique user (Postgres role) for every environment.
  • Unique database for every environment.
  • Environment has full access only to its own database, cannot access data from other environments.
2.3.2.3 Limitations
  • The Postgres server will be shared between environments, which could affect scalability.
  • To use this feature, your app needs to be upgraded to Mendix 9.22 (or later), and your namespace needs to use Mendix Operator version 2.12.0 (or later).
2.3.2.4 Create Workflow

When a new environment is created, the Mendix Operator performs the following actions:

  • Generate a database name and username (Postgres role) for the new environment.
  • Create a new database in the provided Postgres server. This will be the environment’s dedicated database.
  • Create a new user (Postgres role) for the new environment, and allow this user to access only the new environment’s database. This will be the environment’s user.
  • Create a Kubernetes secret to provide connection details to the new app environment - to automatically configure the new environment. Since the app environment will authenticate through an IAM role, this secret will not contain any static passwords - only the database hostname, username and other non-sensitive connection details.
2.3.2.5 Delete Workflow

When an existing environment is deleted, the Mendix Operator performs the following actions:

  • Delete that environment’s user (role).
  • Delete that environment’s database.
  • Delete that environment’s Kubernetes database credentials secret.
2.3.2.6 Configuring a Postgres Plan

In the Postgres plan configuration, enter the following details:

  • Host - Postgres server hostname, for example postgres-shared-postgresql.privatecloud-storage.svc.cluster.local.
  • Port - Postgres server port number; in most cases this should be set to 5432.
  • Strict TLS - specifies whether TLS should always be validated.
    • Enabling this option will enable full TLS certificate validation and require encryption when connecting to the PostgreSQL server. If the PostgreSQL server has a self-signed certificate, you will also need to configure custom TLS so that the self-signed certificate is accepted.
    • Disabling this option will attempt to connect with TLS, but skip certificate validation. If TLS is not supported, it will fall back to an unencrypted connection.
  • Database name - login database for the admin/superuser; in most cases this is set to postgres.
  • Authentication - select aws-iam from the dropdown.
  • Username - username of the admin or superuser, used by the Mendix Operator to create or delete tenants for app environments; typically, this is set to postgres.
  • IAM Role ARN - the Postgres Admin IAM role ARN.
    • Mendix recommends using the same IAM role to manage Postgres databases and S3 buckets, as this would be easier to set up and maintain.
  • K8s Service Account - the Kubernetes Service Account to create and attach to the IAM role.

AWS IRSA allows a Kubernetes Service Account to assume an IAM role. For this to work correctly, the IAM role’s trust policy needs to trust the Kubernetes Service Account:

  1. Open the role for editing and add an entry for the ServiceAccount (or ServiceAccounts) to the list of conditions:

  2. For the second condition, copy and paste the sts.amazonaws.com line; replace :aud with :sub and set it to system:serviceaccount:<Kubernetes namespace>:<Kubernetes serviceaccount name>.

    See Amazon EKS Pod Identity Webhook – EKS Walkthrough for more details.

    The role ARN is required; you can copy it using the Copy button next to the ARN in the role details.
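
For illustration, after both conditions are in place, the role’s trust policy could look like the following sketch (the account ID, region, OIDC provider ID, namespace, and ServiceAccount name are placeholders):

    {
        "Version": "2012-10-17",
        "Statement": [
            {
                "Effect": "Allow",
                "Principal": {
                    "Federated": "arn:aws:iam::<account_id>:oidc-provider/oidc.eks.<aws_region>.amazonaws.com/id/<oidc_id>"
                },
                "Action": "sts:AssumeRoleWithWebIdentity",
                "Condition": {
                    "StringEquals": {
                        "oidc.eks.<aws_region>.amazonaws.com/id/<oidc_id>:aud": "sts.amazonaws.com",
                        "oidc.eks.<aws_region>.amazonaws.com/id/<oidc_id>:sub": "system:serviceaccount:<namespace>:<serviceaccount>"
                    }
                }
            }
        ]
    }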

2.3.3 Postgres (Azure managed identity authentication)

The Postgres database is an automated, on-demand database. The Postgres plan offers a good balance between automation, ease of use, and security. Managed identity authentication removes static passwords and instead uses Microsoft Entra managed identities for authentication.

This section provides technical details on how managed identity authentication works with Postgres. If you just need instructions to get started, the Azure Managed Identity-based storage walkthrough provides a quick start guide to set the Mendix Operator to manage a Postgres database, SQL Server and Blob Storage account using managed identity authentication.

2.3.3.1 Prerequisites
  • An Azure Postgres (Flexible Server) with Entra authentication enabled
  • A Postgres Admin managed identity that the Mendix Operator would use to create/delete databases and managed identities for app environments. This managed identity needs the following permissions:

    • Permissions to authenticate with Azure Postgres using its managed identity (as an Entra Admin of the Postgres server);
    • A Managed Identity Contributor role in its resource group.
2.3.3.2 Environment Isolation
  • Unique user (Postgres role) for every environment.
  • Unique managed identity for every environment.
  • Unique database for every environment.
  • Environment has full access only to its own database, cannot access data from other environments.
2.3.3.3 Limitations
  • The Postgres server will be shared between environments, which could affect scalability.
  • To use this feature, your app needs to be upgraded to Mendix 9.22 (or later), and your namespace needs to use Mendix Operator version 2.17.0 (or later).
2.3.3.4 Create Workflow

When a new environment is created, the Mendix Operator performs the following actions:

  • Create a Managed Identity for an environment. This Managed Identity will be created in the same resource group, subscription and region as the Postgres Admin managed identity.
  • Create a Kubernetes Service Account and attach it to the environment’s Managed Identity. This Service Account acts as a replacement for static credentials, and can also be used to authenticate with Azure Postgres databases.
  • Generate a database name and username (Postgres role) for the new environment.
  • Create a new database in the provided Postgres server. This will be the environment’s dedicated database.
  • Create a new user (Postgres role) for the new environment, and allow this user to access only the new environment’s database. This will be the environment’s user.
  • Link the environment’s Postgres user (role) with the environment’s Managed Identity by adding a security label to the Postgres user (role).
  • Create a Kubernetes secret to provide connection details to the new app environment - to automatically configure the new environment. Since the app environment will authenticate through a managed identity role, this secret will not contain any static passwords - only the database hostname, username and other non-sensitive connection details.
2.3.3.5 Delete Workflow

When an existing environment is deleted, the Mendix Operator performs the following actions:

  • Delete that environment’s user (role).
  • Delete that environment’s database.
  • Delete that environment’s Managed Identity.
  • Delete that environment’s Kubernetes Service Account.
  • Delete that environment’s Kubernetes database credentials secret.
2.3.3.6 Configuring a Postgres Plan

In the Postgres plan configuration, enter the following details:

  • Host - Postgres server hostname, for example postgres-shared-postgresql.privatecloud-storage.svc.cluster.local.
  • Port - Postgres server port number; in most cases this should be set to 5432.
  • Strict TLS - Set to yes, as Azure Postgres supports TLS without any extra configuration.
  • Database name - login database for the Postgres Admin managed identity; in most cases this is set to postgres.
  • Authentication - select azure-wi from the dropdown.
  • Managed Identity Name - name for the Postgres Admin managed identity, used by the Mendix Operator to create or delete tenants for app environments.
  • Managed Identity Client ID - the Postgres Admin managed identity Client ID.
    • Mendix recommends using the same storage admin managed identity to manage Azure databases and blob storage containers, as this would be easier to set up and maintain. One storage admin Service Account can be used for multiple storage plans, and only one Federated Credential would be needed to link it with a storage admin Managed Identity.
  • K8s Service Account - the Kubernetes Service Account to create and attach to the Postgres Admin managed identity (will be created automatically by the mxpc-cli installation and configuration tool).

Azure workload identities allow a Kubernetes Service Account to authenticate itself as a specific Managed Identity. For this to work correctly, add a Federated Credential to the Postgres Admin managed identity:

  1. Enable managed identities for your AKS cluster as described in the Azure documentation. This only needs to be done once per cluster.

    Ensure that you have the Cluster OIDC Issuer URL. You will need the URL to complete the configuration.

  2. Add a Federated Credential to the Managed identity by using the az identity federated-credential create command, or by going to the Federated credentials tab and using the Add Credential wizard (a CLI example follows this list). This will allow the Postgres Admin Kubernetes Service Account to be associated with its Managed identity.

  3. Fill in the following details:

    • Federated credential scenario - Kubernetes accessing Azure resources
    • Cluster Issuer URL - the Cluster OIDC URL from step 1
    • Namespace - the Kubernetes namespace where the Operator is installed; for Global Operator installations, you must specify the managed namespace in the Namespace field.
    • Service Account - the K8s Service Account specified in the Postgres plan configuration
    • Name - any value
  4. Assign this Postgres Admin Managed Identity a Managed Identity Contributor role in its resource group.

  5. Add this Postgres Admin Managed Identity as an Entra Admin in the Postgres database.
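
As an illustration of step 2, the CLI variant could look like this (the credential name is arbitrary; the other values are placeholders for your identity, resource group, cluster, and namespace):

    az identity federated-credential create \
        --name mendix-postgres-admin-federated \
        --identity-name <postgres-admin-identity> \
        --resource-group <resource-group> \
        --issuer <cluster-oidc-issuer-url> \
        --subject system:serviceaccount:<namespace>:<serviceaccount> \
        --audiences api://AzureADTokenExchange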

2.4 Ephemeral

Ephemeral databases are basic, on-demand databases. Ephemeral databases are the simplest option to implement. The Ephemeral plan will enable you to quickly set up your environment and deploy your app, but any data you store in the database will be lost when you restart your environment.

2.4.1 Prerequisites

None.

2.4.2 Limitations

  • Data is lost when the app pod is restarted.
  • It is not possible to run more than one replica of an app.

2.4.3 Environment Isolation

  • Each environment (Kubernetes pod) stores its data in memory.
  • An environment cannot access data from other environments.

2.4.4 Create Workflow

When a new environment is created, the Mendix Operator performs the following actions:

  • Create a Kubernetes secret to provide connection details to the new app environment and specify that the app should use a local in-memory database.

2.4.5 Delete Workflow

When an existing environment is deleted, the Mendix Operator performs the following actions:

  • Delete that environment’s Kubernetes database credentials secret.

2.5 SQL Server

SQL Server databases can be used with static authentication. If the SQL Server instance is an Azure SQL database, you can use managed identity authentication for additional security.

2.5.1 SQL Server (static credentials)

SQL Server databases are automated, on-demand databases. The SQL Server plan offers a good balance between automation, ease of use, and security when using Microsoft SQL Server or Azure SQL. If you would like to have more control over database configuration, consider using the JDBC plan instead.

If your app is using Mendix 10.10 (or a later version) consider using the Azure managed identity authentication instead, for additional security.

2.5.1.1 Prerequisites
  • An SQL Server server - for example, an Azure SQL server, or a SQL Server installed from a Helm chart.

  • An admin user account.

2.5.1.2 Limitations
  • Passwords can only be rotated manually.
  • A standalone SQL Server will be shared between environments, which could affect scalability. Azure SQL allows more flexibility, and is much better at scaling - each database can have reserved capacity and does not affect performance of other databases on the same server.
  • NetBIOS names are not supported. It is only possible to connect using the server’s FQDN.
  • Only username/password authentication is supported at the moment.
2.5.1.3 Environment Isolation
  • Unique user and login for every environment.
  • Unique database for every environment.
  • Environment has full access only to its own database, cannot access data from other environments.
2.5.1.4 Create Workflow

When a new environment is created, the Mendix Operator performs the following actions:

  • Generate a database name, username and password for the new environment.
  • Create a new database in the provided SQL Server server. This will be the environment’s dedicated database.
  • Create a new user and login for the new environment, and allow this user to access only the new environment’s database. This will be the environment’s user.
  • Create a Kubernetes secret to provide connection details to the new app environment - to automatically configure the new environment.
2.5.1.5 Delete Workflow

When an existing environment is deleted, the Mendix Operator performs the following actions:

  • Delete that environment’s user and login.
  • Delete that environment’s database.
  • Delete that environment’s Kubernetes database credentials secret.
2.5.1.6 Configuring an SQL Server Plan

In the SQL Server plan configuration, enter the following details:

  • Host - SQL Server (Azure SQL) server hostname, for example my-database.database.windows.net
  • Port - SQL Server (Azure SQL) server port number, in most cases this should be set to 1433.
  • Strict TLS - Specifies if TLS should always be validated.
    • Enabling this option will enable full TLS certificate validation and require encryption when connecting to SQL Server. If the SQL Server server has a self-signed certificate, you will also need to configure custom TLS so that the self-signed certificate is accepted. Azure SQL supports Strict TLS without any extra TLS configuration - no additional custom TLS configuration is required.
    • Disabling this option will attempt to connect with TLS, but skip certificate validation. If TLS is not supported, it will fall back to an unencrypted connection.
  • Authentication - select static from the dropdown.
  • Username - Username for the admin user, used by the Mendix Operator to create or delete tenants for app environments.
  • Password - Password for the admin user, used by the Mendix Operator to create or delete tenants for app environments.
  • Is Azure SQL Server - Opens additional options that are only available when using Azure SQL (instead of a standalone SQL Server):
    • Elastic Pool - Specifies an existing Elastic Pool to use (can be left empty if the new app’s database should not use an elastic pool).
    • Edition - Specifies the database edition/tier to use, for example Basic. Can be left empty, in which case Azure SQL will use the default GeneralPurpose edition.
    • Service Objective - Specifies the database service objective (performance level), for example Basic. Can be left empty, in which case Azure SQL will use the default service objective (such as GP_Gen5_2).
    • Maximum Size - Specifies the database maximum size, for example 1 GB. Can be left empty, in which case the default size will be used.
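
Before applying the plan, you can verify the admin credentials with a quick sqlcmd connectivity test (the hostname and login are illustrative):

    sqlcmd -S my-database.database.windows.net,1433 -U <admin-user> -P '<admin-password>' -Q "SELECT @@VERSION;"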

2.5.2 SQL Server (Azure managed identity authentication)

SQL Server databases are automated, on-demand databases. The SQL Server plan offers a good balance between automation, ease of use, and security when using Microsoft SQL Server or Azure SQL. If you would like to have more control over database configuration, consider using the JDBC plan instead.

2.5.2.1 Prerequisites
  • An Azure SQL Server with Entra authentication enabled.

  • A SQL Admin managed identity that the Mendix Operator would use to create/delete databases and managed identities for app environments. This managed identity needs the following permissions:

    • Permissions to authenticate with Azure SQL using its managed identity;
    • A dbmanager role in the master database;
    • A Managed Identity Contributor role in its resource group.
2.5.2.2 Limitations
  • To use this feature, your app needs to be upgraded to Mendix 10.10 (or later), and your namespace needs to use Mendix Operator version 2.17.0 (or later).
2.5.2.3 Environment Isolation
  • Unique user for every environment.
  • Unique managed identity for every environment.
  • Unique database for every environment.
  • Environment has full access only to its own database, cannot access data from other environments.
2.5.2.4 Create Workflow

When a new environment is created, the Mendix Operator performs the following actions:

  • Create a Managed Identity for an environment. This Managed Identity will be created in the same resource group, subscription and region as the SQL Admin managed identity.
  • Create a Kubernetes Service Account and attach it to the environment’s Managed Identity. This Service Account acts as a replacement for static credentials, and can also be used to authenticate with Azure SQL databases.
  • Generate a database name for the new environment.
  • Create a new database in the provided SQL Server server. This will be the environment’s dedicated database.
  • Generate a username for the new environment.
  • Create a new contained database user for the new environment. This will be the environment’s user, which only exists in the environment’s database. This user will be linked with the environment’s Managed Identity.
  • Create a Kubernetes secret to provide connection details to the new app environment - to automatically configure the new environment. Since the app environment will authenticate through a managed identity role, this secret will not contain any static passwords - only the database hostname, username and other non-sensitive connection details.
2.5.2.5 Delete Workflow

When an existing environment is deleted, the Mendix Operator performs the following actions:

  • Delete that environment’s database (which also deletes the contained database user).
  • Delete that environment’s Managed Identity.
  • Delete that environment’s Kubernetes Service Account.
  • Delete that environment’s Kubernetes database credentials secret.
2.5.2.6 Configuring an SQL Server Plan

In the SQL Server plan configuration, enter the following details:

  • Host - SQL Server (Azure SQL) server hostname, for example my-database.database.windows.net
  • Port - SQL Server (Azure SQL) server port number, in most cases this should be set to 1433.
  • Strict TLS - Set to yes, as Azure SQL supports TLS without any extra configuration.
    • Enabling this option will enable full TLS certificate validation and require encryption when connecting to SQL Server. If the SQL Server server has a self-signed certificate, you will also need to configure custom TLS so that the self-signed certificate is accepted. Azure SQL supports Strict TLS without any extra TLS configuration - no additional custom TLS configuration is required.
    • Disabling this option will attempt to connect with TLS, but skip certificate validation. If TLS is not supported, it will fall back to an unencrypted connection.
  • Authentication - select azure-wi from the dropdown.
  • Managed Identity Client ID - the SQL Admin managed identity Client ID.
    • Mendix recommends using the same storage admin managed identity to manage Azure databases and blob storage containers, as this would be easier to set up and maintain. One storage admin Service Account can be used for multiple storage plans, and only one Federated Credential would be needed to link it with a storage admin Managed Identity.
  • K8s Service Account - the Kubernetes Service Account to create and attach to the SQL Admin managed identity (will be created automatically by the mxpc-cli installation and configuration tool).
  • Is Azure SQL Server - Opens additional options that are only available when using Azure SQL (instead of a standalone SQL Server):
    • Elastic Pool - Specifies an existing Elastic Pool to use (can be left empty if the new app’s database should not use an elastic pool).
    • Edition - Specifies the database edition/tier to use, for example Basic. Can be left empty, in which case Azure SQL will use the default GeneralPurpose edition.
    • Service Objective - Specifies the database service objective (performance level), for example Basic. Can be left empty, in which case Azure SQL will use the default service objective (such as GP_Gen5_2).
    • Maximum Size - Specifies the database maximum size, for example 1 GB. Can be left empty, in which case the default size will be used.

Azure workload identities allow a Kubernetes Service Account to authenticate itself as a specific Managed Identity. For this to work correctly, add a Federated Credential to the SQL Admin managed identity:

  1. Enable managed identities for your AKS cluster as described in the Azure documentation. This only needs to be done once per cluster.

    Ensure that you have the Cluster OIDC Issuer URL. You will need the URL to complete the configuration.

  2. Add a Federated Credential to the Managed identity by using the az identity federated-credential create command, or by going to the Federated credentials tab and using the Add Credential wizard. This will allow the SQL Admin Kubernetes Service Account to be associated with its Managed identity.

  3. Fill in the following details:

    • Federated credential scenario - Kubernetes accessing Azure resources
    • Cluster Issuer URL - the Cluster OIDC URL from step 1
    • Namespace - the Kubernetes namespace where the Operator is installed; for Global Operator installations, you must specify the managed namespace in the Namespace field.
    • Service Account - the K8s Service Account specified in the SQL Server plan configuration
    • Name - any value
  4. Assign this SQL Admin Managed Identity a Managed Identity Contributor role in its resource group.

  5. Open a Bash-compatible shell (or Azure Console in Bash mode), and run the following command to connect to the Azure SQL master database using sqlcmd managed identity authentication, replacing <hostname> with the SQL Server server hostname (e.g. example.database.windows.net):

    az account get-access-token --resource https://database.windows.net --output tsv | cut -f 1 | tr -d '\n' | iconv -f ascii -t UTF-16LE > /tmp/token && sqlcmd -S <hostname> -G -P /tmp/token && rm /tmp/token
    
  6. In the sqlcmd client, run the following commands (replace <sql-admin-identity-name> with the SQL Admin Managed Identity name):

    CREATE USER [<sql-admin-identity-name>] FROM EXTERNAL PROVIDER;
    GO
    ALTER ROLE dbmanager ADD MEMBER [<sql-admin-identity-name>];
    GO
    quit
    

2.6 Dedicated JDBC

JDBC databases are dedicated, basic databases. The Dedicated JDBC plan enables you to enter the database configuration parameters for an existing database directly, as supported by the Mendix Runtime. This plan allows you to configure and use any database supported by the Mendix Runtime, including Oracle.

2.6.1 Prerequisites

  • A database server, for example Postgres or Oracle.
  • A database in the database server - the database that should be used by the Mendix app environment.
  • A user account that has permissions to access the database - the account that should be used by the Mendix app environment.

2.6.2 Limitations

  • Passwords can only be rotated manually.
  • A dedicated JDBC database cannot be used by more than one Mendix app.
  • Configuration parameters will not be validated and will be provided to the Mendix app as-is. If the arguments are not valid or there is an issue with permissions, the Mendix Runtime will fail to start, and the deployment will appear stuck with Replicas running and Runtime showing a spinner.

2.6.3 Environment Isolation

  • Database plan can only be used by one environment at a time.
  • Other environments will not be able to use the database plan if it’s already in use.

2.6.4 Create Workflow

When a new environment is created, the Mendix Operator performs the following actions:

  • Create a Kubernetes secret to provide connection details to the new app environment - to automatically configure the new environment.

2.6.5 Delete Workflow

When an existing environment is deleted, the Mendix Operator performs the following actions:

  • Delete that environment’s Kubernetes database credentials secret.

2.6.6 Configuring a Dedicated JDBC Plan

In the Dedicated JDBC plan configuration, enter the following details:

  • Database type - The database type, one of the supported DatabaseType values such as PostgreSQL.
  • Host - The database hostname, for example postgres-shared-postgresql.privatecloud-storage.svc.cluster.local:5432 - specifies the value of DatabaseHost.
  • Database name - The name of the database or schema used by the Mendix app, for example postgres - specifies the value of DatabaseName.
  • JDBC URL - The JDBC URL used to connect to the database, for example jdbc:postgresql://postgres-shared-postgresql.privatecloud-storage.svc.cluster.local:5432/myappdb?sslmode=prefer.
  • User - Specifies the username to be used by the Mendix app environment to connect to the database.
  • Password - Specifies the password for the User account.
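
JDBC URL formats differ per database type. The following patterns are illustrative only; consult your JDBC driver’s documentation for the exact form:

    jdbc:postgresql://<host>:5432/<database>?sslmode=prefer
    jdbc:sqlserver://<host>:1433;databaseName=<database>;encrypt=true
    jdbc:oracle:thin:@//<host>:1521/<service-name>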

3 Blob File Storage Plans

The following Blob File Storage Types are supported:

  • MinIO - Easiest option to use for a cloud vendor-agnostic solution, if the MinIO server license terms are acceptable
  • Ephemeral (non-persistent) - Simplest option; the contents of System.FileDocument will only be stored locally in a pod and will be lost when a pod is restarted
  • Amazon S3 - Solution hosted by Amazon S3
  • Azure Blob Storage - Solution hosted by Azure Blob Storage
  • Google Cloud Storage - Solution hosted by Google Cloud Storage
  • Ceph RADOS - Allows the use of a pre-created bucket from an S3-compatible vendor. This option also works with other S3-compatible storage options (not listed in this document)

3.1 MinIO

MinIO is an automated, on-demand S3-compatible object storage. The MinIO plan offers a good balance between automation, ease of use and security, and doesn’t depend on any cloud vendors.

3.1.1 Prerequisites

  • A MinIO server - for example, installed from a Helm chart or using the official MinIO Kubernetes Operator.

  • An admin user account - with permissions to create/delete users, policies and buckets.

3.1.2 Limitations

  • Access and Secret keys used by existing environments can only be rotated manually.
  • The MinIO server needs to be a full-featured MinIO server, or a MinIO Gateway with configured etcd. MinIO Gateway without etcd can only have one user, and won’t support environment isolation.

3.1.3 Environment Isolation

  • Every environment has its own IAM user.
  • An environment can only access its own blob file storage bucket.

3.1.4 Create Workflow

When a new environment is created, the Mendix Operator performs the following actions:

  • Generate a bucket name and an IAM username for the new environment.
  • Create a new IAM user for the new environment.
  • Create a new policy that allows the environment’s user to access the environment’s bucket.
  • Attach this new policy to the new environment’s user.
  • Create a new bucket for the new environment.
  • Create a Kubernetes secret to provide connection details to the new app environment - to automatically configure the new environment.

3.1.5 Delete Workflow

When an existing environment is deleted, the Mendix Operator performs the following actions:

  • (Only if Prevent Data Deletion is not enabled) Delete that environment’s bucket and its contents.
  • Delete that environment’s IAM user.
  • Delete that environment’s policy.
  • Delete that environment’s Kubernetes blob file storage credentials secret.

3.1.6 Configuring a MinIO Plan

In the MinIO plan configuration, enter the following details:

  • Endpoint - The MinIO server API endpoint, for example http://minio-shared.privatecloud-storage.svc.cluster.local:9000.
    • To use TLS, change http to https. If the MinIO server has a self-signed certificate, you will also need to configure custom TLS so that the self-signed certificate is accepted.
  • Access Key - The admin user account access key (username), used by the Mendix Operator to create tenants for new environments.
  • Secret Key - The admin user account secret key (password), used by the Mendix Operator to create tenants for new environments.
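
Before applying the plan, you can verify the endpoint and admin credentials with the MinIO client (the alias name mendix-minio is arbitrary):

    mc alias set mendix-minio http://minio-shared.privatecloud-storage.svc.cluster.local:9000 <access-key> <secret-key>
    mc admin info mendix-minio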

3.2 Ephemeral

The Ephemeral plan is a basic, on-demand way to quickly set up your environment and deploy your app, but any data objects you store will be lost when you restart your environment.

3.2.1 Prerequisites

  • None.

3.2.2 Limitations

  • Data is lost when the app pod is restarted.
  • If an app has more than one replica, behavior can be unpredictable unless the ingress controller has session affinity.

3.2.3 Environment Isolation

  • Each environment (Kubernetes pod) stores its data in the local filesystem.
  • An environment cannot access data from other environments.

3.2.4 Create Workflow

When a new environment is created, the Mendix Operator performs the following actions:

  • Create a Kubernetes secret to provide connection details to the new app environment and specify that the app should use the default file storage option (local files in a pod).

3.2.5 Delete Workflow

When an existing environment is deleted, the Mendix Operator performs the following actions:

  • Delete that environment’s Kubernetes blob file storage credentials secret.

3.3 Amazon S3

Mendix for Private Cloud provides a variety of options for storing files in Amazon S3. Each option uses its own approach to isolation between environments, and to attaching a bucket (and IAM user/policy) to a new environment.

If you would like the Mendix Operator to provision storage automatically, with full isolation between environments, use the Create account with existing policy option. This option works with the least possible AWS privileges. For apps using Mendix 9.22 (or a later version), the IRSA Mode option provides the same features and additional security.

If you would like to simply share a bucket between environments, or to manually create a bucket and account per environment, use the existing bucket and account option.

3.3.1 Create Account with Existing Policy

This automated, on-demand option allows you to share an existing bucket between environments, and isolates environments from accessing each other’s data.

If your app is using Mendix 9.22 (or a later version) consider using the IRSA Mode instead, for additional security.

3.3.1.1 Prerequisites
  • An existing S3 bucket

  • An environment template policy which will be attached to every new environment’s user; the policy allows access to the S3 bucket, as in the following example (replace <bucket_name> with the S3 bucket name):

    {
        "Version": "2012-10-17",
        "Statement": [
            {
                "Sid": "AllowListingOfUserFolder",
                "Action": [
                    "s3:ListBucket"
                ],
                "Effect": "Allow",
                "Resource": [
                    "arn:aws:s3:::<bucket_name>"
                ],
                "Condition": {
                    "StringLike": {
                        "s3:prefix": [
                            "${aws:username}/*",
                            "${aws:username}"
                        ]
                    }
                }
            },
            {
                "Sid": "AllowAllS3ActionsInUserFolder",
                "Effect": "Allow",
                "Resource": [
                    "arn:aws:s3:::<bucket_name>/${aws:username}/*"
                ],
                "Action": [
                    "s3:AbortMultipartUpload",
                    "s3:DeleteObject",
                    "s3:GetObject",
                    "s3:ListMultipartUploadParts",
                    "s3:PutObject"
                ]
            }
        ]
    }
    
  • An admin user account - with the following policy (replace <account_id> with your AWS account number, and <policy_arn> with the environment template policy ARN):

    {
        "Version": "2012-10-17",
        "Statement": [
            {
                "Sid": "LimitedAttachmentPermissions",
                "Effect": "Allow",
                "Action": [
                    "iam:AttachUserPolicy",
                    "iam:DetachUserPolicy"
                ],
                "Resource": "*",
                "Condition": {
                    "ArnEquals": {
                        "iam:PolicyArn": [
                            "<policy_arn>"
                        ]
                    }
                }
            },
            {
                "Sid": "iamPermissions",
                "Effect": "Allow",
                "Action": [
                    "iam:DeleteAccessKey",
                    "iam:DeleteUser",
                    "iam:CreateUser",
                    "iam:CreateAccessKey"
                ],
                "Resource": [
                    "arn:aws:iam::<account_id>:user/mendix-*"
                ]
            }
        ]
    }
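
As a sketch (the names are illustrative, and the policy documents are the two shown above, saved to local files), the prerequisites could be created with the AWS CLI:

    # Environment template policy, attached to every new environment's IAM user
    aws iam create-policy \
        --policy-name mendix-environment-template \
        --policy-document file://environment-template-policy.json

    # Admin user used by the Mendix Operator to create and delete environment users
    aws iam create-user --user-name mendix-storage-admin
    aws iam put-user-policy \
        --user-name mendix-storage-admin \
        --policy-name mendix-storage-admin-inline \
        --policy-document file://admin-policy.json
    aws iam create-access-key --user-name mendix-storage-admin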
    
3.3.1.2 Limitations
  • Access/Secret keys used by existing environments can only be rotated manually.
3.3.1.3 Environment Isolation
  • Every environment has its own IAM user.
  • The S3 bucket is shared.
    • The environment template policy uses the IAM username as a template, so that a user can only access a certain prefix (path or directory) in the bucket.
    • In practice, this means that any environment can only access files if those files’ prefix matches the environment’s IAM username.
    • An environment cannot access files from other environments.
  • The Mendix Operator does not need permissions to create new policies, only to attach a manually created policy.
3.3.1.4 Create Workflow

When a new environment is created, the Mendix Operator performs the following actions:

  • Generate a new IAM username.
  • Create the new IAM user and attach the existing environment template policy to this user.
  • Create a Kubernetes secret to provide connection details to the new app environment and to automatically configure the new environment.
3.3.1.5 Delete Workflow

When an existing environment is deleted, the Mendix Operator performs the following actions:

  • (Only if Prevent Data Deletion is not enabled) Delete files from that environment’s prefix (directory). Files from other apps (in other prefixes/directories) will not be affected.
  • Delete that environment’s IAM user.
  • Delete that environment’s Kubernetes blob file storage credentials secret.
3.3.1.6 Configuring the Plan

In the Amazon S3 plan configuration, enter the following details:

  • IRSA Authentication - Set to no.
  • Create bucket per environment - Set to no, as this option uses the existing shared bucket.
  • Create account (IAM user) per environment - Set to yes.
  • Bucket region - The existing shared bucket’s region, for example eu-west-1.
  • Bucket name - The existing shared bucket’s name, for example mendix-apps-production-example.
  • Create inline policy - Set to no, as the existing environment template policy is attached instead.
  • Attach policy ARN - The environment template policy ARN; this is the existing policy that will be attached to every environment’s user.
  • Access Key and Secret Key credentials for the admin user account - Used to create or delete environment IAM users.

3.3.2 IRSA mode

This automated, on-demand option allows you to share an existing bucket between environments, and isolates environments from accessing each other’s data. It’s similar to the Create account with existing policy option, but instead of static credentials, it uses IAM roles for authentication.

3.3.2.1 Prerequisites
  • An existing S3 bucket

  • An environment template policy which will be attached to every new environment’s user; the policy allows access to the S3 bucket and RDS database, as in the following example (replace <bucket_name> with the S3 bucket name, <aws_region> with the database’s region, <account_id> with your AWS account number, <database_id> with the RDS database instance identifier and <database_user> with the Postgres superuser account name):

    {
        "Version": "2012-10-17",
        "Statement": [
            {
                "Sid": "AllowListingOfUserFolder",
                "Action": [
                    "s3:ListBucket"
                ],
                "Effect": "Allow",
                "Resource": [
                    "arn:aws:s3:::<bucket_name>"
                ],
                "Condition": {
                    "StringLike": {
                        "s3:prefix": [
                            "${aws:PrincipalTag/privatecloud.mendix.com/s3-prefix}/*",
                            "${aws:PrincipalTag/privatecloud.mendix.com/s3-prefix}"
                        ]
                    }
                }
            },
            {
                "Sid": "AllowAllS3ActionsInUserFolder",
                "Effect": "Allow",
                "Resource": [
                    "arn:aws:s3:::<bucket_name>/${aws:PrincipalTag/privatecloud.mendix.com/s3-prefix}/*"
                ],
                "Action": [
                    "s3:AbortMultipartUpload",
                    "s3:DeleteObject",
                    "s3:GetObject",
                    "s3:ListMultipartUploadParts",
                    "s3:PutObject"
                ]
            },
            {
                "Sid": "AllowConnectionToDatabase",
                "Effect": "Allow",
                "Action": "rds-db:connect",
                "Resource": "arn:aws:rds-db:<aws_region>:<account_id>:dbuser:db-<database_id>/${aws:PrincipalTag/privatecloud.mendix.com/database-user}"
            }
        ]
    }
    
  • An admin user role - with the following policy (replace <policy_arn> with the environment template policy ARN, <bucket_name> with the S3 bucket name, <aws_region> with the database’s region, <account_id> with your AWS account number, <database_id> with the RDS database instance identifier, and <database_user> with the Postgres superuser account name):

    {
        "Version": "2012-10-17",
        "Statement": [
            {
                "Sid": "LimitedAttachmentPermissions",
                "Effect": "Allow",
                "Action": [
                    "iam:AttachRolePolicy",
                    "iam:DetachRolePolicy"
                ],
                "Resource": "*",
                "Condition": {
                    "ArnEquals": {
                        "iam:PolicyArn": [
                            "<policy_arn>"
                        ]
                    }
                }
            },
            {
                "Sid": "ManageRoles",
                "Effect": "Allow",
                "Action": [
                    "iam:CreateRole",
                    "iam:TagRole",
                    "iam:DeleteRole"
                ],
                "Resource": [
                    "arn:aws:iam::<account_id>:role/mendix-*"
                ]
            },
            {
                "Sid": "AllowFileCleanup",
                "Effect": "Allow",
                "Resource": [
                    "arn:aws:s3:::<bucket_name>"
                ],
                "Action": [
                    "s3:AbortMultipartUpload",
                    "s3:DeleteObject",
                    "s3:GetObject",
                    "s3:ListMultipartUploadParts",
                    "s3:PutObject",
                    "s3:ListBucket"
                ]
            },
            {
                "Sid": "AllowCreateRDSTenants",
                "Effect": "Allow",
                "Action": [
                    "rds-db:connect"
                ],
                "Resource": [
                    "arn:aws:rds-db:<aws_region>:<account_id>:dbuser:db-<database_id>/<database_user>"
                ]
            }
        ]
    }
    
3.3.2.2 Limitations
  • To use this feature, your app needs to be upgraded to Mendix 9.22 (or later), and your namespace needs to use Mendix Operator version 2.12.0 (or later).
3.3.2.3 Environment Isolation
  • Every environment has its own IAM role and associated Kubernetes Service Account.
  • The S3 bucket is shared.
    • The environment template policy uses the IAM role tags as a template, so that a user can only access a certain prefix (path or directory) in the bucket.
    • In practice, this means that any environment can only access files if those files’ prefix matches the environment’s IAM role tags.
    • An environment cannot access files from other environments.
  • The Mendix Operator does not need permissions to create new policies, only to attach a manually created policy.
3.3.2.4 Create Workflow

When a new environment is created, the Mendix Operator performs the following actions:

  • Create a new IAM role (the environment role).
  • Add privatecloud.mendix.com/s3-prefix and privatecloud.mendix.com/database-user tags to the environment role. These tags will be used as values in IAM policies, and can be used to limit which S3 bucket prefix the environment can access. Modifying or removing these tags will change the environment’s permissions.
  • Create a Kubernetes Service Account and attach it to the environment role. This Service Account acts as a replacement for AWS access/secret keys, and can also be used to authenticate with RDS Postgres databases.
  • Create a Kubernetes secret to provide connection details to the new app environment and to automatically configure the new environment. Since the app environment will authenticate through an IAM role, this secret will not contain any static passwords - only non-sensitive connection details such as the bucket endpoint and prefix.
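
For illustration, the tags on an environment role might look like the following sketch. The tag keys are the ones named above; the values shown here are hypothetical, since the Operator generates the actual values from the environment:

    {
        "privatecloud.mendix.com/s3-prefix": "myapp-dev-abc123",
        "privatecloud.mendix.com/database-user": "mendix-myapp-dev"
    }

The environment template policy (shown later in this document) then resolves ${aws:PrincipalTag/privatecloud.mendix.com/s3-prefix} to this tag value, scoping the role’s S3 access to that prefix.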
3.3.2.5 Delete Workflow

When an existing environment is deleted, the Mendix Operator performs the following actions:

  • (Only if Prevent Data Deletion is not enabled) Delete files from that environment’s prefix (directory). Files from other apps (in other prefixes/directories) will not be affected.
  • Delete that environment’s IAM role.
  • Delete that environment’s Kubernetes Service Account.
  • Delete that environment’s Kubernetes blob file storage credentials secret.
3.3.2.6 Configuring the Plan

In the Amazon S3 plan configuration, enter the following details:

  • IRSA Authentication - Set to yes.

  • Bucket region - The existing shared bucket’s region, for example eu-west-1.

  • Bucket name - The existing shared bucket’s name, for example mendix-apps-production-example.

  • Attach policy ARN - The environment template policy ARN; this is the existing policy that will be attached to every environment’s IAM role.

  • EKS OIDC URL - The OIDC URL of the EKS cluster; in most cases, the OIDC provider is created automatically, and its URL can be found in the AWS Management Console.

  • IAM Role ARN - the admin user role ARN.

    • Mendix recommends using the same IAM role to manage Postgres databases and S3 buckets, as this would be easier to set up and maintain.
  • K8s Service Account - the Kubernetes Service Account to create and attach to the IAM role.

AWS IRSA allows a Kubernetes Service Account to assume an IAM role. For this to work correctly, the IAM role’s trust policy needs to trust the Kubernetes Service Account:

  1. Open the role for editing and add an entry for the ServiceAccount (or ServiceAccounts) to the list of conditions:

  2. For the second condition, copy and paste the sts.amazonaws.com line; replace :aud with :sub and set it to system:serviceaccount:<Kubernetes namespace>:<Kubernetes serviceaccount name>.
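
For reference, after both conditions are set, the role’s trust policy typically looks like the following minimal sketch. Treat it as an illustration rather than an exact template: <account_id>, <oidc_provider> (the EKS OIDC provider URL without the https:// prefix), <namespace> and <serviceaccount> are placeholders for your own values:

    {
        "Version": "2012-10-17",
        "Statement": [
            {
                "Effect": "Allow",
                "Principal": {
                    "Federated": "arn:aws:iam::<account_id>:oidc-provider/<oidc_provider>"
                },
                "Action": "sts:AssumeRoleWithWebIdentity",
                "Condition": {
                    "StringEquals": {
                        "<oidc_provider>:aud": "sts.amazonaws.com",
                        "<oidc_provider>:sub": "system:serviceaccount:<namespace>:<serviceaccount>"
                    }
                }
            }
        ]
    }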

3.3.3 Existing bucket and account

This basic, on-demand option allows you to attach an existing S3 bucket and IAM account credentials (access and secret keys) to one or more environments. All apps (environments) will use the same S3 bucket and the same IAM user account.

3.3.3.1 Prerequisites
  • An existing S3 bucket

  • An environment user account, with the following IAM policy (replace <bucket_name> with the S3 bucket name):

    {
        "Version": "2012-10-17",
        "Statement": [
            {
            "Effect": "Allow",
            "Action": [
                "s3:AbortMultipartUpload",
                "s3:DeleteObject",
                "s3:GetObject",
                "s3:ListMultipartUploadParts",
                "s3:PutObject"
            ],
            "Resource": [
                "arn:aws:s3:::<bucket_name>/*"
            ]
            }
        ]
    }
    
3.3.3.2 Limitations
  • Access/Secret keys used by existing environments can only be rotated manually.
  • No isolation between environments using this blob storage plan (if the Share bucket between environments option is checked).
  • Configuration parameters will not be validated and will be provided to the Mendix app as-is. If the arguments are not valid or there is an issue with permissions, the Mendix Runtime will fail to start, and the deployment will appear to be stuck with Replicas running and Runtime showing a spinner.
  • To configure the Autogenerate Prefix option you need Mendix Operator version 2.7.0 or above. See Upgrading Private Cloud for instructions on upgrading the Mendix Operator.
3.3.3.3 Environment Isolation
  • The S3 bucket and IAM credentials (access and secret keys) are shared between all environments using this plan.
  • An environment can access data from other environments using this Storage Plan.
  • By unchecking the Share bucket between environments option, this plan switches into Dedicated mode - so that only one environment can use it.
3.3.3.4 Create Workflow

When a new environment is created, the Mendix Operator performs the following actions:

  • (Optional, if Autogenerate Prefix is checked) - generate a unique prefix based on the environment’s name, so that each environment stores files in a separate prefix (directory).
  • Create a Kubernetes secret to provide connection details to the new app environment - to automatically configure the new environment.
3.3.3.5 Delete Workflow

When an existing environment is deleted, the Mendix Operator performs the following actions:

  • Delete that environment’s Kubernetes blob file storage credentials secret.
3.3.3.6 Configuring the Plan

In the Amazon S3 plan configuration, enter the following details:

  • IRSA Authentication - Set to no.
  • Create bucket per environment - Set to no.
  • Create account (IAM user) per environment - Set to no.
  • Endpoint - The S3 bucket’s endpoint address, for example https://mendix-apps-production-example.s3.eu-west-1.amazonaws.com.
  • Access Key and Secret Key - The credentials for the environment user account.
  • Autogenerate prefix - Specifies whether the Mendix Operator should generate a unique bucket prefix (folder) for each environment, or use a fixed, predefined prefix. If you want a new environment to reuse/inherit data from an existing environment, deselect Autogenerate Prefix and provide the existing prefix you want to use.
  • Share bucket between environments - Specifies whether the bucket can be shared between environments (creating an on-demand storage plan); if unchecked, the bucket can only be used by one environment (creating a dedicated storage plan). To increase security and prevent environments from accessing each other’s data, do not enable this option.

3.3.4 Create Bucket and Account with Inline Policy

This automated, on-demand option will create an S3 bucket and IAM account for every new environment.

3.3.4.1 Prerequisites
  • An admin user account - with the following policy (replace <account_id> with your AWS account number):

    {
        "Version": "2012-10-17",
        "Statement": [
            {
                "Sid": "bucketPermissions",
                "Effect": "Allow",
                "Action": [
                    "s3:CreateBucket",
                    "s3:DeleteBucket"
                ],
                "Resource": "arn:aws:s3:::mendix-*"
            },
            {
                "Sid": "iamPermissions",
                "Effect": "Allow",
                "Action": [
                    "iam:DeleteAccessKey",
                    "iam:PutUserPolicy",
                    "iam:DeleteUserPolicy",
                    "iam:DeleteUser",
                    "iam:CreateUser",
                    "iam:CreateAccessKey"
                ],
                "Resource": [
                    "arn:aws:iam::<account_id>:user/mendix-*"
                ]
            }
        ]
    }
    
3.3.4.2 Limitations
  • Access/Secret keys used by existing environments can only be rotated manually.
  • It is not possible to customize how an S3 bucket is created (for example, encryption or default file access).
  • It is not possible to customize how the inline IAM policy is created.
3.3.4.3 Environment Isolation
  • Every environment has its own IAM user.
  • Every environment has its own S3 bucket, which can only be accessed by that environment’s IAM user.
3.3.4.4 Create Workflow

When a new environment is created, the Mendix Operator performs the following actions:

  • Generate a new IAM username and S3 bucket name for the environment.
  • Create a new S3 bucket for the environment.
  • Create the new IAM user with an inline policy allowing that user to access the environment’s S3 bucket (a sketch of such a policy follows this list).
  • Create a Kubernetes secret to provide connection details to the new app environment - to automatically configure the new environment.
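
As noted in the limitations above, the generated inline policy cannot be customized. Conceptually, it grants the environment’s IAM user access to that environment’s bucket only, along the lines of this hypothetical sketch (<bucket_name> stands for the generated bucket name; the actual policy generated by the Operator may differ):

    {
        "Version": "2012-10-17",
        "Statement": [
            {
                "Effect": "Allow",
                "Action": [
                    "s3:AbortMultipartUpload",
                    "s3:DeleteObject",
                    "s3:GetObject",
                    "s3:ListMultipartUploadParts",
                    "s3:PutObject",
                    "s3:ListBucket"
                ],
                "Resource": [
                    "arn:aws:s3:::<bucket_name>",
                    "arn:aws:s3:::<bucket_name>/*"
                ]
            }
        ]
    }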
3.3.4.5 Delete Workflow

When an existing environment is deleted, the Mendix Operator performs the following actions:

  • (Only if Prevent Data Deletion is not enabled) Delete the environment’s bucket and all of its contents.
  • Delete that environment’s IAM user and inline policy.
  • Delete that environment’s Kubernetes blob file storage credentials secret.
3.3.4.6 Configuring the Plan

In the Amazon S3 plan configuration, enter the following details:

  • IRSA Authentication - Set to no.
  • Create bucket per environment - Set to yes.
  • Create account (IAM user) per environment - Set to yes.
  • Bucket region - The region where buckets will be created, for example eu-west-1.
  • Create inline policy - Set to yes.
  • Access Key and Secret Key - Credentials for the admin user account, used to create or delete environment buckets and IAM users.

3.3.5 Create Bucket and Account with Existing Policy

This automated, on-demand option will create an S3 bucket and IAM account for every new environment.

3.3.5.1 Prerequisites
  • An environment template policy (will be attached to every new environment’s user), allowing access to the environment’s S3 bucket:

    {
        "Version": "2012-10-17",
        "Statement": [
            {
                "Sid": "AllowListingOfUserFolder",
                "Action": [
                    "s3:ListBucket"
                ],
                "Effect": "Allow",
                "Resource": [
                    "arn:aws:s3:::${aws:username}"
                ],
                "Condition": {
                    "StringLike": {
                        "s3:prefix": [
                            "${aws:username}/*",
                            "${aws:username}"
                        ]
                    }
                }
            },
            {
                "Sid": "AllowAllS3ActionsInUserFolder",
                "Effect": "Allow",
                "Resource": [
                    "arn:aws:s3:::${aws:username}/${aws:username}/*"
                ],
                "Action": [
                    "s3:AbortMultipartUpload",
                    "s3:DeleteObject",
                    "s3:GetObject",
                    "s3:ListMultipartUploadParts",
                    "s3:PutObject"
                ]
            }
        ]
    }
    
  • An admin user account - with the following policy (replace <account_id> with your AWS account number, and <policy_arn> with the environment template policy ARN):

    {
        "Version": "2012-10-17",
        "Statement": [
            {
                "Sid": "LimitedAttachmentPermissions",
                "Effect": "Allow",
                "Action": [
                    "iam:AttachUserPolicy",
                    "iam:DetachUserPolicy"
                ],
                "Resource": "*",
                "Condition": {
                    "ArnEquals": {
                        "iam:PolicyArn": [
                            "<policy_arn>"
                        ]
                    }
                }
            },
            {
                "Sid": "iamPermissions",
                "Effect": "Allow",
                "Action": [
                    "iam:DeleteAccessKey",
                    "iam:DeleteUser",
                    "iam:CreateUser",
                    "iam:CreateAccessKey"
                ],
                "Resource": [
                    "arn:aws:iam::<account_id>:user/mendix-*"
                ]
            },
            {
                "Sid": "bucketPermissions",
                "Effect": "Allow",
                "Action": [
                    "s3:CreateBucket",
                    "s3:DeleteBucket"
                ],
                "Resource": "arn:aws:s3:::mendix-*"
            }
        ]
    }
    
3.3.5.2 Limitations
  • Access/Secret keys used by existing environments can only be rotated manually.
  • It is not possible to customize how an S3 bucket is created (for example, encryption or default file access).
3.3.5.3 Environment Isolation
  • Every environment has its own IAM user.
  • Every environment has its own S3 bucket, which can only be accessed by that environment’s IAM user.
    • The environment template policy uses the IAM username as a template - so that a user can only access an S3 bucket that matches the IAM username.
  • The Mendix Operator does not need permissions to create IAM policies.
3.3.5.4 Create Workflow

When a new environment is created, the Mendix Operator performs the following actions:

  • Generate a new IAM username and S3 bucket name for the environment.
  • Create a new S3 bucket for the environment.
  • Create the new IAM user and attach the environment template policy to this user.
  • Create a Kubernetes secret to provide connection details to the new app environment - to automatically configure the new environment.
3.3.5.5 Delete Workflow

When an existing environment is deleted, the Mendix Operator performs the following actions:

  • (Only if Prevent Data Deletion is not enabled) Delete the environment’s bucket and all of its contents.
  • Delete that environment’s IAM user.
  • Delete that environment’s Kubernetes blob file storage credentials secret.
3.3.5.6 Configuring the Plan

In the Amazon S3 plan configuration, enter the following details:

  • IRSA Authentication - Set to no.
  • Create bucket per environment - Set to yes.
  • Create account (IAM user) per environment - Set to yes.
  • Bucket region - The region where buckets will be created, for example eu-west-1.
  • Create inline policy - Set to no.
  • Attach policy ARN - The environment template policy ARN; this is the policy that will be attached to every environment’s user.
  • Access Key and Secret Key - The credentials for the admin user account, used to create or delete environment buckets and IAM users.

3.3.6 Create account with inline policy

This automated, on-demand option allows the sharing of an existing bucket between environments, and isolates environments from accessing each other’s data.

3.3.6.1 Prerequisites
  • An existing S3 bucket

  • An admin user account - with the following policy (replace <account_id> with your AWS account number):

    {
        "Version": "2012-10-17",
        "Statement": [
            {
                "Sid": "iamPermissions",
                "Effect": "Allow",
                "Action": [
                    "iam:DeleteAccessKey",
                    "iam:PutUserPolicy",
                    "iam:DeleteUserPolicy",
                    "iam:DeleteUser",
                    "iam:CreateUser",
                    "iam:CreateAccessKey"
                ],
                "Resource": [
                    "arn:aws:iam::<account_id>:user/mendix-*"
                ]
            }
        ]
    }
    
3.3.6.2 Limitations
  • Access/Secret keys used by existing environments can only be rotated manually.
  • It is not possible to customize how the inline IAM policy is created.
3.3.6.3 Environment Isolation
  • Every environment has its own IAM user.
  • The S3 bucket is shared.
    • The Mendix Operator will generate an IAM policy for every user that only allows access to files in a specific prefix (directory) in the bucket (a sketch of such a policy follows this list).
    • An environment cannot access files from other environments.
  • The Mendix Operator does not need permissions to create new buckets, only to create IAM users and inline policies.
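
As with the previous plan, the generated inline policy cannot be customized. Conceptually, it limits each IAM user to that environment’s prefix in the shared bucket, similar to this hypothetical sketch (<bucket_name> is the shared bucket and <prefix> the environment’s generated prefix; the actual policy generated by the Operator may differ):

    {
        "Version": "2012-10-17",
        "Statement": [
            {
                "Sid": "AllowListingOfEnvironmentPrefix",
                "Effect": "Allow",
                "Action": ["s3:ListBucket"],
                "Resource": ["arn:aws:s3:::<bucket_name>"],
                "Condition": {
                    "StringLike": {
                        "s3:prefix": ["<prefix>/*", "<prefix>"]
                    }
                }
            },
            {
                "Sid": "AllowAllS3ActionsInEnvironmentPrefix",
                "Effect": "Allow",
                "Action": [
                    "s3:AbortMultipartUpload",
                    "s3:DeleteObject",
                    "s3:GetObject",
                    "s3:ListMultipartUploadParts",
                    "s3:PutObject"
                ],
                "Resource": ["arn:aws:s3:::<bucket_name>/<prefix>/*"]
            }
        ]
    }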
3.3.6.4 Create Workflow

When a new environment is created, the Mendix Operator performs the following actions:

  • Generate a new IAM username.
  • Create the new IAM user with an inline policy - allowing that user to access the environment’s S3 bucket.
  • Create a Kubernetes secret to provide connection details to the new app environment - to automatically configure the new environment.
3.3.6.5 Delete Workflow

When an existing environment is deleted, the Mendix Operator performs the following actions:

  • (Only if Prevent Data Deletion is not enabled) Delete files from that environment’s prefix (directory). Files from other apps (in other prefixes/directories) will not be affected.
  • Delete that environment’s IAM user.
  • Delete that environment’s Kubernetes blob file storage credentials secret.
3.3.6.6 Configuring the Plan

In the Amazon S3 plan configuration, enter the following details:

  • IRSA Authentication - Set to no.
  • Create bucket per environment - Set to no.
  • Create account (IAM user) per environment - Set to yes.
  • Bucket region - The existing shared bucket’s region, for example eu-west-1.
  • Bucket name - The existing shared bucket’s name, for example mendix-apps-production-example.
  • Create inline policy - Set to yes.
  • Access Key and Secret Key - Credentials for the admin user account, used to create or delete environment IAM users.

3.4 Azure Blob Storage

If you would like the Mendix Operator to automate provisioning and fully isolate environments from one another, use the Azure managed identity authentication option. This option works with apps using Mendix 10.10 (or a later version).

If you would like to simply share a container between environments, or to manually create a container and account per environment, use the static credentials option.

3.4.1 Azure Blob Storage (Azure managed identity authentication)

This automated, on-demand option allows you to use an existing blob storage account for multiple environments, and isolates environments from accessing each other’s data.

3.4.1.1 Prerequisites
  • An Azure Blob storage account.
  • A Blob Storage Admin managed identity that the Mendix Operator would use to create/delete containers and managed identities for app environments. This managed identity needs the following permissions:
3.4.1.2 Limitations
  • To use this feature, your app needs to be upgraded to Mendix 10.10 (or later), and your namespace needs to use Mendix Operator version 2.17.0 (or later).
3.4.1.3 Environment Isolation
  • Unique managed identity for every environment.
  • Unique container for every environment.
  • The storage account is shared.
  • An environment has full access only to its own container, and cannot access data from other environments.
3.4.1.4 Create Workflow

When a new environment is created, the Mendix Operator performs the following actions:

  • Create a Managed Identity for an environment. This Managed Identity will be created in the same resource group, subscription and region as the Blob Storage Admin managed identity.
  • Create a Kubernetes Service Account and attach it to the environment’s Managed Identity. This Service Account acts as a replacement for static credentials, and can also be used to authenticate with the environment’s Blob Storage Container.
  • Create a new container in the shared blob storage account. This will be the environment’s dedicated container.
  • Add the Storage Blob Data Contributor role to an environment’s Managed Identity, scoped to its container.
  • Create a Kubernetes secret to provide connection details to the new app environment - to automatically configure the new environment. Since the app environment will authenticate through a managed identity role, this secret will not contain any static passwords - only the blob storage endpoint, container name and other non-sensitive connection details.
3.4.1.5 Delete Workflow

When an existing environment is deleted, the Mendix Operator performs the following actions:

  • Delete that environment’s role assignment.
  • Delete that environment’s container and its files.
  • Delete that environment’s Managed Identity.
  • Delete that environment’s Kubernetes Service Account.
  • Delete that environment’s Kubernetes blob file storage credentials secret.
3.4.1.6 Configuring the Plan

In the Azure Blob plan configuration, enter the following details:

  • Account Name - Blob Storage account name.
  • Managed Identity authentication - Set to yes.
  • Account Subscription ID - subscription ID of the blob storage account.
  • Account Resource Group - resource group of the blob storage account.
  • Managed Identity Client ID - the Blob Storage Admin managed identity Client ID.
    • Mendix recommends using the same storage admin managed identity to manage Azure databases and blob storage containers, as this would be easier to set up and maintain. One storage admin Service Account can be used for multiple storage plans, and only one Federated Credential would be needed to link it with a storage admin Managed Identity.
  • K8s Service Account - the Kubernetes Service Account to create and attach to the Blob Storage Admin managed identity (will be created automatically by the mxpc-cli installation and configuration tool).

Azure workload identities allow a Kubernetes Service Account to authenticate itself as a specific Managed Identity. For this to work correctly, add a Federated Credential to the Blob Storage Admin managed identity:

  1. Enable managed identities for your AKS cluster as described in the Azure documentation. This only needs to be done once per cluster.

    Ensure that you have the Cluster OIDC Issuer URL. You will need the URL to complete the configuration.

  2. Add a Federated Credential to the Managed Identity by using the az identity federated-credential create command, or by going to the Federated credentials tab and using the Add Credential wizard. This will allow the Blob Storage Admin Kubernetes Service Account to be associated with its Managed Identity.

  3. Fill in the following details:

    • Federated credential scenario - Kubernetes accessing Azure resources
    • Cluster Issuer URL - the Cluster OIDC URL from step 1
    • Namespace - the Kubernetes namespace where the Operator is installed; for Global Operator installations, you must specify the managed namespace in the Namespace field.
    • Service Account - the K8s Service Account specified in the Blob Storage plan configuration
    • Name - any value
  4. Grant the Blob Storage Admin Managed Identity the following permissions:

3.4.2 Azure Blob Storage (static credentials)

This basic, on-demand option allows you to attach an existing Azure Blob Storage container and credentials (account name and secret key) to one or more environments. All apps (environments) will use the same Azure Blob Storage container and credentials.

If your app is using Mendix 10.10 (or a later version) consider using the Azure managed identity authentication instead, for additional security.

3.4.2.1 Prerequisites
  • An Azure Blob storage container and credentials to access it.
3.4.2.2 Limitations
  • Access/Secret keys used by existing environments can only be rotated manually.
  • No isolation between environments using this blob storage plan (if the plan Type is On-Demand).
  • Configuration parameters will not be validated and will be provided to the Mendix app as-is. If the arguments are not valid or there is an issue with permissions, the Mendix Runtime will fail to start, and the deployment will appear to hang with Replicas running and Runtime showing a spinner.
3.4.2.3 Environment Isolation
  • The Azure Blob storage container and credentials are shared between all environments using this plan.
  • An environment can access data from other environments using this Storage Plan.
  • All environments will store their data in the root directory of the blob storage container.
  • By using the Dedicated Type, this plan switches into Dedicated mode - so that only one environment can use it.
3.4.2.4 Create Workflow

When a new environment is created, the Mendix Operator performs the following actions:

  • Create a Kubernetes secret to provide connection details to the new app environment - to automatically configure the new environment.
3.4.2.5 Delete Workflow

When an existing environment is deleted, the Mendix Operator performs the following actions:

  • Delete that environment’s Kubernetes blob file storage credentials secret.
3.4.2.6 Configuring the Plan

In the Azure Blob plan configuration, enter the following details:

  • Account Name - Blob Storage account name.
  • Managed Identity authentication - Set to no.
  • Account Key - Access key for the blob storage container.
  • Container name - Name of the blob storage container.
  • Type - Specifies whether the container can be shared between environments (creating an on-demand storage plan), or used by only one environment (creating a dedicated storage plan). To increase security and prevent environments from accessing each other’s data, select Dedicated.

3.5 Google Cloud Storage

This basic, on-demand option allows you to attach an existing GCP Cloud Storage bucket and credentials (access and secret keys) to one or more environments. All apps (environments) will use the same GCP Cloud Storage bucket and credentials (access and secret keys).

3.5.1 Prerequisites

  • A GCP Cloud Storage bucket.
  • An Access and Secret key with permissions to access the bucket.

3.5.2 Limitations

  • Access/Secret keys used by existing environments can only be rotated manually.
  • No isolation between environments using this blob storage plan (if the plan Type is On-Demand).
  • Configuration parameters will not be validated and will be provided to the Mendix app as-is. If the arguments are not valid or there is an issue with permissions, the Mendix Runtime will fail to start, and the deployment will appear to hang with Replicas running and Runtime showing a spinner.

3.5.3 Environment Isolation

  • The GCP Cloud Storage bucket and credentials (access and secret keys) are shared between all environments using this plan.
  • An environment can access data from other environments using this Storage Plan.
  • By using the Dedicated Type, this plan switches into Dedicated mode - so that only one environment can use it.

3.5.4 Create Workflow

When a new environment is created, the Mendix Operator performs the following actions:

  • Generate a unique prefix based on the environment’s name, so that each environment stores files in a separate prefix (directory).
  • Create a Kubernetes secret to provide connection details to the new app environment - to automatically configure the new environment.

3.5.5 Delete Workflow

When an existing environment is deleted, the Mendix Operator performs the following actions:

  • Delete that environment’s Kubernetes blob file storage credentials secret.

3.5.6 Configuring the Plan

In the GCP Cloud Storage plan configuration, enter the following details:

  • Endpoint - The GCP bucket’s endpoint address, for example, https://storage.googleapis.com/<bucket-name>.
  • Access Key and Secret Key - Credentials to access the bucket.
  • Type - Specifies whether the container can be shared between environments (creating an on-demand storage plan), or used by only one environment (creating a dedicated storage plan). To increase security and prevent environments from accessing each other’s data, select Dedicated.

3.6 Ceph

This basic, on-demand option allows you to attach an existing Ceph or S3-compatible bucket and credentials (access and secret keys) to one or more environments. All apps (environments) will use the same bucket and credentials (access and secret keys).

3.6.1 Prerequisites

  • A Ceph or S3-compatible bucket.
  • An Access and Secret key with permissions to access the bucket.

3.6.2 Limitations

  • Access/Secret keys used by existing environments can only be rotated manually.
  • No isolation between environments using this blob storage plan (if the plan Type is On-Demand).
  • Configuration parameters will not be validated and will be provided to the Mendix app as-is. If the arguments are not valid or there is an issue with permissions, the Mendix Runtime will fail to start, and the deployment will appear to hang with Replicas running and Runtime showing a spinner.

3.6.3 Environment Isolation

  • The Ceph or S3-compatible bucket and credentials (access and secret keys) are shared between all environments using this plan.
  • An environment can access data from other environments using this Storage Plan.
  • By using the Dedicated type, this plan switches into Dedicated mode, so that only one environment can use it.

3.6.4 Create Workflow

When a new environment is created, the Mendix Operator performs the following actions:

  • Generate a unique prefix based on the environment’s name, so that each environment stores files in a separate prefix (directory).
  • Create a Kubernetes secret to provide connection details to the new app environment - to automatically configure the new environment.

3.6.5 Delete Workflow

When an existing environment is deleted, the Mendix Operator performs the following actions:

  • Delete that environment’s Kubernetes blob file storage credentials secret.

3.6.6 Configuring the Plan

In the Ceph plan configuration, enter the following details:

  • Endpoint - The Ceph bucket’s endpoint address, for example https://ceph-instance.local:9000/<bucket-name>.
  • Access Key and Secret Key - Credentials to access the bucket.
  • Type - Specifies whether the container can be shared between environments (creating an on-demand storage plan), or used by only one environment (creating a dedicated storage plan). To increase security and prevent environments from accessing each other’s data, select Dedicated.

4 Walkthroughs

This section provides instructions on how to set up storage for the most typical use cases.

4.1 AWS IAM-based Storage

AWS recommends using IRSA authentication instead of static credentials. This guide explains how to set up and use a database and blob file storage plan using AWS best practices.

Before you begin, you need to create an EKS cluster and install Mendix for Private Cloud in that cluster.

Navigate to the EKS cluster details and write down the OpenID Connect provider URL.

IRSA authentication uses the same AWS IAM Role and Kubernetes Service Account to authenticate with AWS services. It is not possible to assign more than one IAM Role or Kubernetes Service Account to a Mendix app environment. To avoid conflicts, IAM roles and service accounts will be managed by the S3 blob file storage provisioner. The Postgres provisioner only creates a database and Postgres user (Postgres role), but does not manage IAM roles. To use IAM authentication, the database and blob file storage plans need to be managed together - the IAM policy is shared, and grants access to the database and S3 bucket.

For more details, see the Postgres (IAM authentication) and S3 IRSA mode plan details.

4.1.1 RDS Database

To configure the required settings for an RDS database, do the following steps:

  1. Create a Postgres RDS instance with Password and IAM database authentication enabled, or enable Password and IAM database authentication for an existing instance.

  2. Enable IAM authentication by granting the rds_iam role to the database user. Start a temporary jump pod and use psql to run the following commands (replacing <database-username> with the username specified in database-username and <database-host> with the database host):

    # Start a temporary pod with the Postgres client tools (removed again on exit)
    kubectl run postgrestools --image=docker.io/bitnami/postgresql:14 -ti --restart=Never --rm=true -- /bin/sh
    # Inside the pod, connect to the login database as the master user
    export PGDATABASE=postgres
    export PGUSER=<database-username>
    export PGHOST=<database-host>
    export PGPASSWORD="" # set to the master user's password if the server requires one
    psql
    -- In the psql session: enable IAM authentication and remove the password
    GRANT rds_iam TO <database-username>;
    ALTER ROLE <database-username> WITH PASSWORD NULL;
    

    See the RDS IAM documentation for more details on enabling IAM authentication.

  3. Navigate to the RDS instance details, and write down the following information:

    • The database Endpoint from the Connectivity & security tab:

    • The Master username and Resource ID from the Configuration tab:

  4. Download the RDS TLS certificates and save them into a Kubernetes secret (replace {namespace} with the namespace where the Mendix Operator is installed):

curl -L -o custom.crt https://truststore.pki.rds.amazonaws.com/global/global-bundle.pem
kubectl -n {namespace} create secret generic mendix-custom-tls --from-file=custom.crt=custom.crt

4.1.2 S3 Bucket

To configure the required settings for an S3 bucket, do the following steps:

  1. Create an S3 bucket using default parameters.
  2. Write down the Bucket name and Region.

4.1.3 Environment Template Policy

Create a new IAM policy with the following JSON:

{
    "Version": "2012-10-17",
    "Statement": [
        {
            "Sid": "AllowListingOfUserFolder",
            "Action": [
                "s3:ListBucket"
            ],
            "Effect": "Allow",
            "Resource": [
                "arn:aws:s3:::<bucket_name>"
            ],
            "Condition": {
                "StringLike": {
                    "s3:prefix": [
                        "${aws:PrincipalTag/privatecloud.mendix.com/s3-prefix}/*",
                        "${aws:PrincipalTag/privatecloud.mendix.com/s3-prefix}"
                    ]
                }
            }
        },
        {
            "Sid": "AllowAllS3ActionsInUserFolder",
            "Effect": "Allow",
            "Resource": [
                "arn:aws:s3:::<bucket_name>/${aws:PrincipalTag/privatecloud.mendix.com/s3-prefix}/*"
            ],
            "Action": [
                "s3:AbortMultipartUpload",
                "s3:DeleteObject",
                "s3:GetObject",
                "s3:ListMultipartUploadParts",
                "s3:PutObject"
            ]
        },
        {
            "Sid": "AllowConnectionToDatabase",
            "Effect": "Allow",
            "Action": "rds-db:connect",
            "Resource": "arn:aws:rds-db:<aws_region>:<account_id>:dbuser:<database_id>/${aws:PrincipalTag/privatecloud.mendix.com/database-user}"
        }
    ]
}

In this template, replace:

  • <bucket_name> with the S3 Bucket name
  • <aws_region> with the RDS Instance’s AWS region
  • <account_id> with the AWS account ID
  • <database_id> with the Resource ID from the RDS database Configuration tab (it should look like db-ABCDEFGHIJKL01234, and is not the database name or ARN). In the case of Aurora DB, ensure that the database_id is from the cluster and not the instance.
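
For example, with the hypothetical values eu-west-1, account ID 123456789012 and Resource ID db-ABCDEFGHIJKL01234, the AllowConnectionToDatabase resource would read:

    arn:aws:rds-db:eu-west-1:123456789012:dbuser:db-ABCDEFGHIJKL01234/${aws:PrincipalTag/privatecloud.mendix.com/database-user}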

This environment template policy will be attached to every new environment’s role. Write down its ARN.

For every new environment, the Mendix Operator will automatically create a new role and fill in the privatecloud.mendix.com/database-user and privatecloud.mendix.com/s3-prefix tags.

4.1.4 Storage Provisioner Admin Role

Create a new IAM role.

  1. In the first screen of the creation wizard, select the following values:

    • Trusted entity type - Select Web identity
    • Identity provider - Select OpenID Connect provider URL
    • Audience - Leave as sts.amazonaws.com
  2. Complete the wizard with default options, without adding any permissions in the second screen of the creation wizard.

  3. Write down the Storage Provisioner admin role ARN.

  4. Allow a Kubernetes ServiceAccount to assume the Storage Provisioner admin role. This ServiceAccount acts as the storage provisioner’s admin account: it manages permissions for new environments, and revokes an environment’s permissions when it is deleted. To avoid conflicts with existing ServiceAccounts, as a best practice, use mendix-storage-provisioner-iam as the Kubernetes ServiceAccount name.

    1. Open the role for editing and add an entry for the ServiceAccount to the list of conditions:

    2. For the second condition, copy and paste the sts.amazonaws.com line; replace :aud with :sub and set it to system:serviceaccount:<Kubernetes namespace>:<Kubernetes serviceaccount name>. The resulting trust policy looks like the sketch shown in the S3 IRSA plan section above.

  5. Attach the following IAM policy to this Storage Provisioner admin IAM role:

{
    "Version": "2012-10-17",
    "Statement": [
        {
            "Sid": "LimitedAttachmentPermissions",
            "Effect": "Allow",
            "Action": [
                "iam:AttachRolePolicy",
                "iam:DetachRolePolicy"
            ],
            "Resource": "*",
            "Condition": {
                "ArnEquals": {
                    "iam:PolicyArn": [
                        "<policy_arn>"
                    ]
                }
            }
        },
        {
            "Sid": "ManageRoles",
            "Effect": "Allow",
            "Action": [
                "iam:CreateRole",
                "iam:TagRole",
                "iam:DeleteRole"
            ],
            "Resource": [
                "arn:aws:iam::<account_id>:role/mendix-*"
            ]
        },
        {
            "Sid": "AllowFileCleanup",
            "Effect": "Allow",
            "Resource": [
                "arn:aws:s3:::<bucket_name>"
            ],
            "Action": [
                "s3:AbortMultipartUpload",
                "s3:DeleteObject",
                "s3:GetObject",
                "s3:ListMultipartUploadParts",
                "s3:PutObject",
                "s3:ListBucket"
            ]
        },
        {
            "Sid": "AllowCreateRDSTenants",
            "Effect": "Allow",
            "Action": [
                "rds-db:connect"
            ],
            "Resource": [
                "arn:aws:rds-db:<aws_region>:<account_id>:dbuser:<database_id>/<database_user>"
            ]
        }
    ]
}

In this template, replace:

  • <policy_arn> with the environment template policy ARN
  • <bucket_name> with the S3 Bucket name
  • <aws_region> with the RDS Instance’s AWS region
  • <account_id> with the AWS account ID
  • <database_id> with the Resource ID from the RDS database Configuration tab (it should look like db-ABCDEFGHIJKL01234, and is not the database name or ARN). In the case of Aurora DB, ensure that the database_id is from the cluster and not the instance.
  • <database_user> with the Postgres superuser account name

This role allows the Mendix Operator to create and delete IAM roles for Mendix app environments.

In addition, the optional AllowFileCleanup permissions will be used to clean up a deleted environment’s files (if Prevent Data Deletion is disabled). Only files from a deleted environment will be cleaned up, files from other environments will remain unaffected.

4.1.5 Creating the Storage Plans

To create the required storage plans, do the following steps:

  1. Run the mxpc-cli configuration tool and select to configure the Database Plan, Storage Plan and Custom TLS.

  2. In the Database configuration tab, select postgres as the database type, and provide the following details:

    • Host should be set to the Endpoint of the RDS database instance
    • Port should be set to 5432 (or custom port if the RDS instance is using a non-standard port)
    • Database Name should be set to postgres (or a custom login database if the default database is not available)
    • Authentication should be set to aws-iam
    • Username should be set to the Master username of the RDS database instance
    • IAM Role ARN should be set to the Storage Provisioner admin role ARN
    • K8s Service Account should use the same Kubernetes Service Account that was specified in the Storage Provisioner admin role trust policy. If you used the recommended Service Account name, paste mendix-storage-provisioner-iam in this field.
  3. In the Storage Plan configuration tab, select amazon-s3 as the storage type, and provide the following details:

    • IRSA Authentication should be set to yes
    • Bucket Region and Bucket Name should be set to the bucket’s region and name
    • Attach policy ARN should be set to the environment template policy ARN
    • EKS OIDC URL should be set to the OpenID Connect provider URL value
    • IAM Role ARN should be set to the Storage Provisioner admin role ARN
    • K8s Service Account should use the same Kubernetes Service Account that was specified in the Storage Provisioner admin role trust policy. If you used the recommended Service Account name, paste mendix-storage-provisioner-iam in this field.
  4. In the Custom TLS tab, paste mendix-custom-tls into the CA Certificates Secret Name field.

  5. Apply the changes - you can now use the new Postgres and S3 plans to create new environments with IRSA authentication.
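
For reference, IRSA ties a Kubernetes Service Account to an IAM role through the eks.amazonaws.com/role-arn annotation. Under that standard mechanism, the storage provisioner’s ServiceAccount would look roughly like the following sketch (shown in JSON form; the namespace and role ARN are placeholders for your own values):

    {
        "apiVersion": "v1",
        "kind": "ServiceAccount",
        "metadata": {
            "name": "mendix-storage-provisioner-iam",
            "namespace": "<namespace>",
            "annotations": {
                "eks.amazonaws.com/role-arn": "arn:aws:iam::<account_id>:role/<storage-provisioner-admin-role>"
            }
        }
    }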

4.2 Azure Managed Identity-based Storage

Azure recommends using managed identity authentication instead of static credentials. This guide explains how to set up and use a database and blob file storage plan using Azure best practices.

Before you begin, you need to create an AKS cluster and install Mendix for Private Cloud in that cluster.

  1. Enable managed identities for your AKS cluster as described in the Azure documentation. This only needs to be done once per cluster.

    Ensure that you have the Cluster OIDC Issuer URL. You will need this URL to complete the configuration in Step 3.

  2. Create a new managed identity using the az identity create command, or by using the Create a user-assigned managed identity wizard in the Azure Portal.

    This Managed Identity will act as a storage admin. When a new environment is created, this storage admin will create that environment’s tenant database, blob storage container, and Managed Identity (the environment-specific managed identity will only be able to access the environment’s tenant database and file storage).

    For every new environment, Mendix Operator will create an environment managed identity - in the same region and resource group as the storage admin managed identity.

    Later, you’ll need the following details of the storage admin managed identity:

    • Name
    • Client ID
    • Object (principal) ID
  3. Add a Federated Credential to the Managed Identity by using the az identity federated-credential create command, or by going to the Federated credentials tab and using the Add Credential wizard. This will allow the storage admin Kubernetes Service Account to be associated with its Managed Identity (a sketch of the credential’s key properties follows this list).

    Fill in the following details:

    • Federated credential scenario - Kubernetes accessing Azure resources
    • Cluster Issuer URL - the Cluster OIDC URL from step 1
    • Namespace - the Kubernetes namespace where the Operator is installed; for Global Operator installations, you must specify the managed namespace in the Namespace field.
    • Service Account - on the Kubernetes side, this is the storage admin account assigned to the Managed Identity created in Step 2. To avoid conflicts with existing ServiceAccounts, as a best practice, use mendix-storage-provisioner-wi as the Kubernetes ServiceAccount name. You will need this name later.
    • Name - any value
  4. Assign this storage admin Managed Identity a Managed Identity Contributor role in its resource group.
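
For reference, the Federated Credential created in step 3 combines the cluster issuer, the ServiceAccount subject and the standard token-exchange audience. A minimal sketch of its properties (the issuer URL and namespace are placeholders for your own values):

    {
        "issuer": "<cluster-oidc-issuer-url>",
        "subject": "system:serviceaccount:<namespace>:mendix-storage-provisioner-wi",
        "audiences": ["api://AzureADTokenExchange"]
    }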

Managed Identity authentication uses the same Managed Identity and Kubernetes Service Account to authenticate with Azure services. It is not possible to assign more than one Kubernetes Service Account to a Mendix app environment. To avoid conflicts, the database and file storage provisioners will create the environment’s tenant managed identity with the same parameters in parallel.

For more details, see the Postgres (Azure managed identity authentication), SQL Server (Azure managed identity authentication) and Azure Blob Storage (Azure managed identity authentication) plan details.

4.2.1 Postgres (Flexible Server) Database

To configure the required settings for a Postgres database, do the following steps:

  1. Create a Postgres (Flexible Server) instance. Navigate to the Overview page, and write down the Server name.

  2. Navigate to the Authentication page. Set authentication to Microsoft Entra authentication only and press Save.

  3. Add the storage admin Managed Identity you’ve created in the beginning of this walkthrough as a Microsoft Entra Admin.

4.2.2 Azure SQL Database

To configure the required settings for an Azure SQL database, do the following steps:

  1. Create an Azure SQL Server instance. Navigate to the Overview page, and write down the Server name.

  2. Navigate to the Microsoft Entra ID page, and add yourself (or your Entra group) as an Entra Admin user in the Azure SQL database.

    Azure SQL can only have one Entra Admin; to add multiple users, you’ll need to grant access through an Entra group.

  3. Open Azure Cloud Shell (or a Bash-compatible terminal) and run az login to authenticate with Entra ID.

  4. Run the following command to connect to the Azure SQL database, replacing <hostname> with the Server name from Step 1:

    az account get-access-token --resource https://database.windows.net --output tsv | cut -f 1 | tr -d '\n' | iconv -f ascii -t UTF-16LE > /tmp/token && sqlcmd -S <hostname> -G -P /tmp/token && rm /tmp/token
    
  5. In the sqlcmd client, run the following commands (replace <storage-admin-identity-name> with the Name of the storage admin Managed Identity you’ve created in the beginning of this walkthrough):

    CREATE USER [<storage-admin-identity-name>] FROM EXTERNAL PROVIDER;
    GO
    ALTER ROLE dbmanager ADD MEMBER [<storage-admin-identity-name>];
    quit
    

4.2.3 Azure Blob Storage

To configure the required settings for an Azure Blob Storage account, do the following steps:

  1. Create an Azure Blob Storage account, and write down the following details from its Overview page:

    • Name of the storage account
    • Resource group
    • Subscription ID
  2. Grant the storage admin Managed Identity (created in the beginning of this walkthrough) the following permissions:

4.2.4 Creating the Storage Plans

To create the required storage plans, do the following steps:

  1. Run the mxpc-cli configuration tool and select to configure the Database Plan and Storage Plan.

  2. If you created a Postgres database:

    In the Database configuration tab, select postgres as the database type, and provide the following details:

    • Host should be set to the Server name of the Postgres database server
    • Port should be set to 5432 (or custom port if the Postgres server is using a non-standard port)
    • Strict TLS should be set to yes
    • Database Name should be set to postgres (or a custom login database if the default database is not available)
    • Authentication should be set to azure-wi
    • Managed Identity name should be set to the Name of the storage admin Managed Identity created in the beginning of this walkthrough
    • Managed Identity Client ID should be set to the Client ID of the storage admin Managed Identity created in the beginning of this walkthrough
    • K8s Service Account should use the same Kubernetes Service Account that was specified in the beginning of this walkthrough. If you used the recommended Service Account name, paste mendix-storage-provisioner-wi in this field.
  3. If you created an Azure SQL database:

    In the Database configuration tab, select sqlserver as the database type, and provide the following details:

    • Host should be set to the Server name of the Azure SQL database server
    • Port should be set to 1433 (or custom port if the SQL Server instance is using a non-standard port)
    • Strict TLS should be set to yes
    • Authentication should be set to azure-wi
    • Managed Identity Client ID should be set to the Client ID of the storage admin Managed Identity created in the beginning of this walkthrough
    • K8s Service Account should use the same Kubernetes Service Account that was specified in the beginning of this walkthrough. If you used the recommended Service Account name, paste mendix-storage-provisioner-wi in this field.
    • Is Azure SQL Server should be enabled; this allows changing the following default settings:
      • Elastic Pool - Specifies an existing Elastic Pool to use (can be left empty if the new app’s database should not be using an elastic pool)
      • Edition - Specifies the database edition/tier to use, for example Basic to use an entry-level tier. Can be left empty, in this case Azure SQL will use the default GeneralPurpose edition.
      • Service Objective - Specifies the database service objective (performance level), for example Basic to use an entry-level tier. Can be left empty, in which case Azure SQL will use the default service objective (such as GP_Gen5_2).
      • Maximum Size - Specifies the database maximum size, for example 1 GB.
  4. In the Storage Plan configuration tab, select azure-blob as the storage type, and provide the following details:

    • Account name should be set to the Blob Storage account name
    • Managed Identity authentication should be set to yes
    • Account Subscription ID should be set to the Subscription ID of the Blob Storage account
    • Account Resource Group should be set to the Resource Group of the Blob Storage account
    • Managed Identity Client ID should be set to the Client ID of the storage admin Managed Identity created in the beginning of this walkthrough
    • K8s Service Account should use the same Kubernetes Service Account that was specified in the beginning of this walkthrough. If you used the recommended Service Account name, paste mendix-storage-provisioner-wi in this field.
  5. Apply the changes - you can now use the new database and blob storage plans to create new environments with Managed Identity authentication.