Cloud Native Support Coverage
Cloud Native Quotas and Limitations
1) Overview
Liferay Cloud Native provides Kubernetes-oriented deployment and operations tooling for running Liferay DXP on hyperscaler-managed infrastructure. It is designed around GitOps workflows and standardized “golden path” patterns to simplify provisioning, release promotion, and day-2 operations.
Liferay Cloud Native is a composable offer with distinct layers:
- Cloud Native (Kubernetes Orchestration Tooling): Liferay-provided tooling used to provision, deploy, configure, and operate Liferay DXP on Kubernetes using GitOps workflows and standardized “golden path” patterns.
- Cloud Provider Ready: provider-specific implementations of Cloud Native tooling aligned to a supported hyperscaler.
Cloud Provider Ready Availability
| Cloud Provider Ready | Availability |
| --- | --- |
| AWS Ready | Available Now |
| GCP Ready | Available Soon |
| Azure Ready | Available Soon |
Key Terms
The following terms are used throughout this document:
| Term | Definition |
| --- | --- |
| Cloud Provider Ready | A provider-specific version of the Cloud Native tooling aligned to a supported hyperscaler (for example, AWS Ready). |
| Tooling (Tools) | Provisioning and deployment artifacts including templates, charts, modules, scripts, and other mechanisms used to deploy, configure, or operate Liferay DXP (for example, Helm charts and Terraform modules). |
| Pod | A Kubernetes pod in which one or more containers execute Liferay DXP. |
| Production Pod | A pod intended for production workloads and configured with a clustering-capable license.xml obtained through the Customer Portal. |
| Non-Production Pod | A pod intended for development/test workloads and configured with a developer license.xml obtained through the Customer Portal. |
| Cluster | A group of Liferay DXP pods configured to work together via Liferay DXP clustering, sharing cache and (when configured) session/state mechanisms. In the context of the Cloud Native Experience, a Kubernetes namespace corresponds to the scope of a Liferay DXP cluster. |
| Namespace | A Kubernetes namespace used to isolate resources for an environment (for example: liferay-prd, liferay-uat, liferay-dev). |
| Environment | A logical deployment context (for example: Production, UAT, Development, DR). Environments are commonly implemented as dedicated Kubernetes namespaces with distinct configuration and license files. |
| Activation Key / License File | A license file that entitles use of Liferay DXP. Different license types apply depending on whether clustering is enabled and whether the workload is production or non-production. See the Virtual Cluster Key documentation for production key generation. |
| Production Environment | A managed-service concept used in PaaS/SaaS offers. It is separate from the Production Pod limitation model used in Cloud Native. |
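For illustration only, a minimal sketch of the per-environment Namespaces described above; the names mirror the examples in the table, and the label is a hypothetical convention rather than anything required by the tooling:

```yaml
# Illustrative only: one namespace per environment, each with its own
# configuration and license type (the label is a hypothetical convention).
apiVersion: v1
kind: Namespace
metadata:
  name: liferay-prd        # Production environment: Virtual Cluster license.xml
  labels:
    environment: production
---
apiVersion: v1
kind: Namespace
metadata:
  name: liferay-dev        # Non-Production environment: Developer Cluster license.xml
  labels:
    environment: development
```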
Scope
This document describes quotas and limitations for Cloud Native Experience and Cloud Provider Ready subscriptions (for example: AWS Ready, GCP Ready, and Azure Ready). It does not describe quotas, limitations, or included resources for Cloud Native–based PaaS Experience or Cloud Native–based SaaS Experience, which are separate subscriptions with their own infrastructure, managed services, and consumption-based resources.
2) Product Limitations
2.1 Product Limitations (at a glance)
| Dimension | Production Pods | Non-Production Pods | Notes |
| --- | --- | --- | --- |
| Maximum Concurrent Pods | Fixed: 1 / 3 / 5 / 7 / 9 | Unlimited | Production is limited by concurrently running Pods (typically in a Kubernetes namespace). |
| CPU | Unlimited | Unlimited | Not a metered limitation dimension. |
| Memory | Unlimited | Unlimited | Not a metered limitation dimension. |
| Concurrent Users | Unlimited | Limited to 5 | User limit applies to Non-Production use only. |
| License file | Virtual Cluster License license.xml | Developer Cluster License license.xml | License type determines allowed clustering behavior and Pod classification. |
| Intended use | Production workloads | Dev/Test/QA/UAT | Non-Production is not intended for live production traffic. |
Limit Scope and Counting Rules
| Limit | Scope | Counted When | Not Counted When |
| --- | --- | --- | --- |
| Production Pod concurrency | Network (typically a Kubernetes namespace) | Pod is in a capacity-consuming lifecycle state (e.g., Ready, or other states defined in Liferay documentation) | Pod is Terminated/Completed |
| Non-Production concurrent users | Network (typically a Kubernetes namespace) | Active authenticated users/sessions as defined by Liferay documentation | Inactive sessions/users as defined by Liferay documentation |
2.2 Production Pods
2.2.1 Primary Limitation: Production Pod Concurrency
Cloud Native Experience limits production usage by the maximum number of concurrently running Production Pods.
Production Pod concurrency: 1, 3, 5, 7, or 9 concurrently running Production Pods.
Scope of the Limit: The Production Pod concurrency limit applies within a given Network (typically a Kubernetes namespace).
2.2.2 Production Pod Characteristics
For Production Pods, the following are not limitation dimensions:
- CPU: unlimited
- Memory: unlimited
- Concurrent Users: unlimited
These are treated as non-metered characteristics; the controlling limitation is Production Pod concurrency.
2.2.3 Production Pod Concurrency Examples
| Max concurrently running Production Pods in a Namespace | Example: allowed running state |
| --- | --- |
| 1 | 1 running; no additional Production Pod may run |
| 3 | 3 running; a 4th concurrent Pod is not allowed |
| 5 | 4 running + 1 starting = 5 total |
| 7 | 7 running across replicas |
| 9 | 9 running across replicas |
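As a sketch of how a tier typically maps onto workload configuration, assuming the Liferay DXP Pods are managed by a Deployment whose replica count comes from Helm values (the resource names, labels, and image tag below are illustrative):

```yaml
# Illustrative only: a Deployment sized to a 5-Pod Production entitlement.
# In practice these resources are rendered by the Liferay-published Helm chart
# from the values you supply.
apiVersion: apps/v1
kind: Deployment
metadata:
  name: liferay-dxp
  namespace: liferay-prd              # example Production namespace
spec:
  replicas: 5                         # must not exceed the entitled tier (1/3/5/7/9)
  selector:
    matchLabels:
      app: liferay-dxp
  template:
    metadata:
      labels:
        app: liferay-dxp
    spec:
      containers:
        - name: liferay
          image: liferay/dxp:7.4.13-u102   # illustrative tag; use your supported image
```

If horizontal autoscaling is used, the maximum replica count should be capped at the same tier so scaling events cannot exceed the entitlement.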
2.3 Non-Production Pods
2.3.1 Non-Production Pod Limitation Model
Cloud Native Experience allows an unlimited number of concurrently running Non-Production Pods, subject to the Non-Production user limitation below.
Non-Production Pod characteristics:
- CPU: unlimited
- Memory: unlimited
Non-Production user limitation:
- Concurrent Users: limited to 5
Scope of the limit: The Non-Production Pod user limitation applies within a given Namespace (unless explicitly stated otherwise in the applicable offer description).
2.3.2 Intended Uses (examples)
Non-Production Pods are intended for non-production workloads such as:
- Development
- Testing/QA
- Staging/UAT
- CI/CD validation environments
- Performance testing (non-production)
2.4 Namespace-Based Enforcement (no overages within a namespace)
Liferay license validation is evaluated within the scope of a cluster network, which for Cloud Native means a Kubernetes namespace:
- For Production Pods, Liferay validates the clustering-capable license and prevents running more Production Pods than the namespace entitlement allows.
- For Non-Production Pods, Liferay validates the developer license and blocks clustering-related behavior if clustering is detected.
As a result, exceeding the entitled number of Production Pods in a namespace is not supported, and clustering with a developer license is not supported.
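License validation is the authoritative control. As an optional extra guardrail (not part of Liferay's tooling), some teams also cap pod counts with a standard Kubernetes ResourceQuota; a minimal sketch, with illustrative names:

```yaml
# Optional guardrail only; Liferay license validation remains the authoritative
# enforcement. Note that this quota counts ALL pods in the namespace, so size it
# to cover Liferay DXP Pods plus any sidecar or supporting pods you run there.
apiVersion: v1
kind: ResourceQuota
metadata:
  name: production-pod-cap       # illustrative name
  namespace: liferay-prd         # example Production namespace
spec:
  hard:
    pods: "5"                    # align with the entitled Production Pod tier
```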
3) Cloud Native Limitations (what is not included)
Cloud Native is deployment enablement tooling. It does not, by itself, include hosting, managed operations, or a “Production Environment” as defined in PaaS/SaaS offers.
3.1 Git Repository and CI/CD Pipeline
Cloud Native is GitOps-oriented, but it does not include the underlying developer platform services used to implement GitOps.
| Capability area | Included in Cloud Native? | Notes |
| --- | --- | --- |
| Git repository hosting | No | Cloud Native assumes a customer- or partner-managed Git repository for configuration and promotion workflows. |
| CI tooling | No | Cloud Native assumes a customer- or partner-managed CI system (for example, to build images or artifact bundles and open pull requests for promotion). |
| Artifact registry | No | Cloud Native assumes a customer- or partner-managed registry for storing container images and/or build artifacts used by deployments. |
These components are included as part of Cloud Native–based PaaS and SaaS deployment offers, where applicable.
3.2 Cloud Native Hosting and Infrastructure Resources (unless included via PaaS/SaaS)
- Cloud infrastructure capacity (compute, storage, networking)
- Managed hosting or managed operations
- Hyperscaler service costs
- Production environment(s) as defined under PaaS/SaaS
These are typically included in Cloud Native–based PaaS and SaaS deployment offers, where applicable.
4) Compatibility Matrix
4.1 Cloud Native Compatibility Matrix
| Dimension | Supported | Not Supported (Examples) | Notes / Rationale |
| --- | --- | --- | --- |
| DXP version | 7.4 | Other DXP versions will be added soon | Scope is intentionally narrow for a supportable, tested baseline. |
| Application server | Tomcat only (as shipped in Liferay standard images) | WebLogic, JBoss/WildFly, custom app servers are not supported | Cloud-native scripts assume Tomcat in expected locations/paths. |
| Database engine (GitOps level) | Amazon RDS PostgreSQL | RDS MySQL, DB2, SQL Server, MariaDB, Aurora variants will be added soon | |
| Image strategy | Liferay official images and downstream custom images built FROM Liferay official images | “From scratch” images (e.g., FROM alpine + manual Liferay install) | Scripts depend on standard image layout (Liferay paths, entrypoints, tooling conventions). |
| JDK | 8, 11, 21 (as supported by standard DXP images) | Unvalidated JDKs | Cloud Native JDK support follows the JDK versions compatible with Liferay DXP. |
4.2 Cloud Native Experience - GitOps “Golden Path”
| Layer | What's Supported | Notes |
| --- | --- | --- |
| GitOps full-stack (Liferay-provided) | RDS PostgreSQL and RDS MySQL only if present in Liferay’s Crossplane compositions | The GitOps level is defined by what Liferay ships + tests end-to-end (Terraform + Crossplane + scripts). |
| Helm chart (theoretical flexibility) | Potentially broader DB choices depending on your platform | Even if Helm can connect to other DBs, official support is constrained to tested Golden Paths. |
4.3 Clarifications / Support Boundaries
- Customization Boundary: If customers modify the Terraform modules or scripts beyond the configuration points Liferay provides, the resulting configuration may not work as intended. Modifications to the base scripts outside of Terraform variables and Helm chart values place the deployment outside the scope of Cloud Native support.
- Reason for Being Opinionated: Avoids an untestable explosion of permutations (DB engines × versions × app servers × image layouts × chart values).
5) License Activation
5.1 Generate the Key (Customer Portal)
An administrator can generate a Virtual Cluster Key from the Liferay Customer Portal:
- In Customer Portal, select the appropriate project.
- Go to Product Activation and select DXP.
- Select Actions → Generate New.
- Choose Product, Version, and Key Type (Virtual Cluster Production, Developer Cluster).
- Select the applicable subscription term, and provide the number of nodes in the cluster.
- After submission, the key appears as a single line item and can be downloaded as an XML file (`license.xml`).
Compatibility Note: Virtual Cluster Keys require minimum patch levels/updates for each DXP version; ensure your DXP patch level is compatible before switching to Virtual Cluster Keys.
5.2 Apply the Key to the Environment (Self-Hosted / Kubernetes)
At runtime, Liferay DXP activates when the key XML is deployed into the Liferay Home deploy directory (or the configured auto-deploy directory). One standard approach is:
- Clear `${liferay.home}/data/license` (remove existing `.li` files if present).
- Start the application server.
- Place the key XML file into `${liferay.home}/deploy` (or `${auto.deploy.deploy.dir}`).
- Wait for the key to auto-deploy and confirm activation in logs.
Kubernetes (typical pattern): store license.xml as a Kubernetes Secret and mount it into the pod so it is available in the Liferay Home deploy directory on startup (or via an init step). The exact mechanism depends on your Helm chart/values structure, but the goal is always the same: ensure the license.xml is present for auto-deploy and that each Namespace uses the correct license type for its intended workload.
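A minimal sketch of that pattern, assuming the key has been stored in a Secret named liferay-license and that the official image's /mnt/liferay/deploy mount convention is used to copy the file into the auto-deploy folder at startup (resource names, paths, and the image tag are illustrative and depend on your chart/values):

```yaml
# Illustrative only. Create the Secret from the downloaded key first, e.g.:
#   kubectl -n liferay-prd create secret generic liferay-license --from-file=license.xml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: liferay-dxp
  namespace: liferay-prd                   # example Production namespace
spec:
  replicas: 1
  selector:
    matchLabels:
      app: liferay-dxp
  template:
    metadata:
      labels:
        app: liferay-dxp
    spec:
      containers:
        - name: liferay
          image: liferay/dxp:7.4.13-u102   # illustrative tag; use your supported image
          volumeMounts:
            - name: liferay-license
              # Assumed mount convention of the official image: files under
              # /mnt/liferay/deploy are copied to the auto-deploy folder at startup.
              # Adjust if your image or chart uses a different mechanism.
              mountPath: /mnt/liferay/deploy
              readOnly: true
      volumes:
        - name: liferay-license
          secret:
            secretName: liferay-license    # contains license.xml
```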
6) Responsibility Boundary
6.1 Customer Responsibilities
Cloud Native tooling is designed around GitOps workflows. Customers are expected to know how to manage a GitOps workflow, including but not limited to the following:
- Environment configuration is typically stored as per-environment YAML (for example: `environments/{env}/liferay.yaml`).
- Deployments are reconciled by a GitOps controller (for example, Argo CD) based on version-controlled configuration (see the sketch after this list).
- Liferay-published Helm charts are consumed as published artifacts (via an OCI registry). Customers supply configuration values rather than cloning chart source.
- Build artifacts (custom images or bundles) are produced by a customer/partner CI pipeline and promoted through Git changes.
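A minimal sketch of that reconciliation setup, assuming Argo CD with a multi-source Application that pulls the chart from an OCI registry and the values from a Git config repository; every coordinate below (registry, chart name, version, repository URL, values path) is a placeholder rather than a published Liferay location:

```yaml
# Illustrative Argo CD Application: all coordinates are placeholders.
apiVersion: argoproj.io/v1alpha1
kind: Application
metadata:
  name: liferay-prd
  namespace: argocd
spec:
  project: default
  destination:
    server: https://kubernetes.default.svc
    namespace: liferay-prd                    # example Production namespace
  sources:
    - repoURL: registry.example.com/charts    # placeholder OCI registry path
      chart: liferay-dxp                      # placeholder chart name
      targetRevision: 1.2.3                   # placeholder chart version
      helm:
        valueFiles:
          - $values/environments/prd/liferay.yaml
    - repoURL: https://git.example.com/acme/liferay-config.git  # placeholder config repo
      targetRevision: main
      ref: values
  syncPolicy:
    automated:
      prune: true
      selfHeal: true
```

Promotion between environments then becomes a Git change (for example, bumping targetRevision or editing the per-environment values file), which the controller reconciles automatically.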
Customers are responsible for:
- Selecting namespaces for Production vs Non-Production workloads
- Applying the correct `license.xml` per namespace and workload type
- Ensuring the number of concurrently running Production Pods in a namespace does not exceed the applicable tier
- Ensuring Non-Production usage remains within the Non-Production concurrent user limit
- Operating their chosen Git repository, CI system, and artifact registry (for Cloud Native tooling)
6.2 Liferay Responsibilities
Liferay is responsible for:
- Providing the Cloud Native tooling and Cloud Provider Ready packages as documented
- Making the applicable license artifacts available through the Customer Portal
- Supporting the tooling as documented for Cloud Native