Cluster Setup

Last Updated: Aug 8, 2024
Documentation for the dotCMS Content Management System

dotCMS Enterprise Cloud (/cms-platform/features) supports multi-node clusters in load-balanced, round-robin, or hot-standby configurations.

This document describes the common configuration required to set up a dotCMS cluster. Please review the following sections before configuring your cluster:

Initial Setup

The following resources are shared among all servers in the cluster, and must be set up for all clustering configurations. These configuration steps must be completed before implementing the steps for either Auto-clustering or Manual Cluster Configuration.

1. Common Location

Clustering in dotCMS is designed to be used in a single location/datacenter with low network latency between servers. In the cloud, this means that in a dotCMS cluster, dotCMS nodes can span multiple availability zones but generally should not span regions. Clusters require significant inter-node communications, and having nodes in different regions can negatively affect performance and reliability of the cluster due to network latency or communication failures.

Nodes Spanning Multiple Locations

Push Publishing is the recommended and fully supported solution to span multiple locations.

Although some other methods can be implemented to connect nodes spanning multiple locations, these require custom configuration and support. If you would like more information on other methods of spanning nodes in different physical locations (such as clusters with nodes in different data centers or on separate networks), please contact dotCMS Support for assistance.

2. Shared Database and OpenSearch

In order to cluster dotCMS, you must first create and set up your initial database and your OpenSearch cluster. Though caches are stored separately on each node in the cluster, all nodes in a cluster connect to the same centralized databases and OpenSearch clusters in order to sync data across the cluster. As such, one PostgreSQL database is required for each cluster.

Every node in a cluster must share the same value for the DOT_DOTCMS_CLUSTER_ID environment variable, which distinguishes one cluster from another. For example, if Acme, Inc. runs separate development and production clusters, it might configure one set of nodes with DOT_DOTCMS_CLUSTER_ID: acme-dev and the other with acme-prod. These identifiers are also used in OpenSearch index names, thereby preventing collisions between clusters.
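As a sketch, the cluster ID might be set per node through the environment; the cluster names acme-dev and acme-prod below are illustrative, matching the example above:

```shell
# Illustrative only: every node in the development cluster gets the same ID.
export DOT_DOTCMS_CLUSTER_ID=acme-dev

# On the production nodes, the same variable carries a different value:
# export DOT_DOTCMS_CLUSTER_ID=acme-prod

echo "$DOT_DOTCMS_CLUSTER_ID"
```

However the variable is delivered (shell profile, container environment, orchestration config), the essential point is that it is identical across nodes of one cluster and distinct between clusters.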

dotCMS does not support primary/secondary database nodes. Each dotCMS server typically runs in its own container, and all of them connect to the single PostgreSQL database container associated with their cluster. Because dotCMS uses PostgreSQL's native Listen/Notify pub-sub functionality in its clustering behavior, NOTIFY messages broadcast on one PostgreSQL instance will not reach nodes LISTENing on a different instance.

3. Load Balancer

Additionally, you will need a load balancer with sticky session support enabled, running in front of your dotCMS instances.
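As one illustrative sketch (not dotCMS-specific, and not the only option), an NGINX load balancer can approximate session stickiness with `ip_hash`; the hostnames and ports here are assumptions:

```nginx
upstream dotcms_cluster {
    ip_hash;                    # route each client IP to the same backend node
    server dotcms-node1:8082;   # illustrative node hostnames and ports
    server dotcms-node2:8082;
}

server {
    listen 80;
    location / {
        proxy_pass http://dotcms_cluster;
    }
}
```

Other load balancers (HAProxy, AWS ALB, etc.) offer cookie-based stickiness, which is generally more precise than IP hashing; any mechanism that pins a session to one node will work.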

4. Shared Assets Directory

dotCMS requires a network share or NAS that shares the contents of the assets directory across all nodes in the cluster. In a clustered dotCMS system, you should mount a directory to /data/shared before starting dotCMS.
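As an illustrative configuration fragment, an NFS export might be mounted at /data/shared on each node before dotCMS starts; the server name and export path below are assumptions:

```shell
# Illustrative only: mount the shared assets export on this node.
sudo mount -t nfs nfs.example.com:/exports/dotcms-assets /data/shared

# Or persistently, via an /etc/fstab entry:
# nfs.example.com:/exports/dotcms-assets  /data/shared  nfs  defaults  0  0
```

Any shared filesystem visible at the same path on every node (NFS, EFS, or a NAS mount) satisfies this requirement.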

Cluster Diagram

5. (Optional) Sharing OSGi Plugin Directories

You can share your OSGi plugins and deploy or undeploy them across the whole cluster at the same time, from any node. To do this, you must use the Shared Assets Directory for two of the OSGi folders. Each server in the cluster monitors these shared folders and deploys or undeploys any OSGi jars found there.

  1. Inside the Shared Asset Directory, create folders named /felix/load and /felix/undeployed.

    • e.g., dotCMS/assets/felix/load and dotCMS/assets/felix/undeployed.
  2. Replace the server-local OSGi folders under WEB-INF with symlinks to these shared folders, so that you have:

    • dotCMS/WEB-INF/felix/load -> dotCMS/assets/felix/load
    • dotCMS/WEB-INF/felix/undeployed -> dotCMS/assets/felix/undeployed
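Concretely, the two steps above might be scripted as follows. This is a sketch assuming the dotCMS/ base directory used in the paths above, run from the directory that contains it; adjust both paths to your installation:

```shell
# Sketch: wire the server-local OSGi folders to the shared assets copies.
# The dotCMS/ base directory mirrors the example paths above and is illustrative.
ASSETS="$PWD/dotCMS/assets"
WEBAPP="$PWD/dotCMS/WEB-INF"

# Step 1: create the shared folders inside the assets directory.
mkdir -p "$ASSETS/felix/load" "$ASSETS/felix/undeployed"

# Step 2: replace the local folders with symlinks to the shared ones.
mkdir -p "$WEBAPP/felix"
rm -rf "$WEBAPP/felix/load" "$WEBAPP/felix/undeployed"
ln -s "$ASSETS/felix/load" "$WEBAPP/felix/load"
ln -s "$ASSETS/felix/undeployed" "$WEBAPP/felix/undeployed"
```

After this, a jar dropped into the shared load folder from any node is picked up by every node in the cluster.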

Note: If you share your plugins across all nodes in your cluster, each node will try to run the code in the plugin's Activator class simultaneously. Keep this in mind when performing setup-type work in the plugin's Activator.

Testing your Cluster

Test your cache cluster startup

  1. Shut down and restart 1 node in the cluster.
  2. Open the log file for the restarted node and search for “ping”.

Result: When you restart the node, it should “ping” the other servers in the cluster, and you should see the results of those pings in the dotcms.log file. If you do not see “ping” entries on the other servers in the cluster, then your cluster cache settings are incorrect.
