Configuring Web Application Clusters for High Availability
There is usually no need to configure a cluster, as Redwood Server has been designed from the ground up for application clusters. As soon as a second application server with the same database connection settings connects to the database, it requests a license. If another application server is already active, the newly started application server acts as a secondary and becomes available as soon as it has been registered and a license has been installed.
Note that an Active/Passive setup does not require an additional license for the second, passive node; you may install the same license on both nodes provided they have the same hostname and port.
If you want to tune the behavior, there are various Redwood Server registry entries that can be used to configure a cluster, including the bind address and the port number on a per-instance-node basis. Note that as soon as more than one application server shares the same database connection settings, they form a cluster; Redwood Server immediately assigns the master role to one of the running nodes.
The following main variables are used for the naming of clusters:
System Host Name
The name of the host the server runs on. This is determined by looking up all applicable addresses for the host and selecting the best option; IPv6 addresses are excluded, and an external address is preferred over a localhost address. If the system is configured to map the IP address to a host name, that host name is returned. Note: this can differ from the Cluster Host Name.
Cluster Node ID
The unique identifier for the node within the cluster instance; it uniquely identifies members of a cluster instance. The Node ID is determined from the path in which the server is installed, matching the value X in j2ee/cluster/serverX. Under a standard configuration this is 1 by default.
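The Node ID derivation described above can be illustrated with a short sketch; the function name and regular expression here are illustrative, not part of the product:

```python
import re

def node_id_from_path(install_path):
    """Extract the cluster Node ID (the X in j2ee/cluster/serverX)
    from a server installation path. Returns None if the path does
    not match the expected layout."""
    match = re.search(r"j2ee/cluster/server(\d+)", install_path)
    return int(match.group(1)) if match else None

# A standard installation uses server1, so the Node ID is 1:
print(node_id_from_path("/opt/redwood/j2ee/cluster/server1"))  # 1
```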
These values can be viewed by running the System_Info process definition and looking at the information for the cluster node.
There are five values that can be configured:
- Cluster Implementation
- Cluster Name
- Cluster Instance ID
- Redwood Messaging Communication Bind Address
- Redwood Messaging Communication Port
Cluster Implementation
This is the implementation that is used to perform all clustering operations.
The configuration is set with the following registry entry:
/configuration/boot/cluster/type
The possible values are (Note: these values are case-sensitive):
- RWM - Redwood Messaging, the default implementation
- Standalone - A non-clustered implementation designed as a failsafe
Cluster Name
This is used for display purposes (for example, within the output of the System_Info job) to identify a cluster. By default it is CLUSTER.
The name can be changed with the following registry entry:
/configuration/boot/cluster/name
Cluster Instance ID
This is a unique identifier for the cluster. It allows you to have two clusters on the same host, though this is not recommended. By default this is set to 0.
The ID can be changed with the following registry entry:
/configuration/boot/cluster/<System Host Name>/instanceId
This must be a number between 0 and 99 (inclusive).
Setting the registry entry will change the Cluster Instance ID for all instances on the same host (that share the same database). If a particular server instance requires a separate Cluster Instance ID then it should be set via the System Property rather than the Registry Entry.
Redwood Messaging Communication Bind Address
The address to which the communication port is bound. By default this is the System Host Name.
The bind address can be changed with the following registry entry:
/configuration/boot/cluster/<System Host Name>/<Cluster Instance ID>/<Cluster Node ID>/bindAddress
Alternatively, the bind address for all nodes in an instance can be changed with the following registry entry:
/configuration/boot/cluster/<System Host Name>/<Cluster Instance ID>/bindAddress
If both these values are set, the more specific (Instance + Node) value is used.
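The precedence rule can be sketched as a simple lookup: the node-specific entry wins over the instance-wide entry, which in turn wins over the default. The function and the example host name are illustrative, not part of the product:

```python
def effective_bind_address(registry, host, instance_id, node_id, default):
    """Resolve the bind address: the more specific (instance + node)
    registry entry takes precedence over the instance-wide entry;
    if neither is set, the default (the System Host Name) is used."""
    node_key = f"/configuration/boot/cluster/{host}/{instance_id}/{node_id}/bindAddress"
    instance_key = f"/configuration/boot/cluster/{host}/{instance_id}/bindAddress"
    return registry.get(node_key) or registry.get(instance_key) or default

# Only the instance-wide entry is set, so it applies to node 1:
registry = {"/configuration/boot/cluster/myhost/0/bindAddress": "10.0.0.5"}
print(effective_bind_address(registry, "myhost", 0, 1, "myhost"))  # 10.0.0.5
```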
Redwood Messaging Communication Port
The port to which the communication is bound. By default this is determined from the Cluster Instance ID, and the Cluster Node ID via the following equation:
Port Number = 10000 + (1000 * Cluster Instance ID) + 70 + (Cluster Node ID)
On a default setup this will be 10071.
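The equation above can be checked with a couple of lines; the helper function is illustrative, not part of the product:

```python
def default_port(instance_id, node_id):
    """Default Redwood Messaging port:
    10000 + (1000 * Cluster Instance ID) + 70 + Cluster Node ID."""
    return 10000 + 1000 * instance_id + 70 + node_id

# A default setup has Cluster Instance ID 0 and Cluster Node ID 1:
print(default_port(0, 1))  # 10071
# Node 2 of cluster instance 2 would listen on:
print(default_port(2, 2))  # 12072
```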
The port can be changed with the following registry entry:
/configuration/boot/cluster/<System Host Name>/<Cluster Instance ID>/<Cluster Node ID>/port
Using System Properties instead of Redwood Server Registry Keys
All the boot configuration settings can be overridden by System Properties. They follow a standard naming scheme: the root path /configuration is removed and replaced with com.redwood.scheduler, and all slashes (/) are replaced with dots (.). For example, /configuration/boot/cluster/type becomes com.redwood.scheduler.boot.cluster.type.
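The naming scheme is mechanical, so it can be expressed as a one-line transformation; the function name is illustrative, not part of the product:

```python
def registry_path_to_property(path):
    """Map a boot registry path to its System Property name:
    strip the /configuration root, prefix com.redwood.scheduler,
    and replace slashes with dots."""
    suffix = path[len("/configuration"):]  # e.g. "/boot/cluster/type"
    return "com.redwood.scheduler" + suffix.replace("/", ".")

print(registry_path_to_property("/configuration/boot/cluster/type"))
# com.redwood.scheduler.boot.cluster.type
```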
Active/Active Master Node Switch
In an active/active cluster, there is always one master node. When the master node becomes unavailable for a period of 300 seconds (the default), another node in the cluster takes over the master role. The timeout can be customised by setting the /configuration/jcs/clustering/lockTimeoutThreshold registry entry; when this registry entry does not exist, the default is 300 seconds.
Procedure
Create a Cluster for Redwood Platform with Multiple Active/Active Nodes
- Install the software on node 0, 1, 2 ... as usual.
- On each node, start the adminserver as usual and specify database settings; the first time, the adminserver creates the database objects; in subsequent runs, only the database settings are used to configure server<n>.
- On each node, start server1.
Create a Cluster for Redwood Platform with Multiple Active/Passive Nodes
- Install the software on node 0 and 1 as usual.
- On node 0, start the adminserver as usual and specify database settings; the adminserver will create the database objects. Shut down the adminserver.
- Start server1, log in and verify that it works, then shut down server1.
- Fail over to node 1, start the adminserver as usual and specify database settings; it will check that the database is up-to-date. Shut down the adminserver.
- Start server1, log in and verify that it works.
Integrate the Cluster with Microsoft Cluster Service
- On each node, create a Windows Service using the <install_dir>/j2ee/cluster/global/bin/rw_service.bat utility.
- Control the service from within MS Cluster Service.
Integrate the Cluster with Microsoft Failover Clustering
- Run the validation test for the servers in the cluster, see Run Cluster Validation Tests.
- Create the failover cluster, see Create the failover cluster.
- Use Failover Cluster Manager to fail over from one server to the other in the cluster and validate that the service starts and is available.