To start the Red Hat Cluster Service on a member, run the following commands in this order. Enter each command on every node before proceeding to the next command.
1. service ccsd start. This starts the cluster configuration system daemon, which synchronizes the configuration between cluster nodes.
2. service cman start. This starts the cluster heartbeat daemon. The command returns when both nodes have established heartbeat with each other.
3. service fenced start. This starts the cluster I/O fencing system, which allows cluster nodes to reboot a failed node during failover.
4. service rgmanager start. This starts the resource group manager, which manages cluster services and resources. The service rgmanager start command returns immediately, but initializing the cluster and bringing up the Zimbra Collaboration Suite application for the cluster services on the active node may take some time.
After all commands have been issued on both nodes, run the clustat command on the active node to verify that the cluster services have started. When clustat shows that all services are running on the active node, the cluster configuration is complete.
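The required ordering above, each daemon started on every node before the next daemon is started anywhere, can be sketched as a small shell loop. The node names are illustrative, and the script only prints the commands rather than executing them, so the sequence can be reviewed first:

```shell
#!/bin/sh
# Sketch: print the cluster start sequence in the required order.
# Node names are hypothetical examples; each service must be started
# on every node before the next service is started anywhere.
NODES="node1.example.com node2.example.com"

start_sequence() {
  for svc in ccsd cman fenced rgmanager; do
    for node in $NODES; do
      echo "ssh root@$node service $svc start"
    done
  done
}

start_sequence
```

Once the printed ordering looks right, the same loop can be run with the echo removed (or the output piped to sh) as root.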
To disable a cluster service that should not be running on the active node, run clusvcadm -d <cluster service name> as root on the active node.
[firstname.lastname@example.org]# clusvcadm -d mail1.example.com
This disables the service by stopping all associated Zimbra processes, releasing the service IP address, and unmounting the service’s SAN volumes.
To enable a disabled service, run clusvcadm -e <service name> -m <node name>. This command can be run on any cluster node. It instructs the specified node to mount the SAN volumes of the service, bring up the service IP address, and start the Zimbra processes.
[email@example.com]# clusvcadm -e mail1.example.com -m node1.example.com
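The disable/enable pattern can be captured in two thin wrapper functions. This is a sketch: the service and node names are the examples used above, and the wrappers echo the clusvcadm commands instead of executing them, so the invocations can be reviewed before running them as root:

```shell
#!/bin/sh
# Sketch: wrappers around clusvcadm for a Zimbra cluster service.
# Service/node names are illustrative examples.

disable_service() {
  # Run on the active node: stops the associated Zimbra processes,
  # releases the service IP address, and unmounts the SAN volumes.
  echo "clusvcadm -d $1"
}

enable_service() {
  # May be run on any cluster node; -m names the member that should
  # mount the SAN volumes, bring up the service IP, and start Zimbra.
  echo "clusvcadm -e $1 -m $2"
}

disable_service mail1.example.com
enable_service mail1.example.com node1.example.com
```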
Testing the Cluster Setup
To perform a quick test to see if failover works:
1. Log in to the remote power switch and turn off the active node.
2. Run tail -f /var/log/messages on the standby node. You will observe the cluster become aware of the failed node, I/O fence it, and bring up the failed node's service on the standby node.