Configuring a Cluster of Consul Instances
You can install individual Consul instances locally for each Composer node and then configure the Consul instances as a cluster for the Composer high availability environment. Before you do, make sure you are familiar with the general clustering techniques used for Consul. See https://www.consul.io/docs/install/bootstrapping.html for more information.
The primary advantage of using a Consul cluster in a Composer high availability environment is that it requires fewer configuration changes to Composer components. Each component looks for Consul on the loopback interface (127.0.0.1) at Consul's default HTTP API port, 8500. The only configuration necessary is on the Consul cluster nodes themselves.
Another advantage is that each node of the Composer high availability cluster can use any of several discovered instances of the same component, balancing load and improving tolerance of component failures.
To configure a Consul cluster for a Composer high availability environment:
Install all of the individual Consul instances on each Composer node. This happens automatically when you use the Composer Bootstrap installation procedure.
Make sure that a firewall is opened in your environment for ports 8500, 8300, and 8301 on all hosts that will form the Consul cluster.
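As a sketch, on a host managed by firewalld these ports can be opened as shown below (adjust for your own firewall tooling; the port roles follow Consul's defaults: 8500 for the HTTP API, 8300 for server RPC, and 8301 for LAN gossip, which uses both TCP and UDP):

```shell
# Open the Consul ports on each cluster node (assumes firewalld)
sudo firewall-cmd --permanent --add-port=8500/tcp   # HTTP API
sudo firewall-cmd --permanent --add-port=8300/tcp   # server RPC
sudo firewall-cmd --permanent --add-port=8301/tcp   # Serf LAN gossip
sudo firewall-cmd --permanent --add-port=8301/udp   # Serf LAN gossip (UDP)
sudo firewall-cmd --reload
```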
Edit the Consul custom configuration file consul.json on each Composer node.
If you did not install the Consul instances using the Composer Bootstrap installation procedure, the custom configuration file might have a different name and location.
Configure the custom configuration file for each Consul instance so that it includes the following settings:
The node name setting (node_name) for each Consul node should be unique within the cluster. Each Consul instance in the cluster should have a different name.
A bind address (bind_addr) and client address (client_addr) of 0.0.0.0 allow Consul to listen on all network interfaces. The bind_addr setting can instead be limited to the host's IP address.
The bootstrap setting should be set to true on only one node in the cluster. Set it to false on all other cluster nodes.
In the retry_join option, list all of the Composer host IP addresses in the cluster. At least one must be listed. If you are using cloud-hosted instances such as AWS or GCE, the retry_join option can be changed to use cloud auto-join instead. For example, assuming each cluster node is an AWS EC2 instance and has a Role tag assigned to it:
"provider=aws tag_key=Role tag_value=zoomdata-cluster-node"
Restart each Consul instance and wait a few seconds for the cluster to form. Then validate the cluster by running the consul members command on any node. The output lists each cluster member along with its address, status, and type; when the cluster has formed correctly, every Composer node appears with a status of alive.
When the Consul cluster has formed correctly, restart all of the Composer microservices for the instance. See Restarting Composer Microservices.