Configuring a Cluster of Consul Instances
You can install an individual Consul instance locally on each Composer node and then configure the Consul instances as a cluster for the Composer high availability environment. Before you do, make sure you are familiar with the general clustering techniques used for Consul. See https://www.consul.io/docs/install/bootstrapping.html for more information.
The primary advantage of using a Consul cluster in a Composer high availability environment is that it requires fewer configuration changes to Composer components. Each component looks for its local Consul instance on the loopback interface (127.0.0.1), so the only configuration necessary is on the Consul cluster nodes themselves.
Another advantage is that each node of the Composer high availability cluster uses all of the discovered instances of a component, balancing load across them and tolerating individual component failures.
To configure a Consul cluster for a Composer high availability environment:
1. Install all of the individual Consul instances on each Composer node. This happens automatically when you use the Composer Bootstrap installation procedure.
2. Make sure that a firewall is opened in your environment for ports 8500, 8300, and 8301 on all hosts that will form the Consul cluster.
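For example, on hosts that use firewalld, you might open the ports like this (a sketch only; adapt it to the firewall tooling in your environment, and note that Consul's Serf LAN port 8301 uses UDP in addition to TCP):
firewall-cmd --permanent --add-port=8500/tcp
firewall-cmd --permanent --add-port=8300/tcp
firewall-cmd --permanent --add-port=8301/tcp
firewall-cmd --permanent --add-port=8301/udp
firewall-cmd --reload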
3. Edit the Consul custom configuration file, consul.json, on each Composer node:
vi /etc/zoomdata/consul.json
If you did not install the Consul instances using the Composer Bootstrap installation procedure, the custom configuration file might have a different name and location.
4. Configure the Consul custom configuration file for each Consul instance so it includes these lines:
{
.........
"node_name": "<node-name>"
"bind_addr": "0.0.0.0",
"bootstrap": false,
"client_addr": "0.0.0.0",
"retry_join": [
"<host-ip-address-1>",
"<host-ip-address-2>",
"<host-ip-address-n>"
],
"server": true
.........
}
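For reference, a filled-in sketch of this file for the first node of a hypothetical three-node cluster (addresses 10.0.0.1 through 10.0.0.3, matching the sample output later in this procedure) might look like the following. Other settings in the file are omitted here, and, as described below, only this one node sets bootstrap to true:
{
  "node_name": "node-1",
  "bind_addr": "0.0.0.0",
  "bootstrap": true,
  "client_addr": "0.0.0.0",
  "retry_join": [
    "10.0.0.1",
    "10.0.0.2",
    "10.0.0.3"
  ],
  "server": true
}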
The <node-name> setting for each Consul node must be unique within the cluster; give each Consul instance a different name.

A bind address (bind_addr) and client address (client_addr) of 0.0.0.0 allow Consul to listen on all network interfaces. The bind_addr setting can instead be limited to the host's own IP address.

The bootstrap setting should be set to true on one node in the cluster only. Set it to false on all other cluster nodes.

For the retry_join option, list all of the Composer host IP addresses in the cluster. At least one must be listed. If you are using cloud-hosted instances such as AWS or GCE, the retry_join option can be changed to something like this (assuming each cluster node is an AWS EC2 instance and has a tag_key called Role that is assigned to zoomdata-cluster-node):

.........
"retry_join": [
"provider=aws tag_key=Role tag_value=zoomdata-cluster-node"
],
.........

5. Restart each Consul instance and wait for several seconds for the cluster to form. Then validate the cluster by entering the following command:
# /opt/zoomdata/bin/zoomdata-consul members
The following shows sample output from this command:
Node    Address        Status  Type    Build  Protocol  DC   Segment
node-1  10.0.0.1:8301  alive   server  1.2.2  2         dc1  <all>
node-3  10.0.0.3:8301  alive   server  1.2.2  2         dc1  <all>
node-2  10.0.0.2:8301  alive   server  1.2.2  2         dc1  <all>
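You can also confirm that the cluster has elected a leader by querying the local agent's standard Consul HTTP API on port 8500. The response shown here is illustrative, based on the sample output above; Consul reports the leader by its server RPC port, 8300:
# curl http://127.0.0.1:8500/v1/status/leader
"10.0.0.1:8300"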
6. When the Consul cluster has formed correctly, restart all of the Composer microservices for the instance. See Restarting Composer Microservices.