How to set up a Secure and Scalable ElasticSearch-Kibana Cluster on AWS EC2
Elasticsearch is an open-source, RESTful, distributed search and analytics engine built on Apache Lucene. It is fast and reliable, but as load and data grow, we need a scalable architecture to keep up with production needs.
In this article, I am going to show you how to set up a scalable, open-source ElasticSearch cluster on AWS EC2. We're going to build a 3-node cluster consisting of two Master + Data Nodes and one Coordinating Node for Kibana.

Master + Data Node:
The master node is responsible for cluster-wide actions such as creating or deleting an index, tracking which nodes are part of the cluster, and deciding which shards to allocate to which nodes. Data nodes, on the other hand, hold the shards that contain the documents you have indexed; they handle data-related operations like CRUD, search, and aggregations. These operations are I/O-, memory-, and CPU-intensive.
We are going to combine both roles into a single node for simplicity and call it the Master Node.
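For example, once the cluster is up, these are the kinds of data-related requests the Master + Data Nodes will end up serving (the index name and document below are purely illustrative):
# Index a document into a hypothetical "orders" index:
curl -X PUT "localhost:9200/orders/_doc/1?pretty" -H 'Content-Type: application/json' -d '{"customer": "acme", "total": 42.5}'
# Search that index:
curl -X GET "localhost:9200/orders/_search?q=customer:acme&pretty"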
Coordinator/Client Node:
Coordinating nodes (formerly called “client nodes”) act as a kind of load balancer within your ES cluster. Here, we will use one to host Kibana so that all the calls to our indices are balanced.
We are going to use this node to access Kibana, so we will call it the Kibana Node from now on.
Cluster Specifications:
For this tutorial, we're going to use ElasticSearch v7.8.1.
- Two Master + Data Nodes: r5.large EC2 instances running Amazon Linux 2
- One Coordinator Node: t3.medium EC2 instance running Amazon Linux 2
This cluster will be:
- Load Balanced
- Scalable
- Deployed in Multiple Availability Zones
- Will stay inside a VPC for better security
Preparation for Master Node:
- Launch one r5.large instance with the Amazon Linux 2 AMI (you can use a different EC2 instance type according to your requirements). Please remember to use different Availability Zones for all nodes.
- Attach and mount an EBS volume according to your storage requirements. Follow this tutorial; a sketch of the commands is shown after this list.
- Install Java 8.
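A minimal sketch of these preparation steps, assuming the EBS volume shows up as /dev/nvme1n1 (check with lsblk; the device name can differ on your instance) and that we mount it at /mnt/elasticsearch:
# Format and mount the attached EBS volume:
lsblk
sudo mkfs -t xfs /dev/nvme1n1
sudo mkdir -p /mnt/elasticsearch
sudo mount /dev/nvme1n1 /mnt/elasticsearch
# Make the mount persistent across reboots:
echo '/dev/nvme1n1 /mnt/elasticsearch xfs defaults,nofail 0 2' | sudo tee -a /etc/fstab
# Install Java 8 (OpenJDK) on Amazon Linux 2:
sudo yum install -y java-1.8.0-openjdk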
Master Node Configuration:
- Install ElasticSearch:
wget https://artifacts.elastic.co/downloads/elasticsearch/elasticsearch-7.8.1-x86_64.rpm
wget https://artifacts.elastic.co/downloads/elasticsearch/elasticsearch-7.8.1-x86_64.rpm.sha512
shasum -a 512 -c elasticsearch-7.8.1-x86_64.rpm.sha512
sudo rpm --install elasticsearch-7.8.1-x86_64.rpm
- Register and Enable ElasticSearch as system service (systemd):
sudo /bin/systemctl daemon-reload
sudo /bin/systemctl enable elasticsearch.service
# Service can be started and stopped using these commands:
sudo systemctl start elasticsearch.service
sudo systemctl stop elasticsearch.service
# After starting the service, check if it is working fine:
curl -X GET "localhost:9200/?pretty"
# The response for the above command should be something like:
{
"name" : "Cp8oag6",
"cluster_name" : "elasticsearch",
"cluster_uuid" : "AT69_T_DTp-1qgIJlatQqA",
"version" : {
"number" : "7.8.1",
"build_flavor" : "default",
"build_type" : "tar",
"build_hash" : "f27399d",
"build_date" : "2020-05-30T09:51:41.449Z",
"build_snapshot" : false,
"lucene_version" : "8.5.1",
"minimum_wire_compatibility_version" : "1.2.3",
"minimum_index_compatibility_version" : "1.2.3"
},
"tagline" : "You Know, for Search"
}
# Install EC2 Discovery Plugin:
cd /usr/share/elasticsearch/bin
sudo ./elasticsearch-plugin install discovery-ec2
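The discovery-ec2 plugin calls the EC2 API to discover the other nodes, so each instance needs permission to describe instances. A minimal sketch of an IAM policy you can attach to the nodes via an instance profile/role (you can supply access keys instead):
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Allow",
      "Action": ["ec2:DescribeInstances"],
      "Resource": ["*"]
    }
  ]
}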
- Change the owner of the location/directory where you mounted your EBS volume (in this case, /mnt/elasticsearch):
sudo chown elasticsearch:elasticsearch /mnt/elasticsearch
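Optionally, you can also create the data and log directories that elasticsearch.yml will point to later and hand the whole mount to the elasticsearch user:
sudo mkdir -p /mnt/elasticsearch/data /mnt/elasticsearch/logs
sudo chown -R elasticsearch:elasticsearch /mnt/elasticsearch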
- Configure ElasticSearch:
# Use your favorite editor to add the following lines to /etc/security/limits.conf:
elasticsearch soft nofile 65536
elasticsearch hard nofile 65536
elasticsearch soft memlock unlimited
elasticsearch hard memlock unlimited
# Change or add the following configuration to /usr/lib/systemd/system/elasticsearch.service (under the [Service] section):
LimitMEMLOCK=infinity
# Change or add the following configuration to /etc/elasticsearch/jvm.options:
(Note: this is the JVM heap size; ideally use 50% of the available RAM. Here, I am using an 8GB heap according to r5.large specs, which give 16GB of RAM.)
-Xms8g
-Xmx8g
- Configure ElasticSearch YML file (Change or add the following configuration to /etc/elasticsearch/elasticsearch.yml). For now, you can skip the IP address and AZ details for other nodes if you have not launched other ec2 instances yet.
cluster.name: ElasticSearch
cluster.routing.allocation.awareness.attributes: zone
# Add all availability zones you have used to create the cluster. We are using "us-east-1a" for node-1, "us-east-1d" for node-2, and "us-east-1b" for kibana-node.
cluster.routing.allocation.awareness.force.zone.values: us-east-1a,us-east-1d,us-east-1b
cluster.initial_master_nodes: [<Private IP of Master Node-1>, <Private IP of Master Node-2>]
# Add node name and role information. Configure the "node.attr.zone" value according to the availability zone/subnet you selected while launching the instance.
node.attr.zone: us-east-1a
node.attr.rack: r1
node.master: true
node.data: true
node.ingest: true
node.name: node-1
# Add the path to the EBS volume you mounted in previous steps. Here we mounted the EBS volume at /mnt/elasticsearch.
path.data: /mnt/elasticsearch/data
path.logs: /mnt/elasticsearch/logs
bootstrap.memory_lock: true
network.host: [<Private IP of this node>, 127.0.0.1]
network.publish_host: <Private IP of this node>
discovery.zen.hosts_provider: ec2
discovery.zen.minimum_master_nodes: 2
discovery.ec2.endpoint: ec2.us-east-1.amazonaws.com
discovery.zen.ping.unicast.hosts: [<Private IP of Master Node-1>,<Private IP of Master Node-2>,<Private IP of Kibana Node>]
After configuring all the settings for the Master Node, you can create an image (AMI) of this EC2 instance and use it to launch the 2nd Master Node and the Kibana Node (Coordinator Node).
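You can create the image from the EC2 console, or with the AWS CLI; a sketch, where the instance ID and image name are placeholders for your own values:
aws ec2 create-image \
  --instance-id i-0123456789abcdef0 \
  --name "es-master-node-image" \
  --no-reboot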
Preparations for Coordinator Node (Kibana Node):
- Launch one t3.medium with the image that we created from our Master Node. Please remember to use different Availability Zones for all nodes.
- We are not going to use this node for storing data, so we don’t need to mount an additional EBS volume.
Kibana Node Configuration:
- Configure ElasticSearch:
# Change or add the following configuration to /etc/elasticsearch/jvm.options:
(Note: this is the JVM heap size; ideally use 50% of the available RAM. Here, I am using a 2GB heap according to t3.medium specs, which give 4GB of RAM.)
-Xms2g
-Xmx2g
- Configure ElasticSearch YML file (Change or add the following configuration to /etc/elasticsearch/elasticsearch.yml).
cluster.name: ElasticSearch
cluster.routing.allocation.awareness.attributes: zone
# Add all availability zones you have used to create the cluster. We are using "us-east-1a" for node-1, "us-east-1d" for node-2, and "us-east-1b" for kibana-node.
cluster.routing.allocation.awareness.force.zone.values: us-east-1a,us-east-1d,us-east-1b
cluster.initial_master_nodes: [<Private IP of Master Node-1>, <Private IP of Master Node-2>]
# Add node name and role information. Configure the "node.attr.zone" value according to the availability zone/subnet you selected while launching the instance.
node.attr.zone: us-east-1b
node.attr.rack: r3
node.master: false
node.data: false
node.ingest: false
node.name: kibana-node
path.data: /var/lib/elasticsearch
path.logs: /var/log/elasticsearch
bootstrap.memory_lock: true
network.host: [<Private IP of this node>, 127.0.0.1]
network.publish_host: <Private IP of this node>
discovery.zen.hosts_provider: ec2
discovery.zen.minimum_master_nodes: 2
discovery.ec2.endpoint: ec2.us-east-1.amazonaws.com
discovery.zen.ping.unicast.hosts: [<Private IP of Master Node-1>,<Private IP of Master Node-2>,<Private IP of Kibana Node>]
After configuring all 3 nodes accordingly (don’t forget to fill in the correct Private IP addresses and Availability Zones in each node’s configuration), restart the ElasticSearch service on all nodes.
sudo systemctl restart elasticsearch.service
Now, check if everything is working:
curl "http://localhost:9200/_cluster/health?pretty"# The response should be something like this:{
"cluster_name" : "ElasticSearch",
"status" : "green",
"timed_out" : false,
"number_of_nodes" : 3,
"number_of_data_nodes" : 2,
"active_primary_shards" : 0,
"active_shards" : 0,
"relocating_shards" : 0,
"initializing_shards" : 0,
"unassigned_shards" : 0,
"delayed_unassigned_shards" : 0,
"number_of_pending_tasks" : 0,
"number_of_in_flight_fetch" : 0,
"task_max_waiting_in_queue_millis" : 0,
"active_shards_percent_as_number" : 100.0
}
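You can also confirm that all three nodes joined with the expected roles using the _cat API (run from any node):
curl "http://localhost:9200/_cat/nodes?v"
# The node.role column should include "d" and "m" for the two Master + Data Nodes
# and show "-" for the coordinating kibana-node, which has no roles assigned.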
- Install Kibana:
wget https://artifacts.elastic.co/downloads/kibana/kibana-7.8.1-x86_64.rpm
wget https://artifacts.elastic.co/downloads/kibana/kibana-7.8.1-x86_64.rpm.sha512
shasum -a 512 -c kibana-7.8.1-x86_64.rpm.sha512
sudo rpm --install kibana-7.8.1-x86_64.rpm
- Register and Enable Kibana as system service (systemd):
sudo /bin/systemctl daemon-reload
sudo /bin/systemctl enable kibana.service
# Service can be started and stopped using these commands:
sudo systemctl start kibana.service
sudo systemctl stop kibana.service
- Configure Kibana YML file (Change or add the following configuration to /etc/kibana/kibana.yml).
server.port: 5601
server.host: "0.0.0.0"elasticsearch.hosts: ["http://127.0.0.1:9200","http://<Private IP of Master Node-1>:9200","http://<Private IP of Master Node-1>:9200"]kibana.index: ".kibana"
Now, Restart Kibana:
sudo systemctl restart kibana.service
You can access Kibana by visiting http://<Public IP of your Kibana Node>:5601 in your browser.
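If the UI doesn’t load, first check Kibana locally on the Kibana Node; the status API below is a quick sanity check (an HTTP 200 means Kibana is up and can reach the cluster):
sudo systemctl status kibana.service
curl -s -o /dev/null -w "%{http_code}\n" "http://localhost:5601/api/status"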

Configure Load Balancing and Auto Scaling:
Set up an Internal Load Balancer on AWS with the following configuration (Tutorial):
Listeners: Port 80
Target Group: Master Nodes on Port 9200
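If you prefer the CLI over the console, a rough sketch of the same setup (all VPC, subnet, security-group, and instance IDs below are placeholders):
# Target group pointing at Elasticsearch's HTTP port on the Master Nodes:
aws elbv2 create-target-group \
  --name es-master-nodes \
  --protocol HTTP --port 9200 \
  --vpc-id vpc-0123456789abcdef0 \
  --health-check-path /_cluster/health
# Internal Application Load Balancer spanning the Availability Zones used above:
aws elbv2 create-load-balancer \
  --name es-internal-lb \
  --scheme internal --type application \
  --subnets subnet-aaaa1111 subnet-bbbb2222 \
  --security-groups sg-0123456789abcdef0
# Listener on port 80 forwarding to the target group (use the ARNs returned above):
aws elbv2 create-listener \
  --load-balancer-arn <load-balancer-arn> \
  --protocol HTTP --port 80 \
  --default-actions Type=forward,TargetGroupArn=<target-group-arn>
# Register the two Master Nodes:
aws elbv2 register-targets \
  --target-group-arn <target-group-arn> \
  --targets Id=<instance-id-of-master-node-1> Id=<instance-id-of-master-node-2>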
Since we have created an Image of our Master Node configuration, you can use that image to set up an auto-scaling group by following this tutorial.
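A sketch with the CLI, assuming you first wrap the Master Node AMI in a launch template (all IDs and names are placeholders):
# Launch template based on the Master Node image:
aws ec2 create-launch-template \
  --launch-template-name es-master-lt \
  --launch-template-data '{"ImageId":"ami-0123456789abcdef0","InstanceType":"r5.large","SecurityGroupIds":["sg-0123456789abcdef0"]}'
# Auto Scaling group spread across the cluster's subnets/AZs and attached to the target group:
aws autoscaling create-auto-scaling-group \
  --auto-scaling-group-name es-master-asg \
  --launch-template LaunchTemplateName=es-master-lt,Version='$Latest' \
  --min-size 2 --max-size 4 --desired-capacity 2 \
  --vpc-zone-identifier "subnet-aaaa1111,subnet-bbbb2222" \
  --target-group-arns <target-group-arn>
Keep in mind that instances launched from the image will start with node-1’s node.name and node.attr.zone baked in, so in practice you would template elasticsearch.yml (for example, via user data) before relying on automatic scale-out.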
Congratulations! Now you have a 3-Node ElasticSearch-Kibana Cluster that you can scale according to your needs.