
ELK - Set up a Multi-Node Elasticsearch Cluster


Elasticsearch, along with Logstash and Kibana, is a great combination for aggregating and enriching log files, splitting them into different fields and visualizing them. For this setup, I will build a 3-node cluster with every node taking on every role.

The recommended setup for larger deployments would be to separate master-eligible nodes and data nodes, as well as ingest nodes (depending on your workload).
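
In such a split, the roles are controlled by a couple of flags in elasticsearch.yml. Purely as a sketch (not part of this setup), a dedicated master-eligible node would look roughly like this:

 node.master: true # eligible to be elected as master
 node.data: false # does not hold any shards
 node.ingest: false # does not run ingest pipelines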

A quick note on the software components:

Elasticsearch:
Elasticsearch is a distributed search backend that uses the Lucene engine to search the data stored in its shards. Data is indexed into it as JSON documents.
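
As a quick illustration, indexing a document is a single HTTP call against the REST API; the index name 'test-index' and the document body below are made-up examples:

 [archy@elk01 ~]$ curl -XPOST 'http://elk01.archyslife.lan:9200/test-index/_doc' -H 'Content-Type: application/json' -d '{"message": "hello elasticsearch", "host": "elk01"}'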

Logstash:
Logstash takes your data, passes it through the grok filters you wrote, enriches it if you've configured it to do so, and indexes it into Elasticsearch.
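
As a rough sketch of that flow, a pipeline definition wires an input, an optional grok filter and the elasticsearch output together. The TCP port, grok pattern and index name below are only placeholders, not part of this setup:

 input {
   tcp {
     port => 5140
     type => "syslog"
   }
 }
 filter {
   grok {
     match => { "message" => "%{SYSLOGLINE}" }
   }
 }
 output {
   elasticsearch {
     hosts => ["http://elk01.archyslife.lan:9200", "http://elk02.archyslife.lan:9200", "http://elk03.archyslife.lan:9200"]
     index => "syslog-%{+YYYY.MM.dd}"
   }
 }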

Kibana:
With Kibana you can manage your cluster as well as configure pipelines. Kibana also gives you a fancy frontend for searching your data and building graphs.

Since I'm only running 3 nodes and my workload is rather small, I'll go with an "every node does everything" setup.

Each Node has the following specs:

4 vCPUs
4 GB RAM
80 GB HDD
CentOS 7.6 with latest updates.
Elasticsearch 6.6.2
Logstash 6.6.2
Kibana 6.6.2

I'll skip the repository setup since I'm using Foreman in my lab: the GPG keys are imported there, the repos are synced and the content view is promoted. Most of this can be automated, but I will still walk through the manual steps to get the cluster up and running.
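
If you're not using Foreman, the plain yum repository from Elastic should do the trick as well; a minimal repo file (based on Elastic's public 6.x packages) would look something like this:

 [archy@elk01 ~]$ sudo rpm --import https://artifacts.elastic.co/GPG-KEY-elasticsearch
 [archy@elk01 ~]$ sudo vim /etc/yum.repos.d/elastic-6.x.repo
 [elastic-6.x]
 name=Elastic repository for 6.x packages
 baseurl=https://artifacts.elastic.co/packages/6.x/yum
 gpgcheck=1
 gpgkey=https://artifacts.elastic.co/GPG-KEY-elasticsearch
 enabled=1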

Now for the Setup:

This has to be done on every node:
 [archy@elk01 ~]$ sudo yum -y install java-1.8.0-openjdk elasticsearch logstash kibana  
 [archy@elk02 ~]$ sudo yum -y install java-1.8.0-openjdk elasticsearch logstash kibana  
 [archy@elk03 ~]$ sudo yum -y install java-1.8.0-openjdk elasticsearch logstash kibana  

First up, configure elasticsearch:

 [archy@elk01 ~]$ sudo vim /etc/elasticsearch/elasticsearch.yml  
 cluster.name: elk-homelab # your cluster name  
 node.name: elk01.archyslife.lan # your node name   
 node.master: true # node is master-eligible  
 node.data: true # node is a data node  
 node.attr.rack: r1 # attributes come in handy when distributing nodes over multiple data centers / racks  
 path.data: /var/lib/elasticsearch # where your shards will be stored  
 path.logs: /var/log/elasticsearch # where your logs will be stored  
 network.host: _eth0:ipv4_ # special value that resolves to the IPv4 address of the interface eth0  
 http.port: 9200 # elasticsearch http-api listening port  
 transport.tcp.port: 9300 # inter-cluster communication port  
 discovery.zen.ping.unicast.hosts: ["elk01.archyslife.lan", "elk02.archyslife.lan", "elk03.archyslife.lan" ] # your nodes that will form the cluster  
 discovery.zen.minimum_master_nodes: 2 # (master-eligible nodes / 2) + 1, rounded down - 2 for this 3-node cluster  
 gateway.recover_after_nodes: 2 # wait for this many nodes to join before starting recovery after a full cluster restart  

 [archy@elk02 ~]$ sudo vim /etc/elasticsearch/elasticsearch.yml  
 cluster.name: elk-homelab # your cluster name  
 node.name: elk02.archyslife.lan # your node name   
 node.master: true # node is master-eligible  
 node.data: true # node is a data node  
 node.attr.rack: r1 # attributes come in handy when distributing nodes over multiple data centers / racks  
 path.data: /var/lib/elasticsearch # where your shards will be stored  
 path.logs: /var/log/elasticsearch # where your logs will be stored  
 network.host: _eth0:ipv4_ # special value that resolves to the IPv4 address of the interface eth0  
 http.port: 9200 # elasticsearch http-api listening port  
 transport.tcp.port: 9300 # inter-cluster communication port  
 discovery.zen.ping.unicast.hosts: ["elk01.archyslife.lan", "elk02.archyslife.lan", "elk03.archyslife.lan" ] # your nodes that will form the cluster  
 discovery.zen.minimum_master_nodes: 2 # (master-eligible nodes / 2) + 1, rounded down - 2 for this 3-node cluster  
 gateway.recover_after_nodes: 2 # wait for this many nodes to join before starting recovery after a full cluster restart  

 [archy@elk03 ~]$ sudo vim /etc/elasticsearch/elasticsearch.yml  
 cluster.name: elk-homelab # your cluster name  
 node.name: elk03.archyslife.lan # your node name   
 node.master: true # node is master-eligible  
 node.data: true # node is a data node  
 node.attr.rack: r1 # attributes come in handy when distributing nodes over multiple data centers / racks  
 path.data: /var/lib/elasticsearch # where your shards will be stored  
 path.logs: /var/log/elasticsearch # where your logs will be stored  
 network.host: _eth0:ipv4_ # special value that resolves to the IPv4 address of the interface eth0  
 http.port: 9200 # elasticsearch http-api listening port  
 transport.tcp.port: 9300 # inter-cluster communication port  
 discovery.zen.ping.unicast.hosts: ["elk01.archyslife.lan", "elk02.archyslife.lan", "elk03.archyslife.lan" ] # your nodes that will form the cluster  
 discovery.zen.minimum_master_nodes: 2 # (master-eligible nodes / 2) + 1, rounded down - 2 for this 3-node cluster  
 gateway.recover_after_nodes: 2 # wait for this many nodes to join before starting recovery after a full cluster restart  

Start up and enable elasticsearch:
 [archy@elk01 ~]$ sudo systemctl enable elasticsearch.service  
 [archy@elk01 ~]$ sudo systemctl start elasticsearch.service  
 [archy@elk02 ~]$ sudo systemctl enable elasticsearch.service  
 [archy@elk02 ~]$ sudo systemctl start elasticsearch.service  
 [archy@elk03 ~]$ sudo systemctl enable elasticsearch.service  
 [archy@elk03 ~]$ sudo systemctl start elasticsearch.service  

If everything started as expected, the elasticsearch cluster should now have formed. Check by running curl against the elasticsearch API on any of the nodes:
 [archy@elk01 ~]$ curl -XGET 'http://elk03.archyslife.lan:9200/_cat/nodes?v'  
 ip     heap.percent ram.percent cpu load_1m load_5m load_15m node.role master name  
 172.31.0.12      31     95  1  0.01  0.12   0.14 mdi    -   elk01.archyslife.lan  
 172.31.0.13      47     95  2  0.05  0.12   0.16 mdi    *   elk02.archyslife.lan  
 172.31.0.14      30     95  3  0.10  0.11   0.16 mdi    -   elk03.archyslife.lan  
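
Another quick sanity check is the cluster health endpoint; with all shards allocated, the status should report green:

 [archy@elk01 ~]$ curl -XGET 'http://elk01.archyslife.lan:9200/_cluster/health?pretty'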

Next step, logstash.

 [archy@elk01 ~]$ sudo vim /etc/logstash/logstash.yml  
 path.data: /var/lib/logstash  
 config.reload.automatic: true  
 path.logs: /var/log/logstash  
 [archy@elk02 ~]$ sudo vim /etc/logstash/logstash.yml  
 path.data: /var/lib/logstash  
 config.reload.automatic: true  
 path.logs: /var/log/logstash  
 [archy@elk03 ~]$ sudo vim /etc/logstash/logstash.yml  
 path.data: /var/lib/logstash  
 config.reload.automatic: true  
 path.logs: /var/log/logstash  
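
Pipeline definitions go into /etc/logstash/conf.d/ (the default path.config for the RPM packages). If you want to verify a configuration before starting the service, logstash can test it and exit; this assumes the default RPM install path:

 [archy@elk01 ~]$ sudo /usr/share/logstash/bin/logstash --path.settings /etc/logstash --config.test_and_exit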

Start up and enable logstash:
 [archy@elk01 ~]$ sudo systemctl enable logstash.service  
 [archy@elk01 ~]$ sudo systemctl start logstash.service  
 [archy@elk02 ~]$ sudo systemctl enable logstash.service  
 [archy@elk02 ~]$ sudo systemctl start logstash.service  
 [archy@elk03 ~]$ sudo systemctl enable logstash.service  
 [archy@elk03 ~]$ sudo systemctl start logstash.service  

Last step, kibana:
 [archy@elk01 ~]$ sudo vim /etc/kibana/kibana.yml  
 server.port: 5601  
 server.host: "0.0.0.0"  
 elasticsearch.hosts: ["http://elk01.archyslife.lan:9200"]  
 [archy@elk02 ~]$ sudo vim /etc/kibana/kibana.yml  
 server.port: 5601  
 server.host: "0.0.0.0"  
 elasticsearch.hosts: ["http://elk02.archyslife.lan:9200"]  
 [archy@elk03 ~]$ sudo vim /etc/kibana/kibana.yml  
 server.port: 5601  
 server.host: "0.0.0.0"  
 elasticsearch.hosts: ["http://elk03.archyslife.lan:9200"]  

Start up and enable kibana:
 [archy@elk01 ~]$ sudo systemctl enable kibana.service  
 [archy@elk01 ~]$ sudo systemctl start kibana.service  
 [archy@elk02 ~]$ sudo systemctl enable kibana.service  
 [archy@elk02 ~]$ sudo systemctl start kibana.service  
 [archy@elk03 ~]$ sudo systemctl enable kibana.service  
 [archy@elk03 ~]$ sudo systemctl start kibana.service  
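
Kibana can take a few seconds to come up; a quick curl against its status API should return a JSON document describing the overall state:

 [archy@elk01 ~]$ curl -XGET 'http://elk01.archyslife.lan:5601/api/status'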

You can now log in to Kibana on any node on port 5601 using your browser.
If you have a license, you can also enable X-Pack, which adds central monitoring and pipeline management for Logstash as well as HTTPS and user authentication. This makes the setup much more secure, since in the current state everyone who can reach the interface has full access to the cluster.

The simplest fix would be to put nginx in front as a reverse proxy and let nginx handle the authentication. However, this is not part of this tutorial.
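
Just to illustrate the idea, a minimal sketch of such an nginx vhost with basic auth could look like the following; the certificate paths, htpasswd file and server_name are placeholders:

 server {
     listen 443 ssl;
     server_name kibana.archyslife.lan;
     ssl_certificate /etc/nginx/ssl/kibana.crt;
     ssl_certificate_key /etc/nginx/ssl/kibana.key;
     auth_basic "Kibana";
     auth_basic_user_file /etc/nginx/htpasswd.kibana;
     location / {
         proxy_pass http://127.0.0.1:5601;
         proxy_set_header Host $host;
     }
 }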

Feel free to comment and / or suggest a topic.
