This post is only about setting up the different logging methods: one done with Filebeat, the other done by sending logs to a dedicated port opened in Logstash using the TCP / UDP inputs.
Prerequisites:
You'll need a working Elasticsearch cluster with Logstash and Kibana, and an installation of Filebeat on the host(s) you want to collect your nginx logs from.
Start by making sure the log data you want to structure is parsed correctly.
The nginx logs are pretty straightforward, so after checking them out in the grok debugger, I ended up with the following structure mapped:
%{IP:ClientIP} - %{DATA:username} \[%{NGINXTIMESTAMP:timestamp}%{GREEDYDATA}\] \"%{WORD:method} %{DATA:request_uri} %{DATA:http-version}\" %{RETURNCODE:http_return_code} %{GREEDYDATA} \"%{DATA:server_name}\" \"%{GREEDYDATA}\"
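For reference, a made-up access-log line (purely hypothetical values) that this pattern would break apart cleanly looks like this:
192.168.10.20 - archy [21/May/2019:14:32:01 +0200] "GET /index.html HTTP/1.1" 200 612 "intranet.archyslife.lan" "Mozilla/5.0"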
Also, I've written some custom patterns:
NGINXTIMESTAMP (?:(?:0[1-9])|(?:[12][0-9])|(?:3[01])|[1-9])\/\b(?:[Jj]an(?:uary|uar)?|[Ff]eb(?:ruary|ruar)?|[Mm](?:a|ä)?r(?:ch|z)?|[Aa]pr(?:il)?|[Mm]a(?:y|i)?|[Jj]un(?:e|i)?|[Jj]ul(?:y)?|[Aa]ug(?:ust)?|[Ss]ep(?:tember)?|[Oo](?:c|k)?t(?:ober)?|[Nn]ov(?:ember)?|[Dd]e(?:c|z)(?:ember)?)\b\/(?>\d\d){1,2}\:(?:2[0123]|[01]?[0-9]):(?:[0-5][0-9]):(?:(?:[0-5]?[0-9]|60)(?:[:.,][0-9]+)?)
RETURNCODE [1-5][0-9][0-9]
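These two patterns have to live in a file inside the directory that patterns_dir points to in the pipeline below. A minimal sketch, assuming the directory /etc/logstash/custom-patterns/ and a file simply named nginx (one definition per line: pattern name, a space, then the regex):
[archy@elk01 ~]$ sudo mkdir -p /etc/logstash/custom-patterns
[archy@elk01 ~]$ sudo vim /etc/logstash/custom-patterns/nginx
NGINXTIMESTAMP <the NGINXTIMESTAMP regex from above>
RETURNCODE [1-5][0-9][0-9]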
With those patterns in place, we can build a pipeline that looks like this:
[archy@elk01 ~]$ cat /etc/logstash/conf.d/nginx.conf
# Input will be the filebeat installed on the host
input {
  beats {
    port => 5044
  }
}
# the tag nginx-log will be sent by filebeat
filter {
  if "nginx-log" in [tags] {
    grok {
      id => "nginx-pipeline"
      patterns_dir => "/etc/logstash/custom-patterns/"
      tag_on_failure => "_grokparsefailure_nginx"
      match => [
        "message", "%{IP:ClientIP} - %{DATA:username} \[%{NGINXTIMESTAMP:timestamp}%{GREEDYDATA}\] \"%{WORD:method} %{DATA:request_uri} %{DATA:http-version}\" %{RETURNCODE:http_return_code} %{GREEDYDATA} \"%{DATA:server_name}\" \"%{GREEDYDATA}\""
      ]
    }
    if "_grokparsefailure_nginx" not in [tags] {
      mutate {
        remove_field => ["message"]
      }
    }
  }
}
# output to all elasticsearch hosts
output {
  if "nginx-log" in [tags] {
    elasticsearch {
      id => "nginx-output"
      hosts => ["http://elk01.archyslife.lan:11920", "http://elk02.archyslife.lan:11920", "http://elk03.archyslife.lan:11920", "http://elk04.archyslife.lan:11920", "http://elk05.archyslife.lan:11920"]
      index => "nginx-log-%{+YYYY.MM.ww}"
    }
  }
}
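Before wiring the pipeline into pipelines.yml, it doesn't hurt to let Logstash syntax-check the config first. A quick sketch, assuming a default RPM install of Logstash:
[archy@elk01 ~]$ sudo -u logstash /usr/share/logstash/bin/logstash --path.settings /etc/logstash -f /etc/logstash/conf.d/nginx.conf --config.test_and_exit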
Next up, add the following lines to your pipelines.yml
- pipeline.id: nginx
  path.config: "/etc/logstash/conf.d/nginx.conf"
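Unless you run Logstash with automatic config reloading, restart the service so it picks up the new pipeline:
[archy@elk01 ~]$ sudo systemctl restart logstash.service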
Logstash is configured; next up, create the index template for the data. The index template essentially tells Elasticsearch which data will be stored in what way.
Note: You'll need to run this in the Dev Tools console of your Kibana instance or by using curl with '-XPUT' against the Elasticsearch API.
PUT _template/nginx-log
{
  "index_patterns" : [
    "nginx-log-*"
  ],
  "settings" : {
    "index" : {
      "codec" : "best_compression",
      "refresh_interval" : "5s",
      "number_of_shards" : "1",
      "number_of_replicas" : "1"
    }
  },
  "mappings" : {
    "doc" : {
      "numeric_detection" : true,
      "dynamic_templates" : [
        {
          "string_fields" : {
            "mapping" : {
              "type" : "keyword"
            },
            "match_mapping_type" : "string",
            "match" : "*"
          }
        }
      ],
      "properties" : {
        "@version" : {
          "type" : "keyword"
        },
        "@timestamp" : {
          "type" : "date"
        },
        "date" : {
          "type" : "keyword"
        },
        "time" : {
          "type" : "keyword"
        },
        "ClientIP" : {
          "type" : "keyword"
        },
        "username" : {
          "type" : "keyword"
        },
        "method" : {
          "type" : "keyword"
        },
        "request_uri" : {
          "type" : "text"
        },
        "http-version" : {
          "type" : "text"
        },
        "http_return_code" : {
          "type" : "keyword"
        },
        "server_name" : {
          "type" : "keyword"
        }
      }
    }
  },
  "aliases" : { }
}
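If you prefer curl over the Dev Tools console, something along these lines should work against any of the Elasticsearch hosts; the file name nginx-template.json is just an assumption for wherever you saved the JSON body from above:
[archy@elk01 ~]$ curl -XPUT -H 'Content-Type: application/json' 'http://elk01.archyslife.lan:11920/_template/nginx-log' -d @nginx-template.json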
Elasticsearch is set up, and we can continue by installing and configuring Filebeat on the server running nginx.
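In case the RPM is not yet on the host, it can usually be pulled from Elastic's artifact repository first; the exact URL and file name below are an assumption based on the 6.7.2 x86_64 build installed in the next step:
[archy@nginx01 ~]$ wget https://artifacts.elastic.co/downloads/beats/filebeat/filebeat-6.7.2-x86_64.rpm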
[archy@nginx01 ~]$ sudo yum -y localinstall filebeat-6.7.2.rpm
Here's a short example of my filebeat config:
[archy@nginx01 ~]$ sudo vim /etc/filebeat/filebeat.yml
filebeat.inputs:
- type: log
  enabled: true
  paths:
    - /var/log/nginx/http_intranet.archyslife.lan-access.log
    - /var/log/nginx/http_intranet.archyslife.lan-error.log
    - /var/log/nginx/https_intranet.archyslife.lan-access.log
    - /var/log/nginx/https_intranet.archyslife.lan-error.log
    - /var/log/nginx/http_elastic-api.archyslife.lan-access.log
    - /var/log/nginx/http_elastic-api.archyslife.lan-error.log
    - /var/log/nginx/http_kibana.archyslife.lan-access.log
    - /var/log/nginx/http_kibana.archyslife.lan-error.log
filebeat.config.modules:
  path: ${path.config}/modules.d/*.yml
  reload.enabled: false
setup.template.settings:
  index.number_of_shards: 3
tags: ["nginx-log"]
output.logstash:
  hosts: ["elk01.archyslife.lan:5044", "elk02.archyslife.lan:5044"]
processors:
The last step is to enable and start filebeat:
[archy@nginx01 ~]$ sudo systemctl enable filebeat.service
[archy@nginx01 ~]$ sudo systemctl start filebeat.service
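Filebeat ships with a couple of test subcommands that come in handy to verify the config and the connection to the Logstash hosts:
[archy@nginx01 ~]$ sudo filebeat test config
[archy@nginx01 ~]$ sudo filebeat test output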
Filebeat will now push data into your cluster. To make the data visible, however, Kibana needs an index pattern. You can create one from the web interface by doing the following:
--> Click on 'Management'
--> Click on 'Index Patterns' in the Kibana section
--> Click on the 'Create Index Pattern' button in the top left corner and type in the index name. In my case, it's 'nginx-log*'
--> Click on 'Next Step', select the Time Filter field name and click on 'Create Index pattern'
Your data is now indexed in elasticsearch and visible in kibana.
If any events are tagged with _grokparsefailure_nginx, you'll need to check your message pattern against the Grok Debugger, which can be found in Dev Tools.
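A quick way to check for such events is a search in the Dev Tools console; this is just a sketch using the tag and index names from this setup:
GET nginx-log-*/_search
{
  "query": {
    "term": {
      "tags": "_grokparsefailure_nginx"
    }
  }
}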
Feel free to comment and / or suggest a topic.