
Posts

Showing posts from August, 2019

FreeIPA - Recreate corrupted ds.keytab

If for some reason your ds.keytab has been corrupted, for example through time drift in the hardware clocks of your multi-master infrastructure, you'll find yourself with a non-working or very slow krb5kdc. This can be fixed fairly quickly, but you'll have to check your replicas for errors and maybe even re-initialize the whole infrastructure from a known-good replica. Let's get to fixing the corrupted ds.keytab first. All of these steps are done with your authentication services offline, so it's probably safest to do them as root. Start by stopping the IPA services on the host:

[archy@ipa02 ~]$ sudo su -
[root@ipa02 ~]# ipactl stop

Next, move the broken keytab out of the way:

[root@ipa02 ~]# mv /etc/dirsrv/ds.keytab /etc/dirsrv/ds.keytab-$(date +%Y-%m-%d)

In order to recreate the keytab, a few services need to be running:

[root@ipa02 ~]# start-dirsrv
[root@ipa02 ~]# systemctl start krb5kdc.service

Next, log in to the krb5kdc and export …
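As a rough sketch of what such an export typically looks like (the principal name ldap/ipa02.example.com below is an assumption for illustration, adjust it to your own hostname and realm), you would export the directory server principal into a fresh keytab with kadmin.local and hand it back to the dirsrv user:

[root@ipa02 ~]# kadmin.local
kadmin.local:  ktadd -k /etc/dirsrv/ds.keytab ldap/ipa02.example.com
kadmin.local:  quit
[root@ipa02 ~]# chown dirsrv:dirsrv /etc/dirsrv/ds.keytab
[root@ipa02 ~]# chmod 600 /etc/dirsrv/ds.keytab

Once the fresh keytab is in place and owned by the directory server user, bring the stack back up with ipactl restart and check your replicas.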

Push logs and data into elasticsearch - Part 2 Mikrotik Logs

This is only about the setup of two different kinds of logging, one being done with Filebeat and the other being done by sending logs to a dedicated port opened in Logstash using the TCP / UDP inputs.

Prerequisites: You'll need a working Elasticsearch cluster with Logstash and Kibana.

Start by getting the log data you want to structure parsed correctly. Mikrotik logs are a bit difficult since the data shown in the interface is already enriched with time and date. That means a message that remote logging sends to Logstash will look like this:

firewall,info forward: in:lan out:wan, src-mac aa:bb:cc:dd:ee:ff, proto UDP, 172.31.100.154:57061->109.164.113.231:443, len 76

You can check it in the grok debugger and create your own filters and mapping. The following is my example, which might not fit your needs. Here are some custom patterns I wrote for my pattern matching:

MIKROTIK_DATE \b(?:jan(?:uary)?|feb(?:ruary)?|mar(?:ch)?|apr(?:il)?|may|jun(?:e)?|jul(? …
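To give an idea of how the pieces fit together, here is a minimal Logstash pipeline sketch; the port, patterns directory, grok expression, Elasticsearch host and index name below are assumptions for illustration, not the full filter from the post:

input {
  udp {
    port => 5514                                  # assumed port the Mikrotik remote logging sends to
    type => "mikrotik"
  }
}

filter {
  if [type] == "mikrotik" {
    grok {
      patterns_dir => ["/etc/logstash/patterns"]  # assumed directory holding the custom MIKROTIK_* patterns
      match => { "message" => "%{DATA:topics}: in:%{DATA:in_interface} out:%{DATA:out_interface}, src-mac %{MAC:src_mac}, proto %{WORD:protocol}, %{IP:src_ip}:%{INT:src_port}->%{IP:dst_ip}:%{INT:dst_port}, len %{INT:length}" }
    }
  }
}

output {
  elasticsearch {
    hosts => ["http://localhost:9200"]            # assumed cluster endpoint
    index => "mikrotik-%{+YYYY.MM.dd}"
  }
}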

Push logs and data into elasticsearch - Part 1 NGINX

This is only about the setup of two different kinds of logging, one being done with Filebeat and the other being done by sending logs to a dedicated port opened in Logstash using the TCP / UDP inputs.

Prerequisites: You'll need a working Elasticsearch cluster with Logstash and Kibana, and an installation of Filebeat on the host(s) where you get your nginx logs from.

Start by getting the log data you want to structure parsed correctly. The nginx logs are pretty straightforward, so after checking them out in the grok debugger, I'll have the following structure mapped:

%{IP:ClientIP} - %{DATA:username} \[%{NGINXTIMESTAMP:timestamp}%{GREEDYDATA}\] \"%{WORD:method} %{DATA:request_uri} %{DATA:http-version}\" %{RETURNCODE:http_return_code} %{GREEDYDATA} \"%{DATA:server_name}\" \"%{GREEDYDATA}\"

Also, I've written some custom patterns:

NGINXTIMESTAMP (?:(?:0[1-9])|(?:[12][0-9])|(?:3[01])|[1-9])\/\b(?:[Jj]an(?:uary|uar)?|[Ff]eb(?:ruary|rua …
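As a sketch of how that pattern could be wired into a pipeline, assuming Filebeat ships the access log to a beats input on port 5044, that the custom NGINXTIMESTAMP / RETURNCODE patterns live in /etc/logstash/patterns, and that the Elasticsearch host and index name are placeholders:

input {
  beats {
    port => 5044                                  # assumed port Filebeat ships the nginx access log to
  }
}

filter {
  grok {
    patterns_dir => ["/etc/logstash/patterns"]    # assumed location of the custom NGINXTIMESTAMP / RETURNCODE patterns
    match => { "message" => '%{IP:ClientIP} - %{DATA:username} \[%{NGINXTIMESTAMP:timestamp}%{GREEDYDATA}\] \"%{WORD:method} %{DATA:request_uri} %{DATA:http-version}\" %{RETURNCODE:http_return_code} %{GREEDYDATA} \"%{DATA:server_name}\" \"%{GREEDYDATA}\"' }
  }
}

output {
  elasticsearch {
    hosts => ["http://localhost:9200"]            # assumed cluster endpoint
    index => "nginx-%{+YYYY.MM.dd}"
  }
}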