
Build a scalable DNS Infrastructure with Knot-DNS and FreeIPA

I recently built a proof of concept (PoC) at work for a scalable DNS infrastructure. Please note that while the schematic includes the Windows network, I will not go into detail on that part since it is out of scope here.


Linux Datacenter (internal zones only):
FreeIPA / Red Hat IdM is a full identity-management solution by Red Hat which integrates the following components:
  • 389-DS
  • krb5kdc
  • BIND nameserver
  • Dogtag CA
  • Certmonger
This is a nice feature pack, but I will only focus on the DNS part for now. The BIND nameserver will handle the internal domains and forward everything else in this setup.

Authoritative DNS (external zones only):
Knot-DNS is a high-performance authoritative DNS server. That means it only serves the zones it knows about and is authoritative for; it will not forward queries to upstream hosts.
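To see the difference in practice, you can query an authoritative server directly and look for the `aa` (authoritative answer) flag in the response header. A quick sketch, assuming `dig` is installed and the addresses match the setup below:

```shell
# Query the Knot server directly for a zone it is authoritative for;
# the answer section will carry the 'aa' flag in the header flags.
dig @172.31.0.1 archyslife.lan SOA +norecursive

# Asking the same server about a zone it does not know will be refused
# rather than forwarded upstream.
dig @172.31.0.1 example.com A +norecursive
```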

Now, let's jump into what needs to be done to make this setup a reality. I'll start by setting up the Knot servers.

Start by setting up the master. First, install the knot server and utilities:
 [archy@ns01 ~]$ yum -y install knot knot-utils  
The default config is pretty well documented. I'll configure a very basic setup, so there's not much to do. Edit the config with your favorite editor; in my case, the config below is what I found working for me.
 [archy@ns01 ~]$ vim /etc/knot/knot.conf  
 # the server configuration with the user the service is running as and the address the server listens to.  
 server:  
   rundir: "/run/knot"  
   user: knot:knot  
   listen: [ 0.0.0.0@53 ]  
 # where to log, syslog is usually a good start  
 log:  
  - target: syslog  
    any: info  
 # this is the section where to define your slaves / remote masters  
 remote:  
  - id: slave  
    address: 172.31.0.2@53  
 # ACLs define which host is allowed to do what  
 acl:  
  - id: acl_slave  
    address: 172.31.0.2  
    action: transfer  
  - id: access_rule  
    address: [172.31.0.0/24, 172.31.100.0/24]  
    action: transfer  
  - id: deny_rule  
    address: [0.0.0.0/0]  
    action: transfer  
    deny: on  
 # Will leave essentially default for now  
 template:  
  - id: default  
    storage: "/var/lib/knot/zones"  
    file: "%s.zone"  
 # the first zone definition. All these parameters should be self explanatory.  
 zone:  
  # Master zone  
  - domain: archyslife.lan  
    storage: /var/lib/knot/zones/  
    file: archyslife.lan.zone  
    acl: [acl_slave, access_rule, deny_rule]  
    notify: slave  
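Before starting the service, it is worth validating the configuration syntax. A small sketch using `knotc`, which ships with the knot package:

```shell
# Validate /etc/knot/knot.conf without touching the running daemon
knotc conf-check
```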
With the config adjusted, create the directory for the zones and define your master zone. For more info on this, check out my bind master-slave article.
 [archy@ns01 ~]$ mkdir /var/lib/knot/zones  
 [archy@ns01 ~]$ vim /var/lib/knot/zones/archyslife.lan.zone  
 @      IN     SOA ns01.archyslife.lan. ns02.archyslife.lan. (  
               2018062615  ; Serial  
               3600        ; Refresh (1 Hour)  
               3600        ; Retry (1 Hour)  
               604800      ; Expire (1 Week)  
               3600        ; Minimum (1 Hour)  
 )  
 @             NS    ns01.archyslife.lan.  
 @             NS    ns02.archyslife.lan.  
 ; A Records  
 @             A     172.31.0.1  
 @             A     172.31.0.2  
 ns01          A     172.31.0.1  
 ns02          A     172.31.0.2  
 ...  
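The serial above follows the common date-based YYYYMMDDNN scheme. If you edit zones by hand, a tiny helper like the following avoids forgetting to bump it. This is purely illustrative; the variable names are my own, and in practice you would read the current serial out of the zone file:

```shell
# Illustrative helper: print the next date-based zone serial (YYYYMMDDNN).
current_serial="2018062615"   # in practice, read this from the zone file
today="$(date +%Y%m%d)"
if [ "${current_serial%??}" = "$today" ]; then
    # same day as the current serial: bump the two-digit revision suffix
    rev="${current_serial#????????}"
    rev=$((${rev#0} + 1))     # strip a leading zero to avoid octal parsing
    next_serial="${today}$(printf '%02d' "$rev")"
else
    # new day: start over at revision 01
    next_serial="${today}01"
fi
echo "$next_serial"
```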
Make the knot service persistent across reboots and start it:
 [archy@ns01 ~]$ sudo systemctl enable knot.service  
 [archy@ns01 ~]$ sudo systemctl start knot.service  
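Once the service is up, a quick query against localhost confirms that the zone is loaded and served. A sketch using the knot-utils tools (`kdig` is Knot's dig equivalent):

```shell
# Show the zone's load state and serial as the daemon sees it
knotc zone-status archyslife.lan

# Ask the local server for the SOA record directly
kdig @127.0.0.1 archyslife.lan SOA
```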
With the master set up, continue with the slave, again starting by installing the necessary components:
 [archy@ns02 ~]$ yum -y install knot knot-utils  
Next, edit the config to your liking. I ended up with this:
 [archy@ns02 ~]$ vim /etc/knot/knot.conf  
 # the server configuration with the user the service is running as and the address the server listens to.  
 server:  
   rundir: "/run/knot"  
   user: knot:knot  
   listen: [ 0.0.0.0@53 ]  
 # where to log, syslog is usually a good start  
 log:  
  - target: syslog  
    any: info  
 # this is the section where to define your slaves / remote masters  
 remote:  
  - id: master  
    address: 172.31.0.1@53  
 # ACLs define which host is allowed to do what  
 acl:  
  - id: acl_master  
    address: 172.31.0.1  
    action: notify  
  - id: access_rule  
    address: [172.31.0.0/24, 172.31.100.0/24]  
    action: transfer  
  - id: deny_rule  
    address: [0.0.0.0/0]  
    action: transfer  
    deny: on  
 # Will leave essentially default for now  
 template:  
  - id: default  
    storage: "/var/lib/knot/zones"  
    file: "%s.zone"  
 # the first zone definition. All these parameters should be self-explanatory.  
 zone:  
  # Slave zone  
  - domain: archyslife.lan  
    storage: /var/lib/knot/zones  
    file: archyslife.lan.zone  
    acl: [acl_master, access_rule, deny_rule]  
    master: master  
With the slave's config done, create the directory for the zones and copy the zone file over from your master. This is just to have it backed up on a separate host; the slave will receive the zone via transfer anyway.
 [archy@ns02 ~]$ mkdir /var/lib/knot/zones  
 [archy@ns02 ~]$ vim /var/lib/knot/zones/archyslife.lan.zone  
 @      IN     SOA ns01.archyslife.lan. ns02.archyslife.lan. (  
               2018062615  ; Serial  
               3600        ; Refresh (1 Hour)  
               3600        ; Retry (1 Hour)  
               604800      ; Expire (1 Week)  
               3600        ; Minimum (1 Hour)  
 )  
 @             NS    ns01.archyslife.lan.  
 @             NS    ns02.archyslife.lan.  
 ; A Records  
 @             A     172.31.0.1  
 @             A     172.31.0.2  
 ns01          A     172.31.0.1  
 ns02          A     172.31.0.2  
 ...  
Enable and start the service to check that everything comes up as expected and that it starts again at reboot:
 [archy@ns02 ~]$ sudo systemctl enable knot.service  
 [archy@ns02 ~]$ sudo systemctl start knot.service  
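To confirm the slave actually received the zone, compare the serials on both hosts or request a transfer yourself. A sketch; note that the AXFR will only succeed from an address permitted by the master's transfer ACL:

```shell
# On the slave: show the zone's serial as Knot sees it
knotc zone-status archyslife.lan

# From an ACL-permitted host: pull the full zone from the master
kdig @172.31.0.1 archyslife.lan AXFR
```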
With the authoritative DNS section up and running, it's time to make the FreeIPA hosts forward queries for the zone to the Knot servers:
 [archy@ipa01 ~]$ kinit  
 Password for archy@ARCHYSLIFE.LAN:  
 [archy@ipa01 ~]$ ipa dnsforwardzone-add archyslife.lan --forwarder=172.31.0.1 --forwarder=172.31.0.2 --forward-policy=only  
Check back on the changes you've made and verify they are correct:
 [archy@ipa01 ~]$ ipa dnsforwardzone-find  
  Zone name: archyslife.lan.  
  Active zone: TRUE  
  Zone forwarders: 172.31.0.1, 172.31.0.2  
  Forward policy: only  
 ----------------------------  
 Number of entries returned 1  
 ----------------------------  
Check your forwarding by running a query to your IPA Server for your Knot-DNS hosted zone:
Note: 172.31.100.250 is the address of my IPA Server
 [archy@client ~]$ nslookup ns01.archyslife.lan 172.31.100.250  
 Server:     172.31.100.250  
 Address:    172.31.100.250#53  

 Non-authoritative answer:  
 Name:     ns01.archyslife.lan  
 Address:  172.31.0.1  
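To make sure both Knot servers return consistent data, you can loop over them and compare the answers. A sketch, assuming `dig` is available and the addresses match the zone file above:

```shell
# Query each authoritative server directly and compare the results
for ns in 172.31.0.1 172.31.0.2; do
    echo "== ${ns} =="
    dig @"${ns}" ns01.archyslife.lan A +short
done
```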
Everything looks good so far. If you run into trouble configuring Knot or want to extend the config, check the Knot-DNS manual.

Feel free to comment and / or suggest a topic.
