
Getting started with Ansible

First of all, what is Ansible?
Ansible is an easy-to-learn yet very powerful configuration management tool. Unlike Puppet or Salt, Ansible does not rely on an agent deployed on the managed hosts; instead, it uses SSH connections to run tasks on the destination hosts.

What I'm going to describe here is how to install Ansible on a CentOS 7 host, create the directory structure, write a role and reference it from a playbook.

Alright, let's install Ansible. I'll use a CentOS 7 host for this:
 [archy@ansible ~]$ sudo yum -y install ansible  
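To confirm the installation worked, you can quickly check the installed version (the exact version shown will depend on your repositories):
 [archy@ansible ~]$ ansible --version  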
So far so good — yum installed Ansible with all its dependencies. Next, let's create the directory structure you'll be working in most of the time, as well as the first host inventory, which will live in /srv/ansible/playbooks/inventories/production/hosts.
 [archy@ansible ~]$ sudo mkdir -p /srv/ansible/playbooks/{roles,inventories}  
 [archy@ansible ~]$ sudo mkdir /srv/ansible/playbooks/inventories/production  
 [archy@ansible ~]$ sudo vim /srv/ansible/playbooks/inventories/production/hosts  
 [apphosts]  
 app01.archyslife.lan  
 app02.archyslife.lan  
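Before going any further, it's worth verifying that the control host can actually reach the inventory hosts over SSH. A quick ad-hoc ping will confirm that (this assumes SSH key authentication to app01 and app02 is already in place):
 [archy@ansible ~]$ ansible all -i /srv/ansible/playbooks/inventories/production/hosts -m ping  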
With the directories and files created, let's give them the appropriate owner, group and permissions. The setgid bit (the leading 2 in mode 2775) makes sure that newly created files inherit the ansible-users group.
 [archy@ansible ~]$ sudo chown -R nobody:ansible-users /srv/ansible/playbooks  
 [archy@ansible ~]$ sudo chmod 2775 /srv/ansible/playbooks  
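In case the ansible-users group does not exist on your system yet, create it and add your user first (the group and user names here are just the ones used in this example):
 [archy@ansible ~]$ sudo groupadd ansible-users  
 [archy@ansible ~]$ sudo usermod -aG ansible-users archy  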
Now with the inventory created, let's create a role named 'base'. Creating a role is very simple with ansible-galaxy, which will create all the directories and files for you.
 [archy@ansible ~]$ cd /srv/ansible/playbooks  
 [archy@ansible /srv/ansible/playbooks]$ ansible-galaxy init roles/base  
 [archy@ansible /srv/ansible/playbooks]$ tree roles/base  
 roles/base  
 ├── defaults  
 │   └── main.yml  
 ├── files  
 ├── handlers  
 │   └── main.yml  
 ├── meta  
 │   └── main.yml  
 ├── README.md  
 ├── tasks  
 │   └── main.yml  
 ├── templates  
 ├── tests  
 │   ├── inventory  
 │   └── test.yml  
 └── vars  
     └── main.yml  
 8 directories, 8 files  
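Optionally, you can drop an ansible.cfg into /srv/ansible/playbooks so you don't have to pass the inventory path on every run. A minimal sketch, using the paths from this guide:
 [archy@ansible /srv/ansible/playbooks]$ vim ansible.cfg  
 [defaults]  
 inventory = inventories/production/hosts  
 roles_path = roles  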
Since the directory structure is created, let's go ahead and edit tasks/main.yml to contain the tasks we want to run.
 [archy@ansible /srv/ansible/playbooks]$ vim roles/base/tasks/main.yml  
 ---  
 - name: install base packages  
   yum:  
     name:  
       - deltarpm  
       - lsof  
       - mailx  
       - tcpdump  
       - htop  
       - bmon  
       - iotop  
       - net-tools  
       - bind-utils  
       - nmap-ncat  
       - rsync  
       - tmux  
       - vim  
       - bash-completion  
       - policycoreutils-python  
       - setroubleshoot-server  
     state: present  
 - name: enabling selinux  
   selinux:  
     state: enforcing  
     policy: targeted  
 - name: disabling firewalld.service  
   service:  
     name: firewalld  
     state: stopped  
     enabled: no  
 - name: copying sshd-config  
   copy:  
     src: /srv/ansible/playbooks/roles/base/files/sshd_config  
     dest: /etc/ssh/sshd_config  
     owner: root  
     group: root  
     mode: '0600'  
   notify: restart sshd.service  
 ...  

Notice that notify line? It tells Ansible to look for a handler named 'restart sshd.service' in the role's handlers directory. To make that work, let's create the specified handler.
 [archy@ansible /srv/ansible/playbooks]$ vim roles/base/handlers/main.yml  
 ---  
 - name: restart sshd.service  
   systemd:  
     name: sshd  
     state: restarted  
     enabled: yes  
 ...  
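Keep in mind that handlers only run at the end of the play, and only if at least one notifying task reported a change. As a side note, you could make the role more flexible by moving the package list into defaults/main.yml and referencing it as a variable. A minimal sketch — the variable name base_packages is just an example, not part of the setup above:
 [archy@ansible /srv/ansible/playbooks]$ vim roles/base/defaults/main.yml  
 ---  
 base_packages:  
   - lsof  
   - tcpdump  
   - vim  
 ...  
The yum task would then use name: "{{ base_packages }}" instead of the hard-coded list.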

With the tasks and the handler in place, let's write a playbook that includes the base role.
 [archy@ansible /srv/ansible/playbooks]$ vim deploy_base.yml  
 ---  
 - hosts: all  
   become: yes  
   gather_facts: True  
   roles:  
     - base  
 ...  
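Before running the playbook against real hosts, it's a good habit to validate it first. A syntax check and a dry run in check mode will catch most mistakes (check mode only predicts changes, it does not apply them):
 [archy@ansible /srv/ansible/playbooks]$ ansible-playbook --syntax-check deploy_base.yml  
 [archy@ansible /srv/ansible/playbooks]$ ansible-playbook --inventory inventories/production/hosts --become --ask-become-pass --check deploy_base.yml  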
Alright, let's run the playbook for real and check whether everything worked as expected.
 [archy@ansible /srv/ansible/playbooks]$ ansible-playbook --inventory inventories/production/hosts --become --ask-become-pass deploy_base.yml  
 PLAY [all] ************************************************************************************************************
  
 TASK [Gathering Facts] ************************************************************************************************  
 ok: [app01.archyslife.lan]  
 ok: [app02.archyslife.lan]
  
 TASK [base : install base packages] ***********************************************************************************  
 changed: [app01.archyslife.lan]  
 changed: [app02.archyslife.lan]
  
 TASK [base : enabling selinux] ****************************************************************************************  
 ok: [app01.archyslife.lan]  
 ok: [app02.archyslife.lan]
  
 TASK [base : disabling firewalld.service] *****************************************************************************  
 changed: [app01.archyslife.lan]  
 changed: [app02.archyslife.lan]
  
 TASK [base : copying sshd-config] *************************************************************************************  
 changed: [app01.archyslife.lan]  
 changed: [app02.archyslife.lan]
  
 RUNNING HANDLER [base : restart sshd.service] ************************************************************************  
 changed: [app01.archyslife.lan]  
 changed: [app02.archyslife.lan]
  
 PLAY RECAP ************************************************************************************************************  
 app01.archyslife.lan  : ok=6  changed=4  unreachable=0  failed=0  skipped=0  rescued=0  ignored=0    
 app02.archyslife.lan  : ok=6  changed=4  unreachable=0  failed=0  skipped=0  rescued=0  ignored=0    
Alright, it seems like everything ran successfully. Since the modules used here are idempotent, a second run of the same playbook should report changed=0. Here's a quick recap of what we've done:
- Installed ansible on the host
- Created the directory structure
- Created an inventory to work with
- Assigned proper permissions
- Created a base role and a playbook to run it all
You can add more roles and reference them just like we did here, which keeps the setup easily expandable; for example:
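A playbook pulling in additional roles would look like this (the webserver and monitoring roles are just placeholders for roles you'd create the same way as base):
 ---  
 - hosts: apphosts  
   become: yes  
   roles:  
     - base  
     - webserver  
     - monitoring  
 ...  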
Please note that this layout is not recommended for AWX / Ansible Tower. To make full use of those, you'd create a separate project per role with a requirements.yml and import the required roles. That sums up this guide for now.
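For reference, such a requirements.yml could look like the following sketch (the git URL is purely hypothetical; point it at wherever your roles actually live):
 ---  
 - name: base  
   scm: git  
   src: https://git.archyslife.lan/ansible-roles/base.git  
   version: master  
 ...  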

Feel free to comment and/or suggest any topics.
