
LACP-Teaming on CentOS 7 / RHEL 7

What is teaming?
Teaming is a technique used to bond multiple network interfaces together to achieve a higher combined bandwidth; LACP (802.3ad) is the standard protocol used to negotiate such an aggregate with the switch. NOTE: each individual client's speed can only be as high as the link speed of a single member. That means if the interfaces I use in the team have 1 Gigabit, every client will see at most 1 Gigabit. The advantage of teaming is that it can handle multiple 1-Gigabit connections in parallel; how many depends on the number of your network cards.

I'm using 2 network cards for this team on my server. That means my server can handle two 1-Gigabit connections at full rate, provided the rest of the hardware can deliver that speed.

There is also 'bonding' in the Linux world. Both achieve the same goal in theory; for a detailed comparison, check out this article about teaming in RHEL 7.

To create a teaming interface, we will first have to remove all the interface configuration we've done on the (soon to be) slave interfaces for our team. I've edited the ifcfg files to look like the following:
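Before overwriting the ifcfg files, it can be worth keeping a copy of the current configuration so you can roll back if the team doesn't come up. A minimal sketch (the backup path is just an example):

```shell
# Back up the existing network-scripts before editing (example destination path)
sudo cp -a /etc/sysconfig/network-scripts /root/network-scripts.bak
```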

The first interface:
 [archy@server ~]$ sudo cat /etc/sysconfig/network-scripts/ifcfg-eth0  
 TYPE=Ethernet  
 BOOTPROTO=none  
 DEFROUTE=yes  
 PEERDNS=yes  
 PEERROUTES=yes  
 IPV4_FAILURE_FATAL=no  
 NAME=eth0  
 UUID=811289b3-8c1f-4f38-8885-21fa8aef107f  
 DEVICE=eth0  
 ONBOOT=no  
The second interface:
 [archy@server ~]$ sudo cat /etc/sysconfig/network-scripts/ifcfg-eth1  
 TYPE=Ethernet  
 BOOTPROTO=none  
 DEFROUTE=yes  
 PEERDNS=yes  
 PEERROUTES=yes  
 IPV4_FAILURE_FATAL=no  
 NAME=eth1  
 UUID=811289b3-8c1f-4f38-8885-21fa8aef107f  
 DEVICE=eth1  
 ONBOOT=no  
The teaming-interface:
 [archy@server ~]$ sudo cat /etc/sysconfig/network-scripts/ifcfg-team0  
 DEVICE=team0  
 PROXY_METHOD=none  
 BROWSER_ONLY=no  
 BOOTPROTO=none  
 IPADDR=172.31.0.250  
 PREFIX=24  
 GATEWAY=172.31.0.254  
 DNS1=127.0.0.1  
 DOMAIN="archyslife.lan"  
 DEFROUTE=yes  
 IPV4_FAILURE_FATAL=no  
 IPV6INIT=yes  
 NAME=team0  
 UUID=993942fd-9aee-4544-9ad7-8bedcd7beb0b  
 ONBOOT=yes  
 DEVICETYPE=Team  
 TEAM_CONFIG="{\"runner\": {\"name\": \"lacp\", \"active\": true, \"fast_rate\": true, \"tx_hash\": [\"eth\", \"ipv4\", \"ipv6\"]},\"link_watch\":    {\"name\": \"ethtool\"}}"  
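The TEAM_CONFIG value is JSON wrapped in shell-style escaping, which makes typos easy to introduce. A quick sanity check is to run the unescaped JSON through a JSON parser before putting it into the ifcfg file; this sketch uses Python's json.tool module (on CentOS 7 the interpreter may be `python` rather than `python3`), but any JSON validator works:

```shell
# Validate the teamd runner config before escaping it into TEAM_CONFIG;
# json.tool exits non-zero and prints an error on invalid JSON
echo '{"runner": {"name": "lacp", "active": true, "fast_rate": true, "tx_hash": ["eth", "ipv4", "ipv6"]}, "link_watch": {"name": "ethtool"}}' \
  | python3 -m json.tool
```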
Now it's time to restart the network.service to activate the changes:
 [archy@server ~]$ sudo systemctl restart network.service  
The state of the team can be checked using the teamd tools:
 [archy@server ~]$ sudo teamdctl team0 state  
 setup:  
  runner: lacp  
 ports:  
  eth0  
   link watches:  
    link summary: up  
    instance[link_watch_0]:  
     name: ethtool  
     link: up  
     down count: 0  
   runner:  
    aggregator ID: 3, Selected  
    selected: yes  
    state: current  
  eth1  
   link watches:  
    link summary: up  
    instance[link_watch_0]:  
     name: ethtool  
     link: up  
     down count: 0  
   runner:  
    aggregator ID: 3, Selected  
    selected: yes  
    state: current  
 runner:  
  active: yes  
  fast rate: yes  
 [archy@server ~]$ sudo teamnl team0 ports  
  3: eth0: up 1000Mbit FD   
  2: eth1: up 1000Mbit FD    
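Besides the human-readable summary, teamd can also dump its full state as JSON, which is handy for scripted health checks. A sketch, assuming jq is installed (on CentOS 7 it is available from EPEL):

```shell
# Dump the full teamd state as JSON and pretty-print it for inspection
sudo teamdctl team0 state dump | jq .
```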
The easiest way to configure teaming is to use the nmcli or nmtui tools provided by the NetworkManager package.
 [archy@server ~]$ sudo nmcli connection add type team con-name team0 ifname team0 config '{"runner": {"name": "lacp"}}'   
 [archy@server ~]$ sudo nmcli connection add type team-slave con-name team0-eth0 ifname eth0 master team0  
 [archy@server ~]$ sudo nmcli connection add type team-slave con-name team0-eth1 ifname eth1 master team0  
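The nmcli commands above only create the team itself; to reproduce the static addressing from the ifcfg example, the connection can be modified afterwards. The addresses below simply mirror the ones used earlier in this post:

```shell
# Assign the static IPv4 settings to the team connection, then bring it up
sudo nmcli connection modify team0 ipv4.method manual \
  ipv4.addresses 172.31.0.250/24 \
  ipv4.gateway 172.31.0.254 \
  ipv4.dns 127.0.0.1 \
  ipv4.dns-search archyslife.lan
sudo nmcli connection up team0
```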
Usually nmcli will have brought up the connection(s) automatically, but I restart the network.service anyway, just to make sure.
 [archy@server ~]$ sudo systemctl restart network.service  
The state of the team can be checked using the teamd tools:
 [archy@server ~]$ sudo teamdctl team0 state  
 setup:  
  runner: lacp  
 ports:  
  eth0  
   link watches:  
    link summary: up  
    instance[link_watch_0]:  
     name: ethtool  
     link: up  
     down count: 0  
   runner:  
    aggregator ID: 3, Selected  
    selected: yes  
    state: current  
  eth1  
   link watches:  
    link summary: up  
    instance[link_watch_0]:  
     name: ethtool  
     link: up  
     down count: 0  
   runner:  
    aggregator ID: 3, Selected  
    selected: yes  
    state: current  
 runner:  
  active: yes  
  fast rate: yes  
 [archy@server ~]$ sudo teamnl team0 ports  
  3: eth0: up 1000Mbit FD   
  2: eth1: up 1000Mbit FD   
Feel free to comment and / or suggest a topic.

Comments

  1. I followed the process, but I get:

    8: team0: mtu 1500 qdisc noqueue state DOWN group default qlen 1000
    link/ether xx.xx.xx.xxx.x2 brd ff:ff:ff:ff:ff:ff

    1. This might be caused by different reasons.
      Please note that you will need to configure the switch / server on the other side of the link with 802.3ad (LACP) as well, otherwise you've created a networking loop.
      Also, this procedure might differ if you are doing this on VMs.

  2. I have tried to configure this LACP team at least 10 different ways and I cannot get the team to come up; the Cisco ports stay in suspend. Am I missing something?

    PROXY_METHOD=none
    BROWSER_ONLY=no
    BOOTPROTO=none
    IPADDR=10.250.91.35
    PREFIX=24
    GATEWAY=10.250.91.1
    DNS1=10.250.40.102
    DOMAIN=dev.paytel.com
    DEFROUTE=yes
    IPV4_FAILURE_FATAL=no
    IPV6INIT=yes
    IPV6_AUTOCONF=yes
    IPV6_DEFROUTE=yes
    IPV6_FAILURE_FATAL=no
    IPV6_ADDR_GEN_MODE=stable-privacy
    NAME=Team0
    UUID=aebbbeaa-5116-4f2e-87dc-e9774bfac23e
    DEVICE=nm-team
    ONBOOT=yes
    DEVICETYPE=Team
    TEAM_CONFIG="{\"runner\": {\"name\": \"lacp\", \"active\": true, \"fast_rate\": true, \"tx_hash\": [\"eth\", \"ipv4\", \"ipv6\"]},\"link_watch\": {\"name\": \"ethtool\"}}"


    NAME=Slave1
    UUID=20ce56af-61c4-47d6-b368-7fe73a097205
    DEVICE=em1
    ONBOOT=yes
    TEAM_MASTER=nm-team
    DEVICETYPE=TeamPort
    TEAM_MASTER_UUID=aebbbeaa-5116-4f2e-87dc-e9774bfac23e

    NAME=Slave2
    UUID=e9760bca-6235-45b7-906b-344e6f8567d4
    DEVICE=em2
    ONBOOT=yes
    TEAM_MASTER=nm-team
    DEVICETYPE=TeamPort
    TEAM_MASTER_UUID=aebbbeaa-5116-4f2e-87dc-e9774bfac23e

    TYPE=Ethernet
    PROXY_METHOD=none
    BROWSER_ONLY=no
    BOOTPROTO=none
    DEFROUTE=yes
    IPV4_FAILURE_FATAL=no
    IPV6INIT=yes
    IPV6_AUTOCONF=yes
    IPV6_DEFROUTE=yes
    IPV6_FAILURE_FATAL=no
    IPV6_ADDR_GEN_MODE=stable-privacy
    NAME=em1
    UUID=3c611987-2ea7-46bc-a5d5-3157bc0d18b1
    DEVICE=em1
    ONBOOT=no

    TYPE=Ethernet
    PROXY_METHOD=none
    BROWSER_ONLY=no
    BOOTPROTO=none
    DEFROUTE=yes
    IPV4_FAILURE_FATAL=no
    IPV6INIT=yes
    IPV6_AUTOCONF=yes
    IPV6_DEFROUTE=yes
    IPV6_FAILURE_FATAL=no
    IPV6_ADDR_GEN_MODE=stable-privacy
    NAME=em2
    UUID=b5c363f3-5d83-49c0-a1bc-76e9fdd88fc1
    DEVICE=em2
    ONBOOT=no

    1. Hi there,
      your config seems to be okay at first glance. Check your logs and verify the config on the switch's end.
      You could also start over using the nmcli commands and go from there.

    2. Hello Adrian,

      yes sir, I have tried that and still no connectivity. My port just won't come out of "suspend". This is my switch config for the port-channel and the two ports:

      interface GigabitEthernet 2/0/39
      description **OmniCache Test eth0**
      switchport trunk encapsulation dot1q
      switchport trunk native vlan 999
      switchport trunk allowed vlan 2-4024
      switchport mode trunk
      switchport nonegotiate
      channel-group 25 mode active
      spanning-tree portfast trunk
      end


      !
      interface GigabitEthernet 3/0/39
      description **OmniCache TestTeam eth1**
      switchport trunk encapsulation dot1q
      switchport trunk native vlan 999
      switchport trunk allowed vlan 2-4024
      switchport mode trunk
      switchport nonegotiate
      channel-group 25 mode active
      spanning-tree portfast trunk
      end


      interface Port-channel25
      description **OmniCache Team 2/0/39 3/0/39**
      switchport trunk encapsulation dot1q
      switchport trunk native vlan 999
      switchport trunk allowed vlan 2-4094
      switchport mode trunk
      switchport nonegotiate
      spanning-tree portfast trunk
      end

    3. Alright, I'm not sure about the 'nonegotiate' setting, but I'm fairly unfamiliar with Cisco.
      Could you also test your configuration using bonding with LACP? Hopefully that way we find out which side is misbehaving.

      A howto can be found here: https://archyslife.blogspot.com/2018/05/centos-75-bonding-and-teaming.html

    4. Not sure of the degree of importance, however I do see you renamed the interfaces to slaves, but in the TEAM_CONFIG I still see [\"eth\" ...]; eth0 probably doesn't exist on your system.

  3. I know this is a little old, but in case anyone else is looking for this:
    make sure auto-negotiate is set to yes for your team interfaces on your server:
    nmcli con mod team0-eno1 802-3-ethernet.auto-negotiate yes

    My interfaces were set to no even though I didn't explicitly define that on initial setup, and my Cisco switch complained that LACP wasn't enabled on the server NICs and would suspend the channel group. This is on CentOS 8.4.



