Why aggregate network interfaces (interface bonding)?
The two main reasons to create a bonding interface are:
1. To provide increased bandwidth
2. To provide redundancy in the face of hardware failure
One of the prerequisites for configuring bonding is a network switch that supports EtherChannel (which is true of almost all switches).
Bonding modes
Depending on your requirement, you can set the bonding mode to any of the 7 modes below. The bonding mode is set in the bonding interface network file /etc/sysconfig/network-scripts/ifcfg-bond0, as shown below:
BONDING_OPTS="mode=active-backup miimon=250"
or
BONDING_OPTS="mode=1 miimon=250"
Mode | Policy | How it works | Fault Tolerance | Load balancing |
---|---|---|---|---|
0 | Round Robin | Packets are transmitted/received sequentially through each interface, one by one. | No | Yes |
1 | Active Backup | One NIC is active while the other NIC is asleep. If the active NIC goes down, another NIC becomes active. Only supported in x86 environments. | Yes | No |
2 | XOR [exclusive OR] | In this mode, the MAC address of the slave NIC is matched against the incoming request's MAC, and once this connection is established the same NIC is used to transmit/receive for that destination MAC. | Yes | Yes |
3 | Broadcast | All transmissions are sent on all slave interfaces. | Yes | No |
4 | Dynamic Link Aggregation | The aggregated NICs act as one NIC, which results in higher throughput, but also provides failover in case a NIC fails. Requires a switch that supports IEEE 802.3ad (see the example below the table). | Yes | Yes |
5 | Transmit Load Balancing (TLB) | Outgoing traffic is distributed depending on the current load on each slave interface. Incoming traffic is received by the current slave. If the receiving slave fails, another slave takes over the MAC address of the failed slave. | Yes | Yes |
6 | Adaptive Load Balancing (ALB) | Unlike Dynamic Link Aggregation, Adaptive Load Balancing does not require any particular switch configuration. Only supported in x86 environments. Incoming packets are load balanced through ARP negotiation. | Yes | Yes |
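For example, if you want mode 4 (Dynamic Link Aggregation), the mode and its LACP-related options go into the same BONDING_OPTS line. The lacp_rate and xmit_hash_policy values below are only illustrative; adjust them to match your switch configuration:
BONDING_OPTS="mode=802.3ad miimon=100 lacp_rate=fast xmit_hash_policy=layer2+3"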
Configuring an interface bonding
1. Create the master bond0 interface
For this we have to create the file /etc/sysconfig/network-scripts/ifcfg-bond0 with the content below:
# vi /etc/sysconfig/network-scripts/ifcfg-bond0
DEVICE=bond0
BOOTPROTO=static
ONBOOT=yes
IPADDR=10.10.1.10
NETMASK=255.255.255.0
BONDING_OPTS="miimon=100"
BONDING_OPTS - specifies bonding module parameters, e.g. miimon, the link polling interval for fault detection (in ms).
As we have configured the bonding interface with the IP address and netmask, we need not specify them in the files of the individual interfaces that make up the bond.
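The same BONDING_OPTS line can also carry the mode and other driver options. The line below is only a sketch for an active-backup setup; primary=em0 assumes the slave names used later in this guide, and the updelay/downdelay values are illustrative:
BONDING_OPTS="mode=active-backup miimon=100 primary=em0 updelay=5000 downdelay=5000"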
2. Create the slave interfaces
We will use em0 and em1 as the slave interfaces to create the bond0 bonding interface. The MASTER and SLAVE lines define bond0 as the master bonding interface and em0/em1 as the slave interfaces.
# vi /etc/sysconfig/network-scripts/ifcfg-em0
DEVICE=em0
BOOTPROTO=none
ONBOOT=yes
MASTER=bond0
SLAVE=yes
# vi /etc/sysconfig/network-scripts/ifcfg-em1
DEVICE=em1
BOOTPROTO=none
ONBOOT=yes
MASTER=bond0
SLAVE=yes
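Note: on RHEL 6, NetworkManager does not handle bonded interfaces. If NetworkManager is running on the system, it is common (though optional, depending on your setup) to add the following line to ifcfg-bond0, ifcfg-em0 and ifcfg-em1 so that these files are handled only by the network service:
NM_CONTROLLED=no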
3. Configure the bonding driver
The configuration file /etc/modprobe.conf is deprecated on RHEL 6 and configuration files now live in the directory /etc/modprobe.d. The older configuration file is still supported but is not recommended. Create a new file bond.conf in the directory /etc/modprobe.d to tell the kernel that it should use the bonding driver for the new device bond0.
# vi /etc/modprobe.d/bond.conf
alias bond0 bonding
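If you want to see which parameters the bonding driver accepts (miimon, mode, primary and so on) before setting BONDING_OPTS, you can query the module itself; the grep pattern here is just one way to filter the output:
# modinfo bonding | grep ^parm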
4. Restart the network services
Restart the network services to enable the bonding interface.
# service network restart
If you do not want to restart the network service, you can bring up the bonding interface individually:
# ifup bond0
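If bond0 fails to come up because the bonding driver has not been loaded yet, you can load the module manually and then bring up the bond and its slaves (bringing up the slaves separately is usually not required, but is shown here for completeness):
# modprobe bonding
# ifup bond0
# ifup em0
# ifup em1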
5. Verify
Check the new interface in the ifconfig command output:
# ifconfig bond0
bond0     Link encap:Ethernet  HWaddr 00:0C:29:9B:FD:2B
          inet addr:10.10.1.10  Bcast:10.10.1.1  Mask:255.255.255.0
          inet6 addr: fe80::20c:29ff:fe9b:fd2b/64 Scope:Link
          UP BROADCAST RUNNING MASTER MULTICAST  MTU:1500  Metric:1
          RX packets:39 errors:0 dropped:0 overruns:0 frame:0
          TX packets:34 errors:0 dropped:0 overruns:0 carrier:0
          collisions:0 txqueuelen:0
          RX bytes:13875 (13.5 KiB)  TX bytes:3446 (3.3 KiB)
To verify that the bonding module is loaded properly:
# lsmod | grep bond
bonding               122351  0
To check which interface is currently active (in the case of active-backup mode):
# cat /proc/net/bonding/bond0
Ethernet Channel Bonding Driver: v3.7.1 (April 27, 2011)

Bonding Mode: fault-tolerance (active-backup)
Primary Slave: None
Currently Active Slave: em0
MII Status: up
MII Polling Interval (ms): 100
Up Delay (ms): 5000
Down Delay (ms): 5000

Slave Interface: em0
MII Status: up
Speed: 1000 Mbps
Duplex: full
Link Failure Count: 0
Permanent HW addr: 00:21:28:b2:65:26
Slave queue ID: 0

Slave Interface: em1
MII Status: up
Speed: 1000 Mbps
Duplex: full
Link Failure Count: 0
Permanent HW addr: 00:21:28:b2:65:27
Slave queue ID: 0
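The same information is also exposed through sysfs, which is convenient for scripting. For the active-backup configuration in this example you would expect output similar to:
# cat /sys/class/net/bond0/bonding/mode
active-backup 1
# cat /sys/class/net/bond0/bonding/slaves
em0 em1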
To test whether the bonding is configured properly, bring down the active interface (em0 here) from the bond. You will find that the bonding interface is still accessible.
# ifdown em0
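One simple way to confirm the failover is to keep a continuous ping to the bond's IP address (10.10.1.10 here) running from another host while you bring the interface down, and then check which slave has taken over; for this example you would expect em1 to become active:
# grep "Currently Active Slave" /proc/net/bonding/bond0
Currently Active Slave: em1
# ifup em0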