What is interface bonding?
Bonding (or channel bonding) is a technology, enabled by the Linux kernel and available in Red Hat Enterprise Linux, that allows administrators to combine two or more network interfaces into a single logical "bonded" interface for redundancy or increased throughput. The behavior of a bonded interface depends on its mode; generally speaking, modes provide either hot-standby or load-balancing services. Additionally, they may provide link-integrity monitoring.
Why use interface bonding?
There are two main reasons to create an interface bond:
1. To provide increased bandwidth
2. To provide redundancy in the face of hardware failure
One prerequisite for some bonding modes is a network switch that supports EtherChannel or link aggregation (true of almost all managed switches); other modes, such as active-backup, need no special switch configuration.
Depending on your requirements, you can set the bonding mode to any of the seven modes below.
| Mode | Policy | How it works | Fault tolerance | Load balancing |
|------|--------|--------------|-----------------|----------------|
| 0 | Round Robin | Packets are transmitted/received sequentially through each interface, one by one. | No | Yes |
| 1 | Active Backup | One NIC is active while the other is asleep. If the active NIC goes down, another NIC becomes active. Only supported in x86 environments. | Yes | No |
| 2 | XOR [exclusive OR] | The MAC address of the slave NIC is matched against the incoming request's MAC, and once the connection is established the same NIC is used to transmit/receive for that destination MAC. | Yes | Yes |
| 3 | Broadcast | All transmissions are sent on all slaves. | Yes | No |
| 4 | Dynamic Link Aggregation | The aggregated NICs act as one NIC, which results in higher throughput, but also provides failover if a NIC fails. Dynamic Link Aggregation requires a switch that supports IEEE 802.3ad. | Yes | Yes |
| 5 | Transmit Load Balancing (TLB) | Outgoing traffic is distributed according to the current load on each slave interface. Incoming traffic is received by the current slave. If the receiving slave fails, another slave takes over the MAC address of the failed slave. | Yes | Yes |
| 6 | Adaptive Load Balancing (ALB) | Unlike Dynamic Link Aggregation, Adaptive Load Balancing does not require any particular switch configuration. Adaptive Load Balancing is only supported in x86 environments. Received traffic is load-balanced through ARP negotiation. | Yes | Yes |
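When passing a mode to nmcli, the numeric modes above map to named values. The sketch below lists that mapping and shows a hypothetical bond created in 802.3ad mode (interface and connection names are illustrative):

```shell
# nmcli mode names for the numeric kernel modes:
#   0 = balance-rr      4 = 802.3ad
#   1 = active-backup   5 = balance-tlb
#   2 = balance-xor     6 = balance-alb
#   3 = broadcast
# Example: create a bond in Dynamic Link Aggregation mode
nmcli connection add type bond ifname bond0 con-name bond0 mode 802.3ad
```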
1. Creating the bond
First of all, create a bond device. The general syntax is:
# nmcli connection add type bond ifname <bond-name> mode <mode>
For example:
# nmcli connection add type bond ifname bond1 mode active-backup miimon 100 ip4 192.168.1.22/24 gw4 192.168.1.1
2. Adding the slave devices
Let's add the slave devices; here we are using devices ens1f0 and ens1f1.
# nmcli connection add type bond-slave ifname ens1f0 con-name ens1f0 master bond1
# nmcli connection add type bond-slave ifname ens1f1 con-name ens1f1 master bond1
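At this point you can confirm that both slave connections reference the bond. A quick check (assuming the connection names used above):

```shell
# List connection name, type, and attached device;
# the two bond-slave entries should appear alongside bond1
nmcli -f NAME,TYPE,DEVICE connection show
```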
3. Add a dns server
Let's add a DNS server to the bond. This step is optional and can be done at the global system level rather than on each interface.
# nmcli connection modify bond1 ipv4.dns 126.96.36.199
4. Bring up the bond
Let's bring up the bond. If you have configured everything correctly up to this point, the bond should come up and run fine after the command below.
# nmcli connection up bond1
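Once the bond is up, the kernel exposes its state under /proc. The commands below (for the bond1 used in this walkthrough) show the bonding mode, the MII status of each slave, and the assigned IP address:

```shell
# Show bonding mode, currently active slave, and per-slave link state
cat /proc/net/bonding/bond1
# Confirm the bonded interface is up and has the expected address
ip addr show bond1
```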
How to disable IPv4 or IPv6 on a bonded interface
These steps are only needed if bond1 will not use an IPv4 or IPv6 address.
# nmcli connection modify bond1 ipv4.method disabled
# nmcli connection modify bond1 ipv6.method ignore
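To verify that the address methods took effect, you can inspect the connection profile and reactivate it (changes made with `nmcli connection modify` apply on the next activation):

```shell
# Confirm the ipv4/ipv6 method settings on the profile
nmcli connection show bond1 | grep -E 'ipv[46]\.method'
# Re-activate the connection so the modified settings take effect
nmcli connection up bond1
```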