NIC Teaming
In its simplest terms, NIC teaming means taking multiple physical NICs on a given ESXi host and combining them into a single logical link that provides bandwidth aggregation and redundancy to a vSwitch. NIC teaming can be used to distribute load among the available uplinks of the team. A typical NIC teaming configuration is shown in the screenshot below:
There are several load balancing policies available for the virtual switch. These are discussed below:
1: Route Based on Originating Virtual Port ID: This is the default load balancing policy for a vSS or vDS. This policy doesn't require any special configuration at the virtual switch or physical switch level.
In this policy, when a NIC is added to a VM, or a new VM is provisioned with a NIC and comes online, the VMkernel assigns a Port ID to the virtual NIC of the VM. The vSwitch then determines which uplink (physical adapter) of the team carries the outgoing traffic of the VM NIC using a modulo function: the Port ID of the VM NIC is divided by the total number of uplinks in the team, and the remainder determines which uplink will be used to route that VM NIC's traffic.
At a given time, a VM NIC can use only one uplink to send out its traffic. If that uplink fails, the traffic of the VM NIC is rerouted (failed over) to one of the remaining uplinks of the team. The uplink selected for a VM NIC can change if the VM changes its power state or is migrated using vMotion.
For a better understanding, consider the example below: we have a virtual switch with a port group named Production, 4 virtual machines connected to this port group, and 3 physical NICs connected to the virtual switch.
This policy works somewhat like round robin: VM-A will use vmnic1, VM-B will use vmnic2, VM-C will use vmnic3, and VM-D will wrap around to vmnic1. The virtual machines' traffic is simply distributed over the available physical NICs.
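The modulo selection described above can be sketched in a few lines of Python. The port IDs and vmnic names here are made-up illustrations, not real VMkernel values:

```python
# Sketch of the originating-port-ID policy: the remainder of
# port_id divided by the number of uplinks picks the uplink.
def select_uplink(port_id: int, uplinks: list) -> str:
    return uplinks[port_id % len(uplinks)]

uplinks = ["vmnic1", "vmnic2", "vmnic3"]
# Four VM NICs with consecutive (hypothetical) port IDs:
for vm, port_id in zip(["VM-A", "VM-B", "VM-C", "VM-D"], range(4)):
    print(vm, "->", select_uplink(port_id, uplinks))
# VM-D wraps around to vmnic1, matching the example above
```

Because the mapping depends only on the Port ID, a VM NIC stays pinned to one uplink until a failover, power-state change, or vMotion occurs.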
2: Route Based on Source MAC Hash: This policy is similar to Route Based on Originating Virtual Port ID, with the difference that the vSwitch uses the MAC address of the VM NIC to determine the uplink responsible for carrying that VM NIC's outgoing traffic.
In this policy too, a VM NIC can be assigned only one uplink to send traffic out at a given time, but failover is supported in case that uplink fails. This policy is available on both vSS and vDS.
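The idea can be sketched the same way. The exact hash ESXi computes is internal, so taking the MAC address's last octet modulo the uplink count is only an assumed stand-in for illustration:

```python
# Sketch of MAC-hash uplink selection. The real vSwitch hash is
# internal to ESXi; hashing on the MAC's last octet is an
# assumption made purely for illustration.
def select_uplink_by_mac(mac: str, uplinks: list) -> str:
    last_octet = int(mac.split(":")[-1], 16)
    return uplinks[last_octet % len(uplinks)]

uplinks = ["vmnic1", "vmnic2", "vmnic3"]
print(select_uplink_by_mac("00:50:56:aa:bb:04", uplinks))
# 0x04 % 3 == 1, so this VM NIC is pinned to vmnic2
```

Since a VM NIC's MAC address normally doesn't change, the chosen uplink is even more stable than with the Port ID policy.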
3: Route Based on IP Hash: This is the only load balancing policy in which a VM NIC can send out traffic through more than one uplink at a given time. This policy requires a special configuration, i.e. an EtherChannel or port channel, on the physical switch.
There is one caveat with this policy: a VM NIC can utilize more than one uplink for outgoing traffic only when it is communicating with more than one destination IP. If a VM is doing one-to-one communication, i.e. communicating with only one destination IP, traffic will not be shared among the uplinks and only one of them will be used to send the traffic out.
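A sketch of IP-hash selection, XORing the source and destination addresses and taking the result modulo the uplink count. The real ESXi hash differs in detail, and the addresses below are arbitrary examples:

```python
import ipaddress

# Sketch of IP-hash teaming: the uplink depends on BOTH endpoints,
# so one VM talking to several destinations spreads across uplinks,
# while a single src/dst pair always maps to the same uplink.
def select_uplink_by_ip_hash(src: str, dst: str, uplinks: list) -> str:
    h = int(ipaddress.ip_address(src)) ^ int(ipaddress.ip_address(dst))
    return uplinks[h % len(uplinks)]

uplinks = ["vmnic1", "vmnic2", "vmnic3"]
# Same source, two destinations -> two different uplinks here:
print(select_uplink_by_ip_hash("10.0.0.5", "10.0.0.20", uplinks))
print(select_uplink_by_ip_hash("10.0.0.5", "10.0.0.21", uplinks))
```

This also makes the caveat visible: with a single destination, the hash input never changes, so the same uplink is chosen every time.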
4: Route Based on Physical NIC Load: This load balancing policy is only available with a vDS and is by far the most intelligent policy for distributing load among the uplinks in a teamed environment.
The assignment of uplinks to VM NICs is based on the originating Port ID, but before assigning an uplink the vDS looks at the load on the physical adapters. The least-loaded adapter is assigned to the VM NIC for sending out traffic. If an adapter that was previously lightly utilized suddenly becomes busy due to heavy network activity from a VM NIC, that VM NIC is moved to a different physical adapter to keep the load balanced across all uplinks as well as possible.
This load balancing policy uses an algorithm that inspects the load on the physical NICs every 30 seconds. When the utilization of a particular physical uplink exceeds 75% over a 30-second interval, the hypervisor moves VM traffic to another uplink adapter. This policy doesn't require any additional configuration at the physical switch level.
Graphic courtesy of VMwareArena.com
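The periodic check described above can be sketched as follows. The utilization figures and the move-one-NIC-per-check rule are illustrative assumptions, not the actual vDS algorithm:

```python
# Sketch of load-based teaming: on each periodic check (every 30 s
# on a real vDS), a VM NIC on an uplink running above the 75%
# utilization threshold is moved to the least-loaded uplink.
def rebalance(assignments: dict, utilization: dict,
              threshold: float = 0.75) -> dict:
    least = min(utilization, key=utilization.get)
    for vm, uplink in assignments.items():
        if utilization[uplink] > threshold and uplink != least:
            assignments[vm] = least
            break  # move one VM NIC, then re-measure next interval
    return assignments

assignments = {"VM-A": "vmnic1", "VM-B": "vmnic1", "VM-C": "vmnic2"}
utilization = {"vmnic1": 0.90, "vmnic2": 0.20}  # hypothetical loads
print(rebalance(assignments, utilization))
# VM-A is moved off the saturated vmnic1 onto vmnic2
```

Moving only one VM NIC per interval and then re-measuring avoids stampeding all traffic onto the formerly idle uplink at once.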
5: Use Explicit Failover Order: This policy doesn't really do any load balancing. Instead, the first Active NIC on the list is used to route the outgoing traffic for all VMs. If that NIC fails, the next Active NIC on the list is used, and so on, until you reach the Standby NICs.
Note: With the Explicit Failover option, even if you have a vSwitch with many uplinks, only one of them will be actively used at any given time.
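The failover walk can be sketched as below: the first NIC in Active-then-Standby order whose link is up carries everything. The link states are hypothetical:

```python
# Sketch of explicit failover order: no balancing at all -- the
# first healthy NIC in Active-then-Standby order carries all
# traffic for every VM on the switch.
def select_failover_uplink(active: list, standby: list, link_up: dict):
    for nic in active + standby:
        if link_up.get(nic, False):
            return nic
    return None  # no usable uplink at all

link_up = {"vmnic1": False, "vmnic2": True, "vmnic3": True}  # hypothetical
print(select_failover_uplink(["vmnic1", "vmnic2"], ["vmnic3"], link_up))
# vmnic1 is down, so vmnic2 takes over all VM traffic
```

This makes the note above concrete: however many uplinks the list contains, exactly one is returned and used at any given time.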