Sunday, February 10, 2013

Configure NIC teaming and QoS with VMM 2012 SP1

This is a hot topic these days.

Windows Server 2012 supports NIC teaming out of the box, and this finally gives us some flexible design options when it comes to Hyper-V and Hyper-V clustering.

In a nutshell, NIC teaming gives us:

·         Load Balancing

·         Failover

·         Redundancy

·         Optimization

·         Simplicity

·         Bandwidth Aggregation

·         Scalability

Requirements for NIC teaming in Windows Server 2012

NIC teaming requires at least one Ethernet network adapter, which can be used to separate traffic using VLANs. If you require failover, at least two Ethernet network adapters must be present. Windows Server 2012 supports up to 32 NICs in a team.


The default teaming mode is switch-independent, where the switch doesn’t know or care that the network adapters are participating in a team. The NICs in the team can be connected to different switches.

Switch-dependent mode requires that all NIC adapters in the team are connected to the same switch. One common choice for switch-dependent mode is generic, or static, teaming (IEEE 802.3ad draft v1), which requires configuration on both the switch and the computer to identify which links form the team. Because this is a static configuration, there is no additional assistance to detect incorrectly plugged cables or other odd behavior.

The other is dynamic teaming (IEEE 802.1ax, LACP), which uses the Link Aggregation Control Protocol to dynamically identify links between the switch and the computer. This gives you the opportunity to automatically create the team, as well as to reduce and expand it.
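For reference, these modes map directly to the -TeamingMode parameter of the inbox New-NetLbfoTeam cmdlet in Windows Server 2012. A minimal sketch — the team and adapter names ("Team1", "NIC1", "NIC2") are just examples:

```powershell
# Switch-independent team; members may be cabled to different switches
New-NetLbfoTeam -Name "Team1" -TeamMembers "NIC1","NIC2" `
    -TeamingMode SwitchIndependent

# LACP (IEEE 802.1ax) team; requires a matching port channel on the switch
New-NetLbfoTeam -Name "Team1" -TeamMembers "NIC1","NIC2" `
    -TeamingMode Lacp
```

Static teaming is the third value (-TeamingMode Static) and corresponds to the 802.3ad draft v1 configuration described above.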

Traffic distribution algorithms

Windows Server 2012 supports two different traffic distribution methods: hashing and Hyper-V switch port.

Hyper-V switch port

When virtual machines have independent MAC addresses, the MAC address provides the basis for dividing traffic. Since a specific source MAC address is seen on only one connected network adapter, the switch can balance the load (traffic from the switch to the computer) across multiple links, based on the destination MAC address of the VM.


Hashing

The hashing algorithm creates a hash based on components of the packet, and assigns packets with that hash value to one of the available network adapters. This ensures that packets from the same TCP stream stay on the same network adapter.

Components that can be used as inputs to the hashing functions include the following:

·         Source and destination IP addresses, with or without considering the MAC addresses (2-tuple hash)

·         Source and destination TCP ports, usually together with the IP addresses (4-tuple hash)

·         Source and destination MAC addresses.
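These hash inputs correspond to the -LoadBalancingAlgorithm values of the inbox teaming cmdlets. A sketch, assuming a team named "Team1" already exists:

```powershell
# 4-tuple hash (IP addresses plus TCP/UDP ports)
Set-NetLbfoTeam -Name "Team1" -LoadBalancingAlgorithm TransportPorts

# 2-tuple hash (IP addresses only), or MAC-address hash
Set-NetLbfoTeam -Name "Team1" -LoadBalancingAlgorithm IPAddresses
Set-NetLbfoTeam -Name "Team1" -LoadBalancingAlgorithm MacAddresses

# Per-VM distribution using the Hyper-V switch port method
Set-NetLbfoTeam -Name "Team1" -LoadBalancingAlgorithm HyperVPort
```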

To use NIC teaming in a Hyper-V environment, there are some nice new features available in PowerShell to separate the traffic with QoS.

More information about this can be found at

The scenario I’ll demonstrate in VMM is using NIC teaming with two 10GbE modules on the server.


·         We will create a single team on the host

·         We will create several virtual NICs for different traffic, like SMB, Live Migration, Management, Cluster and Guests
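For context, this is roughly what the finished configuration looks like in native PowerShell. Note that in the VMM scenario the team must be created by VMM (see below), so the snippet is only a reference for what happens under the hood on a standalone host; all names, weights and VLAN IDs are examples:

```powershell
# Team the two physical adapters
New-NetLbfoTeam -Name "ConvergedTeam" -TeamMembers "NIC1","NIC2" `
    -TeamingMode SwitchIndependent -LoadBalancingAlgorithm HyperVPort

# Virtual switch on top of the team, with weight-based QoS
New-VMSwitch -Name "ConvergedSwitch" -NetAdapterName "ConvergedTeam" `
    -MinimumBandwidthMode Weight -AllowManagementOS $false

# One vNIC per traffic class in the parent partition
foreach ($nic in "Management","LiveMigration","Cluster","SMB") {
    Add-VMNetworkAdapter -ManagementOS -Name $nic -SwitchName "ConvergedSwitch"
}

# Example QoS weight and VLAN for the live migration vNIC
Set-VMNetworkAdapter -ManagementOS -Name "LiveMigration" -MinimumBandwidthWeight 30
Set-VMNetworkAdapterVlan -ManagementOS -VMNetworkAdapterName "LiveMigration" `
    -Access -VlanId 30
```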

System Center Virtual Machine Manager is the management layer for your virtualization hosts, and Service Pack 1 adds support for managing Hyper-V hosts running Windows Server 2012. This also includes the concepts of converged fabric and network virtualization.

The catch is that you must create the team with Virtual Machine Manager. If the team is created outside of VMM, VMM will not be able to import the configuration properly and reflect the changes you make.


Prerequisites on the physical switches:

·         Create the LACP trunk on the physical switches

·         Set the default VLAN if it is not 1

·         Allow the required VLANs on the trunk

Configure Logical Networks in Fabric

This is the first mandatory step.

1.       Create logical networks for all the actual networks you will be using. This means management, cluster, live migration, iSCSI, SMB and so on. Configure sites, VLAN/subnet and, where needed, IP pools for those networks, so that VMM can assign IP addresses to the vNICs you will create.

For all your routable networks, configure the default gateway, DNS suffix and DNS servers in the pool.

2.       Associate the logical networks with your physical network adapters on your hosts
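The same logical network and IP pool setup can be scripted with the VMM 2012 SP1 cmdlets. A sketch, assuming a routable management network — all names, subnets and addresses are hypothetical:

```powershell
# Logical network with one site (logical network definition) and a VLAN/subnet
$ln  = New-SCLogicalNetwork -Name "Management"
$sub = New-SCSubnetVLan -Subnet "10.0.0.0/24" -VLanID 0
$def = New-SCLogicalNetworkDefinition -Name "Management_Site" -LogicalNetwork $ln `
    -SubnetVLan $sub -VMHostGroup (Get-SCVMHostGroup -Name "All Hosts")

# IP pool so VMM can assign addresses to the vNICs, with gateway and DNS
$gw = New-SCDefaultGateway -IPAddress "10.0.0.1" -Automatic
New-SCStaticIPAddressPool -Name "Management_Pool" -LogicalNetworkDefinition $def `
    -Subnet "10.0.0.0/24" -IPAddressRangeStart "10.0.0.50" -IPAddressRangeEnd "10.0.0.99" `
    -DefaultGateway $gw -DNSServer "10.0.0.10" -DNSSuffix "contoso.local"
```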

Configure VM Networks using the Logical Networks created in Fabric

1.       Navigate to VMs and Services within the VMM console

2.       Right click on VM Networks and select ‘Create VM Network’

3.       Assign the VM Network a name, reflecting the actual logical network you are using, available from the drop down list and click next

4.       Select No Isolation, since we will be using the actual network as the basis in this configuration

5.       Click finish, and repeat the process for every network you will use in your configuration
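The wizard steps above map to a single VMM cmdlet per network; a sketch (the "Management" names are examples):

```powershell
# One VM network per logical network, with no isolation
$ln = Get-SCLogicalNetwork -Name "Management"
New-SCVMNetwork -Name "Management" -LogicalNetwork $ln -IsolationType "NoIsolation"
```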

Configure Native Port Profiles

We will create Native Port Profiles both for the physical NICs used in the team, and the vNics, and group the profiles in a logical switch that we will apply to the hosts.

Creating Uplink Port Profile

1.       Navigate to Native Port Profiles in Fabric, right click and create new Native Port Profile

2.       Select ‘Uplink Port Profile’ first and choose the teaming mode and load-balancing algorithm. I will use Switch Independent and HyperVPort

3.       Select the appropriate network sites, and enable network virtualization if that’s a requirement. This will instruct the team that network virtualization should be supported, and enable the network virtualization filter driver on the adapter.

4.       Click finish

Creating virtual network adapter port profile

1.       Repeat the process, and create a new Native Port Profile

2.       Select ‘Virtual network adapter port profile’ and assign a name. We will repeat this process for every vNIC we will need in our configuration, reflecting the operation we did with VM networks earlier

3.       Go through the wizard and select offload settings and security settings for the vNIC you will use for virtual machines, and specify bandwidth (QoS) for the different workloads 

Repeat this process for every vNIC
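The port profiles can also be created with the VMM cmdlets. A rough sketch from the SP1 module — the profile names and the bandwidth weight are examples, and the exact parameter set should be checked against your VMM build:

```powershell
# Uplink port profile: teaming mode and load-balancing algorithm for the team
New-SCNativeUplinkPortProfile -Name "Uplink_Converged" `
    -LBFOTeamMode "SwitchIndependent" -LBFOLoadBalancingAlgorithm "HyperVPort" `
    -EnableNetworkVirtualization $true

# One virtual network adapter port profile per traffic class, with a QoS weight
New-SCVirtualNetworkAdapterNativePortProfile -Name "LiveMigration" `
    -MinimumBandwidthWeight 30
```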

Creating Port Classifications

1.       We need to classify the different vNICs, so navigate to Port Classification in Fabric, right click, and select ‘Create new Port Classification’.

2.       Assign a name and, optionally, a description.

Repeat this process for every vNIC

Creating Logical Switch

The logical switch will group our configuration, and simplify the creation of NIC teaming and vNICs on the hosts.

1.       In Fabric, right click on Logical Switch, and select ‘Create new Logical Switch’.

2.       Assign a name and click next

3.       Choose the extensions you want, and click next

4.       Specify which uplink port profiles should be available for this logical switch. Here you decide whether the logical switch should support teaming. If so, enable ‘Team’, add the uplink port profile you created earlier, and click next.

5.       Specify the port classifications for the virtual ports that are part of this logical switch. In this step of the wizard, add the virtual network adapter port profiles with the corresponding port classifications you created earlier. Click next and finish.

You have now created a logical switch that you will apply to your Hyper-V hosts.
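If you prefer scripting this step too, the VMM module exposes matching cmdlets. A rough sketch, assuming the port profiles from the previous sections exist; cmdlet parameters are hedged from the SP1 module and names are examples:

```powershell
# Logical switch grouping the uplink profile created earlier
$ls = New-SCLogicalSwitch -Name "ConvergedSwitch"
$up = Get-SCNativeUplinkPortProfile -Name "Uplink_Converged"
New-SCUplinkPortProfileSet -Name "Uplink_Converged_Set" -LogicalSwitch $ls `
    -NativeUplinkPortProfile $up
```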

Creating a Logical Switch and virtual network adapters on your Hyper-V hosts

Navigate to your hosts in Fabric, right click and click properties.

1.       Navigate to ‘Virtual Switches’ on the left, and click ‘New Virtual Switch’ and select ‘New Logical Switch’.

2.       Select the Logical Switch you created earlier in the drop down list, and add the physical adapters that should be joining this team.

3.       Click ‘New Virtual Network Adapter’ and give the vNIC a name, select VM Network for connectivity (created earlier), enable VLAN, assign IP configuration (choose static if you want VMM to handle this from the IP pool), and select port profile.

Repeat this process and create a new virtual network adapter for all your vNICs and map them to their corresponding networks and profiles.
Important: If you want to transfer the IP address that's currently used as management IP to a virtual network adapter, remember to mark the option 'This virtual network adapter inherits settings from the physical management adapter'.

Once you have configured this, click ‘OK’ and VMM will create the team and its vNICs on the host.

For the management vNIC, you can assign the IP address you are currently using on the physical NIC, and VMM will place this on the vNIC during creation.

You have now created converged fabric using VMM, and enabled your hosts to leverage network virtualization.

I would like to thank Hans Vredevoort for the healthy discussions we're having on the topic, and for providing me with insight and tips for the configuration. Hans Vredevoort is a Virtual Machine MVP based in Holland and is one of the most experienced fellows I know when it comes to fabric (servers, storage and networking). He was formerly a Cluster MVP, but has now devoted his spare time to participating in the VM community. Read his useful blogs at


Unknown said...

Thanks for this post - I am working on a virtualization project at the moment and this post is just what I am looking for.

Chennai MCSE said...

Dear Sir,

Thanks a lot for the wonderful tutorials. I am preparing for my MCSE on Windows 2012 Server.

Just confused by this term (NIC Teaming). When I read some docs it look similar to EtherChannels in Switching. For example, what I remember is Link Aggregation Control Protocol is used in EtherChannel configuration.

Need some time to understand the concept of NIC Teaming.

Thanks a lot,

Chennai MCSE

Unknown said...

Hi Kristian , Thanks for these helpful articles,
Can you tell me in windows server 2012 hyper-v how to map a Physical NIC(pnic) to a virtual NIC(vnic) , is there any way or powershell commands to find this?

Kristian Nese said...

Are you thinking about the "old way", where you create an external virtual network that is using a physical NIC in the parent partition? If so, create a 'standard virtual switch' on virtual switches on the host properties.


Ivan said...

Hi Kristian!

I've been trying to create a virtual switch for management and all my attempts failed. I have 2 NICs for management (1 of them has the proper IP address assigned via DHCP) and once I add both these NICs to the virtual switch I just lose connectivity with the host. Am I doing something wrong?

Should both physical NICs be tagged as Management before I add them to the pool?


Kristian Nese said...

Remember to enable the option 'inherit the settings from physical adapter' when you 'transfer' your management physical NIC onto the logical switch and create a virtual network adapter.

Also note that there are several issues that have been addressed in the update rollup for VMM 2012 SP1, released this week.

Anonymous said...

Hi is there some config to hide virtual nic which is part of logical switch to not share to parent partition? I have problem with this. I don't want to share vlans from hyper-v to parent partition.

Thanks OSO

Kristian Nese said...

What do you mean by share with the parent partition? If management is on the same team, then management will share the virtual switch that you have in Hyper-V with the VMs.
Remember that VLANs for virtual machines are not tagged at the team or the virtual adapters on the hosts, but trunked to the virtual adapters on the virtual machines.