Let’s start off with some clarifications:

- IP Rewrite is not supported in VMM 2012 SP1.
- NVGRE is the default and only option for network virtualization with VMM 2012 SP1.

In this post, we will cover:

- What the heck is network virtualization – and why do we need it?
- How does it work?
- How to prepare our networking fabric for network virtualization
What the heck is network virtualization – and why do we need it?
As you may be aware, Windows Server 2012 is truly a cloud OS. It’s more than just a server: it’s the private (and public) cloud enabler and the most important ingredient in your infrastructure for designing a multi-tenant infrastructure at low cost, in conjunction with other pieces. Like storage, for example, but that’s another story (SMB, Storage Pools/Spaces, resource groups, and more).
I’ve been working a lot with hosters over the last years, and a common challenge is a secure and scalable solution for multi-tenancy. The first thing you might think of in relation to networking is to use VLANs. Fair enough, that’s a widely adopted technology for separating networks, but it is also complex and not suited to scale. And when I say scale, I am thinking of massive scale, for those major hosters.
In these days when cloud computing is all over the place, we expect our service providers to provision Infrastructure, Platform, and Software as a Service quite rapidly, interoperating with everything else and without making any changes to our environment. Unfortunately, this is very challenging and not practically realistic with traditional networking.
One additional challenge with VLANs is that when you need to scale your fabric with new virtualization hosts, storage, and networking, you are in some ways limited to one physical location.
A VLAN can’t span multiple logical subnets and will therefore restrict the placement of virtual machines. So how can you get a solution that works for your customers – even when they have existing solutions that they want to move to the cloud?
By using traditional networking and VLANs, you will have to reassign IP addresses when moving to the cloud, since most of the configuration relies on the IP configuration of those machines. This includes policies, applications, services, and everything else that is used for layer 3 network communication. With the limitations of VLANs, the physical location will determine the virtual machine’s IP addresses.
This is where Network Virtualization in Windows Server 2012 Hyper-V comes to the rescue. It removes the challenges related to IaaS adoption for customers, and gives the datacenter administrator an easy and effective way to scale the network fabric for virtual machines.
Network Virtualization will let you run several virtual machines – even with identical IP addresses assigned – without letting them see each other, which sounds like the solution for multi-tenancy.
How does it work?
So, let’s explain network virtualization in a nutshell: you can virtualize any number of networks and run them all on a single physical network fabric, just like you would virtualize your servers on one single physical host, where they’re running in their own isolated environments.
Each virtual network adapter in Hyper-V Network
Virtualization is associated with two
IP addresses.
Customer Address (CA): The CA is the address that’s assigned by the customer based on their subnet, IP range, and so on. This address is only visible to the virtual machine, and possibly to other virtual machines within the same subnet/VM network if you allow routing. However, the important part here is that it’s only visible to the VM and the customer, not the underlying fabric. So think of it as a layer of abstraction. The CA is defined by the customer’s network topology and managed through the tenant concept in VMM 2012 SP1.
Note: A tenant in VMM 2012 SP1 is a user role with additional permissions compared to the Self-Service User role.
Provider Address (PA): The PA is the address that is assigned by the server administrator/service provider/virtualization guru, based on their physical network infrastructure. The PA is only visible on the physical network, where the Hyper-V hosts are exchanging packets.
Summary

- Each IP address assigned to a VM (CA) is mapped to an IP address on the physical host (PA).
- VMs send data packets in the CA space, which are put into an “envelope” with a PA source and destination pair based on the mapping.
- The CA–PA mappings must allow the hosts to differentiate packets for different customer virtual machines.
- The PA is the “ship” and the CA is the “load”.
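To make the CA–PA mapping concrete, here is a minimal sketch using the NetWNV PowerShell cmdlets that ship with Windows Server 2012. VMM 2012 SP1 maintains these policy records for you across all hosts, so treat this purely as an illustration of what a mapping record looks like; the addresses, interface index, VSID, and MAC address below are made-up example values.

    # Register a Provider Address (PA) on the host's physical NIC
    # (interface index 12 is an example value; check Get-NetAdapter)
    New-NetVirtualizationProviderAddress -InterfaceIndex 12 `
        -ProviderAddress "192.168.10.11" -PrefixLength 24

    # Map a Customer Address (CA) to that PA for a given virtual subnet;
    # TranslationMethodEncap means NVGRE encapsulation
    New-NetVirtualizationLookupRecord -CustomerAddress "10.0.0.5" `
        -ProviderAddress "192.168.10.11" -VirtualSubnetID 5001 `
        -MACAddress "00155D010105" -Rule "TranslationMethodEncap"

    # Inspect the resulting CA-PA policy on the host
    Get-NetVirtualizationLookupRecord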
Prior to the final release of Service Pack 1 for System Center, we had two options when it came to network virtualization: IP Rewrite, where the customer IP address (CA) packets were modified on the virtual machine before they were transferred on the physical network fabric, and IP Encapsulation (NVGRE), where the packets were encapsulated with a new header before they were sent on the physical network.
From now on, only IP Encapsulation (NVGRE) is supported.
Network Virtualization with Generic Routing Encapsulation (NVGRE)
If you want to scale and don’t require certain optimizations on your VMs’ virtual NICs (like VMQ), NVGRE is ideal. NVGRE is intended for the majority of datacenters deploying Hyper-V Network Virtualization. The packet is encapsulated inside another packet (think “envelope”), and the header of this new packet has the appropriate source and destination PA IP addresses, in addition to the Virtual Subnet ID, which is stored in the Key field of the GRE header.
Customer networks/VM Networks
Each VM network contains one or more virtual subnets. A VM network is an isolated network where the virtual machines within it can communicate with each other. Virtual subnets implement the layer 3 IP subnet semantics for VMs in the same virtual subnet. A virtual subnet is similar to a VLAN when it comes to broadcasting, in that the virtual subnet is a broadcast domain. Each and every virtual subnet belongs to a Routing Domain ID (RDID, which has a GUID format) that identifies the VM network and is assigned by the virtualization guru with VMM 2012 SP1, and each virtual subnet has a universally unique Virtual Subnet ID (VSID).
The Virtual Subnet ID (included in the GRE header) allows hosts to identify the customer’s virtual machines for any given packet. Since this is a policy-driven solution, the PAs and the CAs on the packets may overlap without any problems. This means that all virtual machines on the same host can share a single PA. That means scalability! The less the switches in your infrastructure need to learn about IP and MAC addresses, the more they smile.
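On a standalone Hyper-V host (outside VMM), you can see how the VSID attaches to a vNIC with the built-in Hyper-V cmdlets. A minimal sketch, assuming a VM named "VM01" and reusing the example VSID 5001 from above:

    # Tag the VM's vNIC with a Virtual Subnet ID; this is the value
    # NVGRE carries in the GRE Key field for every packet the VM sends
    Set-VMNetworkAdapter -VMName "VM01" -VirtualSubnetId 5001

    # Verify the assignment
    Get-VMNetworkAdapter -VMName "VM01" |
        Select-Object VMName, VirtualSubnetId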
How to prepare your networking fabric for network virtualization
VMM is the key to a successful cloud world, and will work as a policy supervisor on the Hyper-V switches, controlling every VM network and VSID. VMM 2012 SP1 introduces several new features and capabilities related to networking in the fabric space, and a close look at the Fabric view will summarize that.
The idea behind these new concepts is to create consistent and identical capabilities for network adapters across multiple hosts in your fabric, using profiles and logical switches. Both profiles and logical switches work as containers for properties or capabilities that you want your network adapters to have.
This means you can create a set of profiles, associate them with logical switches, and apply the same configuration to all of your hosts. Again, think scalability.
Native port profile for uplinks (Connectivity)
This is a profile that specifies which logical networks can connect through a specific physical network adapter. An uplink port profile can be added to a logical switch, and the logical switch can be applied to a network adapter on the host. If you’re interested in NIC teaming in Windows Server 2012, this is where you configure it in VMM: you select ‘Team’ for the uplink mode to enable teaming, and select the load-balancing algorithm and teaming mode.
Native port profile for virtual network adapters (Capabilities)
This is a profile for virtual network adapters connected to an extensible virtual switch in Hyper-V. Everything from offload settings to security settings is managed through this profile. Together with the native port profile for uplinks, you can associate these profiles with a logical switch, ensuring that the VMs connected to the switch have the right capabilities.
Port classification
Provides a global name for identifying different types of virtual network adapter port profiles, and can be used across multiple logical switches. An example is to create one classification named “Bronze”, where the customers who pay you the least amount of bucks get slower performance than the “Gold” customers. Traditionally, this is called an SLA.
Logical Switch
Logical switches bring port profiles, classifications, and switch extensions together, letting you apply a bundle of configuration to multiple hosts at once, streamlined and easy. The logical switch is also responsible for telling the physical network adapter that it should support network virtualization (enabling the network virtualization filter driver on the physical NIC in the parent partition).
How to create a Native Port Profile for Uplinks in VMM 2012 SP1
1. In the Fabric workspace, right-click Native Port Profiles and click ‘Create Native Port Profile’.

2. Assign the profile a name and a description, and make sure ‘Uplink port profile’ is selected. By default, the load-balancing algorithm is HyperVPort, and the teaming mode is set to SwitchIndependent. Choose what you want and click Next.

3. Select the network sites supported by this uplink port profile. These are the logical networks you have available in VMM. Since network virtualization requires PAs to work, you must have an underlying networking fabric to make the magic happen. Once you have selected the right network sites, check the ‘Enable Windows Network Virtualization’ option at the bottom of the screen, click Next, and then Finish.
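For scripted deployments, the same uplink profile can be created with the VMM PowerShell module. A minimal sketch: the network site name 'NYC - Site1' and the profile name are example values, and you should verify the parameter set against Get-Help New-SCNativeUplinkPortProfile in your VMM build.

    # Pick the network site(s) this uplink profile should support
    $site = Get-SCLogicalNetworkDefinition -Name "NYC - Site1"

    # Create the uplink port profile with network virtualization enabled,
    # mirroring the wizard defaults shown above
    New-SCNativeUplinkPortProfile -Name "UplinkProfile-NV" `
        -Description "Uplink profile with WNV enabled" `
        -LogicalNetworkDefinition $site `
        -EnableNetworkVirtualization $true `
        -LBFOLoadBalancingAlgorithm "HyperVPort" `
        -LBFOTeamMode "SwitchIndependent"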
We have now created an uplink profile that will enable network virtualization on our physical network adapters. The next thing we’ll do is create a native port profile for virtual network adapters, to ensure we configure the proper capabilities for our vNICs.
How to create a Native Port Profile for Virtual Network Adapters
1. Navigate to the networking space in the Fabric workspace, select Native Port Profiles, right-click, and create a native port profile.

2. This time, make sure ‘Virtual network adapter port profile’ is selected, assign the profile a name and a description, and click Next.

3. Select the offload settings for the virtual network adapter port profile. VMQ, IPsec task offloading, and SR-IOV are available. Select the required configuration and click Next.

4. Next, specify the security settings for this profile. All of these capabilities are a result of the work that has been done in the extensible virtual switch in Hyper-V. Click Next once you’re done.

5. Select the bandwidth settings for the profile. QoS is another feature that’s available directly on the vNICs in Hyper-V, and you can specify those settings here. Click Next once you’re done, and then Finish.
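The PowerShell equivalent, again only as a sketch: the profile name and settings below are examples, and the exact parameter set varies between VMM releases, so check Get-Help New-SCVirtualNetworkAdapterNativePortProfile first.

    # Create a vNIC port profile with VMQ and IPsec offload enabled,
    # SR-IOV disabled, and a couple of the security settings from the wizard
    New-SCVirtualNetworkAdapterNativePortProfile -Name "vNIC-Gold" `
        -Description "High performance vNIC profile" `
        -EnableVmq $true -EnableIPsecOffload $true -EnableIov $false `
        -AllowMacAddressSpoofing $false -EnableDhcpGuard $true `
        -MinimumBandwidthWeight 10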
The last thing we’ll do before we go ahead with our logical switch is to create a port classification.
How to create a Port Classification
1. In the Fabric workspace, right-click Port Classification and create a new port classification.

2. Assign a name. Based on the previous steps where you created your native port profile for virtual network adapters, name the classification something meaningful to you, as you will use this classification when you’re creating the logical switch. Type a description and click OK.
Also note that VMM ships with a bunch of pre-configured classifications for you to use in your environment. Take a look at them to get a better understanding of their intentions.
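As a quick sketch of the same steps in PowerShell (the classification name is an example):

    # List the classifications VMM ships out of the box
    Get-SCPortClassification

    # Create a custom classification to pair with the vNIC profile above
    New-SCPortClassification -Name "Gold" `
        -Description "High bandwidth for premium tenants"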
We have now configured the required prerequisites for our
logical switch.
How to create a Logical Switch
1. Right-click Logical Switches in the Fabric workspace, and create a logical switch.

2. Enter a name and a description for the logical switch. Here you can also enable SR-IOV, since this is a hardware feature on the NIC. Make sure you have adapters that support SR-IOV prior to this. Click Next.

3. Choose the extensions you want to use with this logical switch. If you haven’t added any switch extensions (add-ons to VMM), you will only have the default switch extensions provided by Hyper-V in this step. Click Next once you’re done.

4. Select the uplink mode (teaming or no teaming), and add the uplink port profile you created earlier. As you can see, when you created the uplink port profile, you selected the network sites where the profile should be available; if the host group already has access to this logical network, it will be visible here. Click Next once you’re done.

5. Specify the port classification for virtual ports that are part of this logical switch. In this step, add the virtual port profile you created earlier along with the port classification.

6. Click Next – and Finish.
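The scripted version, sketched under the same caveats as before (example names throughout; verify parameters in your environment):

    # Create the logical switch itself
    $ls = New-SCLogicalSwitch -Name "LogicalSwitch-NV" `
        -Description "Switch with network virtualization" -EnableSriov $false

    # Bind the uplink port profile to the switch
    $upp = Get-SCNativeUplinkPortProfile -Name "UplinkProfile-NV"
    New-SCUplinkPortProfileSet -Name "UplinkProfile-NV_Set" `
        -LogicalSwitch $ls -NativeUplinkPortProfile $upp

    # Bind the vNIC port profile and its classification to the switch
    $pc  = Get-SCPortClassification -Name "Gold"
    $vpp = Get-SCVirtualNetworkAdapterNativePortProfile -Name "vNIC-Gold"
    New-SCVirtualNetworkAdapterPortProfileSet -Name "Gold_Set" `
        -LogicalSwitch $ls -PortClassification $pc `
        -VirtualNetworkAdapterNativePortProfile $vpp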
The last thing to do is to assign the logical switch to a physical network adapter (or team) on your hosts. Go to a host group, select a host, right-click, and select Properties. In the Virtual Switches view, click ‘New Virtual Switch’, find the logical switch you created, and select your uplink profile.
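This final step can also be scripted. A minimal sketch, assuming a host named 'hyperv01', a physical adapter named 'Ethernet 2', and the example names from the earlier snippets:

    # Find the host and the physical adapter to bind the switch to
    $vmHost  = Get-SCVMHost -ComputerName "hyperv01"
    $adapter = Get-SCVMHostNetworkAdapter -VMHost $vmHost |
        Where-Object { $_.ConnectionName -eq "Ethernet 2" }

    # Associate the adapter with the uplink port profile set
    $upps = Get-SCUplinkPortProfileSet -Name "UplinkProfile-NV_Set"
    Set-SCVMHostNetworkAdapter -VMHostNetworkAdapter $adapter `
        -UplinkPortProfileSet $upps

    # Deploy the logical switch to the host, creating the virtual switch
    $ls = Get-SCLogicalSwitch -Name "LogicalSwitch-NV"
    New-SCVirtualNetwork -VMHost $vmHost -VMHostNetworkAdapters $adapter `
        -LogicalSwitch $ls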