Sunday, January 27, 2013

NIC 2013 - Presentations available for download

The Nordic Infrastructure Conference took place in Oslo this week, and I had 3 sessions covering private cloud, VM mobility and Disaster Recovery.


The first session was about System Center 2012 SP1 and how to configure and deploy the private cloud, covering all the components, with demos focusing on Virtual Machine Manager, App Controller, Operations Manager, Orchestrator and Service Manager.


The second session showed all the different migration scenarios, how to configure security, and how to prepare for a mobile, flexible and dynamic infrastructure with Hyper-V in Windows Server 2012.


The third session gave an overview of Hyper-V Replica, with plenty of demos showing the different scenarios, workflows and configurations.

Feel free to download and use these presentations.

Tuesday, January 15, 2013

Hyper-V Replica Broker: Cluster network name resource failed to create its associated computer object in domain


Cluster network name resource failed to create its associated computer object in domain...
 
As the title says, this is a permissions issue in Active Directory.
A customer of mine was deploying Hyper-V Replica in their datacenter today, and everything seemed OK during creation.
However, after checking the roles afterwards, the Hyper-V Replica Broker was having some errors.
A closer look at the critical events for this source showed that the cluster account was not able to create child objects in the correct container in Active Directory.

The text for the associated error code is: A constraint violation occurred.

When you enable the Hyper-V Replica Broker role in a Failover Cluster, you must specify a name for the role as well as an IP address. This is because the Hyper-V Replica Broker is a highly available role, and the replica servers will target this object during replication. It is just like any other active/passive cluster role.
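For reference, the broker role can also be created from PowerShell instead of Failover Cluster Manager. A minimal sketch, assuming an example role name and IP address, and with the resource type name written from memory (verify it with Get-ClusterResourceType on your cluster):

```powershell
# Create the highly available role (client access point) for the broker.
# "HVR-Broker" and the static IP address are example values.
Add-ClusterServerRole -Name "HVR-Broker" -StaticAddress 10.0.0.50

# Add the replication broker resource to the role and make it depend on the network name.
Add-ClusterResource -Name "Virtual Machine Replication Broker" `
    -ResourceType "Virtual Machine Replication Broker" -Group "HVR-Broker"
Add-ClusterResourceDependency "Virtual Machine Replication Broker" "HVR-Broker"

# Bring the role online.
Start-ClusterGroup "HVR-Broker"
```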

The Cluster Name Object (hyper-v_cluster_name$) is responsible for creating the computer object for the broker, and if the CNO does not have the required permissions to create child objects in the same container where it lives, you will get this error.

Talk with your domain administrator to sort this out.
Normally the domain administrator will delegate control in Active Directory so that your CNO can create child (computer) objects, as in the sketch below.
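A hedged example of what that delegation can look like from the command line, assuming an example OU and using the cluster account from above (your domain administrator may prefer the Delegation of Control wizard instead):

```powershell
# Grant the cluster name object (CNO) the right to create child computer objects
# in the OU where the cluster's own computer account lives.
# OU path and domain are example values.
dsacls "OU=HyperV,DC=contoso,DC=com" /G "CONTOSO\hyper-v_cluster_name$:CC;computer"
```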

After we fixed this, everything was in order and the Hyper-V Replica Broker role was Running, healthy and smiling.

Friday, January 4, 2013

The Network Virtualization Guide with SC 2012 - VMM SP1

Let’s forget about my previous post on this subject, as things have changed in the final bits of SP1 for System Center 2012.

Let’s start off with some clarifications:

- IP rewrite is not supported in VMM 2012 SP1.

- NVGRE is the default and only option for network virtualization with VMM 2012 SP1.

This blog post will cover the following:

- What the heck is network virtualization – and why do we need it?

- How does it work?

- How to prepare our networking fabric for network virtualization


What the heck is network virtualization – and why do we need it?

Not everyone needs network virtualization, but the industry itself needs a better way to meet the requirements for a secure multi-tenant infrastructure: isolating tenants from each other without purchasing all the network infrastructure in the world.

As you may be aware, Windows Server 2012 is truly a cloud OS. It’s more than just a server. It’s the private (and public) cloud enabler and the most important ingredient for designing a multi-tenant infrastructure at low cost, in conjunction with other pieces, like storage for example, but that’s another story (SMB, Storage Spaces/Pools, resource groups and so on).

I’ve been working a lot with hosters over the last few years, and a common challenge is a secure and scalable solution for multi-tenancy. The first thing that comes to mind in relation to networking is VLANs. Fair enough, that’s a widely adopted technology for separating networks, but it is also complex and not well suited to scale. And when I say scale, I mean serious scale, for those major hosters.
These days, when cloud computing is all over the place, we expect our service providers to provision infrastructure, platform and software as a service quite rapidly, working together with everything else and without requiring any changes to our environment. Unfortunately, this is very challenging and not practically realistic with traditional networking.
An additional challenge with VLANs is that when you need to scale your fabric with new virtualization hosts, storage and networking, you are in some ways limited to one physical location.
A VLAN can’t span multiple logical subnets and will therefore restrict the placement of virtual machines. So how can you deliver a solution that works for your customers, even when they have existing environments that they want to move to the cloud?
With traditional networking and VLANs you have to reassign IP addresses when moving to the cloud, since most of the configuration on those machines relies on their IP configuration. This includes policies, applications, services and everything else that depends on layer 3 network communication. With the limitations of VLANs, the physical location ends up determining the virtual machines’ IP addresses.

This is where Network Virtualization in Windows Server 2012 Hyper-V comes to the rescue.
It removes the challenges related to IaaS adoption for customers, and gives the datacenter administrator an easy and effective way to scale the network fabric for virtual machines.

Network Virtualization lets you run several virtual machines, even with identical IP addresses assigned, without letting them see each other, which sounds like the solution for multi-tenancy.

How does it work?

So, let’s explain network virtualization in a nutshell:

You can virtualize any network and run them all on a single physical network fabric.

Just like you would virtualize your servers on one single physical host, where they’re running in their own isolated environment.

Each virtual network adapter in Hyper-V Network Virtualization is associated with two IP addresses.

Customer Address (CA): The CA is the address assigned by the customer, based on their own subnets, IP ranges and so on. This address is only visible to the virtual machine and potentially other virtual machines within the same subnet/VM network if you allow routing. The important part is that it is only visible to the VM and the customer, not to the underlying fabric, so think of it as a layer of abstraction. The CA is maintained through the customer’s network topology via the tenant concept in VMM 2012 SP1.

Note: A tenant in VMM 2012 SP1 is a user role with additional permissions compared to the Self-Service User role.

Provider Address (PA): The PA is the address assigned by the server administrator/service provider/virtualization guru, based on the physical network infrastructure. The PA is only visible on the physical network, where the Hyper-V hosts exchange packets.

Summary

- Each IP address assigned to a VM (CA) is mapped to an IP address on the physical host (PA).

- VMs send data packets in the CA space, which are put into an “envelope” with a PA source and destination pair based on the mapping.

- The CA-PA mappings must allow the hosts to differentiate packets for different customer virtual machines.

- The PA is the “ship” and the CA is the “load”.

Prior to the final release of Service Pack 1 for System Center, we had two options for network virtualization:

IP rewrite, where the customer address (CA) in each packet was rewritten before the packet was transferred on the physical network fabric, and IP encapsulation (NVGRE), where the packets were encapsulated with a new header before they were sent on the physical network.

From now on, only IP encapsulation (NVGRE) is supported.

Network Virtualization with Generic Routing Encapsulation (NVGRE)

If you want to scale, and don’t require offload optimizations on your VMs’ virtual NICs (like VMQ and friends), NVGRE is ideal. NVGRE is intended for the majority of datacenters deploying Hyper-V Network Virtualization. The packet is encapsulated inside another packet (think “envelope”), and the header of this new packet has the appropriate source and destination PA IP addresses, in addition to the Virtual Subnet ID, which is stored in the Key field of the GRE header.
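To make the “envelope” concrete: under the hood this is just a policy table on each host, which VMM 2012 SP1 maintains for you. Purely as an illustration, here is roughly what one CA-to-PA mapping looks like when set by hand with the native Windows Server 2012 cmdlets (the addresses, MAC, VM name and VSID are made-up example values):

```powershell
# Register the provider address (PA) this host owns on the physical fabric.
# Interface index, address and prefix length are example values.
New-NetVirtualizationProviderAddress -ProviderAddress "192.168.10.11" `
    -InterfaceIndex 12 -PrefixLength 24

# Map a customer address (CA) to that PA. The VirtualSubnetID is what ends up
# in the Key field of the GRE header when the packet is encapsulated.
New-NetVirtualizationLookupRecord -CustomerAddress "10.0.0.5" `
    -ProviderAddress "192.168.10.11" -VirtualSubnetID 5001 `
    -MACAddress "00155D001001" -VMName "TenantVM01" -Rule "TranslationMethodEncap"
```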

 



 

Customer network/VM Networks

Each VM network contains one or more virtual subnets. A VM network is an isolated network where the virtual machines within it can communicate with each other. Virtual subnets implement the layer 3 IP subnet semantics for VMs in the same virtual subnet. A virtual subnet is similar to a VLAN when it comes to broadcasting: the virtual subnet is a broadcast domain. Every virtual subnet belongs to a routing domain, identified by a Routing Domain ID (RDID, which has a GUID format) that identifies the VM network and is assigned by the virtualization guru through VMM 2012 SP1, and each virtual subnet has a unique Virtual Subnet ID (VSID).

The Virtual Subnet ID (included in the GRE header) allows hosts to identify the customer virtual machine for any given packet. And since this is a policy-driven solution, the CAs on the packets may overlap between customers without any problems. This also means that all virtual machines on the same host can share a single PA, and that means scalability!
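Just to illustrate how the RDID and VSID fit together outside of VMM (again, VMM maintains this policy for you; the GUID, VSID and prefixes below are made-up examples):

```powershell
# A customer route ties a virtual subnet (VSID) to its routing domain (RDID),
# which is what lets subnets within the same VM network route to each other.
New-NetVirtualizationCustomerRoute -RoutingDomainID "{11111111-2222-3333-4444-555555555555}" `
    -VirtualSubnetID 5001 -DestinationPrefix "10.0.0.0/24" -NextHop "0.0.0.0"

# And since the policy is kept per host, many lookup records can point to the
# same provider address - that is the "single PA per host" scalability point.
Get-NetVirtualizationLookupRecord | Group-Object ProviderAddress
```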

The less the switches in your infrastructure need to learn about IP and MAC addresses, the more they smile.

How to prepare your networking fabric for network virtualization

A good reason to buy System Center 2012 SP1 is management of network virtualization. If you’re not using Virtual Machine Manager, you must brush off your PowerShell ninja capabilities, and it’s really painful to maintain your environment when it scales.

VMM is the key to a successful cloud world, and works as a policy supervisor for the Hyper-V switches, controlling every VM network and VSID.

If you are not familiar with the networking fabric in VMM, I suggest you read this blog post http://kristiannese.blogspot.no/2011/05/create-networks-with-vmm-2012.html for guidance on how to set up Logical Networks, Virtual Networks, IP Pools and more. This is related to the PA (Provider Addresses), which are the IP addresses that the Hyper-V hosts will be able to see and use.

VMM 2012 SP1 introduces several new features and capabilities related to networking in the fabric space.

A close look at the Fabric view will confirm this.

The idea behind these new concepts is to create consistent and identical capabilities for network adapters across multiple hosts in your fabric, using profiles and logical switches. Both profiles and logical switches work as containers for the properties or capabilities that you want your network adapters to have.

This means you can create a set of profiles, associate them with logical switches and apply the same configuration to all of your hosts. Again, think scalability.

Native port profile for uplinks (Connectivity)

This is a profile that specifies which logical networks can connect through a specific physical network adapter. An uplink port profile can be added to a logical switch, and the logical switch can be applied to a network adapter on the host. If you’re interested in NIC teaming in Windows Server 2012, this is where you configure it in VMM: you select ‘Team’ as the uplink mode to enable teaming, and pick the load-balancing algorithm and teaming mode.
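The teaming mode and load-balancing algorithm you pick in the uplink profile are the same knobs the native Windows Server 2012 teaming cmdlet exposes. Purely for reference, this is what the equivalent team would look like if you built it by hand on a host (adapter names are examples; when the host is managed by VMM, let the logical switch create the team for you):

```powershell
# Switch-independent team with the Hyper-V port load-balancing algorithm,
# matching the defaults an uplink port profile suggests.
New-NetLbfoTeam -Name "Team1" -TeamMembers "NIC1","NIC2" `
    -TeamingMode SwitchIndependent -LoadBalancingAlgorithm HyperVPort
```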

Native port profile for virtual network adapters (Capabilities)

This is a profile for virtual network adapters connected to an extensible virtual switch in Hyper-V. Everything from offload settings to security settings is managed through this profile. Together with the native port profile for uplinks, you can associate these profiles with a logical switch, ensuring that the VMs connected to the switch get the right capabilities.

Port classification

Provides a global name for identifying different types of virtual network adapter port profiles, and can be used across multiple logical switches. An example is to create one classification named “Bronze”, where the customers who pay you the least amount of bucks get lower performance than the “Gold” customers. Traditionally known as an SLA.

Logical Switch

Logical switches bring port profiles, classifications and switch extensions together, so you can apply a bunch of configuration to multiple hosts at once, streamlined and easy. The logical switch is also responsible for telling the physical network adapter that it should support network virtualization (enabling the network virtualization filter driver on the physical NIC in the parent partition).

How to create a Native Port Profile for Uplinks in VMM 2012 SP1


1. In the Fabric workspace, right-click Native Port Profiles and click ‘Create Native Port Profile’.

2. Assign the profile a name and a description, and make sure ‘Uplink port profile’ is selected. By default, the load-balancing algorithm is HyperVPort and the teaming mode is SwitchIndependent. Choose what you want and click next.

3. Select the network sites supported by this uplink port profile. These are the logical networks you have available in VMM. Since network virtualization requires PAs to work, you must have an underlying networking fabric to make the magic happen. Once you have selected the right network sites, check the ‘Enable Windows Network Virtualization’ option at the bottom of the screen, click next, and finish (a scripted sketch follows below).
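These steps can also be scripted with the VMM PowerShell module. A rough sketch, with the network site name as an example value and the cmdlet/parameter names written from memory, so verify them with Get-Help in your VMM 2012 SP1 console before relying on them:

```powershell
# Grab the network site (logical network definition) that carries our provider addresses.
$site = Get-SCLogicalNetworkDefinition -Name "PA-Site"

# Create the uplink port profile with the teaming defaults and network virtualization enabled.
New-SCNativeUplinkPortProfile -Name "Uplink-NV" -Description "Uplink with WNV enabled" `
    -LogicalNetworkDefinition $site `
    -LBFOLoadBalancingAlgorithm "HyperVPort" -LBFOTeamMode "SwitchIndependent" `
    -EnableNetworkVirtualization $true
```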

We have now created an uplink port profile that will enable network virtualization on our physical network adapters. The next thing we’ll do is create a native port profile for virtual network adapters, to make sure we configure the proper capabilities for our vNICs.

 

How to create a Native Port Profile for Virtual Network Adapters 

1. Navigate to the networking space in the Fabric workspace, select Native Port Profiles, right-click, and create a native port profile.

2. This time, make sure ‘Virtual network adapter port profile’ is selected, assign the profile a name and a description, and click next.

3. Select the offload settings for the virtual network adapter port profile. VMQ, IPsec task offloading and SR-IOV are available. Select the required configuration and click next.

4. Next, specify the security settings for this profile. All of these capabilities are a result of the work that has been done in the extensible virtual switch in Hyper-V. Click next once you’re done.

5. Select the bandwidth settings for the profile. QoS is another feature that’s available directly on the vNICs in Hyper-V, and you can specify those settings here. Click next once you’re done, and finish (again, a scripted sketch follows below).
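The same profile can be sketched in PowerShell. Cmdlet and parameter names below are written from memory for the VMM 2012 SP1 module and the values are examples, so treat this as a starting point and check Get-Help first:

```powershell
# Virtual network adapter port profile with VMQ and IPsec offload allowed,
# SR-IOV left off, and a modest minimum bandwidth weight for QoS.
New-SCVirtualNetworkAdapterNativePortProfile -Name "Tenant-vNIC" `
    -EnableVmq $true -EnableIPsecOffload $true -EnableIov $false `
    -MinimumBandwidthWeight 10
```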

The last thing we’ll do before we go ahead with our logical switch is to create a port classification.

How to create a Port Classification
 
1. In the Fabric workspace, right-click Port Classifications and create a new port classification.

2. Assign a name. Based on the previous steps where you created your native port profile for virtual network adapters, name the classification something meaningful to you, as you will use this classification when you create the logical switch. Type a description and click OK.

Also note that VMM ships with a bunch of pre-configured classifications for you to use in your environment. Take a look at them to get a better understanding of their intended use.
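This is also a one-liner in the VMM module, if you prefer scripting it (the classification name and description are examples; cmdlet names as I recall them from VMM 2012 SP1):

```powershell
# See the classifications VMM ships with out of the box.
Get-SCPortClassification

# Create a custom classification to pair with a virtual adapter port profile in the logical switch.
New-SCPortClassification -Name "Gold" -Description "High performance virtual adapters"
```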

We have now configured the required prerequisites for our logical switch.

How to create a Logical Switch


1. Right-click Logical Switches in the Fabric workspace, and create a logical switch.

2. Enter a name and a description for the logical switch. Here you can also enable SR-IOV, since this is a hardware feature of the NIC; make sure you have adapters that support SR-IOV before doing so. Click next.

3. Choose the extensions you want to use with this logical switch. If you haven’t added any switch extensions (add-ons to VMM), you will only have the default switch extensions provided by Hyper-V in this step. Click next once you’re done.

4. Select the uplink mode (teaming or no teaming), and add the uplink port profile you created earlier. Click next once you’re done. As you can see, when you created the uplink port profile you selected the network sites where the profile should be available; if the host group already has access to this logical network, it will be visible here.

5. Specify the port classifications for the virtual ports that are part of this logical switch. In this step, add the virtual network adapter port profile you created earlier along with the port classification.

6. Click next, and finish.

 

The last thing to do is to assign the logical switch to a physical network adapter (or team) on your hosts.

Go to a host group, select a host, right-click and select properties.

In the Virtual Switches view, click ‘New Virtual Switch’, find the logical switch you created and select your uplink port profile. A quick way to verify the result from the host itself is shown below.
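Once the job completes, a quick sanity check from the host confirms that the switch exists and that the uplink profile really enabled the network virtualization filter driver on the physical adapter:

```powershell
# The logical switch shows up as an ordinary Hyper-V virtual switch on the host.
Get-VMSwitch

# 'Enable Windows Network Virtualization' in the uplink profile should have turned on
# the ms_netwnv binding for the teamed/physical adapter.
Get-NetAdapterBinding -ComponentID ms_netwnv | Where-Object { $_.Enabled }
```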

 

Wednesday, January 2, 2013

New job - CTO at Lumagate

"New Year, new wife, new job and new shoes. "

To all the readers of Virtualization and some coffee: Happy new year!

I appreciate that you read my blog posts and get in touch with me, whether it’s just a quick question or you want to discuss something in more detail.

This is the year those of us working in the Microsoft cloud space have been waiting for.
Windows Server 2012 is up and running, and now SP1 for System Center is knocking on our door.
Maybe we can take a short break soon?

Fortunately, we can’t. It’s time to get down to business and execute all the promises we made prior to SP1.

This is the ship – and we are sailing right now.

And that, my friends, is the main reason why I’m starting this year in a new job.

I have joined Lumagate, a Nordic company focusing entirely on System Center, as Chief Technology Officer.

As CTO, I will drive our dedication to System Center further and have a wide diversity of responsibilities: business development, large projects with key customers, pre-sales, POCs, workshops and internal training. I will have the best and brightest consultants under my wings, ready to deploy world-class System Center solutions addressing each and every challenge related to datacenter and cloud management, and client management too.

Last year (2012) I worked for Microsoft as an Infrastructure Ranger. I will continue to work closely with the local Microsoft office, but now as a partner and vTSP.

I am really looking forward to starting this job and staying as technical as possible, both for my own sake and for the community.

Speaking of community: NIC 2013 is about to start, and I will have three sessions.
I am currently working on a freshly installed lab, running several Hyper-V clusters and System Center 2012 SP1.
While working on the setups, I thought I should document something useful as I am preparing a ‘Configuring and Deploy the Microsoft Private Cloud’ session for NIC.

The next blog post will focus on fabric configuration with Microsoft technologies, both pre- and post-Windows Server 2012.