Real-world example of using the new capabilities in Windows Server 2012, Hyper-V, and System Center 2012 SP1.
Let me start this blog post by saying how glad I am that we are finally here, with Windows Server 2012 and System Center 2012 SP1.
The wait has been tough, and many customers have been hesitant to implement Hyper-V without proper management. However, my experience is that many customers are now moving away from VMware and jumping over to Hyper-V and System Center. V2V is probably my closest friend these days, together with a couple of Logical Switches. More on that later in this blog post.
So in this example, I would like to tell you about an enterprise customer running major datacenters on VMware with vCenter. They were doing it all in the traditional way, using Fibre Channel from their hosts, connected to some heavy, expensive and noisy storage.
So how did we present a better solution for them, more suited for the future, using technology from Microsoft?
The customer wanted to utilize their investments better, and do things more economically and cost-effectively, without losing any performance, functionality, availability or any of the other factors you put into your SLA.
Key elements:
- Windows Server 2012
- Hyper-V
  - SMB 3.0
  - Scale-Out File Server role
  - NIC Teaming
  - Network Virtualization
  - Failover Clustering
- System Center 2012 SP1
  - Virtual Machine Manager
  - Operations Manager
  - Orchestrator
- Windows Azure Services for Windows Server (Katal)
  - SPF (Service Provider Foundation, delivered with SC 2012 SP1 Orchestrator)
Since this is a large environment, designed to scale, the first thing we did was to install Virtual Machine Manager.
In a solution like this, VMM is key to streamlining the configuration of Hyper-V hosts and managing the fabric (pooled infrastructure resources). Since this would be a very important component, we installed VMM in a failover cluster as a highly available role.
- Dedicated cluster for Virtual Machine Manager
- Two Windows Server 2012 nodes
- Configuration of Distributed Key Management (sketched below)
- Connected to a dedicated SQL cluster for System Center
- VMM console installed on a dedicated management server
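Distributed Key Management stores VMM's encryption keys in Active Directory instead of on the local machine, which is what makes a highly available VMM installation possible. As a minimal sketch, assuming a domain named contoso.com and a container name of our own choosing, the container can be pre-created like this:

```powershell
# Minimal sketch: pre-creating the Distributed Key Management (DKM)
# container in Active Directory before running VMM setup.
# "VMMDKM" and the domain path are example values, not the customer's.
Import-Module ActiveDirectory

New-ADObject -Name "VMMDKM" -Type Container -Path "DC=contoso,DC=com"

# During HA VMM setup, point the wizard to: CN=VMMDKM,DC=contoso,DC=com
```

The account running VMM setup needs Full Control on this container, so the setup can write the keys there.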
With this baseline
in place, we started to prepare the fabric.
Instead of the traditional way of delivering shared storage to the hosts, SMB 3.0 was introduced as an alternative. The customer was interested in seeing the performance of this, and the ability to manage it from Virtual Machine Manager. In a test environment, we set up the hosts with multiple storage options.
Test environment:
- Two Hyper-V hosts in a cluster managed by Virtual Machine Manager
- Both hosts connected to shared storage using:
  - Fibre Channel directly to their SAN
  - 2 x 10 GbE NICs in a NIC team, using dedicated virtual network adapters for SMB 3.0 traffic, accessing a Scale-Out File Server cluster
Overview
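On the file server side, the Scale-Out File Server role and a continuously available share are the moving parts. A rough sketch, assuming a two-node file server cluster with a Cluster Shared Volume already in place (server name, path and accounts are examples, not the customer's values):

```powershell
# Rough sketch of the Scale-Out File Server side. Run on a cluster node.
Install-WindowsFeature File-Services, FS-FileServer, Failover-Clustering

# Add the Scale-Out File Server role to the existing failover cluster
Add-ClusterScaleOutFileServerRole -Name "SOFS01"

# Create a share for Hyper-V workloads on a Cluster Shared Volume.
# NTFS permissions must also grant the Hyper-V hosts' computer accounts access.
New-Item -Path "C:\ClusterStorage\Volume1\Shares\VMs" -ItemType Directory
New-SmbShare -Name "VMs" -Path "C:\ClusterStorage\Volume1\Shares\VMs" `
    -FullAccess "contoso\HV01$", "contoso\HV02$", "contoso\Hyper-V Admins"
```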
After testing this, the results were clear:
1. The customer had gained the same performance as with Fibre Channel.
2. In addition, they had now simplified management by using file shares instead of dedicating LUNs to their clusters, leading to better utilization.
3. Moreover, with better utilization they were able to scale their clusters in a whole new way.
4. Calculating this for production, they were able to reduce their costs significantly by using Ethernet infrastructure instead of Fibre Channel. This was key, since they could leverage Ethernet and move away from HBA adapters on their hosts.
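As a quick sanity check during the tests, the SMB connections and Multichannel state can be inspected from a Hyper-V host with the built-in SMB cmdlets (run after placing a VM on the share from the sketch above):

```powershell
# Verify from a Hyper-V host that SMB 3.0 and Multichannel are in play
Get-SmbConnection                 # shows the SMB dialect (expect 3.0) per share
Get-SmbMultichannelConnection     # shows which NICs/channels carry the traffic
```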
The networking part was probably the most interesting in this scenario, because if you think about it, a Hyper-V cluster configuration is all about networking.
And when using NIC teaming, QoS, network virtualization, SMB 3.0 and more, it's important to pay attention to the goal of the design as well as the infrastructure in general.
Every host had 2 x 10 GbE modules installed, and the customer wanted load balancing and failover on every network.
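In production this was applied through logical switches and port profiles in VMM (see the links further down), but conceptually the host-side configuration looks roughly like this. A sketch assuming the two 10 GbE adapters are named "NIC1" and "NIC2"; the switch name, virtual adapter names and QoS weight are example values:

```powershell
# Team the two 10 GbE adapters (names are example values)
New-NetLbfoTeam -Name "Team10GbE" -TeamMembers "NIC1","NIC2" `
    -TeamingMode SwitchIndependent -LoadBalancingAlgorithm HyperVPort

# Hyper-V switch on top of the team, with weight-based QoS
New-VMSwitch -Name "ConvergedSwitch" -NetAdapterName "Team10GbE" `
    -MinimumBandwidthMode Weight -AllowManagementOS $false

# One dedicated virtual adapter in the management OS per network
foreach ($name in "Management","LiveMigration","SMB1","SMB2","Cluster") {
    Add-VMNetworkAdapter -ManagementOS -Name $name -SwitchName "ConvergedSwitch"
}

# Example QoS weight so Live Migration cannot starve the other networks
Set-VMNetworkAdapter -ManagementOS -Name "LiveMigration" -MinimumBandwidthWeight 40
```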
We designed the following logical networks in Virtual Machine Manager:
- Management
- Live Migration
- Guests
- SMB1 (on the Scale-Out File Server cluster nodes, we made the SMB networks available for clients and registered the IP addresses in DNS; this is required if you want to use Multichannel)
- SMB2
- Cluster
Then, we created network sites and IP subnets with associated VLANs.
For each logical network, we created a VM Network associated with it.
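In VMM PowerShell terms, one of these logical networks with a network site and its VM Network could be sketched roughly like this (the host group name, subnet and VLAN ID are example values, not the customer's):

```powershell
# Hedged VMM sketch: a logical network, a network site with subnet/VLAN,
# and an associated VM Network. All names and values are examples.
$hostGroup  = Get-SCVMHostGroup -Name "Production"
$ln         = New-SCLogicalNetwork -Name "LiveMigration"
$subnetVlan = New-SCSubnetVLan -Subnet "10.10.20.0/24" -VLanID 20

New-SCLogicalNetworkDefinition -Name "LiveMigration - Site1" `
    -LogicalNetwork $ln -VMHostGroup $hostGroup -SubnetVLan $subnetVlan

New-SCVMNetwork -Name "LiveMigration VM Network" -LogicalNetwork $ln `
    -IsolationType "NoIsolation"
```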
For more information about NIC teaming in VMM and Network Virtualization, check these blog posts:
NIC Teaming with VMM: http://kristiannese.blogspot.no/2013/02/configure-nic-teaming-and-qos-with-vmm.html
Network Virtualization Guide: http://kristiannese.blogspot.no/2013/01/the-network-virtualization-guide-with.html
We additionally prepared the fabric by integrating with PXE and WSUS, to secure life cycle management of the resources in the fabric.
All set. We started to deploy Hyper-V hosts, and streamlined the configuration by putting them into the right host groups, applying logical switches and presenting file shares to them.
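Presenting a file share to a cluster is close to a one-liner once the Scale-Out File Server is under VMM management. A hedged sketch, reusing the example names from earlier in this post:

```powershell
# Hedged sketch: presenting the SMB 3.0 share to a Hyper-V cluster in VMM.
# "HVCluster01" and the "VMs" share are example names.
$cluster = Get-SCVMHostCluster -Name "HVCluster01"
$share   = Get-SCStorageFileShare -Name "VMs"
Register-SCStorageFileShare -StorageFileShare $share -VMHostCluster $cluster
```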
By taking a couple of steps back, I can clearly see that VMM is an absolutely essential framework for a highly available datacenter solution today. Almost every step was performed from the VMM console, and this was highly appreciated by the customer.
The next step was to deploy virtual machines and leverage the flexibility of templates, profiles and services.
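To illustrate that flexibility, deploying a VM from a template can be scripted roughly like this in VMM PowerShell. The template, host and VM names here are hypothetical, and the placement flow is a sketch of one possible pattern:

```powershell
# Hedged sketch: deploying a VM from a VMM template. All names are examples.
$template = Get-SCVMTemplate -Name "WS2012-Standard"
$vmHost   = Get-SCVMHost -ComputerName "HV01.contoso.com"

# Build a VM configuration from the template and pin its placement
$config = New-SCVMConfiguration -VMTemplate $template -Name "APP01"
Set-SCVMConfiguration -VMConfiguration $config -VMHost $vmHost

New-SCVirtualMachine -Name "APP01" -VMConfiguration $config
```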
OK, we had a private cloud infrastructure up and running, but there was still some work to do.
Migration from VMware to Hyper-V :)
OK, if you want to perform this operation in bulk, converting many virtual machines at once, then you must either use your PowerShell ninja skills combined with Orchestrator, or some secret tool from Microsoft that also involves Veeam.
But if you want to take this slowly while doing other things simultaneously, then VMM is your friend.
Things to be aware of:
- Make sure the networks that your VMware virtual machines are connected to are available on the Hyper-V hosts.
- Make sure you have a supported VMware infrastructure (5.1 is the only version that is supported, but it might work if you are using 5.0 as well).
- Uninstall VMware Tools manually on the VMs you will convert.
- Power off the VMs afterwards.
- Add vCenter and then the VMware ESX hosts/clusters in VMM.
- Run a virtual-to-virtual (V2V) conversion in Virtual Machine Manager (sketched after this list).
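A hedged sketch of what those last two steps look like in VMM PowerShell; the server names, Run As account and paths are example values:

```powershell
# Add vCenter, then convert a VMware VM with V2V. All names are examples.
$runAs = Get-SCRunAsAccount -Name "vCenterAdmin"   # pre-created Run As account
Add-SCVirtualizationManager -ComputerName "vcenter.contoso.com" -Credential $runAs

# Pick a powered-off source VM (VMware Tools already uninstalled) and convert it
$sourceVM = Get-SCVirtualMachine -Name "vmware-vm01"
$vmHost   = Get-SCVMHost -ComputerName "HV01.contoso.com"
New-SCV2V -VM $sourceVM -VMHost $vmHost -Name "vmware-vm01" -Path "\\SOFS01\VMs"
```

Wrapping a loop around the conversion part is essentially what the Orchestrator bulk approach mentioned above boils down to.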
This is an ongoing process and will require some downtime for the VMs. Converting the VHDs to dynamic VHDX as an extra step can also be evaluated.
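That extra step could be done with the Hyper-V PowerShell module once the VM is powered off; a minimal sketch with example paths:

```powershell
# Minimal sketch: convert a migrated VHD to a dynamic VHDX (VM must be off).
# Paths are example values; remember to repoint the VM to the new disk.
Convert-VHD -Path "\\SOFS01\VMs\vmware-vm01\disk0.vhd" `
    -DestinationPath "\\SOFS01\VMs\vmware-vm01\disk0.vhdx" -VHDType Dynamic
```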
Hopefully this blog post gave you some ideas on how to
leverage Windows Server 2012, Hyper-V and Virtual Machine Manager.
Of course we integrated with Operations Manager as well, to get monitoring in place for our fabric and virtual machines. This is key to ensuring availability and stable operations.
The self-service solution landed on Katal, so that they could expose their Clouds/Stamps/Plans to their customers in a really good-looking UI with lots of functionality. I will cover this in a more detailed blog post later.