Tuesday, May 28, 2013

Windows Azure & System Center Roadshow

Next month I will travel around Norway to demonstrate hybrid cloud scenarios with Windows Azure and System Center.

The cloud is for real, and this goes for both the private and the public cloud.
Combining the two gives us a hybrid approach where we can consume, leverage and gain benefits from both. System Center together with Windows Server forms the essential ingredients of a private cloud deployment. When we add Windows Azure to the mix, we can utilize a bunch of features and capabilities from Microsoft's major public cloud.
Management is key, and System Center is able to span private, public and service provider clouds (the latter with Service Provider Foundation, SPF) to give you a holistic view of the environments and their capabilities.

Why am I doing this?
Windows Azure was originally launched as a Platform as a Service cloud only.
This changed dramatically last summer when Microsoft added Infrastructure as a Service capabilities to their public cloud. We can now deploy virtual machines, create virtual networks and set up site-to-site VPN between on-premises and Azure. This alone has opened the door to a bunch of new scenarios in the Microsoft world. In addition, several offerings are in preview, and some of these are on the agenda for this roadshow.
I would like to highlight some of the key investments here and show how you and your customers can gain ROI and sleep better in the future.

At the end of the day, I would like you to remember some important things after joining these sessions:

1)      Windows Azure and System Center provide you with premium tools to simplify complex solutions.
2)      You can get started right away and it is fairly easy to understand, deploy and utilize.
3)      We, Lumagate, know what we are doing and are ready to help you with everything I’ll talk about.

I will cover the following topics:

·         Windows Azure Recovery Services
See how Hyper-V, System Center Virtual Machine Manager 2012 SP1 and Windows Azure together make disaster recovery easy, securing your environments in a worst-case scenario.

·         Using Windows Azure in your offsite backup strategy
Windows Server 2012, System Center Data Protection Manager 2012 SP1 and Windows Azure can ensure that you get the sleep you require every night. Instead of using dedicated hardware for offsite (long-term) backup, we can now take Azure storage into consideration when planning for secure backups.

·         Working with Virtual Machines using Windows Azure Portal and App Controller
Windows Server 2012, System Center App Controller 2012 SP1 and Windows Azure let you expand your datacenter and deploy virtual machines to private, public and service provider clouds in an easy manner. See how to move workloads back and forth while having a simplified management experience for your self-service users.

To join us at this free community event, please register at the following link: http://lumagate.no/windows-azure-og-system-center-roadshow

-kn



Monday, May 20, 2013

Questions regarding Networking in VMM 2012 SP1


Once in a while, I get questions from the beloved readers of my blog.
Some of them may be quite relevant to the rest of the community, and that is the case for this blog post. I received some questions about networking in VMM and am happy to share the Q&A with you here:
--------------------------------------------------------------------------------------------------------------------------
Environment:
I would like to implement the converged fabric method via SCVMM 2012 SP1. Currently we do not have plans to use NVGRE; everything is using VLANs.
Our hosts have 2x10Gb and 4x1Gb physical NICs. For storage we use HBAs connected to an EMC SAN.

Q1: Logical switches:
Is it a good idea to create two logical switches in SCVMM? One for the datacenter (vNIC LM, vNIC Cluster, vNIC Mgmt) and one for VM guests. Should I use the 2x10Gb for the VM guests and the 4x1Gb for the datacenter traffic? Will the 4x1Gb be sufficient for datacenter traffic?
In Greg Cusanza's MMS 2013 session, only one logical switch is used.

A1:
It depends on the physical adapters in most cases. If you have, let's say, 2x10GbE presented on your host, I would create one team (equal to one logical switch in VMM) and have the different traffic types spread among virtual network adapters with corresponding QoS assigned to them.
But when you mix in NICs with different speeds (1GbE), you would not be too happy with the load balancing in that team. In that case, you can safely create two logical switches with VMM, separate the NICs into their respective teams, and assign the preferred traffic to each team. To decide which team and adapters to use for each traffic type, I would recommend giving Live Migration and storage (iSCSI or SMB) a higher guarantee on minimum bandwidth. This ensures that live migration traffic executes faster, and that your virtual machines' hard disks have sufficient IOPS.

See common configurations here (examples are shown for Hyper-V in Windows Server 2012 with PowerShell): http://technet.microsoft.com/en-us/library/jj735302.aspx
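To make the pattern concrete, here is a minimal sketch of what such a converged team with weight-based QoS could look like in native PowerShell on Windows Server 2012 (the adapter, team and switch names are examples, not from any specific environment):

# Team the two 10GbE adapters
New-NetLbfoTeam -Name "Team10GbE" -TeamMembers "NIC1","NIC2" -TeamingMode SwitchIndependent -LoadBalancingAlgorithm HyperVPort

# Create the virtual switch on top of the team with weight-based QoS
New-VMSwitch -Name "ConvergedSwitch" -NetAdapterName "Team10GbE" -MinimumBandwidthMode Weight -AllowManagementOS $false

# Add a virtual network adapter in the management OS per traffic type
Add-VMNetworkAdapter -ManagementOS -Name "Management" -SwitchName "ConvergedSwitch"
Add-VMNetworkAdapter -ManagementOS -Name "LiveMigration" -SwitchName "ConvergedSwitch"
Add-VMNetworkAdapter -ManagementOS -Name "Cluster" -SwitchName "ConvergedSwitch"

# Give Live Migration a higher minimum bandwidth guarantee than the rest
Set-VMNetworkAdapter -ManagementOS -Name "Management" -MinimumBandwidthWeight 10
Set-VMNetworkAdapter -ManagementOS -Name "LiveMigration" -MinimumBandwidthWeight 40
Set-VMNetworkAdapter -ManagementOS -Name "Cluster" -MinimumBandwidthWeight 10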

Q2: Logical networks:
The following blog post recommends creating a logical network for each traffic type (LM, Cluster, Mgmt, AppA-VLAN, AppB-VLAN, AppC-VLAN):
http://blogs.technet.com/b/scvmm/archive/2013/04/29/logical-networks-part-ii-how-many-logical-networks-do-you-really-need.aspx

On the other hand, the following video blog post shows creating only two logical networks, one for the datacenter and one for VM guests, each with several network sites:
http://blogs.technet.com/b/yungchou/archive/2013/04/15/building-private-cloud-blog-post-series.aspx

What is your opinion on this? Which one is best practice? Does one have (dis)advantages? Would I lose any functionality if I choose one over the other?
(taking into account that we currently have 20 VLANs)

A2:
A logical network in VMM should represent the actual networks and sites that serve a function. Let's say that 'Management' is the management network, where hosts connected to this network can communicate with each other. You can have different sites and subnets here (also VLANs), but all in all it's the same logical network, serving the function of management traffic. Also remember that VM networks (which are abstractions of their logical networks) are assigned to virtual network adapters when using logical switches and teaming. So in order to get this straight, you must have a logical network for every distinct type of network traffic you will use in this configuration. This is because a VM network can only be associated with one logical network.
Typically, you will end up with a similar configuration when using converged fabric in VMM, according to best practice:

1 Logical Network for Management
1 Logical Network (dedicated subnet/VLAN) for Live Migration
1 Logical Network (dedicated subnet/VLAN) for Cluster communication
1 or more Logical Networks for SMB3.0 traffic (to support multi-channel in a scale-out file server cluster)
1 or more Logical Networks for iSCSI traffic
1 or more Logical Networks for VM guests (the VM networks you create afterwards will be associated with this logical network; by using trunking you can easily assign the right subnet and VLAN directly on your VMs' virtual adapters).
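As a quick illustration, this is roughly how one of these logical networks could be created with the VMM PowerShell module (the names, subnet and VLAN ID are hypothetical):

# Create the logical network with a network site scoped to a host group
$ln = New-SCLogicalNetwork -Name "LiveMigration"
$subnet = New-SCSubnetVLan -Subnet "10.0.20.0/24" -VLanID 20
New-SCLogicalNetworkDefinition -Name "LiveMigration_Site" -LogicalNetwork $ln -VMHostGroup (Get-SCVMHostGroup -Name "All Hosts") -SubnetVLan $subnet

# Create the VM network on top - one logical network per traffic type,
# since a VM network can only be associated with one logical network
New-SCVMNetwork -Name "LiveMigration VM Network" -LogicalNetwork $ln -IsolationType "NoIsolation"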

For more information about common configuration with VMM, see http://blogs.technet.com/b/privatecloud/archive/2013/04/03/configure-nic-teaming-and-qos-with-vmm-2012-sp1-by-kristian-nese.aspx

Q3: Teaming:
In the same video blog by Yung Chou, they mention that for the backend traffic we should use an uplink port profile with the teaming load-balancing algorithm TransportPorts, as this would give better load balancing.
For the VM guest traffic, we should use HyperVPort.
This is the first time I have seen this recommendation. What is your experience with this?

A3:
This is a tricky question, and the answer depends on how many NICs you have present on your host.
If the number of virtual NICs greatly exceeds the number of team members, then Hyper-V Port is recommended.
Address hashing is best used when you want maximum bandwidth availability for each connection.

I would recommend ordering the book ‘Windows Server 2012 Hyper-V Installation and Configuration Guide’ by Aidan Finn and his crew to get all the nasty details here.
For this to work from a VMM perspective, you would need to create two logical switches with different configurations.
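As a sketch, the difference between the two algorithms looks like this at the team level in native PowerShell (team and adapter names are examples):

# Address hashing (TransportPorts) for the backend/datacenter team
New-NetLbfoTeam -Name "TeamDatacenter" -TeamMembers "NIC3","NIC4" -TeamingMode SwitchIndependent -LoadBalancingAlgorithm TransportPorts

# Hyper-V Port for the team carrying VM guest traffic
New-NetLbfoTeam -Name "TeamVMGuests" -TeamMembers "NIC1","NIC2" -TeamingMode SwitchIndependent -LoadBalancingAlgorithm HyperVPort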

Friday, May 10, 2013

Intelligent Placement and Dynamic Optimization in VMM 2012 SP1

This blog post will cover what Intelligent Placement is in VMM 2012, how it works and how it fits into Dynamic Optimization.

High-level overview of Intelligent Placement

When you deploy a virtual machine with VMM, there is a lot going on under the hood to ensure that you will have a successful deployment. Things like hardware configuration and host resources are critical, as is consistency when you deploy to a cluster. In summary, there are a lot of variables in the mix that can give you a warning or error. Some of the warnings and errors you get might seem irrelevant at first, but VMM likes to ensure that your environment is fit, healthy and properly configured.

This is why I am writing this blog post: to help you understand what is going on and how to correct these things. I’ve seen a lot of consultants, customers and forum users blame VMM for not being able to deploy a virtual machine, even though the same operation succeeds in Failover Cluster Manager and/or Hyper-V Manager.

Virtual Machine Manager – The Management layer and its intelligence

Once you add your virtualization hosts (Hyper-V) to VMM, the VMM agent gets installed as part of the process.

The VMM management server communicates with its agents on the hosts and gets the information it needs to detect how the hosts are operating. The Fabric workspace in VMM is where you configure your infrastructure, the building blocks for your datacenter and cloud. This includes storage, network and computing power. Computing power in this context is equal to virtualization hosts and the different server roles required to support their life cycle. You can configure host groups within VMM, and the host groups carry several properties for the hosts they contain.

Among the properties relevant to this blog post and intelligent placement are host reserves and dynamic optimization.

You can specify an amount of resources that should be reserved for the hosts within the host group at all times. VMM will take this (among other variables, as we will see) into consideration during deployment of virtual machines and services. And to make things a bit more complex, you also have the opportunity to specify host reserves on each host individually. As you can see, there are a lot of things to be aware of, and someone has to keep them in mind all the time. It might be just you deploying virtual machines in your organization, or other administrators as well. Luckily, VMM comes to our rescue here and keeps track of this at all times.

Networking is another major factor that may affect the deployment. This is also something you map to your host groups, so that VMM is able to tell if a virtual machine can be deployed to these hosts or clusters and ensure network connectivity.

Imagine this in a huge environment. You would probably need a couple of spreadsheets to keep this documented, and remember you would have to maintain it too.

Dynamic Optimization

When you have a cluster managed by VMM, you can enable both Dynamic Optimization and Power Optimization at the host group level. Dynamic Optimization will load balance the cluster(s) when enabled, based on the configuration you specify. You can decide how aggressive it should be (moving virtual machines for less gain) and how frequently the optimization should run. Power Optimization will power your hosts on and off as needed to support the workload. Power Optimization requires that Dynamic Optimization is enabled and that your hosts are configured for out-of-band management.
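An optimization run can also be triggered on demand per cluster from the VMM PowerShell module; here is a minimal sketch (the cluster name is hypothetical):

# Run Dynamic Optimization on a specific cluster right away
$cluster = Get-SCVMHostCluster -Name "HVCluster01"
Start-SCDynamicOptimization -VMHostCluster $cluster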

Dynamic Optimization is one of the features you should really care about if you want to enable a private cloud and have a dynamic environment for your virtual machines. Instead of you checking the state of your cluster and hosts prior to a new virtual machine deployment to make sure you have enough resources, VMM will do this for you thanks to intelligent placement, and ensure that the environment stays correctly balanced with Dynamic Optimization.

Dynamic Optimization was introduced in VMM 2012 and, unlike the equivalent functionality in VMM 2008 R2, does not require SCOM integration to work.

So instead of using a SCOM agent on your hosts to gather this information, VMM monitors and acts natively in the VMM service through the VMM agents, relying on the intelligent placement feature. Since VMM is now in control of this, you have a centralized decision maker that sees your cloud fabric in context.

We now know that Intelligent Placement is the enabler for Dynamic Optimization, and you should really use Dynamic Optimization (and eventually Power Optimization, if possible) when managing your cloud environment. We will now have a look at some of the checks that intelligent placement does for you.

What does Intelligent Placement actually check before it places its virtual machines on the hosts?

Platform type (Hyper-V, XenServer, VMware)
Three different hypervisors potentially mean three different kinds of errors.
Hyper-V has its own disk formats and scalability limitations, and the same goes for both VMware and XenServer. If you try to deploy a virtual machine with VHDX disks to an ESXi host, or try to deploy a VM that has Dynamic Memory enabled when the hypervisor does not support it, this would fail beyond recognition. Intelligent Placement will check this, and you can create hardware profiles for your virtual machines that match the three different hypervisors VMM supports, and categorize these in the library.

CPU compatibility
Make, model, stepping and architecture.
A known rule in the Hyper-V world is that you can perform Live Migration between hosts as long as the CPUs are from the same manufacturer. You can also enable the option on the VM's hardware profile to allow migration to hosts with a different CPU version. This is something VMM checks for you during deployment and intelligent placement.
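As an example, a hardware profile with this option enabled could be created like this with the VMM PowerShell module (the profile name and sizes are hypothetical):

# Hardware profile that limits CPU features, allowing migration to hosts
# with a different processor version
New-SCHardwareProfile -Name "HyperV-Standard" -CPUCount 2 -MemoryMB 4096 -LimitCPUForMigration $true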

Host Reserves
As mentioned earlier, each host group and each individual host can have its own reservations. Intelligent Placement ensures these are considered during placement of VMs so that you do not overcommit your hosts and clusters.

Logical processor count on the host
With Hyper-V in Windows Server 2012, you can have as many vCPUs within a virtual machine as there are logical CPUs on the host. The maximum number a virtual machine supports is 64 vCPUs.
For example, if you deploy a virtual machine with 14 vCPUs and the host only has 8 logical CPUs, this will result in an error telling you what is going on and why it is failing.
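You can do the same sanity check yourself with the native Hyper-V PowerShell module; a minimal sketch (the host and VM names are hypothetical):

# Compare the host's logical processor count to the vCPUs the VM asks for
$hostLPs = (Get-VMHost -ComputerName "HV01").LogicalProcessorCount
$vCPUs = (Get-VMProcessor -VMName "BigVM" -ComputerName "HV01").Count
if ($vCPUs -gt $hostLPs) {
    Write-Warning "VM requests $vCPUs vCPUs, but the host only has $hostLPs logical processors."
}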

NUMA configuration match between VM and host
VMM detects and checks the NUMA configuration to make sure nothing is mismatched before it deploys a virtual machine.

Snapshot compatibility
Compatibility on migration or deployment from libraries will be checked during intelligent placement.

Host state
Checks whether the hosts are available for placement, responding, have healthy agents and so on.

Networking
This could probably be an entire blog post by itself. Networking plays a huge role in VMM, and VMM is responsible for and capable of ensuring that everything is properly configured. Both during placement and during Dynamic Optimization, VMM checks network connectivity for the host groups, the association of physical network adapters on the hosts to the right logical networks, the association of VM networks to the right logical networks according to the host groups and hosts, whether network virtualization is enabled on the logical network and/or the logical switch, native port profiles, whether a load balancer is available for the host group, and much more.

Possible owners, preferred owners (Cluster)
In VMM 2012 SP1, you can specify both possible owners and preferred owners for each virtual machine. These are settings we know from Failover Clustering in Windows Server, and they are now exposed in VMM. VMM takes them into consideration during placement and when Dynamic Optimization kicks in.

Availability sets and Anti-Affinity settings
When you configure availability sets for virtual machines, VMM will try to keep those virtual machines on separate hosts. Please note that this is not a hard block.
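A minimal sketch of assigning a VM to an availability set with the VMM PowerShell module (the VM and set names are hypothetical):

# Put the VM in an availability set so placement tries to spread its members
$vm = Get-SCVirtualMachine -Name "SQL01"
Set-SCVirtualMachine -VM $vm -AvailabilitySetNames @("SQL-Tier")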

Machine name length
The maximum VM name length differs per platform.

Allowable characters in VM name
This is also individual to each platform.

Disks
Disk matching, space, classification, disk IO capacity, shared disk type, and pass-through disk checks.

Cluster overcommit checks
VMM can configure ‘cluster reserves’ for each cluster. If the value results in overcommit for your cluster, you’ll get a warning during placement, and dynamic optimization might not run as expected.

RemoteFX
Checks whether RemoteFX is available on the host, compared to the VM configuration.

Each of these checks is taken into account on every deployment, migration, dynamic optimization run and maintenance mode operation, and is continually evaluated by intelligent placement so that the appropriate actions can be taken if there are any problems.

I would like to thank Hilton Lange on the VMM team for providing me with valuable insight and tips to create this blog post. Thank you!

There is a lot more going on, and I will likely be updating this blog post when something new is added – or detected on the way.

Thursday, May 2, 2013

System Center 2012 Virtual Machine Manager Cookbook



A couple of months ago, I got an e-mail from a fellow MVP, Alessandro Cardoso.

And it’s funny how this community works. I remember when I started out in the forums, especially the Hyper-V forum, there was this person called ‘Alessandro Cardoso’ who provided long and detailed answers to the Hyper-V community. The same Alessandro Cardoso was now asking if I could be the technical reviewer of his book ‘Virtual Machine Manager Cookbook’.

Before you continue reading this blog post, make sure you grab a copy right now.


I have been writing a similar book myself, and have participated in several others on this topic. That’s why I keep doing this. Working with Hyper-V, Windows Server and System Center is part of my day job, and when it comes to Virtual Machine Manager (which is my second home), I cannot resist.

It was a pleasure assisting Alessandro with this book and helping him with my insight on network virtualization and the other hot new topics in Service Pack 1.

It is very well written and covers a bunch of ninja tricks and tips for you to get the kick-start you need with this technology.