Tuesday, March 29, 2011

How to create a Cloud in VMM 2012

VMM 2012 introduces new terms and new capabilities. One of the most interesting is the ability to create a cloud – a real private cloud.

“A private cloud is a cloud that is provisioned and managed on-premise by an organization. The private cloud is deployed using an organization's own hardware to leverage the advantages of the private cloud model. Through System Center Virtual Machine Manager 2012, an organization can manage the private cloud definition, access to the private cloud, and the underlying physical resources.”

The important thing here is the ‘underlying physical resources’. These need to be available and properly configured, and they live in the Fabric:


  • Configuring host groups
  • Configuring the library
  • Configuring networking
  • Configuring storage
The host groups:

You can segment your hosts into logical groups – for example by cluster, location, or priority. It is these host groups that are made available during the creation of a cloud in VMM.
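If you prefer scripting, host groups can also be created from the VMM 2012 command shell. A minimal sketch, assuming the VMM PowerShell module is loaded and a VMM server named 'vmm01' (a placeholder); cmdlet names are as I recall them from the beta, so verify them on your build:

    # Connect to the VMM management server (placeholder name)
    Get-SCVMMServer -ComputerName "vmm01" | Out-Null

    # Create a host group under the root 'All Hosts' group
    $root = Get-SCVMHostGroup -Name "All Hosts"
    $oslo = New-SCVMHostGroup -Name "Oslo" -ParentHostGroup $root

    # List the host groups that can later be assigned to a cloud
    Get-SCVMHostGroup | Select-Object Name, ParentHostGroup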

The library:

The library is important, and a well-configured library can save you a lot of time and work.
This is where you configure service templates, application templates, VM templates, guest OS/host/hardware profiles, and SQL Server profiles.
You will be able to assign library resources to your cloud during creation.
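A quick sketch of the library side from the command shell as well – the share path is a placeholder, and the cmdlet names are from memory (beta builds), so treat this as illustrative only:

    # Add a file share as a VMM library share (placeholder path)
    Add-SCLibraryShare -SharePath "\\fileserver01\VMMLibrary" -Description "Main VMM library"

    # See which resources the library already offers
    Get-SCVMTemplate | Select-Object Name, OperatingSystem
    Get-SCServiceTemplate | Select-Object Name, Release
    Get-SCHardwareProfile | Select-Object Name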

The networking:

Create logical networks/subnets/VLANs, IP pools, and MAC pools, and make them available to your cloud.
You can also create load balancers and VIP templates. (In VMM 2012, you can add supported hardware load balancers to the VMM console and create associated virtual IP (VIP) templates.
A virtual IP template contains load balancer-related configuration settings for a specific type of network traffic. For example, you could create a template that specifies the load balancing behavior for HTTPS traffic on a specific load balancer manufacturer and model. These templates represent the best practices from a load balancer configuration standpoint.)
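Here is a rough command shell sketch of the networking part. The cmdlets (New-SCLogicalNetwork, New-SCSubnetVLan, New-SCLogicalNetworkDefinition, New-SCStaticIPAddressPool) are as I recall them from the beta, and the names, VLAN and address ranges are purely illustrative:

    # Create a logical network with a network site (definition) holding one subnet/VLAN pair
    $ln = New-SCLogicalNetwork -Name "Backend"
    $subnet = New-SCSubnetVLan -Subnet "192.168.5.0/24" -VLanID 0
    $site = New-SCLogicalNetworkDefinition -Name "Backend - Oslo" `
        -LogicalNetwork $ln `
        -VMHostGroup (Get-SCVMHostGroup -Name "All Hosts") `
        -SubnetVLan $subnet

    # Create a static IP pool that VMM can hand out addresses from
    New-SCStaticIPAddressPool -Name "Backend pool" `
        -LogicalNetworkDefinition $site `
        -Subnet "192.168.5.0/24" `
        -IPAddressRangeStart "192.168.5.50" `
        -IPAddressRangeEnd "192.168.5.99"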
The storage:
VMM 2012 provides deep storage integration. For more detail, check out Hans Vredevoort's post here: http://www.hyper-v.nu/blogs/hans/?p=673
After you have configured the underlying physical resources, you can start to create a cloud (a PowerShell sketch of the same steps follows the list below).
In the VMM 2012 console, navigate to ‘VMs and Services’.
Make sure the ‘Home’ tab is selected, and click ‘Create Cloud’.




1.       Type the name of the cloud as well as a description.
2.       Assign the host groups that should be available to this cloud. The host groups are created and organized in the Fabric.
3.       Assign the logical networks you have created in the Fabric. These networks will be available to this cloud.
4.       If you are lucky enough to have a hardware load balancer, and have configured it in the Fabric, you assign it in this step.
5.       Assign VIP templates – also created in the Fabric. I'm still not lucky enough to have a hardware load balancer to provide you with the details here.
6.       Storage defined in the Fabric will be available to the cloud in this step. (Again, check Hans' post.)
7.       Assign library resources to the cloud, which it can use to deploy services, VMs, and so on.
8.       Define the cloud magic: configure capacity and elasticity. (You can change the capacity at any time by navigating to the properties of your cloud.)
9.       Select the hypervisors that define this cloud: ESX, XenServer, and Hyper-V.
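For those who prefer the command shell, the wizard steps above map roughly to a few cmdlets. A sketch only – New-SCCloud and Get-SCLogicalNetwork I am fairly confident about, while the -AddCloudResource parameter on Set-SCCloud is how I recall it, so verify it before use (all names are placeholders):

    # Pick the host groups the cloud should be built from
    $hostGroups = Get-SCVMHostGroup | Where-Object { "Oslo", "Stavanger" -contains $_.Name }

    # Create the cloud on top of the selected host groups
    $cloud = New-SCCloud -Name "Production Cloud" `
        -Description "Private cloud for production workloads" `
        -VMHostGroup $hostGroups

    # Make a logical network from the Fabric available to the cloud
    # (-AddCloudResource is how I recall the parameter; check your build)
    $backend = Get-SCLogicalNetwork -Name "Backend"
    Set-SCCloud -Cloud $cloud -AddCloudResource $backend

Capacity and elasticity can afterwards be adjusted in the properties of the cloud, just as in the wizard.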

Summary













Congrats!
You have now created your cloud.
After this, you will be able to deploy VMs, services, and much more to your cloud.
If you already have VMs running on a host group, you need to power them off before you can assign them to a cloud.
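A small sketch of that last point from the command shell, assuming Set-SCVirtualMachine accepts a -Cloud parameter (that is how I recall it; the VM and cloud names are placeholders):

    # The VM must be powered off before it can be moved into a cloud
    $vm = Get-SCVirtualMachine -Name "web01"
    $cloud = Get-SCCloud -Name "Production Cloud"

    if ($vm.Status -ne "PowerOff") { Stop-SCVirtualMachine -VM $vm -Shutdown }   # graceful shutdown
    Set-SCVirtualMachine -VM $vm -Cloud $cloud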







Saturday, March 26, 2011

The cloud is moving closer to the IT-Pro

System Center Project Codename “Concero”
More outstanding news from the great System Center team:




Some key capabilities of “Concero” are:
(a) Ability to access resources across multiple VMM Servers
(b) Ability to register and consume capacity from multiple Windows Azure subscriptions
(c) Deploy and manage Services and virtual machines on private clouds created within VMM 2012 and on Windows Azure
(d) Copy Service templates (and optional resources) from one VMM Server to another
(e) Copy Windows Azure configuration, package files, and VHDs from on-premises and between Windows Azure subscriptions
(f) Enable multiple users authenticated through Active Directory to access a single Windows Azure subscription

This is awesome, and fits us perfectly. Finally we will have an integrated management interface that combines both worlds (private+public = hybrid).

I have stressed before that it is important for the IT pro to know the whole ‘Azure concept’ – to understand that it is a PaaS and not an IaaS. Time to brush off that knowledge once again.

Stay tuned!

Until next time,

Friday, March 25, 2011

It's all about the cloud

System Center Virtual Machine Manager 2012 Beta is now available.
We already know that Microsoft's definition of a private cloud is a composition of Hyper-V and System Center Virtual Machine Manager. And Virtual Machine Manager 2012 is all about the cloud.

New terms in VMM 2012 (Fabric, Cloud)

What's going on in the Fabric?

Fabric is infrastructure.
It is at the fabric level that you configure the resources for your clouds:
you add the various hypervisors (XenServer, VMware, and Hyper-V), configure networking (for example, logical networks, IP address pools, and load balancers) used to deploy virtual machines and services, configure storage (for example, storage classifications, logical units, and storage pools) used by Hyper-V hosts and host clusters, and much more.
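To make the fabric part a bit more concrete, here is how bringing an existing, domain-joined Hyper-V host under VMM management could look in the command shell – a sketch where the host name, host group and Run As account are placeholders, and parameter names may differ slightly:

    # A Run As account with local administrator rights on the host
    $runAs = Get-SCRunAsAccount -Name "HyperVAdmins"

    # Add the Hyper-V host to a host group in the fabric
    Add-SCVMHost -ComputerName "hyperv01.contoso.local" `
        -VMHostGroup (Get-SCVMHostGroup -Name "Oslo") `
        -Credential $runAs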

So what is the meaning of the term ‘Cloud’ in Virtual Machine Manager 2012?

A private cloud is a subset of hosts, networking, storage, and library resources grouped together – in other words, a way to segment the datacenter. You can create multiple clouds based on location, services, and so on. Quite interesting – and fun to finally be able to work with a real ‘cloud’ on-premises as well, and not just on the Azure platform.

VMM 2012 is still quite new, and I need to do some extensive testing over the next couple of days.

What will be covered in ‘Virtualization and some coffee’?

As much as possible. VMM 2012 is the ideal tool for the IT generalist, covering most of the ecosystem. But I will leave certain things to the experts.

If you're interested in the Citrix integration in VMM 2012, I recommend Brian Ehlert's blog – http://itproctology.blogspot.com – (and yes, he works at Citrix)


Until next time,

Wednesday, March 23, 2011

New deployment stuff in SCVMM 2012

Christmas came early this year.
System Center Virtual Machine Manager 2012 BETA is now available - http://technet.microsoft.com/nb-no/evalcenter/gg678609.aspx

New deployment stuff in SCVMM 2012

Create virtual machine templates - Create virtual machine templates that can be used to create new virtual machines and to configure machine tiers in services - http://go.microsoft.com/fwlink/?LinkID=212412

Create service templates - Use the VMM Service Template Designer to create service templates that can be used to deploy services - http://go.microsoft.com/fwlink/?LinkID=212414

Deploy virtual machines - Deploy virtual machines to private clouds or hosts by using virtual machine templates - http://go.microsoft.com/fwlink/?LinkID=212412

Deploy services - Deploy services to private clouds or hosts by using a service template - http://go.microsoft.com/fwlink/?LinkID=212414

Scale out a service - Add additional virtual machines to a deployed service - http://go.microsoft.com/fwlink/?LinkID=212410

Service a service - Make changes to a deployed service - http://go.microsoft.com/fwlink/?LinkID=212413

Monday, March 14, 2011

Networking and Hyper-V

Recently, we had a discussion in a user group where I participate.

I asked the members which competence/skills they considered most valuable.
Most of the guys answered networking. (One guy actually mentioned SQL security, but that's what he does for a living.)

Quite an interesting answer these days, I must say.
While the world is occupied with virtualization technologies, clouds, and the ‘as a Service’ models, these guys are still attached to their beloved networking skills.

And I agree.
Networking is still the most used skill in my day-to-day work.
I would like to give you an example from one of my latest Hyper-V projects for a customer.
They needed to rescue some of their old physical servers, which were running critical workloads and had no backups. The servers were located in three different subnets, and one of them was in a secure zone.

That brings us back to the topic: networking
The Hyper-V host was installed with 4 NICs.
1 NIC dedicated for host management
1 NIC dedicated for the 192.168.5.0/24 network
1 NIC dedicated for the 10.10.2.0/24 network
1 NIC dedicated for the 192.168.70.0/24 network

To be able to convert these physical machines to virtual, we had to use SCVMM. (Disk2vhd is not suited for converting Windows 2000 Server, since there is no VSS service available.)
SCVMM requires an Active Directory domain, and to be able to convert a physical server to virtual and place it on a Hyper-V host, the Hyper-V host must be a member of a domain.

Scenario:
The domain and the required servers were connected through an Internal Virtual Network in Hyper-V.
We created three External Virtual Networks and attached the NICs to the proper networks in the physical switch.
The SCVMM server was equipped with three vNICs: one for the internal virtual network, one for the external 192.168.5.0/24 network, and one for the external 192.168.70.0/24 network.
We assigned static IP addresses to the vNICs, were able to connect to the sources, and could do the P2V conversions and place the machines safely on the Hyper-V host.
All this was done in an evening. And the best part of every P2V conversion is when the customer says that there are no changes and wonders when we'll get started.
-          The machines are now virtual and identical to the old physical ones.
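For completeness, the P2V job itself can also be scripted in the SCVMM 2008 R2 command shell. This is only a sketch from memory – New-MachineConfig and New-P2V are the cmdlets as I recall them, and the server names, path and parameters are placeholders meant to illustrate the flow, not a tested recipe:

    # Credentials with administrator rights on the physical source server
    $creds = Get-Credential

    # Gather hardware and volume information from the physical source machine
    $machineConfig = New-MachineConfig -SourceComputerName "oldserver01" -Credential $creds

    # Convert the machine and place the new VM on the Hyper-V host
    $vmHost = Get-VMHost -ComputerName "hyperv01"
    New-P2V -MachineConfig $machineConfig -VMHost $vmHost -Name "oldserver01" -Path "D:\VMs"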



There are some best practices when it comes to networking in Hyper-V:

Dedicated NICs
The Hyper-V host should have a dedicated NIC for host management. This secures stability and cluster management, and keeps management traffic from impacting the workloads running in Hyper-V.

The VMs running in Hyper-V should have at least one dedicated NIC functioning as a virtual switch, because when you create an External Virtual Network in Hyper-V, you are actually binding that physical NIC to the virtual network, and you then attach a vNIC to every VM that should communicate on that network. If you are running a VM with heavy network traffic, you could also dedicate a NIC to that VM, meaning that no other VMs should connect to that vSwitch.

When you are dealing with Failover Clustering combined with Hyper-V, you can connect to the shared storage through iSCSI. This brings additional NIC requirements for the Hyper-V hosts. The VMs will be located on this shared storage, and the NICs intended for it should be of high quality. It is common to use NIC teaming (though not supported by Microsoft) or MPIO for redundancy and more throughput.
You should also add dedicated NICs for Live Migration and cluster heartbeat communication.
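To verify what a host actually ended up with, the virtual networks and the physical NICs bound to them can be listed read-only through the Hyper-V WMI provider (the root\virtualization namespace used on 2008 R2); a small sketch:

    # Virtual networks (virtual switches) defined on this Hyper-V host
    Get-WmiObject -Namespace root\virtualization -Class Msvm_VirtualSwitch |
        Select-Object ElementName

    # Physical NICs, and whether they are bound to an external virtual network
    Get-WmiObject -Namespace root\virtualization -Class Msvm_ExternalEthernetPort |
        Select-Object ElementName, IsBound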


Conclusion
Networking is one of the most important parts of an infrastructure. It is relevant for the system administrator, virtualization administrator, system architect, and – the network administrator.
So to be able to manage the clouds, the datacenters, and even your SQL servers, you still need some basic networking skills.
(Even the SQL guy agreed that networking was important to his job.)

Cheers,

Saturday, March 12, 2011

Who runs VMware voluntarily in 2011? (Hyper-V vs. VMware)

So far this year, I have been struggling to explain to people why Windows Azure is not an IaaS platform, and what value Dynamic Memory in Service Pack 1 can bring to their datacenter. Today's topic is all about the latter – Dynamic Memory and virtualization in general.

Have you ever gone to bed wondering what you might have started?

I had one of these thoughts earlier this week.

The reason is that I had a meeting with a large company. They mainly focus on hosting and deliver ‘Virtual Private Servers’. They have 4 clusters and over 20 nodes, and run over 150 VMs.
The one thing those VMs have in common is that they are all Windows Server (2003 R2, 2008, and mostly 2008 R2).
So I asked them: - Which hypervisor are you using?
The answer: - VMware
That got me started…
-          Who runs VMware voluntarily in 2011?

When I explained the licensing part of Windows Server 2008 R2 Datacenter – which gives you an unlimited number of VMs – they went quite silent for a while.
I actually served the “big shot” at the beginning, and then followed up with a sort of metaphor about parent/child relationships to explain the architecture of Hyper-V.
When you install the Hyper-V role in Windows Server 2008 R2, the hypervisor is installed between the physical hardware and the Windows kernel at system boot time. This turns the Windows installation into a special guest – the parent. The parent is still the boss when it comes to access to the hardware, but it is responsible for providing additional services to the other partitions (child partitions/VMs). And since the child partitions are running Windows Server as well, you might be tempted to think that Windows knows what's best for the VMs – just like you know what's best for your children, I said.
He said he got the point, and started to explain that they needed VMware to be able to have a highly available solution, along with a feature called ‘vMotion’.
That brought me over to talking about Failover Clustering, CSV, and Live Migration. Everything you actually need for this is built into the OS – Windows Server 2008 R2 Enterprise/Datacenter.
In addition, I explained the Dynamic Memory feature in Service Pack 1 in detail for them – compared to VMware.

VMware has the so-called ‘memory overcommit’ technique and provides more RAM to the VMs than the physical machine actually has:
1)      Serve more memory to the VMs than the physical machine has.
2)      Identify identical memory blocks (by hash) across multiple VMs.
3)      Compress the host memory by storing those blocks only once.

In Hyper-V, instead of compressing the RAM, the VMs talk to the host (parent partition) through the VMBus (VSP/VMBus/VSC) to demand more memory. You configure the startup RAM the VM needs to be able to boot, and you can also set a limit (maximum RAM). Dynamic Memory works by letting the host take all available memory (except the memory exclusively reserved for the host) and share it with the running VMs. You can even prioritize your VMs, as well as configure the memory buffer, for performance and optimization.
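The same settings can be read programmatically on a 2008 R2 SP1 host through the Hyper-V WMI provider. A read-only sketch, with the property names as I recall them (Reservation is the startup RAM, Limit the maximum RAM, TargetMemoryBuffer the buffer, and Weight the priority – one settings object per VM configuration):

    # Dynamic Memory settings stored on the local Hyper-V host (read-only)
    Get-WmiObject -Namespace root\virtualization -Class Msvm_MemorySettingData |
        Select-Object InstanceID, DynamicMemoryEnabled, Reservation, Limit,
                      TargetMemoryBuffer, Weight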

-          What does it cost?  he said.

-          Nothing. The Dynamic Memory feature is included in Service Pack 1 for Windows Server 2008 R2.

So after I had provided these guys with some links – Hyper-V vs. VMware and the licensing calculator – they had some thinking to do.

One final question:
-          How could we move our existing VMs from VMware to Hyper-V?
-          You can use SCVMM 2008 R2 for this. You can even administer your VMware environment – in fact your entire virtual environment – with this tool.
So after a walk-through of their infrastructure, we could confirm that they had everything in place for a Hyper-V deployment, connected to their storage with iSCSI.

The contract they currently have with a vendor that provides VMware solutions expires this summer.
I guess they will reconsider it and focus on Hyper-V in the coming months.

Quite honestly, I am surprised that they had never actually considered Hyper-V for their environment. They just needed someone to share some basic, general information.

Cheers,

Thursday, March 10, 2011

Webcast - Dynamic Memory in Windows Server 2008 R2 SP1 (Hyper-V)

Microsoft Norway published a webcast I created this month.
You can find it here (it's in Norwegian).

I basically talk about the benefits and value Dynamic Memory brings to the virtual datacenter, how to get started, and explain the various settings.

Saturday, March 5, 2011

Changes in ‘Intelligent Placement’ – SCVMM 2008 R2 SP1

SP1 now supports Dynamic Memory and RemoteFX, and this affects ‘intelligent placement’.

·         Checking for GPU compatibility (RemoteFX VMs)
VMM will check that an identical GPU is available on the destination node. RemoteFX should be enabled in RDS.

·         Dynamic Memory footprint (VMs with DM enabled)
VMM will check the current memory usage of the running VM – and require that amount to be available on the destination node (a small worked example follows below).
Example: if your VM is configured with 512 MB of startup RAM and is currently using 4 GB, the destination node must have at least 4 GB of available memory for a successful migration.

You'll find the Dynamic Memory settings in SCVMM in the properties of the VM.
(If you need to configure the priority, it is located under the ‘Priority’ tab together with the CPU priority.)
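To make the example concrete, the placement check essentially boils down to this (illustrative numbers only):

    # Intelligent placement with Dynamic Memory: current usage counts, not the startup RAM
    $startupRamMB = 512         # configured startup RAM
    $currentUsageMB = 4096      # memory currently assigned to the running VM
    $destinationFreeMB = 3072   # available memory on the destination node (example)

    $requiredMB = [Math]::Max($startupRamMB, $currentUsageMB)
    if ($destinationFreeMB -ge $requiredMB) {
        "Placement OK"
    } else {
        "Destination node is short $($requiredMB - $destinationFreeMB) MB"
    }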



Tuesday, March 1, 2011

Considerations using Hyper-V with Dynamic Memory in Failover Clustering

OK, so you have had your VMs running for a while with the Dynamic Memory feature enabled.
Everything works great, and you plan to put one of your nodes into maintenance mode, since there are some important and recommended Windows Updates available to install.
To place a node in maintenance mode, you need to migrate (Quick Migration and/or Live Migration) your VMs to a node that has available resources.
You might assume that with SP1 and Dynamic Memory, the only consideration for capacity planning as far as RAM is concerned is the ‘startup RAM’ of each VM.
If a VM is using 20GB RAM, and the destination node has less available, how would this affect the various migrations?



Since the VM now has a certain amount of memory assigned, it will not easily let go of it.
It also depends on the workloads running within the VM, and how much memory the VM actually demands from the parent (a quick way to check this follows the workaround list below).

Workaround:

1.       Try to decrease the memory buffer on the VMs running on the destination node.
2.       Try to decrease the memory buffer on the VM that is using a lot of memory.
3.       Try to decrease the priority of the VM that is using a lot of memory, and of the VMs running on the destination node.
4.       Is there any way to decrease the memory usage within the VM? If your VM is running SQL Server, you could try to decrease the maximum server memory.
5.       If none of the above helps, you may need to power off some of the running VMs to allow the migration (if the node is running some non-critical workloads and you can live with a service interruption), or in the worst case power off the VM you intend to migrate and let it start up on the destination node with the defined startup RAM.
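Before picking a workaround, it helps to see how much memory each VM actually holds right now. On a 2008 R2 SP1 host, the Dynamic Memory performance counters give a quick overview – the counter names below are as I recall them, so verify them with 'Get-Counter -ListSet' on your host:

    # Current Dynamic Memory behaviour per VM, from the SP1 performance counters
    Get-Counter -Counter '\Hyper-V Dynamic Memory VM(*)\Physical Memory',
                         '\Hyper-V Dynamic Memory VM(*)\Guest Visible Physical Memory',
                         '\Hyper-V Dynamic Memory VM(*)\Average Pressure' |
        Select-Object -ExpandProperty CounterSamples |
        Select-Object InstanceName, Path, CookedValue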


Here is some guidance on how to plan for Dynamic Memory:

1.       Do not configure too high a value for the startup RAM setting on your VMs. The whole idea of Dynamic Memory is that the VM should communicate with the parent (VSP/VMBus/VSC) to determine its memory needs. Remember that the value specified as startup RAM is allocated from the host when the VM is powered on and cannot decrease below this value. The Dynamic Memory algorithm will always strive to keep this minimum amount of memory available to the VM.
2.       If your VMs are running heavy workloads (SQL Server, etc.), there is a good chance the VM will take a lot of memory from the host. It is always important to know your workload, also when you are using Dynamic Memory. You might find it valuable to specify the maximum RAM setting on these VMs.
3.       Do not increase the memory buffer if it is not necessary. If your VM is using 20 GB of RAM and the memory buffer is configured at 50%, you are actually telling the parent that it should try to allocate an additional 10 GB of RAM to this VM, which may affect the other VMs (see the small example after this list).
4.       Documentation – it is important to keep your documentation up to date, and to monitor your VMs after enabling Dynamic Memory.
It can be a lot more complex than this, especially if you are using preferred nodes and other ratings. As a best practice, try to document every setting on the VMs – including vNICs, number of vCPUs, startup/maximum RAM, and VHDs. You may need to restore a VM from time to time, and I have found this documentation very helpful.
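Point 3 in numbers – a trivial sketch of how the buffer setting translates into the extra memory the parent will try to keep available for the VM:

    # Memory buffer arithmetic (illustrative values)
    $inUseGB = 20
    $bufferPercent = 50
    $extraGB = $inUseGB * ($bufferPercent / 100)   # extra memory the parent tries to set aside
    "With $inUseGB GB in use and a $bufferPercent% buffer, the parent aims for an additional $extraGB GB"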

Cheers,