Monday, August 3, 2015

Explaining PowerShell Direct

One of the most frequently asked questions I get from my customers is something like this:

“We have a multi-tenant environment where everything is now software-defined, including the network through network virtualization. As a result, we can no longer provide value-added services to these customers, as we don’t have a network path into their environments.”

Last year, I wrote a blog post that talks about “Understanding your service offerings with Azure Pack” – which you can read here: http://kristiannese.blogspot.no/2014/10/understanding-windows-azure-pack-and.html

I won’t get into all of those details, but a common misunderstanding nowadays is that both enterprises and service providers expect to be able to manage their customers the same way they always have.
The fact that many organizations are now building their cloud infrastructure with several new capabilities, such as network virtualization and self-service, makes this very difficult to achieve.

I remember back at TechDays in Barcelona, when I got the chance to talk with one of the finest Program Managers at Microsoft, Mr. Ben Armstrong.
We had a discussion about this, and he was (as always) aware of these challenges and said he had some plans to simplify service management in a multi-tenant environment directly in the platform.

As a result of that, we can now play around with PowerShell Direct in Windows Server 2016 Technical Preview.

Background

Walking down memory lane, we used to have Virtual Server and Virtual PC when we wanted to play around with virtualization in the Microsoft world. Both of these solutions were what we call a “type 2 hypervisor”, where all hardware access was emulated through the operating system that was actually running the virtual instances.
With Windows Server 2008, we saw the first version of Hyper-V which was truly a type 1 hypervisor.
Part of the architecture of Hyper-V – and the reason why I am telling you all of this – is something called the VMBus.

The VMBus is a communication mechanism (high-speed memory) used for interpartition communication and device enumeration on systems with multiple active virtualized partitions. The VMBus is responsible for the communication between the parent partition (the Hyper-V host) and the child partition(s) (virtual machines with Integration Components installed/enabled).

As you can see, the VMBus is critical for communication between host and virtual machines, and we are able to take advantage of this channel in several ways already.
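
If you want to see which integration services are riding on this channel on your own host, the in-box Hyper-V module can list them – a quick sketch, assuming a VM named mgmtvm:

# List the integration services that communicate over the VMBus for a given VM
Get-VMIntegrationService -VMName mgmtvm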

In Windows Server 2012 R2, we got the following:

·         Copy-VMFile

Copy-VMFile lets you copy file(s) from a source path to a specific virtual machine running on the host. This is all done within the context of the VMBus, so there’s no need for network connectivity to the virtual machines at all. For this to work, you have to enable “Guest Services” on the target VMs as part of the integration services.

Here’s an example of how to achieve this using PowerShell:

# Enable guest services
Enable-VMIntegrationService -Name 'Guest Service Interface' -VMName mgmtvm -Verbose

# Copy file to VM via VMBus
Copy-VMFile -Name mgmtvm -SourcePath .\myscript.ps1 -DestinationPath "C:\myscript.ps1" -FileSource Host -Verbose

·         Remote Console via VMBus

Another feature that shipped with Windows Server 2012 R2 was “Enhanced Session Mode”, which leverages an RDP session via the VMBus.
Using RDP, we could now log on to a virtual machine directly from Hyper-V Manager and even copy files in and out of the virtual machine. In addition, USB redirection and printing became possible – all without any network connectivity from the host to the virtual machines.
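
If you want to try this yourself, Enhanced Session Mode is toggled at the host level – a minimal sketch using the in-box Hyper-V module:

# Allow enhanced sessions on the Hyper-V host
Set-VMHost -EnableEnhancedSessionMode $true -Verbose

# Verify the setting
Get-VMHost | Select-Object Name, EnableEnhancedSessionMode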

And last but not least, this was the foundation for the Remote Console feature in System Center and Windows Azure Pack – which you can read more about here: http://kristiannese.blogspot.no/2014/02/configuring-remote-console-for-windows.html

And now back to the point. With Windows Server 2016, we will get PowerShell Direct.

With PowerShell Direct, we can now run PowerShell cmdlets and scripts directly inside a virtual machine in an easy and reliable way, without relying on technologies such as PowerShell remoting, RDP and VMConnect.
Leveraging the VMBus architecture, we are literally bypassing all the usual requirements around networking, firewalls, remote management and access settings.

However, there are some requirements at the time of writing:

·         You must be connected to a Windows 10 or a Windows Server technical preview host with virtual machines that are running Windows 10 or Windows Server technical preview as the guest operating system
·         You must be logged in with Hyper-V Admin creds on the host
·         You need user credentials for the virtual machine!
·         The virtual machine that you want to connect to must run locally on the host and be booted

It should be obvious that both the host and the guest need to be on the same OS level. The reason is that the VMBus relies on the virtualization service client in the guest – and the virtualization service provider on the host – which need to be the same version.
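
Before connecting, a quick sanity check from the host can confirm most of these requirements – a small sketch, again assuming a VM named mgmtvm:

# Verify that the VM is running locally and see which integration
# services version the guest reports
Get-VM -Name mgmtvm | Select-Object Name, State, IntegrationServicesVersion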

But what’s interesting to see here is that in order to take advantage of PowerShell Direct, we need user credentials for the virtual machine’s guest operating system itself.
Also, if we want to perform something awesome within that guest, we probably need admin permissions too – unless we are able to dance around with JEA, but I have not been able to test that yet.

Here’s an example of what we can do using PowerShell Direct:

# Get credentials to access the guest
$cred = Get-Credential

# Enter an interactive session with the guest, targeting the VM by name from the Hyper-V host
Enter-PSSession -VMName mgmtvm -Credential $cred

# Running a cmdlet within the guest context
Get-Service | Where-Object {$_.Status -like "*running*" -and $_.name -like "*vm*" }

[mgmtvm]: PS C:\Users\administrator.DRINKING\Documents> Get-Service | Where-Object {$_.Status -like "*running*" -and $_.name -like "*vm*" }

Status   Name               DisplayName                           
------   ----               -----------                          
Running  vmicguestinterface Hyper-V Guest Service Interface      
Running  vmicheartbeat      Hyper-V Heartbeat Service            
Running  vmickvpexchange    Hyper-V Data Exchange Service        
Running  vmicrdv            Hyper-V Remote Desktop Virtualizati...
Running  vmicshutdown       Hyper-V Guest Shutdown Service       
Running  vmictimesync       Hyper-V Time Synchronization Service 
Running  vmicvmsession      Hyper-V VM Session Service           
Running  vmicvss            Hyper-V Volume Shadow Copy Requestor

As you can see, [mgmtvm] shows that the context is the virtual machine and we have successfully listed all the running services related to the integration services.

Although this is very cool and shows that it works, I’d rather show something that might be more useful.

We can enter a PSSession as shown above, but we can also invoke a command directly through Invoke-Command and its -ScriptBlock parameter.

#Invoke command, create and start a DSC configuration on the localhost
Invoke-Command -VMName mgmtvm -Credential (Get-Credential) -ScriptBlock {
# DSC Configuration
Configuration myWeb {
    Node "localhost" {
        WindowsFeature Web {
            Ensure = "Present"
            Name = "Web-Server"
        }
    }
}
# Compile the DSC configuration (generates the MOF under .\myWeb)
myWeb

# Start and apply the DSC configuration
Start-DscConfiguration .\myWeb -Wait -Force -Verbose }

From the example above, we are actually invoking a DSC configuration that we are creating and applying on the fly, from the host to the virtual machine using PowerShell Direct.

Here’s the output:

PS C:\Users\knadm> #Invoke command, create and start a DSC configuration on the localhost
Invoke-Command -VMName mgmtvm -Credential (Get-Credential) -ScriptBlock {
# DSC Configuration
Configuration myWeb {
    Node "localhost" {
        WindowsFeature Web {
            Ensure = "Present"
            Name = "Web-Server"
        }
    }
}
# Compile the DSC configuration (generates the MOF under .\myWeb)
myWeb

# Start and apply the DSC configuration
Start-DscConfiguration .\myWeb -Wait -Force -Verbose }
cmdlet Get-Credential at command pipeline position 1
Supply values for the following parameters:
WARNING: The configuration 'myWeb' is loading one or more built-in resources without explicitly importing associated modules. Add Import-DscResource -ModuleName 'PSDesiredStateConfiguration' to your configuration to avoid this message.


    Directory: C:\Users\administrator.DRINKING\Documents\myWeb


Mode                LastWriteTime         Length Name                                                         PSComputerName                                            
----                -------------         ------ ----                                                         --------------                                             
-a----       03-08-2015     11:34           1834 localhost.mof                                                mgmtvm                                                    
VERBOSE: Perform operation 'Invoke CimMethod' with following parameters, ''methodName' = SendConfigurationApply,'className' = MSFT_DSCLocalConfigurationManager,'namespace
Name' = root/Microsoft/Windows/DesiredStateConfiguration'.
VERBOSE: An LCM method call arrived from computer MGMT16 with user sid S-1-5-21-786319967-1790529733-2558778247-500.
VERBOSE: [MGMT16]: LCM:  [ Start  Set      ]
VERBOSE: [MGMT16]: LCM:  [ Start  Resource ]  [[WindowsFeature]Web]
VERBOSE: [MGMT16]: LCM:  [ Start  Test     ]  [[WindowsFeature]Web]
VERBOSE: [MGMT16]:                            [[WindowsFeature]Web] The operation 'Get-WindowsFeature' started: Web-Server
VERBOSE: [MGMT16]:                            [[WindowsFeature]Web] The operation 'Get-WindowsFeature' succeeded: Web-Server
VERBOSE: [MGMT16]: LCM:  [ End    Test     ]  [[WindowsFeature]Web]  in 22.0310 seconds.
VERBOSE: [MGMT16]: LCM:  [ Start  Set      ]  [[WindowsFeature]Web]
VERBOSE: [MGMT16]:                            [[WindowsFeature]Web] Installation started...
VERBOSE: [MGMT16]:                            [[WindowsFeature]Web] Continue with installation?
VERBOSE: [MGMT16]:                            [[WindowsFeature]Web] Prerequisite processing started...
VERBOSE: [MGMT16]:                            [[WindowsFeature]Web] Prerequisite processing succeeded.
WARNING: [MGMT16]:                            [[WindowsFeature]Web] You must restart this server to finish the installation process.
VERBOSE: [MGMT16]:                            [[WindowsFeature]Web] Installation succeeded.
VERBOSE: [MGMT16]:                            [[WindowsFeature]Web] successfully installed the feature Web-Server
VERBOSE: [MGMT16]:                            [[WindowsFeature]Web] The Target machine needs to be restarted.
VERBOSE: [MGMT16]: LCM:  [ End    Set      ]  [[WindowsFeature]Web]  in 89.0570 seconds.
VERBOSE: [MGMT16]: LCM:  [ End    Resource ]  [[WindowsFeature]Web]
VERBOSE: [MGMT16]:                            [] A reboot is required to progress further. Please reboot the system.
WARNING: [MGMT16]:                            [] A reboot is required to progress further. Please reboot the system.
VERBOSE: [MGMT16]: LCM:  [ End    Set      ]
VERBOSE: [MGMT16]: LCM:  [ End    Set      ]    in  113.0260 seconds.
VERBOSE: Operation 'Invoke CimMethod' complete.
VERBOSE: Time taken for configuration job to complete is 115.028 seconds

In this example I am using one of the built-in DSC resources in Windows Server. If I wanted to do more advanced configurations that require custom DSC resources, I would have to copy those resources to the guest using the Copy-VMFile cmdlet first, as sketched below. All in all, I am able to do a lot around VM management with the new capabilities through the VMBus.
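
As a sketch of that combination – the module name and paths below are purely illustrative, and Expand-Archive assumes PowerShell 5.0 in the guest:

# Copy a zipped custom DSC resource module into the guest over the VMBus
Copy-VMFile -Name mgmtvm -SourcePath '.\xWebAdministration.zip' `
    -DestinationPath 'C:\Temp\xWebAdministration.zip' `
    -FileSource Host -CreateFullPath -Verbose

# Unpack it into the modules folder from within the guest via PowerShell Direct
Invoke-Command -VMName mgmtvm -Credential $cred -ScriptBlock {
    Expand-Archive -Path 'C:\Temp\xWebAdministration.zip' `
        -DestinationPath 'C:\Program Files\WindowsPowerShell\Modules\' -Force
}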

So, what can we expect to see now that we have the opportunity to provide management directly, native in the compute platform itself?

Let me walk you through a scenario here where the tenant wants to provision a new virtual machine.

In Azure Pack today, we have a VM extension through the VM Role. If we compare it to Azure and its new API through Azure Resource Manager, we have even more extensions to play around with.
These extensions give us an opportunity to do more than just OS provisioning. We can deploy and configure advanced applications just the way we want to.
Before you continue to read this, please note that I am not saying that PowerShell Direct is a VM extension, but it is still something useful you can take advantage of in this scenario.

So a tenant provisions a new VM Role in Azure Pack, and the VM Role is designed with a checkbox that says “Enable Managed Services”.

Now, depending on how each service provider would like to define their SLAs etc., the tenant has made it clear that they want managed services for this particular VM Role and hence needs to share/create credentials for the service provider to interact with the virtual machines.
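
To make the scenario a bit more concrete, here’s a rough sketch of what the provider side could look like once such credentials exist – the VM name prefix and the health check are purely illustrative:

# Credentials supplied by the tenant for the managed VM Role instances
$tenantCred = Get-Credential

# The VM Role instances are assumed to share a name prefix on the host
$instances = Get-VM -Name 'mywebservice*' | Where-Object { $_.State -eq 'Running' }

foreach ($vm in $instances) {
    # Run a simple health check inside each guest over the VMBus
    Invoke-Command -VMName $vm.Name -Credential $tenantCred -ScriptBlock {
        Get-Service -Name W3SVC | Select-Object Status, Name
    }
}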

I’ve already been involved in several engagements in this scope, and I am eager to see the end result once the next bits are fully released.

Thanks to the Hyper-V team, with Ben and Sarah, for delivering value-added services and capabilities on an ongoing basis!

Tuesday, July 28, 2015

Re-associate orphaned virtual machine with its VM role in System Center 2012 R2 with Update Rollup 7

If you have been using Azure Pack and the VM Cloud Provider, you have most likely been tempted to use the concept of VM Roles too.

VM Roles is a powerful technology that makes it possible for you to provide a lot more than just a sysprep’d operating system to your tenants. Through a resource extension, a VM Role can be deployed with any application you’d like, ready to go for the tenants.

However, there have been some challenges since the release of Azure Pack and VM Roles.
Some of the challenges have been related to Azure Pack directly, and some have been related to Virtual Machine Manager.
I won’t cover everything here, but the following picture should summarize some of the differences between a VM Role and a stand-alone VM, where some capabilities, such as “static disk”, weren’t enabled for VM Roles before UR5. With UR6, we also got support for Gen2 VMs as part of VM Roles.



Also note that “backup” and “DR” on a VM Role are categorized as a “no go”.

Luckily, and as usual, David and his great team at Microsoft have listened to our feedback – and with Update Rollup 7 for Virtual Machine Manager 2012 R2, we are now able to re-associate a VM Role!

Background:

A couple of months ago, I reached out to the VMM team through David Armour and explained a rather bad situation that one of my customers suddenly was in the middle of.
It turned out that several of my MVP friends had experienced similar issues as well, and this was becoming a critical issue for those customers. Here are some details around the problem we saw:

Due to some underlying storage issues in the cloud environment, many of the virtual machines running in VMM, SPF and Azure Pack ended up in a pretty bad state, and the only way to solve it was to generate new IDs for those VMs.

Now, this sounds very tempting and applicable in certain scenarios. But given the fact that the VMs actually were part of a VM Role in Azure Pack, it turned out to be a bad experience.
Once a VM is no longer associated with its VM Role in WAP, it will appear as a stand-alone VM with no way for the tenant to perform advanced operations through the tenant portal. The VM Role itself will appear as an orphaned object.

Our biggest challenges in this situation were:

1)      There was no way to re-associate a VM instance with a VM Role once this relationship was broken (so Remove-SCVirtualMachine with the -Force parameter was not an option)
2)      Even if we could re-associate the VM with a VM Role (once the VM appeared in VMM again with a new ID), the usage data would be broken for that VM. Yes, the customer was actually using the usage API in WAP to charge their tenants.

For this customer the issue was most likely caused by some underlying storage problems. However, you could easily end up in a similar situation by using native Microsoft technologies such as backup/restore and DR through Hyper-V Replica/ASR. Or, more simply, by removing and adding a host/cluster to a VMM cloud.

With Update Rollup 7, we finally have support for re-associating both an orphaned VM from a VM Role and a Service Template deployment.

Example of a PowerShell cmdlet that will join an orphaned virtual machine to a VM Role:

$myvm = Get-SCVirtualMachine -Name "KN01"
$myVMRole = Get-CloudResource -Name "mywebservice"
Join-SCVirtualMachine -VM $myvm -VMRole $myVMRole

For more information, please read the following KB:




Thursday, July 2, 2015

Cloud Consistency with Azure Resource Manager - Finally available!

I am very glad to announce the following:

1)      I am renewed as a Cloud & Datacenter MVP – for the fifth time!
2)      As a courtesy, we are releasing our newest whitepaper “Cloud Consistency with Azure Resource Manager”

I have been doing a lot of engineering the last year, and Azure Resource Manager is one of the technologies I consider a game changer – it finally lets us achieve what we have always wanted, without even knowing this was what we really wanted.
An idempotent and declarative way to describe our cloud resources, regardless of location and resource type.
Together with Desired State Configuration, this is one of my big bets as we move forward.

I really hope that you will enjoy this whitepaper while we are all waiting for Microsoft Azure Stack, that will bring the ARM capabilities on-prem.

If you want to see more of Azure Resource Manager and how to model your cloud resources and applications, I would like to invite you to System Center Universe in Basel in August, where I will be giving several deep dive sessions on the topic:


You can download the whitepaper from TechNet Gallery by following this URL:


BTW: Here's the look on my daughter's face when witnessing the capabilities of Azure Resource Manager



Thank you!


Tuesday, June 2, 2015

Announcing a new whitepaper!

Hi everyone!

It has been quite quiet here on this blog since last month, but there are of course some reasons for that.
I would like to use this opportunity to give you a heads-up on an upcoming whitepaper that I have been working on together with a few other subject matter experts.

This blog post is not about the specific whitepaper itself; the goal is rather to explain why we are taking this approach – putting a lot of effort into a whitepaper instead of publishing books.

I have personally authored books myself, both alone and together with other authors. The experience was interesting to say the least, and also required a lot of my time – not just for the research, testing and writing, but also to meet the deadlines, engage with reviewers and much more.
In short, the flexibility you have to modify – or even change – the subject is very, very limited when working with books.

The limited flexibility is a showstopper in a business where drastic changes (as in new features and releases) happen at a much faster cadence than ever before.
In order to be able to adopt, learn and apply everything that’s happening, writing whitepapers seems like a better idea than doing books.

At least this is what we think. When discussing this with some of our peers, we often get questions around royalties etc. To be honest, you will never ever get rich by writing a book, unless you are writing some fiction about some magic wizard with glasses, or a girl describing her fantasies of a rich man.

So jokes aside, we do this because of the following reasons:

·         We enjoy doing it

This is not a secret at all. Of course we spend a massive amount of time on these projects, and our significant others probably have a grin every now and then. But we enjoy it so much that it is worth the risk and the potential penalty we might get.

·         For our own learning and knowledge

Let us be honest. We dive deep into this to learn it by heart. It’s no secret that the technology we cover will be our bread and butter, so we had better know what we are doing.

·         To share it with the community

Do it once – and do it right. We spend a lot of our time in forums, at conferences etc., engaging with the community. Being able to point towards a rather comprehensive guide that many can benefit from, instead of supporting 1:1, is beneficial for all of us.

·         Recognition

If you do something good and useful, I can assure you that many people – regardless of whether they know you or not – will appreciate it and give you credit. We’ve heard several times that our previous whitepaper (Hybrid Cloud with NVGRE (Cloud OS)) helped peers, IT pros, engineers, students and CxOs to make a real difference. This alone is probably worth the effort.

So let me introduce you to the upcoming whitepaper that will hit the internet very shortly:

“Cloud Consistency with Azure Resource Manager”

This whitepaper will focus on cloud consistency using Azure Resource Manager in both the public cloud with Azure, as well as the private and hosted clouds with Azure Stack.

I won’t disclose more about the content, structure or the initial thoughts right now, but I encourage you to stay tuned and download it once it is available on the TechNet Gallery.

Thanks for reading!


Friday, May 8, 2015

Microsoft Azure Stack with a strong ARM

How did God manage to create the world in only 6 days?
-          He had no legacy!

With that, I would like to explain what the new Microsoft Azure Stack is all about.

As many of you already know, we have all been part of a journey over the last couple of years where Microsoft is aiming for consistency across their clouds, covering private, service provider and public clouds.
Microsoft Azure has been the leading star, and it is quite clear with a “mobile first, cloud first” strategy that they are putting all their effort into the cloud, and later making bits and bytes available on-prem where it makes sense.
Regarding consistency, I would like to point out that we have had “Windows Azure Services for Windows Server” (v1) and “Windows Azure Pack” (v2) – which brought the tenant experience on-prem with portals and common APIs.

Let us stop there for a bit.
The APIs we got on-prem as part of the service management APIs were common with the ones we had in Azure, but they were neither consistent nor identical.
If you’ve ever played around with the Azure PowerShell module, you have probably noticed that we had different cmdlets when targeting an Azure Pack endpoint compared to Microsoft Azure.

For the portal experience, we got two portals: one portal for the service provider – where the admin could configure the underlying resource providers, create hosting plans and define settings and quotas through the admin API. These hosting plans were made available to the tenants in the tenant portal through subscriptions, where that portal accessed the resources through the tenant API.

The underlying resource providers were different REST APIs that could contain several different resource types. Take the VM Cloud resource provider, for example, which is a combination of System Center Virtual Machine Manager and System Center Service Provider Foundation.

Let us stop here as well, and reflect on what we have just read.

1)      So far, we have had a common set of APIs between Azure Pack and Azure
2)      On-prem, we are relying on System Center in order to bring IaaS into Azure Pack

With cloud consistency in mind, it is about time to point out that to move forward, we have to get the exact same APIs on-prem as we have in Microsoft Azure.
Second, we all know that there are no System Center components managing the hyper-scale cloud in Azure.

Let us take a closer look at the architecture of Microsoft Azure Stack



Starting at the top, we can see that we have the same, consistent browser experience.
The user-facing services consist of hubs, a portal shell site and RP extensions for both admins (service providers) and tenants. This shows that we won’t have two different portals as we have in Azure Pack today; instead, things are differentiated through the extensions.

These components all live on top of something called Azure Resource Manager, which is where all the fun – and the real consistency – is born.
Previously in Azure, we were accessing the Service Management API when interacting with our cloud services.
Now, this has changed, and Azure Resource Manager is the new, consistent and powerful API that will manage all the underlying resource providers, regardless of cloud.

Azure Resource Manager introduces an entirely new way of thinking about your cloud resources.
A challenge with both Azure Pack and the former Azure portal was that once we had several components that made up an application, it was really hard to manage its life-cycle. This has drastically changed with ARM, where we can now imagine a complex service, such as a SharePoint farm containing many different tiers, instances, scripts and applications. With ARM, we can use a template that creates a resource group (a logical group that lets you control RBAC, life-cycle, billing etc. for the entire group of resources – although you can also specify this at a lower level, on the resources themselves) with the resources you need to support the service.
Also, ARM itself is idempotent and takes a declarative approach. You can already start to imagine how powerful this will be.

In the context of the architecture of Azure Stack as we are looking at right now, this means we can:

1)      Create an Azure Gallery Template (.json)
a.       Deploy the template to Microsoft Azure
or/and
b.      Deploy the template to Microsoft Azure Stack
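
As a sketch of what that looks like with the Azure PowerShell module at the time of writing (the 0.9.x releases) – the resource group name, location and template file are placeholders, and only the endpoint you are signed in to differs between the two clouds:

# Switch the module to Azure Resource Manager mode
Switch-AzureMode -Name AzureResourceManager

# Deploy the very same gallery template (.json) into a resource group
New-AzureResourceGroup -Name 'myResourceGroup' -Location 'West Europe' `
    -TemplateFile '.\azuredeploy.json' -Verbose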

It is time to take a break and put a smile on your face.

Now, let us explain the architecture a bit further.

Under the Azure Resource Manager, we will have several Core Management Resource Providers as well as Service Resource Providers.

The Core Management Resource Providers consist of Authorization – which is where all the RBAC settings and policies live. All the services will also share the same Gallery now, instead of having separate galleries for Web, VMs etc. as we have in Azure Pack today. Also, all the events, monitoring and usage related settings live in these core management resource providers. One of the benefits here is that third parties can now plug in their resource providers and harness the existing architecture of these core RPs.

Further, we currently have Compute, Network and Storage as Service Resource Providers.

If we compare this with what we already have in Azure Pack today through the VM Cloud resource provider, we get all of this through a single resource provider (SCVMM/SCSPF) that basically provides us with everything we need to deliver IaaS.
I assume that you have read the entire blog post until now, and as I wrote in the beginning, there are no System Center components managing Microsoft Azure today.

So why do we have 3 different resource providers in Azure Stack for compute, network and storage, when we could potentially have everything from the same RP?

In order to leverage the beauty of a cloud, we need a loosely coupled infrastructure – where the resources and different units can scale separately and independently of each other.

Here’s an example of how you can take advantage of this:

1)      You want to deploy an advanced application to an Azure/Azure Stack cloud, so you create a base template containing the common artifacts, such as the image, OS settings etc.
2)      Further, you create a separate template for the NIC settings and the storage settings
3)      As part of the deployment, you create references and eventually some “depends-on” relationships between these templates so that everything will be deployed within the same Azure resource group (which shares the same common life-cycle, billing, RBAC etc.)
4)      Next, you might want to change – or eventually replace – some of the components in this resource group. As an example, let us say that you put some effort into the NIC configuration. You can then delete the VM itself (from the Compute RP), but keep the NIC (in the Network RP).

This gives us much more flexibility compared to what we are used to.

Summary

So, Microsoft is for real bringing Azure services to your datacenter now, as part of the 2016 wave that will ship next year. The solution is called “Microsoft Azure Stack” and won’t “require” System Center – but you can use System Center for management purposes etc. if you want, which is probably a very good idea.

It is an entirely new product for your datacenter – a cloud-optimized application platform, using Azure-based compute, network and storage services.

In the next couple of weeks, I will write more about the underlying resource providers and also how to leverage the ARM capabilities. 

Stay tuned for more info around Azure Stack and Azure Resource Manager.



Monday, May 4, 2015

Azure Site Recovery: Generation 2 VM support

Almost a year ago, Microsoft announced the preview of a cloud service that has turned out to be the leading star when it comes to hybrid cloud scenarios out of the box from Microsoft.

Microsoft Azure Site Recovery lets customers extend their datacenter solutions to the cloud to ensure business continuity and availability on demand.
The solution itself is state of the art and covers many different scenarios – it can be seen as Microsoft’s “umbrella” when it comes to availability and recovery in the cloud, as it has several different offerings in different flavors under its wings.

Besides supporting DR protection of VMware and physical computers (newly announced), Azure Site Recovery is considered mandatory for organizations that need DR for their Hyper-V environments, regardless of whether the cloud or a secondary location on-prem is the actual DR target.

Just recently, Microsoft announced support for protecting Generation 2 virtual machines to Azure.
This is fantastic news and shows that the journey towards cloud consistency is well established.

Let me add some context before we look into the details.

I’ve been working with the brilliant Azure Site Recovery product group at Microsoft for a long time now, and I have to admit that these guys are outstanding. Not only do they ship extremely good quality code, but they also listen to feedback. And when I say listen, they actually engage with you and really try to understand your concern. At the end of the day, we are all on the same team, working towards the best experience and solution possible.

During TechEd in Barcelona, I was co-presenting “Microsoft Azure Site Recovery: Leveraging Azure as your Disaster Recovery Site” (http://channel9.msdn.com/Events/TechEd/Europe/2014/CDP-B314) together with Manoj, and this is when our real discussion started.
Using Azure as the secondary site for DR scenarios makes perfect sense, and many customers would like to benefit from this as soon as possible. However, we often saw that these customers had deployed their virtual machines as Generation 2 VMs – which weren’t suited for the Azure platform. This was a blocker, and the number of Gen2 VMs was increasing every day.

Earlier in January this year, I ran a community survey on the topic, and the result was very clear:

Yes – people would love to use Azure as their secondary site, if there was support for Generation 2 VMs in the cloud.

I am glad to say that the Product Group listened, and now we can start to protect workloads on Gen2 VMs too.
But, how does this work?

When you enable a VM for protection, the data is sent to an endpoint in Azure, and nothing special has happened so far.

However, the ASR service will perform a conversion to Gen1 at the time of failover.

What?

Let me explain further.

In case of a disaster where you need to perform a failover to Azure, the VM(s) are converted and started as Gen1, running in Azure.
The ASR backend services used during failover contain the conversion logic. At failover time, the backend service reads the Gen2 OS disk and converts it to a Gen1 OS disk (hence the requirements on the OS disk in Azure).
If you need/want/have to fail back to your on-prem Hyper-V environment, the VM will of course be converted back to Gen2.

For more details – check out the official blog post by one of the PMs, Anoob Backer.


Thursday, April 30, 2015

VM Checkpoints in Windows Azure Pack

Fresh from the factory, Update Rollup 6 has been released and shipped by Microsoft.

This isn’t a blog post that will point out all the bug fixes and the amazing work all of the teams have been doing, but rather point you towards a highly requested feature that finally made its way to the tenant portal in Windows Azure Pack.

With Update Rollup 6, we now have support for creating and restoring Hyper-V checkpoints on virtual machines, provided by the VM Cloud Resource Provider.

Tenants that have deployed virtual machines may now create checkpoints and restore them on their own, without any interaction from the cloud provider.

Let us have a closer look at how this actually works, how to configure it and what additional steps you might want to take as part of this implementation.

Enabling create, view and restore of virtual machine checkpoints at the Hosting Plan level

Once UR6 is installed for WAP and the underlying resource provider, you will notice some changes in the admin portal.

First, navigate to one of your hosting plans that contains the VM Cloud Resource Provider.
When you scroll down, you will see some settings related to checkpoints.



Create, view and restore virtual machine checkpoints – lets tenants that have subscriptions based on this hosting plan perform all of these actions on their virtual machines.

View and restore virtual machine checkpoints – lets tenants view and restore virtual machine checkpoints, but not create them. Creation can, for example, be performed by the cloud provider on an agreed schedule.

When you enable either of these options, an update job takes place at the plan level and communicates the changes back to VMM, ensuring that the tenants will have permissions to take these actions in the tenant portal once it has completed.



If we switch over to the tenant portal, we can see that when we drill into one of the existing VMs (click on the VM → Dashboard) we have some new actions available.



If you want to manage checkpoints for your VM Roles, you can of course do that too, but you then have to drill into each specific instance, as the VM Role can potentially have multiple instances when supporting scale-out.



To create a new checkpoint, simply click on Checkpoint and type the name of the checkpoint and, optionally, a description.



If we switch back to the fabric and VMM, we can see that a VMM job has completed with details about the checkpoint process for this specific tenant, with the name and description we typed.



If we perform the same operation again, creating an additional checkpoint on the same virtual machine, we get a message telling us that the existing checkpoint will be deleted.



This is because the current checkpoint integration in WAP will only keep one checkpoint, avoiding the scenario where you could potentially have a long chain of differential disks.

When we create the second checkpoint, we can switch back to VMM to see what’s actually happening:

First, a new checkpoint is created.
Second, the previous checkpoint is deleted.



When we explore the checkpoint settings on the VM itself afterwards, we see that only the latest checkpoint is listed.
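
You can verify the same from the fabric side with the VMM cmdlets – a small sketch where the VM name is illustrative:

# List the checkpoints VMM knows about for a given VM
$vm = Get-SCVirtualMachine -Name 'mytenantvm'
Get-SCVMCheckpoint -VM $vm | Select-Object Name, Description, AddedTime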



Regarding the restore process, we can also perform this from the same view in the tenant portal.
Once you click on the restore button, the tenant portal will show you the metadata of the available checkpoint, such as name, description and when it was created. Once you click the confirm button, the restore process will start in VMM.





Now what?

If you are familiar with how checkpoints in Hyper-V work, then you know that each static disk will be either .vhd or .vhdx – depending on the format you are using (.vhdx was introduced with Windows Server 2012 and should be the preferred format, but Azure is still using .vhd).
Once you create a checkpoint, a new disk (.avhd or .avhdx) will be created – a differential disk, containing all the new write operations, while read operations will occur on both the parent disk (.vhdx) and the newly created differential disk.
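
You can see this parent/child relationship directly on the disk from the host – a quick sketch using the in-box Hyper-V module, with an illustrative path:

# Inspect the differencing disk created by the checkpoint
Get-VHD -Path 'C:\VMs\mytenantvm\disk_A1B2.avhdx' |
    Select-Object VhdType, Path, ParentPath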



To summarize, this might not be an ideal situation when it comes to performance, life-cycle management and storage optimization.

Since we don’t have any action in the tenant portal to perform a delete operation, this can be scary in some scenarios.
The fact that the VM will always run on a checkpoint once a checkpoint is created means you will always be able to restore to your latest checkpoint from the portal.

In order to solve this challenge, we can leverage the integration of Service Management Automation in Azure Pack.
One of the best things about Azure Pack and the VM Cloud resource provider is that we can extend it and create value-added solutions and services by linking certain actions happening in the tenant portal to automated tasks that are executed by an SMA runbook in the backend.

The following screenshot shows that there’s an event related to the creation of VMM checkpoints performed by the tenant, which can easily be linked to a runbook.



Here’s an example of a runbook that will check for checkpoints created on VMs belonging to a specific VMM cloud that is used in a hosting plan in WAP. If any checkpoints exist, they will be deleted and the VMs will have their disks merged back to a static disk (.vhd/.vhdx).
<#
.SYNOPSIS
Workflow to check for - and eventually delete - old VM checkpoints
#>

workflow delete-scvmcheckpoint
{
    # Connection to access the VMM server.
    $VmmConnection = Get-AutomationConnection -Name 'SCVMM'
    $VmmServerName = $VmmConnection.ComputerName

    # Credential used for the remote session (assumes a credential asset
    # named 'SCVMM' exists in SMA - adjust to your environment).
    $VmmCredential = Get-AutomationPSCredential -Name 'SCVMM'

    inlinescript
    {
        # Import the VMM module.
        Import-Module virtualmachinemanager

        # Connect to the VMM server.
        Get-SCVMMServer -ComputerName $Using:VmmServerName

        # Find VMs in the relevant cloud that currently have checkpoints.
        $vms = Get-SCVirtualMachine | Where-Object {$_.Cloud -like "*Copenhagen IaaS*" -and $_.VMCheckpoints }

        foreach ($vm in $vms)
        {
            # Remove all checkpoints; the differential disks are merged back.
            Get-SCVMCheckpoint -VM $vm | Remove-SCVMCheckpoint -RunAsynchronously
        }
    } -PSComputerName $VmmServerName -PSCredential $VmmCredential
}

This runbook can then be put on a schedule that executes it on a daily basis, for example – ensuring that no VMs in the cloud keep running on a checkpoint long term.
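
As a rough sketch of such a schedule with the SMA cmdlets (the endpoint, names and times are illustrative, and the parameters should be double-checked against your SMA version):

# Create a daily schedule in SMA
Set-SmaSchedule -Name 'DailyCheckpointCleanup' -ScheduleType 'DailySchedule' `
    -StartTime (Get-Date).AddHours(1) -ExpiryTime (Get-Date).AddYears(1) `
    -WebServiceEndpoint 'https://sma-server'

# Register the runbook on that schedule
Start-SmaRunbook -Name 'delete-scvmcheckpoint' -ScheduleName 'DailyCheckpointCleanup' `
    -WebServiceEndpoint 'https://sma-server'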

Thanks for reading!