Monday, June 30, 2014

Azure Pack - Working with the Tenant Public API

These days, you are most likely looking for solutions where you can leverage PowerShell to gain some level of automation, whether on premises or in the cloud.
I have written before about the common service management API in the Cloud OS vision, where Microsoft Azure and Azure Pack share the exact same management API.

In this blog post, we will have a look at the Tenant Public API in Azure Pack, see how to make it available to your tenants, and do some basic tasks through PowerShell.

Azure Pack can either be installed with the express setup (all portals, sites and APIs on the same machine) or distributed, where you have dedicated virtual machines for each portal, site and component. Looking at the APIs alone, we have the following:

The Windows Azure Pack service management API includes three separate components:

·         Windows Azure Pack: Admin API (Not publicly accessible). The Admin API exposes functionality to complete administrative tasks from the management portal for administrators or through the use of PowerShell cmdlets. (Blog post: http://kristiannese.blogspot.no/2014/06/working-with-admin-api-in-windows-azure.html )

·         Windows Azure Pack: Tenant API (Not publicly accessible). The Tenant API enables users, or tenants, to manage and configure cloud services that are included in the plans that they subscribe to.

·         Windows Azure Pack: Tenant Public API (publicly accessible). The Tenant Public API enables end users to manage and configure cloud services that are included in the plans that they subscribe to. The Tenant Public API is designed to serve all the requirements of end users that subscribe to the various services that a hosting service provider provides.

Making the Tenant Public API available and accessible for your tenants

By default, the Tenant Public API is installed on port 30006, which means it is not very firewall friendly.
We have already made the tenant portal and the authentication site available on port 443 (described by Flemming in this blog post: http://flemmingriis.com/windows-azure-pack-publishing-using-sni/ ), and now we need to configure the tenant public API as well.

1)      Create a DNS record for your tenant public API endpoint.
We will need to have a DNS registration for the API. In our case, we have registered “api.systemcenter365.com” and are ready to go.
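If you are running Microsoft DNS and prefer to script this step, the record can be created with the DnsServer module. This is just a sketch; the zone name matches this environment, and the IP address below is a placeholder for the public IP of the server hosting the API.

Add-DnsServerResourceRecordA -ZoneName "systemcenter365.com" -Name "api" -IPv4Address "203.0.113.10"   # placeholder IP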

2)      Log on to your virtual machine running the tenant public API.
In our case, this is the same virtual machine that runs the rest of the internet facing parts, like tenant site and tenant authentication site. This means that we have already registered cloud.systemcenter365.com and cloudauth.systemcenter365.com to this particular server, and now also api.systemcenter365.com.

3)      Change the bindings on the tenant public API in IIS
Navigate to IIS and locate the Tenant Public API site. Click Bindings, change the port to 443, select your certificate, and enter the host name that tenants will use when accessing this API.
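If you prefer to script the binding instead of clicking through IIS Manager, something along these lines should work with the WebAdministration module. This is a sketch: MgmtSvc-TenantPublicAPI is the default site name in an Azure Pack installation, and you still have to assign the SSL certificate to the binding.

Import-Module WebAdministration
New-WebBinding -Name "MgmtSvc-TenantPublicAPI" -Protocol https -Port 443 -HostHeader "api.systemcenter365.com" -SslFlags 1   # SslFlags 1 = SNI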



4)      Reconfigure the Tenant Public API with PowerShell
Next, we need to update the configuration for Azure Pack using PowerShell (the cmdlet writes to the Azure Pack configuration database, as the connection string shows).
The following cmdlet will change the Tenant Public API to use port 443 and the host name "api.systemcenter365.com".

Set-MgmtSvcFqdn -Namespace TenantPublicAPI -FQDN "api.systemcenter365.com" -ConnectionString "Data Source=sqlwap;Initial Catalog=Microsoft.MgmtSvc.Store;User Id=sa;Password=*" -Port 443

That’s it! You are done, and have now made the tenant public API publicly accessible.
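Before handing the endpoint over to tenants, it is worth a quick connectivity check from an external client to confirm that port 443 actually answers. A simple sketch using Test-NetConnection (available on Windows 8.1 / Server 2012 R2 and later):

Test-NetConnection -ComputerName "api.systemcenter365.com" -Port 443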

Before we proceed, we need to ensure that we have the right tools in place for accessing the API as a tenant.
It might be quite obvious to some, but not to everyone: to manage Azure Pack subscriptions through PowerShell, we basically need the PowerShell module for Microsoft Azure. That is right. The Azure module for PowerShell contains a bunch of cmdlets that are directly related to Azure Pack.



You can read more about the Azure module and download it by following this link: http://azure.microsoft.com/en-us/documentation/articles/install-configure-powershell/
Or simply search for it if you have Web Platform Installer in place on your machine.
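To verify that the module is in place and that the Azure Pack related cmdlets are available, a quick check could look like this (assuming the module is installed under its default name, Azure):

Get-Module -ListAvailable -Name Azure
Get-Command -Module Azure -Name "*WAPack*"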

Deploying a virtual machine through the Tenant Public API

Again, if you are familiar with Microsoft Azure and its PowerShell module, you have probably dealt with the "publishsettings" file a couple of times.

Normally, when logging into Azure or Azure Pack, you go to the portal, get redirected to an authentication site (which can also be AD FS if you are not using the default authentication site in Azure Pack) and are then sent back to the portal, which in our case is cloud.systemcenter365.com.

The same process takes place when you access the "publishsettings". Browsing to https://cloud.systemcenter365.com/publishsettings in Internet Explorer will first require you to log on, and then you will have access to your publish settings. This downloads a file that contains your secure credentials and additional information about your subscription for use in your WAP environment.



Once downloaded, we can open the file to explore the content and verify the changes we made when making the Tenant Public API publicly accessible at the beginning of this blog post.
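Since the publish settings file is plain XML, we can also inspect it from PowerShell to confirm that the subscription now points to api.systemcenter365.com on port 443. This is a sketch; the exact element and attribute names depend on the schema version of the file.

[xml]$publishSettings = Get-Content "C:\MVP.Publishsettings"
$publishSettings.PublishData.PublishProfile.Subscription | Format-List *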



Next, we will head over to PowerShell to start exploring the capabilities.

1)      Import the publish settings file using Powershell

Import-WAPackPublishSettingsFile "C:\MVP.Publishsettings"



Make sure the cmdlet fits your environment and points to the file you have downloaded.

2)      Check to see the active subscriptions for the tenant

Get-WAPackSubscription | select SubscriptionName, ServiceEndpoint



3)      Deploy a new virtual machine

To create a new virtual machine, we first need some variables that store information about the template we will use and the virtual network we will connect to, and then we can proceed to create the virtual machine, as shown below.
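A minimal sketch of what this could look like with the WAPack cmdlets is shown here. The template name, network name and VM name are just examples, and cmdlet parameters may differ slightly between Azure module versions.

$template = Get-WAPackVMTemplate -Name "Windows Server 2012 R2"   # template offered through the plan
$vnet = Get-WAPackVNet -Name "Tenant Network"                     # VM network to connect to
$cred = Get-Credential                                            # local administrator credentials for the new VM

New-WAPackVM -Name "Tenant-VM01" -Template $template -VNet $vnet -VMCredential $cred

Once the job is submitted, Get-WAPackVM -Name "Tenant-VM01" should return the new VM and its provisioning state.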




4)      Going back to the tenant portal, we can see that we are currently provisioning a new virtual machine that we initiated through the Tenant Public API.



Sunday, June 22, 2014

Microsoft Azure Site Recovery

In January, a new and interesting service became available in Microsoft Azure, called "Hyper-V Recovery Manager". I blogged about it and explained how to configure it on-premises using a single VMM management server. For details, you can read this blog post: http://kristiannese.blogspot.no/2013/12/how-to-setup-hyper-v-recovery-manager.html

Hyper-V Recovery Manager provided organizations using Hyper-V and System Center with automated protection and orchestrated recovery of virtualized workloads between private clouds, leveraging the asynchronous replication engine in Hyper-V: Hyper-V Replica.

In other words, no data was sent to Azure except the metadata from the VMM clouds.
This has now changed: the service has been renamed to Microsoft Azure Site Recovery and finally lets you replicate between private clouds and the public cloud (Microsoft Azure).

This means that we can still utilize the automated protection of workloads that we are familiar with from the service, but now we can use Azure as the target in addition to private clouds.
This also opens the door for migration scenarios: organizations considering moving VMs to the cloud can do so with almost no downtime using Azure Site Recovery.

Topology

In our environment, we will use a dedicated Hyper-V cluster with Hyper-V Replica. This means we have added the Hyper-V Replica Broker role to the cluster. The cluster is located in its own host group in VMM, and that host group is the only one we have added to a cloud called "E2A". Microsoft Azure Site Recovery requires System Center Virtual Machine Manager, which is responsible for the communication and for carrying out the instructions the administrator makes in the Azure portal.


Pre-reqs

-          You must have an Azure account and add Recovery Services to your subscription.
-          Certificate (.cer) that you upload to the management portal and register to the vault. Each vault has a single .cer certificate associated with it and it’s used when registering VMM servers in the vault.
-          Certificate (.pfx) that you import on each VMM server. When you install the Azure Site Recovery Provider on the VMM server, you must use this .pfx certificate.
-          Azure Storage account, where you will store the replicas replicated to Azure. The storage account needs geo-replication enabled and should be in the same region as the Azure Site Recovery service and associated with the same subscription (a sketch for creating one with PowerShell follows after this list).
-          VMM Cloud(s). A cloud must be created in VMM that contains Hyper-V hosts in a host group enabled with Hyper-V Replica.

-          Azure Site Recovery Provider must be installed on the VMM management server(s)
In our case, we had already implemented “Hyper-V Recovery Manager”, so we were able to do an in-place upgrade of the ASR Provider.
-          Azure Recovery Services agent must be installed on every Hyper-V host that will replicate to Microsoft Azure. Make sure you install this agent on all hosts located in the host group that you are using in your VMM cloud.
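If the storage account does not exist yet, it can be created with the same Azure PowerShell module. This is a sketch only: the account name and location are placeholders, and the geo-replication cmdlet reflects the service management module available at the time.

New-AzureStorageAccount -StorageAccountName "e2areplicas" -Location "West Europe"       # placeholder name and location
Set-AzureStorageAccount -StorageAccountName "e2areplicas" -GeoReplicationEnabled $true  # geo-replication is required by ASR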

Once we had enabled all of this in our environment, we were ready to proceed to the configuration of our site recovery setup.

Configuration


Log in to the Azure management portal and navigate to Recovery Services to get the details of your vault and see the instructions on how to get started.

We will jump to "Configure cloud for protection", as the fabric in VMM is already configured and ready to go.
The provider installed on the VMM management server exposes the details of our VMM clouds to Azure, so we can easily pick "E2A", which is the dedicated cloud for this setup. This is where we will configure our site recovery to target Microsoft Azure.



Click on the cloud and configure protection settings.



On target, select Microsoft Azure. Also note that you are able to set up protection and recovery using another VMM cloud or VMM management server.



For the configuration part, we are able to specify some options when Azure is the target.

Target: Azure. We are now replicating from our private cloud to Microsoft Azure’s public cloud.
Storage Account: If none is present, you need to create a storage account before you can proceed. If you have several storage accounts, choose one that is in the same region as your recovery vault.
Encrypt stored data: This is set to "on" by default and cannot be changed in the preview.
Copy frequency: Since we are using Hyper-V 2012 R2 in our fabric, which introduced additional copy frequency options, we can select 30 seconds, 5 minutes or 15 minutes. We will use the default of 5 minutes in this setup.
Retain recovery points: Hyper-V Replica is able to create additional recovery points (crash-consistent snapshots) so that you have more flexible recovery options for your virtual workloads. We don't need any additional recovery points for our workloads, so we will leave this at 0.
Frequency of application consistent snapshots: If you want app-consistent snapshots (ideal for SQL servers, as these create VSS snapshots), you can enable and specify that here.
Replication settings: This is set to "immediately", which means that every new VM deployed to our "E2A" cloud in VMM with protection enabled will automatically start its initial replication from on-premises to Microsoft Azure. For large deployments, we would normally recommend scheduling this.

Once you are happy with the configuration, you can click ‘save’.



Now, Azure Site Recovery will configure this for your VMM cloud. This means that, through the provider, the hosts/clusters will be configured with these settings automatically from Azure:
-          Firewall rules used by Azure Site Recovery are configured so that ports for replication traffic are opened
-          Certificates required for replication are installed
-          Hyper-V Replica Settings are configured
 Cool!

You will have a job view in Azure that shows every step during the actions you perform. We can see that protection has been successfully enabled for our VMM Cloud.




If we look at the cloud in VMM, we also see that protection is enabled and Microsoft Azure is the target.



Configuring resources

In Azure, you have had the option to create virtual networks for many years now. We can of course use them in this context, to map them to the VM networks present in VMM.
To ensure business continuity, it is important that the VMs that fail over to Azure can be reached over the network, and that RDP is enabled within the guest. We are mapping our management VM network to a corresponding network in Azure.



VM Deployment

Important things to note:
In preview, there are some requirements for using Site Recovery with your virtual machines in the private cloud.

Only Gen1 virtual machines are supported!
This means that the virtual machines must have their OS partition attached to an IDE controller. The disk can be VHD or VHDX, and you can even attach data disks that you want to replicate. Please note that Microsoft Azure does not support the VHDX format (introduced in Hyper-V 2012), but will convert the VHDX to VHD during initial replication to Azure. In other words, virtual machines using VHDX on-premises will run on VHDs when you fail over to Azure. If you fail back to on-premises, VHDX will be used again as expected.
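Since only Gen1 VMs qualify, it can be useful to spot any Generation 2 VMs in the cloud before enabling protection. A quick sketch with the VMM PowerShell module, assuming VMM 2012 R2 where the VM objects expose a Generation property:

$cloud = Get-SCCloud -Name "E2A"
Get-SCVirtualMachine -Cloud $cloud | Where-Object { $_.Generation -ne 1 } | Select-Object Name, Generation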

Next, we will deploy a new VM in VMM. When we enable protection in the hardware profile and deploy to a cloud, intelligent placement will kick in and find the appropriate cloud containing Hyper-V hosts/clusters that meet the requirements for replication.



After the deployment, the virtual machine should immediately start its initial replication to Microsoft Azure, as we configured this in the protection settings for our cloud in Azure. We can see the details of the job in the portal and monitor the process. Once it is done, we can see, at a lower level, that we are actually replicating to Microsoft Azure directly at the VM level.




After a while (depending on available bandwidth), we have finally replicated to Azure and the VM is protected.





Enabling protection on already existing VMs in the VMM cloud

Also note that you can enable this directly from Azure. If a virtual machine is running in a VMM cloud that is enabled for protection, but the VM itself has not been enabled for protection in VMM, Azure can pick this up and you can configure it directly from the portal.



If you prefer to achieve this by using VMM, it is easy: open the properties of the VM and enable it for protection.




One last option is to use the VMM PowerShell module to enable this on many VMs at once.

Set-SCVirtualMachine -VM "VMName" -DRProtectionRequired $true -RecoveryPointObjective 300
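For example, to enable protection for all virtual machines in the cloud in one go, the VMs can be piped from the cloud into the cmdlet. This is a sketch: the cloud name "E2A" is from this setup, and the 300 second (5 minute) recovery point objective matches the cmdlet above.

$cloud = Get-SCCloud -Name "E2A"
Get-SCVirtualMachine -Cloud $cloud | ForEach-Object {
    Set-SCVirtualMachine -VM $_ -DRProtectionRequired $true -RecoveryPointObjective 300
}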

Test Failover

One of the best things about Hyper-V Replica is that complex workflows, such as test failovers, planned failovers and unplanned failovers, are integrated into the solution. This is also exposed and made available in the Azure portal, so that you can easily perform a test failover of your workloads. Once a VM is protected, meaning that it has successfully completed the initial replication to Azure, we can perform a test failover. This will create a copy based on the recovery point you select and boot that virtual machine in Microsoft Azure.







Once you are satisfied with the test, you can complete the test failover from the portal.
This will power off the test virtual machine and delete it from Azure. Please note that this process will not interfere with the ongoing replication from private cloud to Azure.



Planned failover

You can use planned failover in Azure Site Recovery for more than just failover. Consider a migration scenario where you actually want to move your existing on-premises workloads to Azure; planned failover will be the preferred option. It ensures minimal downtime during the process and starts up the virtual machine in Azure afterwards.
In our case, we wanted to simulate planned maintenance in our private cloud, and therefore performed a planned failover to Azure.



Click on the virtual machine you want to fail over, and click Planned Failover in the portal.
Note that if the virtual machine has not yet performed a test failover, we recommend doing one before an actual failover.
Since this is a test, we are ready to proceed with the planned failover.



When the job has started, we are drilling down to the lowest level again, Hyper-V Replica, to see what’s going on. We can see that the VM is preparing for planned failover where Azure is the target.



In the management portal, we can see the details for the planned failover job.



Once done, we have a running virtual machine in Microsoft Azure, which appears in the virtual machines list.



If we go back to the protected clouds in Azure, we see that our virtual machine “Azure01” has “Microsoft Azure” as its active location.



If we click on the VM and drill into the details, we can see that we are able to change the name and the size of the virtual machine in Azure.



We have now successfully performed a planned failover from our private cloud to Microsoft Azure!

Failback from Microsoft Azure

When we were done with our planned maintenance in our fabric, it was time to fail back the running virtual machine in Azure to our VMM cloud.
Click on the protected virtual machine that is running in Azure, and click Planned Failover.
We have two options for the data synchronization. We can either use "Synchronize data before failover", which performs something similar to re-initializing replication back to our private cloud. This means synchronization is performed without shutting down the virtual machine, leading to minimal downtime during the process.
The other option, "Synchronize data during failover only", minimizes the amount of synchronization data but means more downtime, as the shutdown begins immediately. Synchronization starts after the shutdown to complete the failover.
We are aiming for minimal downtime, so option 1 is preferred.



When the job is started, you can monitor the process in Azure portal.



Once the sync is complete, we must complete the failover from the portal, which will then start the VM in our private cloud.



Checking Hyper-V Replica again, we can see that the state is set to “failback in progress” and that we currently have no primary server.



The job has now completed all the required steps in Azure.



Moving back to Hyper-V Replica, we can see that the VM is again replicating to Microsoft Azure, and that the primary server is one of our Hyper-V nodes.



In VMM, our virtual machine "Azure01" is running again in the "E2A" cloud.



In the Azure management portal in the virtual machines list, our VM is still present but stopped.

Thanks for joining us on this guided tour on how to work with Azure Site Recovery.
Next time we will explore the scenarios we can achieve by using recovery plans in Azure Site Recovery, to streamline failover of multi-tier applications, LOB applications and much more.

Monday, June 16, 2014

Understanding Hosting Plans, VMM clouds and multi-tenancy - Part One

This is the first post in a series of blog posts related to hosting plans in Azure Pack and how they map to VMM management servers and VMM clouds in the context of multi-tenancy.

To show you an overview, have a look at the following figure:



In this case, we are dealing with a single management stamp (VMM management server) that contains several scale units and a VMM cloud, and is presented to the service management API through Service Provider Foundation.
Note that we are not referring to any specific Active Directory Domain here, nor specific subnets.
This is basically a high-level overview of the dependencies you see when dealing with a hosting plan in Azure Pack to deliver VM Clouds.

Explanation

The picture contains everything you are able to present to a VMM cloud, which is basically the foundation of any hosting plan that is offering VM clouds.

In VMM, we can create host groups containing our virtualization hosts. These host groups contain several settings, policies and configuration items based on your input. In the example above, we have designed the host group structure to reflect our physical locations, Copenhagen and Oslo, under the default "All Hosts" group in VMM.

Further, we have added some logical networks that are presented to these hosts, so we can assume we are using SMB, clustering, live migration, management, the PA network (NVGRE) and front-end networks for all of the involved Hyper-V nodes and clusters we are managing.
Since we will be using NVGRE with WAP, only the PA network is added as a logical network to the VMM cloud. This will be covered in detail in a later blog post.

We also have some port classifications, which are an abstraction of the virtual port profiles, so that we can present them to a cloud and classify the VM NICs for a desired configuration.

Storage classifications are used in a similar way, so that the storage we add to the cloud is the only storage used for our VHDs, matching the hardware profiles of the VM templates. The host groups added need to be associated with these classifications.

To present library resources in the tenant portal for VM deployments and so on, we must add at least one read-only library share that can contain VHDs, templates, profiles, scripts and more. If you are using VM roles in WAP, the resource extensions are located in this library too.

The VMM cloud abstracts the fabric resources, adds the read-only library shares and specifies the capacity of the cloud, which defines the amount of resources available for consumption through plans in WAP.
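In VMM PowerShell terms, the cloud itself is simply created on top of one or more host groups, and the library shares and capacity are then configured on that cloud object. A small sketch; the host group name is taken from the figure, while the cloud name is just an example:

$hostGroup = Get-SCVMHostGroup -Name "Oslo"
New-SCCloud -Name "Tenant Cloud" -VMHostGroup $hostGroup -Description "Cloud offered through a hosting plan in WAP"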

Service Provider Foundation is a multi-tenant REST OData API for System Center that enables IaaS, and it is the endpoint that connects the service management API in Azure Pack to your VMM management server(s) and VMM clouds.


Have a look at the figure, as I will use it as a reference and cover the details in the upcoming blog posts.

Shape the future of Windows Azure Pack!

You can help shape the future of Windows Azure Pack

Windows Azure Pack delivers Microsoft Azure technologies for you to run inside your datacenter. It offers rich, self-service, multi-tenant services and experiences that are consistent with Microsoft’s public cloud offering.
You can help shape the future of Windows Azure Pack. The Windows Azure Pack team has created a user voice site where you can post feature suggestions and vote on the suggestions of others.
You can find the Azure Pack user voice site here http://feedback.azure.com/forums/255259-azure-pack
Sign in to track your submitted ideas and comments.
When you would like to submit a new suggestion, type in one or more relevant keywords. This automatically filters the already submitted items. If somebody else has already submitted the same suggestion, you can vote on that suggestion. As a signed-in user you have a total of 10 votes. With these votes you can submit new suggestions or vote on existing ones.

Vote for existing suggestions

When you vote for existing items, you can choose to give 1, 2, or 3 votes for more weight. You are able to change your assigned votes afterwards. When suggestions are closed, the votes you assigned to that suggestion are available again.

Submit a new suggestion

To submit a new suggestion, provide the title for the suggestion and optionally enter a description and category. Select to attach a file if that helps to explain the suggestion and choose how many votes you would like to put on this suggestion.

Help shape Windows Azure Pack with the user voice site http://feedback.azure.com/forums/255259-azure-pack