
Monday, October 20, 2014

Understanding Windows Azure Pack and your service offerings


From time to time, I meet with customers (and also other system integrators) who are not fully aware of the definition of cloud computing.
I never expect people to know this down to the finest details, but they should have an overview of the following:

·         Deployment models
·         Service models
·         Essential characteristics

What’s particularly interesting when discussing Windows Azure Pack is that the relevant deployment model is the private cloud. Yes, we are touching your own datacenter with these bits – the one you are in charge of.

For the service models, we are embracing Infrastructure as a Service (IaaS, using the VM Cloud Resource Provider) and Platform as a Service (PaaS, using the Web Site Cloud Resource Provider).

The essential characteristics are also very important, as we’ll find elasticity, billing/chargeback, self-service, resource pooling and broad network access.

If you combine just self-service and IaaS, this tells us that we empower our users to deploy virtual machines on their own. Right?
Having the flexibility to provide such a service, we also rely on the underlying architecture to support it. We need to scale on demand (elasticity), ensure that users constantly have access to the solution no matter what device they are using (broad network access), find out who is consuming what (billing/chargeback), and last but not least, produce these services in a way that is cost effective and profitable (resource pooling).

So, it’s starting to make sense.

There is a reason for what we are seeing: with the Cloud OS, we provide these services by abstracting the underlying resources into clouds, plans and subscriptions.

Implementing a complete IaaS solution may bring some obstacles to the table.

Organizations tend to think that IaaS is something they have provided for years. Perhaps they have provided virtual machines, but not a complete IaaS solution.
The reason is that IaaS relies on abstraction at every layer. This is not only about virtual compute (memory, CPU), but also about virtual storage and virtual networking.
This is when it gets interesting, using network virtualization.

Remember that self-service is an essential characteristic of the cloud, right?
So delivering IaaS also means that the user is able to manage the networking aspect as well, with no interaction from the service provider/cloud administrator.
This is why Software-Defined Networking (NVGRE) is so essential to this service model, and hence we run into the following obstacles.

·         The customer (most often a service provider) wants to continue to provide managed services, such as:
o   Backup (both crash consistent and app consistent)
o   Monitoring (above the operating system level, covering the application stack)

This is what they are doing today with their infrastructure. But it also has a high operating cost, due to all the manual operations needed to keep the wheels moving.

Luckily, Windows Azure Pack is able to cover both scenarios, providing a consistent experience to users/tenants whether they are running resources in a “legacy” infrastructure or a new, modern IaaS infrastructure.

The following architecture shows that we are using two Virtual Machine Management Stamps.
Both of these sit behind the SPF endpoint, which presents the capabilities, capacity and much more to the Service Management API in Azure Pack.



A cloud administrator then creates a Hosting Plan in the Admin Portal of Azure Pack, which is associated with the legacy cloud in the legacy VMM server. This plan is available for the users/tenants who are subscribing to managed services.

A new plan is created, associated with the IaaS cloud and the IaaS VMM server, available for the users/tenants that need IaaS, without the requirement of managed services. They are dealing with these themselves.

Hopefully this blog post gave you an overview of what’s possible to achieve using Azure Pack, combining both kinds of services in a single solution.

(Want more info? – please join my TechEd session in Barcelona next week).

Wednesday, September 10, 2014

How Azure Pack is using Service Provider Foundation


A while ago, I wrote several posts about the different APIs in Azure Pack.
As you may be aware, Azure Pack consists of what we often refer to as the “Service Management API”.
The API is similar to the one we find in Microsoft Azure (though not literally the same), where the portal interacts with the APIs, which in turn aggregate the wide diversity of resource providers available for us to consume.

A short summary

The Azure Pack Management Portal offers a familiar, self-service interface that every subscriber (tenant) uses to provision and manage services such as the web site offerings and virtual machines with virtual networking capabilities.
We have portals for the admin (service provider) and the tenants.

Underlying the Management Portal is an OData REST application programming interface (API) known as the Service Management API.
This API provides access to the underlying services and enables automation and replacement of the existing management portal.

API summary:

Administrator API
REST APIs that are only available to Service Management administrators. By default, the Admin API uses port 30004, so URI requests should reflect that.

Tenant API
REST APIs that are available to both administrators and tenants. By default, the Tenant API uses port 30005.

Public tenant API
Public REST APIs that support end-user subscription management for services presented by the Service Management API. By default, the port is set to 30006.
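As a quick illustration, these APIs are plain OData REST endpoints, so they can be queried straight from PowerShell. This is only a sketch: the host name is a placeholder, and the exact resource path (here /plans) depends on your deployment, so verify it against the Service Management API reference.

```powershell
# Query the Admin API (default port 30004) for the configured plans.
# "wapadmin.contoso.com" is a placeholder for the server hosting the Admin API.
$uri = "https://wapadmin.contoso.com:30004/plans"
Invoke-RestMethod -Uri $uri -UseDefaultCredentials
```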

Let us get back on track

When we are working with the VM Cloud Resource Provider in WAP, we are touching many APIs on our journey, and one of the important ones (well, all of them are important for this to work) is Service Provider Foundation (SPF).

SPF is provided with System Center 2012 R2 – Orchestrator (no, you don’t have to install Orchestrator, but the SPF setup is located in the Orchestrator setup/media).
SPF exposes an extensible OData web service that interacts with VMM. This enables service providers to design and implement multi-tenant self-service portals that integrate IaaS capabilities available in System Center 2012 R2 and Windows Server 2012 R2 – Hyper-V.

SPF contains several web services that have two places to set credentials: the application pool identities in IIS, and the respective groups in Computer Management on the server where SPF is installed. These groups (SPF_Admin, SPF_VMM, SPF_Usage and SPF_Provider) must contain a local credential (not a domain credential) that is also a member of the Administrators group on the SPF server.

The SPF_VMM user must be added as an administrator to VMM in order to invoke actions from the WAP portal.
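A sketch of that wiring, assuming a service account named SPFSVC (a placeholder). The local group commands run on the SPF server; the VMM cmdlet runs where the VMM console/module is installed.

```powershell
# On the SPF server: add the account to the SPF_VMM group and local Administrators
net localgroup SPF_VMM SPFSVC /add
net localgroup Administrators SPFSVC /add

# On the VMM server: add the same account to the Administrator user role in VMM,
# so SPF can invoke actions on behalf of the WAP portal
$role = Get-SCUserRole -Name "Administrator"
Set-SCUserRole -UserRole $role -AddMember "SPFSVC"
```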

The Service Provider Foundation Web Services:


Admin Web Service

The admin web service is used to create and manage tenants, user roles, servers (such as Remote Console), stamps (VMM), and other administrative objects.


VMM Web Service

The VMM web service invokes the VMM server to perform requested operations.
Examples of operations could be:

-          Creating virtual machines
-          Creating virtual networks
-          Creating user role definitions
-          Creating cloud services and other fabric objects

Communication is bidirectional, so actions triggered by a portal that uses SPF (like WAP), as well as actions performed directly in VMM, are reflected on both sides.

An example:

You do something in VMM that affects one or more tenants, like adding a new VM to a tenant’s subscription. This will show up in the tenant portal of WAP.

Another example: when a tenant makes changes to a virtual network in the portal, the jobs are triggered in VMM, aggregated by SPF, and the changes show up immediately.

Usage Web Service

SPF also has a Usage Web Service that can only be used by WAP. It uses data from Operations Manager’s data warehouse, which is integrated with VMM in order to collect virtual machine metrics. You must use the SPF cmdlets to register SCOM with SPF.
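A hedged sketch of that registration, using the SPF admin cmdlets. Server and database names are placeholders, and the parameter names are from memory, so verify them against the SPF cmdlet help before running.

```powershell
Import-Module SPFAdmin

# Register the Operations Manager data warehouse connection with the stamp,
# so the Usage Web Service can collect VM metrics.
$stamp = Get-SCSpfStamp
New-SCSpfSetting -Name "scom01.contoso.com" `
    -SettingString "Data Source=scomdw.contoso.com;Initial Catalog=OperationsManagerDW;Integrated Security=True" `
    -SettingType DatabaseConnectionString -Stamp $stamp
```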

Provider Web Service

Resource providers delivering infrastructure as a service (IaaS) use this web service, which provides a Microsoft ASP.NET Web API. It also uses the VMM and Admin web services, but is not an Open Data (OData) service.


Registering SPF endpoint with Windows Azure Pack

As an administrator, you log on to the management portal and register the Service Provider Foundation endpoint. This will register a connection between the Service Management API and SPF.
Since SPF provides a programmatic interface to the stamps (VMM management servers), it enables service providers and enterprises to design and implement multi-tenant self-service portals that leverage IaaS capabilities provided by System Center and Windows Server.



After you have registered the SPF endpoint with the Service Management API:

·         All stamps that you have created directly in SPF will be listed in the management portal for administrators

·         All clouds created within the VMM stamp(s) will appear in the management portal for administrators

·         You can register stamps directly using the management portal for administrators

·         You can remove/change the association between a stamp and Service Provider Foundation


Monday, August 18, 2014

VM Cloud is missing in Windows Azure Pack

Recently, I’ve encountered a bug when working with WAP and VM Cloud as the resource provider.

Symptoms

You have connected the service management API to your SPF endpoint and added a VMM management stamp together with a Remote Desktop Gateway.

If you decide to change the FQDN of the Remote Desktop Gateway registered with your VMM management stamp, you will end up with a blank VM Cloud in the admin portal.
The connection to the SPF endpoint is still present, but the VMM management stamp with its cloud is missing.



This also causes the VMs and the virtual networks for the tenants to appear as missing in the tenant portal.

On the SPF server you will find the following event logged for ManagementODataServices:


On the server where the admin API is installed, you will find the following in the event viewer:




When you make changes to the FQDN of the Remote Desktop Gateway in WAP, you will end up with another SCSPFServer record in SPF, together with an SCSPFSetting that has the same ID as the previous records.

As you can see from the screenshot below, we now have two records of the ServerType “RDGateway”.



If we dig deeper, the following screenshot shows that we have two entries with the same ID, both registered to the VMM management stamp.



In short, the VMM management stamp is registered again, which generates a duplicate ID that results in this behavior.


Resolution

In order to clean up, we have to work directly on the SPF server using the SPFAdmin module with PowerShell.

Note: when done correctly, this will not delete, lose or harm anything in your production environment, but pay attention.

1.       Log on to your SPF server and import the SPFAdmin module



2.       Run the following cmdlets to identify and remove your RDGateway servers. In our case, we have two records and have to remove both of them before we later add the RDGateway we want.
The reason is that when you try to add the RDGateway in WAP afterwards, you will see that this column is empty although the record exists in SPF. If you try to add the RDGateway again, you will end up in the exact same situation. Therefore we must remove both servers in SPF.
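The original screenshots are missing from this copy, so here is a hedged sketch of the cmdlets involved, assuming the SPFAdmin module is available on the SPF server:

```powershell
Import-Module SPFAdmin

# Identify every RDGateway record in SPF - in this scenario there are two
Get-SCSpfServer | Where-Object { $_.ServerType -eq "RDGateway" }

# Remove them all; we will re-register the correct gateway afterwards
Get-SCSpfServer | Where-Object { $_.ServerType -eq "RDGateway" } | Remove-SCSpfServer
```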




3.       Remove the duplicate SCSpfSetting with the following cmdlets. The SCSpfSetting on the top is the setting you want to remove with the duplicate ID.
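Again as a hedged sketch (the duplicate-ID placeholder must be replaced with the ID you identify in the output; do not guess it):

```powershell
# List the settings; the duplicate is the one sharing an ID with the previous record
Get-SCSpfSetting

# Remove the duplicate setting (replace the placeholder with the actual ID)
Get-SCSpfSetting | Where-Object { $_.ID -eq "<duplicate-ID>" } | Remove-SCSpfSetting
```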



4.       Next, we want to register the RDGateway directly to our stamp with SPF to avoid creating duplicate IDs.
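A sketch of that registration, with the gateway FQDN as a placeholder (verify the -Stamps parameter name against the SPF cmdlet help):

```powershell
# Register the RD Gateway directly against the stamp to avoid a duplicate ID
$stamp = Get-SCSpfStamp
New-SCSpfServer -Name "rdgw.contoso.com" -ServerType RDGateway -Stamps $stamp
```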



Once this is done, you can refresh both the admin portal and the tenant portal, and your VMM management stamp should be present again.
Also edit the connection to verify that the RD Gateway is registered with the correct values.






Please note: if you register your VM Cloud resource provider in WAP with all the settings at once, you will not run into this issue. It only occurs if you add the RDGateway afterwards, or make changes to the existing one.



Monday, July 7, 2014

Windows Azure Pack - Infrastructure as a Service Jump-start

If you are interested in Azure Pack and especially the VM Clouds offering (Infrastructure as a Service), then you should mark the date and time so that you are able to join us this week.

We will be arranging a MVA Jump-Start: Windows Azure Pack – Infrastructure as a Service Jump-Start.


“IT Pros, you know that enterprises desire the flexibility and affordability of the cloud, and service providers want the ability to support more enterprise customers. Join us for an exploration of Windows Azure Pack's (WAP's) infrastructure services (IaaS), which bring Microsoft Azure technologies to your data center (on your hardware) and build on the power of Windows Server and System Center to deliver an enterprise-class, cost-effective solution for self-service, multitenant cloud infrastructure and application services. 

Join Microsoft’s leading experts as they focus on the infrastructure services from WAP, including self-service and automation of virtual machine roles, virtual networking, clouds, plans, and more. See helpful demos, and hear examples that will help speed up your journey to the cloud. Bring your questions for the live Q&A!”

To get a solid background and learn more about what we are going to cover, I highly recommend downloading and reading the whitepaper we created on the subject earlier this year.


Together with some of the industry experts, I will be answering questions during the event – so please use this opportunity to embrace and adopt Azure Pack.


Monday, June 16, 2014

Understanding Hosting Plans, VMM clouds and multi-tenancy - Part One

This is the first post in a series of blog posts related to Hosting Plans in Azure Pack and how things map to VMM management servers and VMM clouds in the context of multi-tenancy.

To show you an overview, have a look at the following figure:



In this case, we are dealing with a single management stamp (VMM management server) that contains several scale units and a VMM cloud, and is presented to the Service Management API through Service Provider Foundation.
Note that we are not referring to any specific Active Directory Domain here, nor specific subnets.
This is basically a high-level overview of the dependencies you see when dealing with a hosting plan in Azure Pack to deliver VM Clouds.

Explanation

The picture contains everything you are able to present to a VMM cloud, which is basically the foundation of any hosting plan that is offering VM clouds.

In VMM, we can create host groups containing our virtualization hosts. These host groups contain several settings, policies and configuration items based on your input. In the example above, we have designed the host group structure to reflect our physical locations, Copenhagen and Oslo, under the default “All Hosts” group in VMM.

Further, we have added some logical networks that are presented to these hosts, so we can assume we are using SMB, clustering, live migration, management, the PA network (NVGRE) and front-end for all of the involved Hyper-V nodes and clusters we are managing.
Since we will be using NVGRE with WAP, only the PA network is added as a logical network to the VMM cloud. This will be covered in detail in a later blog post.

We also have some port classifications, which are abstractions of the virtual port profiles, so that we can present those to a cloud and classify the VM NICs for a desired configuration.

Storage classifications are used in a similar way, so that the storage we add to the cloud is the only storage used for our VHDs, matching the hardware profiles of the VM templates. The host groups added need to be associated with these classifications.

To present the library resources in the tenant portal for VM deployments etc., we must add at least one read-only library share that can contain VHDs, templates, profiles, scripts and more. If using VM Roles in WAP, resource extensions are located in this library too.

The VMM cloud abstracts the fabric resources, adds read-only library shares, and specifies the capacity of the cloud, which defines the amount of resources available to consume through plans in WAP.

Service Provider Foundation is a multi-tenant OData REST API for System Center that enables IaaS, and is the endpoint that connects the Service Management API in Azure Pack to your VMM management server(s) and VMM clouds.


Have a look at the figure, as I will use it as a reference and cover the details in the upcoming blog posts.

Sunday, June 15, 2014

Webinar - Windows Azure Pack with VM Clouds and Request Management

Webinar – Windows Azure Pack with VM Clouds and Request Management

Recently, my company announced a new partnership with Gridpro.

(More info can be found on my Lumagate blog at http://lumagate.com/cto )

Together with Patrik Sundqvist, we presented some of the most interesting stuff in Azure Pack, and also focused on their custom resource provider that is now deeply integrated into the Azure Pack setup.

The presentation is split in two. I present the first hour, giving you an overview of the Cloud OS and Azure Pack, focusing on IaaS with VMM and SPF as resource providers, and covering network virtualization, remote console and cloud offerings.

Patrik presents afterwards, demoing their Request Management solution, which uses System Center Service Manager as the resource provider to extend the service offerings in this portal.

The event was open for everyone, and we saw quite a few known members from the community on the call as well. :)

If you missed it, you can watch it on-demand by clicking on this link:



Tuesday, March 4, 2014

Authoring VM Roles for Windows Azure Pack

You may be aware of VM Roles within Windows Azure Pack.
The ability to extend your service offering with services and applications using the rich framework in VMM is really a killer and a “must” for those adopting Windows Azure Pack these days and wanting a VM Cloud.

For more information about how to get started, please see an older blog post:

Microsoft is creating ready-to-use gallery items that you can download with Web Platform Installer.
One of the good things with these packages, is that you can edit them directly using the VM Authoring tool. (Download VMAuthoring Tool from Codeplex: https://vmroleauthor.codeplex.com/ )

The VM Role in WAP and System Center 2012 R2 introduces an application model to deploy virtual machine workloads. The tool is used to author VM Role artifacts – Resource Definitions and Resource Extension Packages.

In this blog post, we will create a basic VM Role that can be joined to an existing Active Directory Domain.

We need to create both a Resource Definition – and a Resource Extension for the VM Role.

The Resource Definition is the package that speaks a language Windows Azure Pack understands. The Resource Definition (RESDEF) is a versioned template that describes how a resource should be provisioned, and includes information such as VM size, OS settings, OS image, allowable extensions and resource extension references. In addition, the Resource Definition also contains the View Definition (VIEWDEF), which presents the tenants with a user interface in the portal, providing descriptions for the input fields and prompting them for required information.

The Resource Extension is the package that speaks a language VMM understands. The extension contains information about the requirements the Resource Definition has towards the building blocks in the VMM library, and describes how the resource described by a Resource Definition file should be installed and configured. The resource extension can only be imported with PowerShell, and may have requirements on its VHDs in order to be used in Windows Azure Pack.
For instance, a VM Role that should work as a SQL server would have certain criteria that must be met in the resource extension, like a VHD tagged with “SQL”, so that the resource definition and its view definition will list the valid disks in the portal during the creation wizard.

For more information and a good guidance on how to create VM Roles with VMAuthoring Tool, please check these great tutorials by Charles:

VM Role Resource Extension: http://www.youtube.com/watch?v=iCilD2P8vhE

VM Role Resource Definition: http://www.youtube.com/watch?v=66zznivfh_s

Consider these as mandatory viewing before you proceed with this blog post. :)

I will create a new VM Role that joins an existing Active Directory domain and also enables the File Server role within the guest after deployment.

1)      Start VM Authoring tool and create a new Resource Definition Package and a new Windows Resource Extension Package



2)      As you can see, we have both artifacts presented in this tool. We will mainly be focusing on the resource definition, since we are not putting many applications within the VM Role.


3)      In the resource requirements for the resource extension, I have added a tag for the VHD, which is “WindowsServer2012”. That means the VHD used with this extension must be tagged accordingly


4)      In the Roles & Features section, I have simply enabled “File Server”, so that VMM will configure the guests with this server role as part of the process


5)      In the Resource Definition, we also have ‘Extension References’ that link to the resource extension we will import into the VMM library. The references here are important, so that the definition file knows where to look and VMM knows what to present to the portal when the VM Role is selected. As you can see, I have referenced my resource extension file in the upper left corner.


6)      In the Operating System Profile in the resource definition, I want to configure the VM Role to join an Active Directory domain. By default, the profile is configured with “Workgroup”, so select “JoinDomain”, and from the drop-down lists next to DomainToJoin and DomainJoinCredentials, click to generate a new parameter for both. Navigate to the “parameters” in the Resource Definition afterwards

7)      We now have two new parameters, and the tool auto-creates the recommended data types for these fields. In this case, string and credential are mapped to the new parameters


8)      Moving over to the View Definition section, we can see the OSVirtualHardDisk and the requirement for tags. In this case, a tag of “WindowsServer2012” is required on the VHD used for this VM Role, and we must tag this VHD with PowerShell in VMM

Save the packages to a location on your HDD. Note that you can always verify your input and the tool will point out any errors in the configuration for you to fix.

These were some very small modifications, but we now have the basics in place for a new VM Role that will join the domain during deployment, and also install and configure the file server.

Let us move over to the service management portal in Windows Azure Pack and import the resource definition.

1)      Log on to the Windows Azure Pack administrator portal. This is considered a highly privileged portal and should be located behind your corporate firewall.
2)      On the VM Clouds, go to Gallery and click import. Browse to the location of your newly created gallery item and import the resource definition.



3)      Make the Gallery Item Public and save the changes.


Before we can add the gallery item to a plan created in Windows Azure Pack, we must first import the resource extension into VMM, so that the resource definition knows what to look for.

1)      Navigate to VMM and launch PowerShell

The following script can be used to import a resource extension, and also to verify the content afterwards.

### Sample script that imports the Web VM Role into VMM Library

### Get Library share
### Get resource extensions from folder
### Import resource extension to VMM library

$libraryShare = Get-SCLibraryShare | Where-Object {$_.Name -eq 'MSSCVMMLibrary'}

$resextpkg = $Env:SystemDrive + "\Users\administrator.INTERNAL\Desktop\GalleryTemp\KNDemo-03-03-2014-18-36-06\KN.resextpkg"

Import-CloudResourceExtension -ResourceExtensionPath $resextpkg -SharePath $libraryShare -AllowUnencryptedTransfer



### Get virtual hard disk that should be associated with the resource extension
### Ask VMM for operating systems equal to 64-bit edition of Windows Server 2012 Datacenter
### Set virtual hard disk to be tagged as Windows Server 2012 Datacenter

$myVHD = Get-SCVirtualHardDisk | Where-Object {$_.Name -eq 'webg1.vhdx'}
$WS2012Datacenter = Get-SCOperatingSystem | Where-Object { $_.Name -eq '64-bit edition of Windows Server 2012 Datacenter' }
Set-SCVirtualHardDisk -VirtualHardDisk $myVHD -OperatingSystem $WS2012Datacenter

### Define tags
### Tag vhd with family name (Windows Server 2012) and extension requirements (.NET3.5)
### Set properties on vhd

$tags = $myVHD.Tag
if ( $tags -cnotcontains "WindowsServer2012" ) { $tags += @("WindowsServer2012") }
if ( $tags -cnotcontains ".NET3.5" ) { $tags += @(".NET3.5") }
Set-SCVirtualHardDisk -VirtualHardDisk $myVHD -Tag $tags
Set-SCVirtualHardDisk -VirtualHardDisk $myVHD -FamilyName "Windows Server 2012 Datacenter" -Release "1.0.0.0"

### Verify cloud resource extensions

Get-CloudResourceExtension | Format-List -Property State, Description, Name

### Verify cloud resources deployed

Get-CloudResource | Format-List -Property name

### Verify tags on vhds

Get-SCVirtualHardDisk | Format-List -Property FamilyName, OperatingSystem, VHDFormatType, Release

This script is for your reference.

Once this has completed, we should be able to add the gallery item to an existing Plan in WAP.

1)      Navigate back to the service management portal and locate your newly imported gallery item
2)      On plans, click add and select the plan you want this to be added to.
Note: depending on the number of subscriptions accessing this plan, it can take a minute or two before everything is populated and exposed to them.

Now, let us log on as a tenant and deploy our new VM Role.

Note: If you are using NVGRE and want the VM Role to join an Active Directory domain, you must specify the right DNS server for the network in the portal prior to deploying this role. If you only use a public DNS for internet connectivity for your tenants, you won’t be able to join.

1)      Logon to the tenant portal
2)      Launch the wizard, select new Virtual Machine Role and select ‘from gallery’


3)      Since we have imported both the resource definition file and the resource extension file, which have the corresponding requirements to find each other, we can see the newly created VM Role “KNDemo”, which has a version of “1.0.0.0”. Click to proceed



4)      Assign a unique name for the VM role and continue



5)      The view definition presents us with the required input fields and maps them back to the configuration of the VM Role. As you can see, I am able to specify the Active Directory domain to join, and which credentials to use. Once this is done, we can deploy the VM Role.
Note that you could also separate different configuration tasks into different sections/windows in this wizard, so that everything is not placed in one long list as in this example.




6)      The VM Role will now be provisioned and joined to my network (NVGRE in this context) and my domain


Once the VM is deployed, we can log on remotely (using the great Remote Console feature) and verify the configuration.

First, we see that the VM has joined the domain, and I am able to log on with domain credentials:


Next, we can verify that we have installed the File Server role:



I hope this blog post was useful for getting started with authoring your own VM Roles using the VM Authoring tool.
If time allows, I will be back with other examples in the near future.

Wednesday, February 12, 2014

What is a Management Stamp?


Recently, I had an interesting discussion with one of our service provider customers.
As we were planning to leverage Windows Azure Pack, we had to take a look at the current VMM infrastructure.

The question was: should we use the same VMM infrastructure for our Windows Azure Pack environment, or create a new VMM infrastructure with its own scale units?

To give you a better understanding of the decision, we had to discuss the real topic here.

What is a Management Stamp?

The stamp is a concept we first saw with Service Provider Foundation in Service Pack 1 for System Center.
Ideally, a stamp represents scale units (networking, storage, compute) and is managed by Virtual Machine Manager.
Virtual Machine Manager can embrace your entire datacenter, all locations, and consolidate the view and management of each scale unit. So to put it right, a stamp is actually a VMM infrastructure containing the scale units.
The stamp should also be monitored and secured through compliance and backups.

So a stamp is important in this context, as Service Provider Foundation is an endpoint that orchestrates processes through the abstraction layer in VMM, via the clouds you configure and present, which are based on scale units.

A stamp could represent a rack, a geographical location, functions or different kinds of services.

Windows Azure Pack can create plans which are bound to a stamp exposed through Service Provider Foundation.

The conclusion

Instead of adding all the new functionality, like Remote Console, NVGRE, and resource extensions in the VMM library (gallery items in WAP), to the existing stamp, we ended up creating a new stamp dedicated to SPF and Windows Azure Pack, especially since the logical networks had to be modeled to support Network Virtualization.
This gave us a lot more flexibility: public plans in WAP leverage the new stamp, which was designed for WAP and its tenants, while a private plan was created to meet the requirements of the IT department for deploying virtual machines within the corporate infrastructure.





Three important updates for your Cloud OS Infrastructure

If you are working with Windows Azure Pack, you have probably noticed that Service Provider Foundation and Virtual Machine Manager (System Center 2012 R2) are required to have a resource provider that enables the IaaS offering.

Recently, we got UR1 for VMM 2012 R2.
Now, we have also gotten UR1 for both Windows Azure Pack and Service Provider Foundation.

Grab the links, and please pay attention to VMM, which requires you to run a SQL script post-installation!






Thursday, February 6, 2014

Configuring Remote Console for Windows Azure Pack



This is a blog post that is part of my Windows Azure Pack findings.
Lately, I have been getting my hands dirty, trying to break, fix and stress test Windows Azure Pack and its resource providers.
Today, I will explain how we configure Remote Desktop as part of our VM Cloud resource provider, to give console access to the virtual machines running on a multi-tenant infrastructure.

Background

Windows Server 2012 R2 – Hyper-V introduced many new innovations, and a thing called “Enhanced VM session mode”, or “RDP via VMBus”, was a feature that no one really cared about at first.

To put it simply: the traditional VMConnect session you initiate when connecting to a virtual machine (on port 2179 to the host, which then exposes the virtual machine) now supports redirecting local resources to the virtual machine session. This has not been possible before, unless you went through a TCP/IP RDP connection directly to the guest, which indeed required network access to the guest.

Hyper-V’s architecture has something called the “VMBus”, a high-speed, memory-based communication mechanism used for interpartition communication and device enumeration on systems with multiple active virtualized partitions. If you do not install the Hyper-V role, the VMBus is not used for anything, but when Hyper-V is installed, the VMBus is responsible for communication between the parent and child partitions with the Integration Services installed.
The virtual machines (guests/child partitions) do not have direct access to the physical hardware on the host. They are only presented with virtual views (synthetic devices). The synthetic devices for storage, networking, graphics, and input take advantage of the Integration Services when they are installed. The Integration Services are a virtualization-aware implementation that utilizes the VMBus directly and bypasses any device emulation layer.

In other words:

The enhanced session mode connection uses a Remote Desktop Connection session via the VMBus, so no network connection to the virtual machine is required.
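On a standalone Hyper-V host you can check and enable this feature with the Hyper-V PowerShell module; the sketch below is for a lab host (in the Remote Console scenario, VMM and WAP drive the connection for you):

```powershell
# Check whether enhanced session mode is allowed on this Hyper-V host,
# then enable it (Hyper-V PowerShell module, run elevated on the host).
Get-VMHost | Select-Object EnableEnhancedSessionMode
Set-VMHost -EnableEnhancedSessionMode $true
```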

What problems does this really solve?

·         Hyper-V Manager lets you connect to the VM without any network connectivity, and copy files between the host and the VM
·         Using USB devices with the virtual machine
·         Printing from a virtual machine to a local printer
·         Taking advantage of all of the above, without any network connectivity

·         Deliver 100% IaaS to customers/tenants

The last point is important.

If you look at the service models in the cloud computing definition, Infrastructure as a Service gives the tenants the opportunity to deploy virtual machines, virtual storage and virtual networks.
In other words, all of the fabric content is managed by the service provider (networking, storage, hypervisor), and the tenants simply get an operating system within a virtual machine.
Now, to truly deliver that, through the power of self-service, without any interaction from the service provider, we must also let the tenants do whatever they want with this particular virtual machine.
The networking stack is also part of the operating system. (Remember that abstraction is key here, so the tenant should also manage, and be responsible for, networking within their virtual machines, not only their applications.) So to let tenants have full access to their virtual machines, without any network dependencies, Remote Desktop via VMBus is the solution.



Ok, so now you know where we’re heading, and will use RDP via VMBus together with System Center 2012 R2 and Windows Azure Pack. This feature is referred to as “Remote Console” in this context, and provides the tenants with the ability to access the console of their virtual machines in scenarios where other remote tools (or RDP) are unavailable. Tenants can use Remote Console to access virtual machines when the virtual machine is on an isolated network, an untrusted network, or across the internet.

Requirements

Windows Server 2012 R2 – Hyper-V
System Center 2012 R2 – Virtual Machine Manager
System Center 2012 R2 – Service Provider Foundation (which was introduced in SP1)
Windows Azure Pack
Remote Desktop Gateway

The Remote Desktop Gateway in this context acts almost like it does in a VDI solution, signing connections from MSTSC to the gateway, but redirecting to the VMBus rather than to a VDI guest.

After you have installed, configured and deployed the fabric, you can add the Remote Desktop Gateway to your VM Cloud resource provider. You can either add this in the same operation as when you add your VMM server(s), or do it afterwards. (This requires that you have installed a VM with the RD Gateway role and configured SSL certificates, both for the VMM->Host->RDGW communication and a CA certificate for external access.)



Before we start to explain the required configuration steps, I would like to mention some important things.
This has been a valuable learning experience, and I have been collaborating with Marc Van Eijk (Azure MVP), Richard Rundle (PM at MS), Stanislav Zhelyazkov (Cloud MVP), and last but not least, Flemming Riis (Cloud MVP).
Thanks for all the input and valuable discussions, guys!

As part of this journey, I have been struggling with certificates to get everything up and running. As you may be aware, I am not a PKI master, and I am not planning to become one either, but it is nice to have a clear understanding of the requirements in this setup.

1)      The certificate you need for your VMM server(s), Hyper-V hosts (that are part of a host group in a VMM cloud, which is further exposed through SPF to a Plan in WAP) and the RD Gateway can be self-signed. I bet many will try to configure this with self-signed certificates in their lab, and feel free to do so. But you must configure it properly. I’ve been burned here. Many times.
2)      The certificate you need to access this remotely should be from a CA. If you want to demonstrate or use this in a real-world deployment, this is an absolute requirement. This certificate is then only needed on the RD Gateway, and should represent the public FQDN of the RD Gateway that is accessible on port 443 from the outside.
3)      I suggest you repeat steps 1 and 2 before you proceed.
4)      I also suggest getting your hands on a trusted certificate so that you don’t have to stress with the Hyper-V host configuration, as described later in this guide.

Configuring certificates on VMM

If you are using self-signed certificates, you should start by creating a self-signed certificate that meets the requirement for this scenario.

1)      The certificate must not be expired
2)      The Key Usage field must contain a digital signature
3)      The Enhanced Key Usage field must contain the following Client Authentication object identifier: (1.3.6.1.5.5.7.3.2)
4)      The root certificate for the certification authority (CA) that issued the certificate must be installed in the Trusted Root Certification Authorities certificate store
5)      The cryptographic service provider for the certificate must support SHA256
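The requirements above can be sanity-checked from PowerShell; the sketch below assumes the certificate was created with the subject "CN=Remote Console Connect" and sits in the current user's Personal store:

```powershell
# Sketch: sanity-check a candidate certificate against the Remote Console requirements.
$cert = Get-ChildItem Cert:\CurrentUser\My |
    Where-Object { $_.Subject -eq "CN=Remote Console Connect" }

# 1) Not expired
$cert.NotAfter -gt (Get-Date)

# 3) Enhanced Key Usage must contain Client Authentication (1.3.6.1.5.5.7.3.2)
($cert.Extensions | Where-Object { $_.Oid.FriendlyName -eq "Enhanced Key Usage" }
    ).EnhancedKeyUsages.Value -contains "1.3.6.1.5.5.7.3.2"

# 5) Signature algorithm should be SHA256-based
$cert.SignatureAlgorithm.FriendlyName
```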

You can download makecert and run the following command to create a working certificate:

makecert -n "CN=Remote Console Connect" -r -pe -a sha256 -e <mm/dd/yyyy> -len 2048 -sky signature -eku 1.3.6.1.5.5.7.3.2 -ss My -sy 24 "remoteconsole.cer"
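If makecert is not at hand, newer Windows versions (Windows 10 / Server 2016 and later; the cmdlet is more limited on 2012 R2) can create a comparable certificate with New-SelfSignedCertificate. A sketch, not verified against this exact setup:

```powershell
# Sketch: self-signed cert with digital signature key usage, Client Authentication
# EKU (1.3.6.1.5.5.7.3.2), SHA256 and a 2048-bit key, placed in the user store.
New-SelfSignedCertificate -Subject "CN=Remote Console Connect" `
    -KeySpec Signature `
    -KeyUsage DigitalSignature `
    -TextExtension @("2.5.29.37={text}1.3.6.1.5.5.7.3.2") `
    -HashAlgorithm SHA256 -KeyLength 2048 `
    -NotAfter (Get-Date).AddYears(1) `
    -CertStoreLocation Cert:\CurrentUser\My
```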

Once this is done, open MMC, add the Certificates snap-in, and connect to the current user account.
Under Personal, you will find the certificate.

1)      Export the certificate (.cer) to a folder.
2)      Export the private key (.pfx) to a folder, and create a password.
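The two exports can also be scripted; a sketch, where the output paths are illustrative:

```powershell
# Sketch: export the public part (.cer) and the private key (.pfx).
# The C:\Certs paths are illustrative; pick your own folder.
$cert = Get-ChildItem Cert:\CurrentUser\My |
    Where-Object { $_.Subject -eq "CN=Remote Console Connect" }

Export-Certificate -Cert $cert -FilePath C:\Certs\RemoteConsoleConnect.cer

$pfxPwd = ConvertTo-SecureString "password" -AsPlainText -Force
Export-PfxCertificate -Cert $cert -FilePath C:\Certs\RemoteConsoleConnect.pfx -Password $pfxPwd
```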

For the VMM server, we load the PFX into the VMM database so that VMM doesn’t need to rely on the certificates being in the certificate store of each node. You shouldn’t need to do anything on the VMM server except import the PFX into the VMM database using the Set-SCVMMServer cmdlet. The VMM server is responsible for creating tokens.
Now, open VMM, launch the VMM PowerShell module, and execute these cmdlets, since we also must import the PFX into the VMM database:

$mypwd = ConvertTo-SecureString "password" -AsPlainText -Force
$cert = Get-ChildItem .\RemoteConsoleConnect.pfx
$VMMServer = "VMMServer01.Contoso.com"
Set-SCVMMServer -VMConnectGatewayCertificatePassword $mypwd -VMConnectGatewayCertificatePath $cert -VMConnectHostIdentificationMode FQDN -VMConnectHyperVCertificatePassword $mypwd -VMConnectHyperVCertificatePath $cert -VMConnectTimeToLiveInMinutes 2 -VMMServer $VMMServer

This imports the PFX and configures VMM with the VMConnect gateway password and certificate, the host identification mode (FQDN), and the token time to live in minutes.

Once this is done, you can either wait for VMM to refresh the Hyper-V hosts in each host group to deploy the certificates, or trigger it manually through PowerShell with this cmdlet:

Get-SCVMHost -VMMServer "VMMServer01.Contoso.com" | Read-SCVMHost

Once each host is refreshed, VMM installs the certificate in the Personal certificate store of the Hyper-V host and configures the host to validate tokens by using the certificate.

The downside of using a self-signed certificate in this setup is that we have to perform some manual actions on the hosts afterwards:

Configuring certificates on the Hyper-V hosts

Hyper-V will accept tokens that are signed by using specific certificates and hash algorithms. VMM performs the required configuration for the Hyper-V hosts.

Since we are using a self-signed certificate, we must import the public key (not the private key) of the certificate into the Trusted Root Certification Authorities certificate store on the Hyper-V hosts. The following command will do this for you:

Import-Certificate -CertStoreLocation cert:\LocalMachine\Root -Filepath "<certificate path>.cer"

You must restart the Hyper-V Virtual Machine Management service if you install a certificate after you have configured Virtual Machine Manager. (If you have running virtual machines on the hosts, put one host at a time into maintenance mode with VMM, wait until it is empty, reboot it, and perform the same action on every other host before you proceed. Yes, we are getting punished for using self-signed certificates here.)
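That host-by-host cycle can be sketched with the VMM cmdlets; this assumes clustered hosts (so -MoveWithinCluster can evacuate the VMs) and the server name from earlier in this post:

```powershell
# Sketch: drain each host, restart the Hyper-V VMM service, and resume.
# Assumes clustered hosts so running VMs can be live migrated away first.
foreach ($vmhost in (Get-SCVMHost -VMMServer "VMMServer01.Contoso.com")) {
    Disable-SCVMHost -VMHost $vmhost -MoveWithinCluster      # enter maintenance mode
    Invoke-Command -ComputerName $vmhost.Name {
        Restart-Service vmms                                 # Hyper-V Virtual Machine Management service
    }
    Enable-SCVMHost -VMHost $vmhost                          # exit maintenance mode
}
```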

Please note:
This part, where the Hyper-V Virtual Machine Management service requires a restart, is very critical. If Remote Console is not working at all, it could be due to the timing of when the self-signed certificate was added to the Trusted Root store on the Hyper-V hosts. If the certificate is added to the Trusted Root store after VMM has pushed the certificate, Hyper-V won’t recognize the self-signed certificate as trusted, since it queries the certificate store at process startup, not for each token it validates.

Now we need to verify that the certificate is really installed in the Personal certificate store of the Hyper-V hosts, using the following cmdlet:

dir cert:\localmachine\My\ | Where-Object { $_.subject -eq "CN=Remote Console Connect" }



Also, we must check the hash configuration for the trusted issuer certificate by running this cmdlet:

$Server = "nameofyourFQDNHost"

$TSData = Get-WmiObject -ComputerName $Server -Namespace "root\virtualization\v2" -Class "Msvm_TerminalServiceSettingData"

$TSData
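In the output, two properties are worth checking explicitly; a sketch, reusing the $TSData object from above (property names are the ones the Remote Console documentation describes):

```powershell
# Sketch: the two properties to verify in Msvm_TerminalServiceSettingData.
$TSData.TrustedIssuerCertificateHashes   # should contain the thumbprint of "CN=Remote Console Connect"
$TSData.AllowedHashAlgorithms            # should include SHA256
```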



Great, we are now done with both VMM and our Hyper-V hosts.

Configuring certificates on the Remote Desktop Gateway

This Remote Desktop Gateway can only be used for Remote Console once it is configured for it. A configuration change will occur that makes the gateway unusable for other purposes, as we will install an authentication plug-in from the VMM media on this server.

In order to support federated authentication, VMM ships a Console Connect Gateway plug-in, located on the media at CDLayout.EVAL\amd64\Setup\msi\RDGatewayFedAuth.

For an HA scenario, you can install multiple RD Gateways with the Console Connect Gateway behind a load balancer.

Once you have installed and configured the RD Gateway with a trusted certificate from a CA for the front-end part (the public FQDN that is added to the VM Cloud resource provider in WAP), you can move forward and import the public key of the certificate into the Personal certificate store on each RD Gateway server, using the following cmdlet:

C:\> Import-Certificate -CertStoreLocation cert:\LocalMachine\My -Filepath "<certificate path>.cer"

Since we are using a self-signed certificate in this setup, we must do the same for the trusted root certification authorities certificate store for the machine account with the following cmdlet:

C:\> Import-Certificate -CertStoreLocation cert:\LocalMachine\Root -Filepath "<certificate path>.cer"

When the RD Gateway is authenticating tokens, it accepts only tokens that are signed by using specific certificates and hash algorithms. This configuration is performed by setting the TrustedIssuerCertificateHashes and AllowedHashAlgorithms properties in the WMI FedAuthSettings class.

Use the following cmdlet to set the TrustedIssuerCertificateHashes property:

$Server = "rdconnect.internal.systemcenter365.com"
$Thumbprint = "thumbprint of your certificate"
$TSData = Get-WmiObject -ComputerName $Server -Namespace "root\TSGatewayFedAuth2" -Class "FedAuthSettings"
$TSData.TrustedIssuerCertificates = $Thumbprint
$TSData.Put()

Now, make sure that the RD Gateway is configured to use the Console Connect Gateway (VMM plug-in) for authentication and authorization, by running the following cmdlet:

C:\> Get-WmiObject -Namespace root\CIMV2\TerminalServices -Class Win32_TSGatewayServerSettings



Next, we must make sure that the certificate has been installed in the personal certificate store for the machine account, by running the following command:

Dir cert:\localmachine\My\ | Where-Object { $_.Subject -eq "CN=Remote Console Connect" }


And last, check the configuration of the Console Connect Gateway, by running this cmdlet:

Get-WmiObject -ComputerName $Server -Namespace "root\TSGatewayFedAuth2" -Class "FedAuthSettings"



Now, if you have added your RD Gateway to Windows Azure Pack, you can deploy virtual machines after subscribing to a plan, and test the Remote Console feature.