Monday, February 23, 2015

When your WAPack tenants are using VLANs instead of SDN

Ever since the release of Windows Azure Pack, I've been a strong believer in software-defined datacenters powered by Microsoft technologies. The story around NVGRE in particular has been interesting, and it is something that Windows Server, System Center and Azure Pack really embrace.

If you want to learn and read more about NVGRE in this context, I recommend having a look at our whitepaper: https://gallery.technet.microsoft.com/Hybrid-Cloud-with-NVGRE-aa6e1e9a

Also, if you want to learn how to design a scalable management stamp and turn SCVMM into a fabric controller for your multi-tenant cloud, where NVGRE is essential, have a look at this session: http://channel9.msdn.com/Events/TechEd/Europe/2014/CDP-B327

The objective of this blog post is to:

· Show how to design VMM to deliver and use dedicated VLANs for your tenants
· Show how to structure and design your hosting plans in Azure Pack
· Customize the plan settings to avoid confusion

How to design VMM to deliver and use dedicated VLANs for your tenants

Designing and implementing a solid networking structure in VMM can be quite a challenging task.
We normally see that during setup and installation of VMM, people don't have all the information they need. As a result, they have often already deployed a couple of hosts before they actually pay attention to:
1)      Host groups
2)      Logical networks
3)      Storage classifications
Needless to say, it is very difficult to change any of this afterwards, when you have several objects in VMM with dependencies and deep relationships.

So let us just assume that we are able to follow the guidelines and pattern I’ve been using in this script:

The fabric controller script will create host groups based on physical locations, with child host groups that contain the different functions.
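As a rough sketch of that pattern in VMM PowerShell (the location and function names here are illustrative, not taken from the script):

# Create a host group per physical location, with child host groups per function
$location = New-SCVMHostGroup -Name "Oslo"
New-SCVMHostGroup -Name "IaaS" -ParentHostGroup $location
New-SCVMHostGroup -Name "Management" -ParentHostGroup $location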
For all the logical networks in that script, I am using “one connected network” as the network type.



This gives us a 1:many mapping between each logical network and its VM networks, which simplifies scalability and management.
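In VMM PowerShell, "one connected network" corresponds to creating the logical network without network definition isolation. A minimal sketch, assuming a PA network for NVGRE (the name and flag values are illustrative):

# "One connected network": no isolation between the network sites
New-SCLogicalNetwork -Name "Cloud Network" -LogicalNetworkDefinitionIsolation $false -EnableNetworkVirtualization $true -UseGRE $true -IsPVLAN $false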

For the VLAN networks, though, I will not use the network type "one connected network", but rather "VLAN-based independent networks".

This will effectively let me create a 1:1 mapping of a VM network to a specific VLAN/subnet within this logical network.
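Sketched in VMM PowerShell, assuming a hypothetical tenant with VLAN 100 and subnet 10.0.100.0/24 (all names and values here are illustrative):

# "VLAN-based independent networks": isolation enabled on the logical network
$ln = New-SCLogicalNetwork -Name "Tenants VLAN" -LogicalNetworkDefinitionIsolation $true

# Define the VLAN/subnet pair and scope it to a host group
$vlan100 = New-SCSubnetVLan -Subnet "10.0.100.0/24" -VLanID 100
$lnd = New-SCLogicalNetworkDefinition -Name "Tenant1_VLAN100" -LogicalNetwork $ln -SubnetVLan $vlan100 -VMHostGroup (Get-SCVMHostGroup -Name "IaaS")

# The 1:1 mapping: one VM network per VLAN/subnet
$vmNetwork = New-SCVMNetwork -Name "Tenant1" -LogicalNetwork $ln -IsolationType "VLANNetwork"
New-SCVMSubnet -Name "Tenant1_Subnet" -VMNetwork $vmNetwork -LogicalNetworkDefinition $lnd -SubnetVLan $vlan100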

The following screenshot shows the mapping and the design in our fabric.



Now the big question: why VLAN-based independent network with a 1:1 mapping of VM network and VLAN?

As I will show you really soon, the type of logical network we use for our tenant VLANs gives us more flexibility due to isolation.

When we add the newly created logical network to a VMM Cloud, we simply have to select the entire logical network.
But when we create Hosting Plans in the Azure Pack admin portal/API, we can then select the single, preferred VM network (based on a VLAN) for our tenants.
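Enabling the logical network on the cloud can also be scripted. A minimal sketch, assuming a cloud named "IaaS Cloud" and the "Tenants VLAN" logical network from above (both names are illustrative):

# Add the VLAN logical network as a resource on an existing VMM cloud
$cloud = Get-SCCloud -Name "IaaS Cloud"
$ln = Get-SCLogicalNetwork -Name "Tenants VLAN"
Set-SCCloud -Cloud $cloud -AddCloudResource $ln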

The following screenshot from VMM shows our Cloud, which is using both the Cloud Network (PA network space for NVGRE) and the Tenants VLAN network.




So once we have the logical network enabled at the cloud level in VMM, we can move into the Azure Pack section of this blog post.

Azure Pack is multi-tenant by definition and, together with VMM and the VM Cloud resource provider, lets you scale and modify the environment to fit your needs.

When using NVGRE as the foundation for our tenants, we can use Azure Pack "out of the box" and have a single hosting plan, based on the VMM Cloud where we added our logical network for NVGRE, and tenants can create and manage their own software-defined networks. A single hosting plan is all we need, as every tenant is isolated on their own virtualized network.
Of course, there might be other valid reasons to have different hosting plans, such as SLAs, VM Roles and other service offerings. But for NVGRE, everyone can live in the same plan.

This changes once you are using VLANs. If you have a dedicated VLAN per customer, you must add that dedicated VLAN to the hosting plan in Azure Pack. This effectively forces you to create a hosting plan per tenant, so that tenants are not able to see or share the same VLAN configuration.

The following architecture shows how this scales.



In the hosting plan in Azure Pack, you simply add the dedicated VLAN to the plan, and it will be available once the tenant subscribes to the plan.



Bonus info:

With Update Rollup 5 for Azure Pack, we now have a new setting that simplifies life for all the VLAN tenants out there!

I’ve always said that “if you give people too much information, they’ll ask too many questions”.
It seems like the Azure Pack product group agrees, as we now have a new setting at the plan level in WAP that says "disable built-in network extension for tenants".



So let us see how this looks in the tenant portal when we access a hosting plan that:

a)      Provides VM Clouds
b)      Has the option “disable built-in network extension for tenants” enabled



This will ease the confusion for these tenants, as they were never able to manage any network artefacts in Azure Pack when VLANs were used anyway. They will, of course, still be able to deploy virtual machines/roles into the VLAN(s) that are available in their hosting plan.




Sunday, February 15, 2015

SCVMM Fabric Controller - Update: No more differencing disks for your VM Roles

I will just assume that you have read Marc van Eijk's well-described blog post about the new enhancement in Update Rollup 5 for SCVMM, where we can now effectively turn off differencing disks for all our new VM Role deployments with Azure Pack.

If not, follow this link to get all the details: http://www.hyper-v.nu/archives/mvaneijk/2015/02/windows-azure-pack-vm-role-choose-between-differencing-disks-or-dedicated-disks/

As a result of this going public, I have uploaded a new version of my SCVMM Fabric Controller script, which now adds another custom property to all the IaaS Clouds in SCVMM, assuming you want static disks to be the default.
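A sketch of what that can look like in VMM PowerShell; the custom property name below is the one described in Marc's post, so verify it against the linked article before relying on it:

# Create the custom property once (scoped to clouds), then set it on every cloud
# "false" = dedicated (static) disks instead of differencing disks
$prop = Get-SCCustomProperty -Name "SupportsVmRoleVmDifferencingDisks"
if (-not $prop) {
    $prop = New-SCCustomProperty -Name "SupportsVmRoleVmDifferencingDisks" -AddMember "Cloud"
}
foreach ($cloud in Get-SCCloud) {
    Set-SCCustomPropertyValue -CustomProperty $prop -InputObject $cloud -Value "false"
}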

You can grab the new version from here:

https://gallery.technet.microsoft.com/SCVMM-Fabric-Controller-a1edf8a7

Next, I will make this script a bit more user-friendly and add some more functionality to it over the next couple of weeks.

Thanks.

-kn


Monday, February 2, 2015

Sharing VNet between subscriptions in Azure Pack


From time to time, I get into discussions with customers on how to be more flexible around networking in Azure Pack.

Today, each subscription is a boundary. That means a co-admin can have access to multiple subscriptions, but you are not allowed to "share" anything between those subscriptions, such as virtual networks.

So here’s the scenario.

A tenant subscribes to multiple subscriptions in Azure Pack. Each subscription is based on its associated Hosting Plan, which is something that is defined and exposed by the service administrator (the backend side of Azure Pack). A Hosting Plan can contain several offerings, such as VM Clouds, web site Clouds and more. The context as we move forward is the VM Cloud.

Let us say that a customer has two subscriptions today, and each subscription has its own tenant administrator.

Subscription 1 is associated with Hosting Plan 1, which offers standard virtual machines based on VM templates.

Subscription 2 is associated with Hosting Plan 2, which offers VM Roles through Gallery Items.

The service provider has divided these offerings into two different plans.

Tenant admin 1 has created his VNet on subscription 1 and connected his virtual machines.
However, when tenant admin 2 creates a new VNet on subscription 2 and connects his VM Roles, they are not able to communicate with the VMs on subscription 1.

So what do we do?

As this isn't something we expose through the GUI or the API, we have to involve the service admin.

You have already noticed that we are dealing with two tenant admins here, so that should give you an indication of what we are about to do. We are going to share some resources in the backend.

If we head over to SCVMM and look at the VM network in PowerShell, a few interesting properties surface.
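A minimal way to surface them, assuming you simply want to list every VM network with these three properties:

# List each VM network with the properties discussed below
Get-SCVMNetwork | Select-Object Name, UserRole, Owner, GrantedToList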



UserRole, which tells us this VM network is associated with a user role in SCVMM; these roles can be generated by Azure Pack and aggregated through SPF.
Owner, which shows who owns the VM network in SCVMM.
GrantedToList, which is obviously where we can allow other user roles to have access to this object.

Interesting.

This means that the service admin can help their tenants with the following.

Grant access to tenant admin 2 on the VNet that was created on subscription 1 by tenant admin 1.

PowerShell cmdlets:

### Find the VM network you want to share between subscriptions

$VNet = Get-SCVMNetwork | Where-Object {$_.Name -eq "TechEd" -and $_.Owner -eq "knadm@internal.systemcenter365.com"}

### Find the user role for the tenant admin that should get access

$tenant = Get-SCUserRole | Where-Object {$_.Name -like "*kristine*"}

### Grant access

Grant-SCResource -Resource $VNet -UserRoleID $tenant.ID -RunAsynchronously
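To verify the result, you can re-read the VM network and check that the tenant's user role now shows up in GrantedToList (a quick sanity check, reusing the variables from above):

### Confirm the grant took effect
$VNet = Get-SCVMNetwork | Where-Object {$_.Name -eq "TechEd"}
$VNet.GrantedToList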



We have now enabled the following scenario:



Ok, so what is next?

You can now access the tenant portal and deploy your workloads.

In the portal, you will never be able to manage the VNet on this subscription; you can only deploy workloads that are connected to it.
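And if you ever need to undo the sharing, the counterpart cmdlet is Revoke-SCResource. A minimal sketch, reusing the variables from the Grant example above:

### Remove tenant admin 2's access to the shared VM network again
Revoke-SCResource -Resource $VNet -UserRoleID $tenant.ID -RunAsynchronously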