
Monday, January 11, 2016

2016 - The year of Microservices and Containers

This is the first blog post I am writing this year.
I was planning to publish it before Christmas, but I figured it would be better to wait and reflect a bit more on the trends currently taking place in this industry.
So what better way to start the new year than with something I really think will be one of the big bets for the coming year(s)?

I drink a lot of coffee. In fact, I suspect it will kill me someday. On a positive note, at least I'm the one controlling it. Jokes aside, I like to drink coffee while thinking out loud about technologies and reflecting on the steps we've made so far.

Going back to 2009-10, when I was entering the world of virtualization with Windows Server 2008 R2 and Hyper-V, I couldn’t possibly imagine how things would change in the future.
Today, I realize that the things we were doing back then were just the foundation for what we are seeing now.

The same arguments are being used throughout the different layers of the stack.
We need to optimize our resources, increase density, flexibility and provide fault-tolerant, resilient and highly-available solutions to bring our business forward.

That was the approach back then – and that’s also the approach right now.

We have constantly been focusing on the infrastructure layer, trying to solve whatever issues might occur. We have believed that if we put our effort into the infrastructure layer, the applications we put on top of it will be smiling from ear to ear.

But things change.
The infrastructure is changing, and the applications are changing.

Azure made its debut in 2007-08, as I remember. Back then it was all about Platform as a Service offerings.
The offerings were a bit limited, giving us cloud services (web role and worker role), caching and messaging systems such as Service Bus, together with SQL and other storage options such as blob, table and queue.

Many organizations were really struggling back then to get a good grasp of this approach. It was complex. It was a new way of developing and delivering services, and in almost all cases the application had to be rewritten to be fully functional using the PaaS components in Azure.

People were just getting used to virtual machines and had started to use them frequently, also as part of test and development of new applications. Many customers went deep into virtualization in production as well, and the result was a great demand for the opportunity to host virtual machines in Azure too.
This would simplify any migration of “legacy” applications to the cloud, and more or less solve the well-known challenges we were aware of back then.

During the summer in 2011 (if my memory serves me well), Microsoft announced their support of Infrastructure as a Service in Azure. Finally they were able to hit the high note!
Now what?
An increased consumption of Azure was the natural result, and the cloud came a bit closer to most of the customers out there. Finally there was a service model that people could really understand. They were used to virtual machines; the only difference now was the runtime environment, which was hosted in Azure datacenters instead of their own. At the same time, the PaaS offerings in Azure had evolved and grown to become even more sophisticated.

It is common knowledge now, and it was common knowledge back then that PaaS was the optimal service model for applications living in the cloud, compared to IaaS.

At the end of the day, every developer and business around the globe would prefer to host and provide their applications to customers as SaaS rather than anything else, such as traditional client/server applications.

So where are we now?

You might wonder where the heck I am going with this.
Trust me, I also wondered at some point. I had to get another cup of coffee before I was able to do a further breakdown.

Looking at Microsoft Azure and the services we have there, it is clear to me that the ideal goal for the IaaS platform is to get as near as possible to the PaaS components in regards to scalability, flexibility, automation, resiliency, self-healing and much more.
Those who have been deep into Azure with Azure Resource Manager know that there are some really huge opportunities now to leverage the platform to deliver IaaS that you ideally don’t have to touch.

With features such as VM Scale Sets (preview), Azure Container Service (also in preview), and a growing list of extensions to use together with your compute resources, you can potentially instantiate a state-of-the-art infrastructure hosted in Azure without having to touch the infrastructure (of course you can’t touch the Azure infrastructure itself; I am talking about the virtual infrastructure, the one you are basically responsible for).

The IaaS building blocks in Azure are separated in a way that lets you look at them as individual scale units. Compute, Storage and Networking are combined to bring you virtual machines. Because they are loosely coupled, these building blocks also empower many of the PaaS components in Azure that live on top of the IaaS.

The following graphic shows how the architecture is layered.
Once Microsoft Azure Stack becomes available on-prem, we will have one consistent platform that brings the same capabilities to your own datacenter as you can use in Azure already.

  

Starting at the bottom, IaaS is on the left side while PaaS is on the right-hand side.
Climbing up, you can see that both Azure Stack and the Azure public cloud – which will be consistent with each other – take the same approach. VMs and VM Scale Sets cover both IaaS and PaaS, but VM Scale Sets is placed more to the right-hand side than VMs. This is because VM Scale Sets is considered the powering backbone for the other PaaS services on top of it.

VM Extensions also lean more to the right, as they give us the opportunity to do more than traditional IaaS. We can extend our virtual machines to perform advanced in-guest operations when using extensions, so anything from provisioning of complex applications to configuration management and more can be handled automatically by the Azure platform.
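
As a minimal sketch of what that looks like in practice – assuming the AzureRM PowerShell module, an existing VM and resource group, and a script hosted somewhere reachable (all the names and the script URL below are just placeholders) – a Custom Script Extension can be pushed to a VM like this:

# Hypothetical names - replace with your own resource group, VM and script location
$rg  = "demo-rg"
$vm  = "demo-vm01"
$loc = "West Europe"

# Push the Custom Script Extension to the VM; the script is downloaded and executed in-guest by the Azure platform
Set-AzureRmVMCustomScriptExtension -ResourceGroupName $rg `
                                   -VMName $vm `
                                   -Location $loc `
                                   -Name "ConfigureApp" `
                                   -FileUri "https://example.blob.core.windows.net/scripts/configure-app.ps1" `
                                   -Run "configure-app.ps1"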

On the left-hand side, on top of VM Extensions, we find cluster orchestration such as Scalr, RightScale, Mesos and Swarm. Again dealing with a lot of infrastructure, but also providing orchestration on top of it.
Batch is a service powered by Azure compute. It is a compute job scheduling service that will start a pool of virtual machines for you, install applications, stage data, and run jobs with as many tasks as you have.
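
To make the Batch part a bit more concrete, here is a minimal sketch using the Azure Batch PowerShell cmdlets of that era – the account, job and task names are made up, and I’m assuming the Batch account, pool and job already exist:

# Hypothetical Batch account and job names - assumes the account, pool and job already exist
$context = Get-AzureRmBatchAccountKeys -AccountName "demobatch"

# List existing jobs, then add a simple task to one of them; Batch schedules it onto the pool's VMs
Get-AzureBatchJob -BatchContext $context | Select-Object Id, State
New-AzureBatchTask -JobId "demojob" -Id "task1" -CommandLine "cmd /c echo Hello from Batch" -BatchContext $context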

Going further to the right, we see two very interesting things – which are also the main driver for this entire blog post. Containers and Service Fabric lean more to the PaaS side, and it is not by coincidence that Service Fabric sits to the right of containers.

Let us try to do a breakdown of containers and Service Fabric

Comparing Containers and Service Fabric

Right now in Azure, we have a new preview service that I encourage everyone who’s interested in container technology to look into. The ACS resource provider basically gives you a very efficient and low-cost way to instantiate a complete container environment with a single Azure Resource Manager deployment against the underlying resource provider. After the deployment completes, you will be surprised to find 23 resources within a single resource group, containing all the components you need to have a complete container environment up and running.
One important thing to note at this point is that ACS is Linux first and containers first, in comparison to Service Fabric – which is Windows first and microservices first rather than container first.
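
As a rough sketch of what that single deployment looks like – assuming the AzureRM module and an ACS ARM template you have authored or downloaded yourself (the resource group, file and parameter names below are placeholders) – it is essentially one resource group and one deployment call:

# Hypothetical resource group and template file names
New-AzureRmResourceGroup -Name "acs-demo-rg" -Location "West Europe"

# One ARM deployment instantiates the full container environment (masters, agents, networking and so on)
New-AzureRmResourceGroupDeployment -ResourceGroupName "acs-demo-rg" `
                                   -TemplateFile ".\acs-swarm.json" `
                                   -TemplateParameterFile ".\acs-swarm.parameters.json" `
                                   -Verbose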

At this time it is ok to be confused. And perhaps this is a good time for me to explain the difficulties to put this on paper.

I am now consuming the third cup of coffee.

Azure explains it all

Let us take some steps back to get some more context into the discussion we are entering.
If you want to keep up with everything that comes in Azure nowadays, that is more or less a full-time job. The rapid pace of innovation, releases and new features is next to crazy.
Have you ever wondered how the engineering teams are able to ship solutions this fast – also with this level of quality?

Many of the services we are using today in Azure are actually running on Service Fabric as microservices. This is a new way of doing development and is also the true implementation of DevOps, both as a culture and from a tooling point of view.
Meeting customer expectations isn’t easy. But it is possible when you have a platform that supports and enables it.
As I stated earlier in this blog post, the end goal for any developer would be to deliver their solutions using the SaaS service model.
That is the desired model, which implies continuous delivery, automation through DevOps, and the adoption of automatable, elastic and scalable microservices.

Wait a moment. What exactly is Service Fabric?

Service Fabric provides the complete runtime management for microservices and deals with the things we have been fighting against for decades. Out of the box, we get hyper scale, partitioning, rolling upgrades, rollbacks, health monitoring, load balancing, failover and replication. All of these capabilities are built in, so we can focus on building the applications we want to be scalable, reliable, consistent and available microservices.

Service Fabric provides a model where you wrap the code for a collection of related microservices and their related configuration manifests into an application package. The package is then deployed to a Service Fabric cluster (a cluster that can run on anything from one to many thousands of Windows virtual machines – yes, hyper scale). We have two defined programming models in Service Fabric: ‘Reliable Actor’ and ‘Reliable Service’. Both of these models make it possible to write stateless as well as stateful applications. This is breaking news.
You can go ahead and create and develop stateless applications in more or less the same way you have been doing for years, externalizing the state to some queuing system or other data store, but again handling the complexity of having a distributed application at scale. Personally, I think the stateful approach in Service Fabric is what makes this so exciting. Being able to write stateful applications that are constantly available, with a primary/replica relationship between the members, is very tempting. We are trusting Service Fabric itself to deal with all the complexity we have been trying to enable in the infrastructure layer for years, while the stateful microservices keep the logic and data close so we don’t need queues and caches.
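
To illustrate the packaging and deployment flow described above, here is a minimal sketch using the Service Fabric PowerShell cmdlets that ship with the SDK – the cluster endpoint, package path and application names are just placeholders:

# Hypothetical cluster endpoint, package path and application names
Connect-ServiceFabricCluster -ConnectionEndpoint "mycluster.westeurope.cloudapp.azure.com:19000"

# Copy the application package to the cluster image store, register the application type, and create an application instance
Copy-ServiceFabricApplicationPackage -ApplicationPackagePath "C:\Packages\MyAppPkg" `
                                     -ImageStoreConnectionString "fabric:ImageStore" `
                                     -ApplicationPackagePathInImageStore "MyAppPkg"
Register-ServiceFabricApplicationType -ApplicationPathInImageStore "MyAppPkg"
New-ServiceFabricApplication -ApplicationName "fabric:/MyApp" `
                             -ApplicationTypeName "MyAppType" `
                             -ApplicationTypeVersion "1.0.0"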

Ok, but what about the container stuff you mentioned?

So Service Fabric provides everything out of the box. You can think of it as a complete way to handle everything from beginning to end, including a defined programming model that even brings an easy way of handling stateful applications.
ACS, on the other hand, provides a core infrastructure that gives you significant flexibility, but this comes at a cost when trying to implement stateful services. However, the applications themselves are more portable, since we can run them wherever Docker containers can run, while microservices built on Service Fabric can only run on Service Fabric.

The focus for ACS right now is on open source technologies that can be taken in whole or in part. As a result, both the orchestration layer and the application layer bring a great level of portability, since you can leverage open source components and deploy them wherever you want.

At the end of the day, Service Fabric has a more restrictive nature but also gives you a more rapid development experience, while ACS provides the most flexibility.

So how exactly do containers and Service Fabric microservices compare at this point?

What they do have in common is that both add another layer of abstraction on top of the things we are already dealing with. Forget what you know about virtual machines for a moment. Containers and microservices are exactly what engineers and developers are demanding to unlock new business scenarios, especially in a time where IoT, big data, insight and analytics are becoming more and more important for businesses worldwide. The cloud itself is the foundation that enables all of this, but the great flexibility that both containers and Service Fabric provide is really speeding up the innovation we’re seeing.

Organizations that have truly been able to adopt the DevOps mindset are harnessing that investment and are capable of shipping quality code at a much more frequent cadence than ever before.

Coffee number 4 and closing notes

First I want to thank you for spending these minutes reading my thoughts around Azure, containers, microservices, Service Fabric and where we’re heading.

2016 is a very exciting year and things are changing very fast in this industry. We are seeing customers who are making big bets in certain areas, while others are taking a potential risk by not making any bets at all. I know, at least from my point of view, what the important focus is moving forward. And I will do my best to guide people along the way.

While writing these closing notes, I can only use the opportunity to point to the very heart of this blog post:

My background is all about ensuring that the Infrastructure is providing whatever the applications need.
That skillset is far from obsolete; however, I know that the true value belongs to the upper layers.

Hopefully we now realize that even the infrastructure we have been ever so careful about is turning into a commodity, handled more through an ‘infrastructure as code’ approach than ever before. We trust that it works and that it empowers the PaaS components – which in turn bring the world forward while powering SaaS applications.

Container technologies and microservices on Service Fabric take that for granted, and from now on, I am doing the same.




Friday, October 2, 2015

Azure Resource Manager - Linking Templates

This summer, we wrote a whitepaper named «Cloud Consistency with Azure Resource Manager» that you can download from here: https://gallery.technet.microsoft.com/Cloud-Consistency-with-0b79b775

This whitepaper will soon be updated with new content, more examples and guidance around best practices for template authoring.

In the meantime, I’ve been writing some templates you can use to learn how to link templates for a nested deployment.

The basic example is available on GitHub - https://github.com/krnese/AzureDeploy/tree/master/Basic



You can explore all templates, but in essence I’m doing the following:

·         Have a dedicated template for storage that takes some input parameters and can be used separately
·         Have a dedicated template for virtual network that takes some input parameters and can be used separately
·         Have a master template that also contains compute, vNic and publicIP resource types that links to the storage and vnet templates

Again, this is a very simple example, and I will provide a more advanced example in a couple of days where we split this up even further and get a much more flexible and dynamic deployment scenario around IaaS/PaaS.

Pay attention to the resources section in the azuredeploy.json document, where we are using the API version “2015-01-01” and the resource type “Microsoft.Resources/deployments”.
Here I am linking to a public URI for the template (hosted on my GitHub) and specifying the parameters I’d like to use in my configuration.



You can hit the “Deploy to Azure” link in order to explore the json structure in Azure and do an actual deployment.



If you want to deploy it through PowerShell, you can also see that the “Microsoft.Resources/deployments” resource type is being used.
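
For reference, a deployment from PowerShell could look something like the sketch below – I’m assuming the AzureRM cmdlets here, and the raw template URI is constructed from the GitHub repo above, so adjust it to wherever you host the master template:

# Assumed raw URI for the master template in the GitHub repo referenced above
$templateUri = "https://raw.githubusercontent.com/krnese/AzureDeploy/master/Basic/azuredeploy.json"

# Create a resource group and deploy the master template, which in turn links to the storage and vnet templates
New-AzureRmResourceGroup -Name "linked-demo-rg" -Location "West Europe"
New-AzureRmResourceGroupDeployment -ResourceGroupName "linked-demo-rg" `
                                   -TemplateUri $templateUri `
                                   -Verbose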


Happy authoring – and see you next time.


Wednesday, August 19, 2015

Getting started with Nano Server for Compute and Cluster

I assume you have heard the news that Windows Server and System Center 2016 TP3 are publicly available by now.

This means you can download and play around with the bits in order to get some early hands-on experience on the available scenarios and features.

Some of the key scenarios available in this preview are the following:

·         Nano Server (enhanced – and covered in this blog post)
·         Windows Container (new – and very well explained by Aidan Finn at www.aidanfinn.com )
·         Storage Spaces Direct (enhanced – and covered here very soon)
·         Network Controller (new – and covered here in detail very, very soon)

So, let us start to talk about Nano Server.

During Ignite earlier this year, Nano Server was introduced by the legend himself, Mr. Snover.
Let us be very clear: Nano Server is not even comparable to Server Core – the deployment option Microsoft has been pushing since its release, where you run a full Windows Server without any graphical user interface. However, some of the concepts are the same and applicable to Nano.

Some of the drivers for Nano Server were based on customer feedback, and you might be familiar with the following statements:

-          Reboots impact my business
Think about Windows Server in general, not just Hyper-V in a cluster context – which more or less deals with reboots.
Very often you would find yourself in a situation where you had to reboot a server due to an update of a component you in fact weren’t using, nor even aware was installed on the server (that’s a different topic, but you get the point).

-          What’s up with the server image? It’s way too big!
From a WAP standpoint, using VMM as the VM Cloud provider, you have been doing plenty of VM deployments. You normally have to sit and wait for several minutes just for the data transfer to complete. Then there’s the VM customization if it’s a VM Role, and so on and so forth. Although things have been improving over the last years with fast file copy and support for ODX, the image size is very big. And don’t forget: this affects backup, restore and DR scenarios too, in addition to the extra cost on our networking fabric infrastructure.

-          Infrastructure requires too many resources
I am running and operating a large datacenter today, where I have effectively been able to standardize on only the server roles and features I need. However, the cost per server is too high when it comes to utilization, and it really makes an impact on VM density.
Higher VM density lowers my costs and increases my efficiency and margins.

I just want the components I need….and nothing more… please

So, speaking of which, which components do we really need?

Nano Server is designed for the cloud, which means it’s effective and follows a “zero-footprint” model. Server roles and optional features live outside of the Nano Server image itself, and we have stand-alone packages that we add to the image by using DISM. More about that later.
Nano Server is a “headless”, 64-bit only deployment option for Windows Server that, according to Microsoft marketing, is refactored to focus on “Cloud OS Infrastructure” and “born-in-the-cloud applications”.

The key roles and features we have today are the following:

-          Hyper-V
Yes, this is (if you ask me) the key – and the flagship when it comes to Nano Server. You might remember the stand-alone Hyper-V Server that was based on the Windows kernel but only ran the Hyper-V role? Well, Nano Server is much smaller, runs Hyper-V only, and shares the exact same architecture as the hypervisor we know from the GUI-based Windows Server edition.

-          Storage (SOFS)
As you probably know already, compute without storage is quite useless, given the fact that virtual machines are nothing but a set of files on a disk.
With a package for storage, we are able to instantiate several Nano Servers with the storage role to act as storage nodes based on Storage Spaces Direct (shared-nothing storage). This is very cool and will of course qualify for its own blog post in the near future.

-          Clustering
Both Hyper-V and Storage (SOFS) rely (in many situations) on the Windows Failover Clustering feature. Luckily, the cluster feature ships as its own package for Nano Server, so we can effectively enable critical infrastructure roles in an HA configuration using clustering.

-          Windows Container
This is new in TP3 – and I suggest you read Aidan’s blog about the topic. However, you won’t be able to test/verify this package on Nano Server in this TP, as it is missing several of its key requirements and dependencies.

-          Guest Package
Did you think that you had to run Nano Server on your physical servers only? Remember that Nano is designed for the “born-in-the-cloud applications” too, so you can of course run them as virtual machines. However, you would have to add the Guest Package to make them aware that they are running on top of Hyper-V.

In addition, we have packages for OEM Drivers (package of all drivers in Server Core), OneCore ReverseForwarders and Defender.

Remote Management

Nano Server is all about being effective: leveraging the cloud computing attributes, being scalable and achieving more. In order to do so, we must understand that Nano Server is all about remote management.
With only a subset of Win32 support, PowerShell Core and ASP.NET 5, we aren’t able to use Nano Server for everything. But that is also the point here.

Although Nano is refactored to run on CoreCLR, we have full PowerShell language compatibility and remoting. Examples here are Invoke-Command, New-PSSession, Enter-PSSession etc.
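
As a quick sketch of what day-to-day management looks like – the computer name and credential below are placeholders, and I’m assuming you manage the node over WinRM from a machine that either trusts it or sits in the same domain:

# Hypothetical Nano Server name - add it to TrustedHosts only if you are working outside a domain
Set-Item WSMan:\localhost\Client\TrustedHosts -Value "nanohosttp3" -Force

# Establish a remote session and run commands against the Nano Server
$cred = Get-Credential
$session = New-PSSession -ComputerName "nanohosttp3" -Credential $cred
Invoke-Command -Session $session -ScriptBlock { Get-Process | Sort-Object WS -Descending | Select-Object -First 5 }
Enter-PSSession -Session $session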

Getting started with Nano Server for Compute

Alright, so let us get over to some practical examples on how to get started with Nano Server for Compute, and how to actually do the configuration.

Originally, this blog post was a bit longer than it is now, since Microsoft just published some new content over at TechNet. There you will find good guidance on how to deploy Nano: https://technet.microsoft.com/en-us/library/mt126167.aspx

I must admit, that the experience of installing and configuring Nano wasn’t state of the art in TP2.
Now, in TP3, you can see that we have the required scripts and files located on the media itself, which simplifies the process.



1.       Mount the media and dot-source the ‘convert-windowsimage.ps1’ and ‘new-nanoserverimage.ps1’ scripts in a PowerShell ISE session
2.       Next, see the following example on how to create a new image for your Nano Server (this will create a VHD that you can either upload to a WDS server if you want to deploy it on a physical machine, or mount to a virtual machine)



3.       By running the cmdlet, you should have a new image

In our example, we uploaded the vhd to our WDS (Thanks Flemming Riis for facilitating this).

If you pay close attention to the $paramHash table, you can see the following:

$paramHash = @{
MediaPath = 'G:\'
BasePath = 'C:\nano\new'
TargetPath = 'C:\Nano\compute'
AdministratorPassword = $pass
ComputerName = 'nanohosttp3'
Compute = $true
Clustering = $true
DriversPath = "c:\drivers"
EnableIPDisplayOnBoot = $True
EnableRemoteManagementPort = $True
Language = 'en-us'
DomainName = 'drinking.azurestack.coffee'
}
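
By splatting this hash table onto the cmdlet, the image gets created – a minimal sketch, assuming the dot-sourced new-nanoserverimage.ps1 script exposes the New-NanoServerImage function and that $pass was created beforehand as a secure string:

# Assumes $pass was created earlier, e.g. $pass = Read-Host -AsSecureString -Prompt 'Administrator password'
New-NanoServerImage @paramHash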

Compute = $true and Clustering = $true.
This means that both the compute and the clustering packages will be added to the image. In addition, since we are deploying this on a physical server, we learned the hard way (thanks again, Flemming) that we needed some HP drivers for the network adapters and the storage controller. We are therefore pointing to the location (DriversPath = “c:\drivers”) where we extracted the drivers, so they get added to the image.
Through this process, we are also pre-creating the computer name object in Active Directory, as we want to domain join the box to “drinking.azurestack.coffee”.
If you pay attention to the guide at TechNet, you can see how to set a static IP address on your Nano Server. We have simplified the deployment process in our fabric, as we are rapidly deploying and decommissioning compute on the fly, so all servers get their IP configuration from a DHCP server.

Once the servers were deployed (this took literally under 4 minutes!), we could move forward and verify that everything was as we desired.

1)      Nano Servers were joined to domain
2)      We had remote access to the nano servers



Since Nano Server is all about remote management, we used the following PowerShell cmdlets in order to configure the compute nodes, create the cluster etc.

# Preparing your mgmt server

Install-WindowsFeature -Name RSAT-Hyper-V-Tools, Hyper-V-Tools, Hyper-V-PowerShell, RSAT-Clustering, RSAT-Clustering-MGMT, RSAT-AD-PowerShell -Verbose

# Creating Nano Compute Cluster

$clustername = "nanocltp3"
$nodes = "hvtp301", "hvtp302"
$ip = "10.0.0.50"

New-Cluster -Name $clustername -Node $nodes -StaticAddress $ip -NoStorage -Verbose

# Connecting to storage server and create SMB share with proper permissions

$storage = "nanostor"

Enter-PSSession -ComputerName nanostor

MD D:\VMS
ICACLS.EXE D:\VMS --% /Grant drinking\knadm:(CI)(OI)F
ICACLS.EXE D:\VMS --% /Grant drinking\hvtp301$:(CI)(OI)F
ICACLS.EXE D:\VMS --% /Grant drinking\hvtp302$:(CI)(OI)F
ICACLS.EXE D:\VMS /Inheritance:R
New-SmbShare -Name VMS -Path D:\VMS -FullAccess drinking\knadm, drinking\hvtp301$, drinking\hvtp302$

# Configuring Constrained Delegation

Enable-SmbDelegation -SmbServer $storage -SmbClient hvtp301 -Verbose
Enable-SmbDelegation -SmbServer $storage -SmbClient hvtp302 -Verbose

# Configure Hyper-V settings for Cluster usage

$vmhosts = @("hvtp301", "hvtp302")
$vhdpath = "\\nanostor\vms\"
$vmconfigpath = "\\nanostor\vms\"
$lmsettings = 5

foreach ($vmhost in $vmhosts)
    {
        Set-VMHost -ComputerName $vmhost -MaximumVirtualMachineMigrations $lmsettings -VirtualHardDiskPath $vhdpath -VirtualMachinePath $vmconfigpath -VirtualMachineMigrationAuthenticationType Kerberos -Verbose
    }

# Create VM based on Nano Image

$vm = "nanovm1"
$nanohost = "hvtp301"

New-VM -ComputerName $nanohost -Name $vm -MemoryStartupBytes 512mb -VHDPath \\nanostor\vms\blank1.vhd -SwitchName VMSwitch -Generation 1 -Verbose

# Make the VM highly available

Add-ClusterVirtualMachineRole -VMName $vm -Cluster $clustername -Verbose

# Start the VM

Start-VM -ComputerName hvtp301 -Name $vm -Verbose

As you can see, we are also creating a virtual machine here, which is obviously based on a VHD with the guest drivers installed. We tested how to do this manually by using DISM on an empty image.

The following example can be used in order to service your Nano vhd.

# Nano servicing

# Create a mountpoint

md mountpoint

# Mount the image into the mountpoint you just created

dism /Mount-Image /ImageFile:.\blank.vhd /Index:1 /MountDir:.\mountpoint

# Add your package. In this example, we will add packages for Storage, Cluster and Virtual Guest Services

dism /Add-Package /PackagePath:G:\NanoServer\Packages\Microsoft-NanoServer-Guest-Package.cab /Image:.\mountpoint

dism /Add-Package /PackagePath:G:\NanoServer\Packages\Microsoft-NanoServer-FailoverCluster-Package.cab /Image:.\mountpoint

dism /Add-Package /PackagePath:G:\NanoServer\Packages\Microsoft-NanoServer-Storage-Package.cab /Image:.\mountpoint

# Commit and dismount

dism /Unmount-Image /MountDir:.\mountpoint /commit

# Copy the vhd over to the smb share for the compute cluster

Copy-Item -Path .\blank.vhd -Destination \\nanostor\vms -Verbose

The following screen shot shows the Nano Cluster that is running a virtual machine with Nano Server installed:



NB: I am aware that my PowerShell cmdlets didn’t configure any VM switch as part of the process. In fact, I have reported that as a bug, as it is not possible to do so using the Hyper-V module. The VM switch was created successfully using the Hyper-V Manager console.

Happy Nano’ing, and I will cover more later.

(I also hope that I will see you during our SCU session on this topic next week)


Sunday, October 5, 2014

Scratching the surface of Networking in vNext

The technical previews of both Windows Server and System Center are now available for download.
What’s really interesting to see is that huge progress is being made when it comes to core infrastructure components such as compute (Hyper-V, Failover Clustering), storage and networking.

What I would like to talk a bit about in this blog post are the new things in networking in the context of cloud computing.

Network Controller

As you already know, in vCurrent (Windows Server 2012 R2 and System Center 2012 R2), Virtual Machine Manager acts as the network controller for your cloud infrastructure. The reasons for this have been obvious so far, but it has also led to some challenges regarding high availability, scalability and extensibility.
In the technical preview, we have a new role in Windows Server: “Network Controller”.



This is a highly available and scalable server role that provides the point of automation (a REST API) that allows you to configure, monitor and troubleshoot the following aspects of a datacenter stamp or cluster (see the sketch after the list):

·         Virtual networks
·         Network services
·         Physical networks
·         Network topology
·         IP Address Management
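
Just to illustrate the idea of a REST-based point of automation, here is a minimal sketch that polls a controller endpoint with plain PowerShell – the endpoint name and URI path are purely illustrative placeholders, as the actual northbound API surface in the preview may differ:

# Hypothetical Network Controller endpoint and resource path - adjust to match your deployment
$nc = "https://nc01.contoso.local"
$cred = Get-Credential

# Ask the controller's REST API for its virtual network resources and list their IDs
# (assuming the response wraps the resources in a 'value' collection)
$response = Invoke-RestMethod -Uri "$nc/networking/v1/virtualNetworks" -Method Get -Credential $cred
$response.value | Select-Object resourceId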

A management application – such as VMM vNext – can manage the controller to perform configuration, monitoring, programming and troubleshooting of the network infrastructure under its control.
In addition, the network controller can expose infrastructure to network-aware applications such as Lync and Skype.

GRE Tunneling in Windows Server

Working a lot with cloud computing (private and service provider clouds), we have now and then run into challenges in very specific scenarios where service providers want to provide their tenants with hybrid connectivity into the service provider infrastructure.

A typical example is that you have a tenant running VMs on NVGRE, but the same tenant also wants access to some shared services in the service provider fabric.
The workaround for this has never been pretty, but thanks to GRE tunneling in Windows Server, we have many new features that can leverage the lightweight GRE tunneling protocol.

GRE tunnels are useful in many scenarios, such as:

·         High speed connectivity
This enables a scalable way to provide high speed connectivity from the tenant on premise network to their virtual network located in the service provider cloud network. A tenant connects via MPLS where a GRE tunnel is established between the hosting service provider’s edge router and the multitenant gateway to the tenant’s virtual network

·         Integration with VLAN based isolation
You can now integrate VLAN based isolation with NVGRE. A physical network on the service provider network contains a load balancer using VLAN-based isolation. A multitenant gateway establishes GRE tunnels between the load balancer on the physical network and the multitenant gateway on the virtual network.

·         Access from a tenant virtual networks to tenant physical networks
Finally, we can provide access from a tenant virtual network to tenant physical networks located in the service provider fabrics. A GRE tunnel endpoint is established on the multitenant gateway, the other GRE tunnel endpoint is established on a third-party device on the physical network. Layer-3 traffic is routed between the VMs in the virtual network and the third-party device on the physical network


No matter if you are an enterprise or a service provider, you will have plenty of new scenarios available in the next release that will make you more flexible, agile and dynamic than ever before.
For hybrid connectivity – which is the essence of hybrid cloud – it is time to start investigating how to make this work for you, your organization and your customers.

Monday, June 30, 2014

Azure Pack - Working with the Tenant Public API

These days, you are most likely looking for solutions where you can leverage PowerShell to gain some level of automation, no matter if it’s on premises or in the cloud.
I have written about the common service management API in the Cloud OS vision before, where Microsoft Azure and Azure Pack share the exact same management API.

In this blog post, we will have a look at the Tenant Public API in Azure Pack and see how to make it available for your tenants, and also how to do some basic tasks through PowerShell.

Azure Pack can either be installed with the express setup (all portals, sites and APIs on the same machine) or distributed, where you have dedicated virtual machines for each portal, site and component. Looking at the APIs only, you can see that we have the following:

Windows Azure Pack and its service management API include three separate components.

·         Windows Azure Pack: Admin API (Not publicly accessible). The Admin API exposes functionality to complete administrative tasks from the management portal for administrators or through the use of Powershell cmdlets. (Blog post: http://kristiannese.blogspot.no/2014/06/working-with-admin-api-in-windows-azure.html )

·         Windows Azure Pack: Tenant API (Not publicly accessible). The Tenant API enables users, or tenants, to manage and configure cloud services that are included in the plans that they subscribe to.

·         Windows Azure Pack: Tenant Public API (publicly accessible). The Tenant Public API enables end users to manage and configure cloud services that are included in the plans that they subscribe to. The Tenant Public API is designed to serve all the requirements of end users that subscribe to the various services that a hosting service provider provides.

Making the Tenant Public API available and accessible for your tenants

By default, the Tenant Public API is installed on port 30006 – which means it is not very firewall friendly.
We have already made the tenant portal and the authentication site available on port 443 (described by Flemming in this blog post: http://flemmingriis.com/windows-azure-pack-publishing-using-sni/ ), and now we need to configure the Tenant Public API as well.

1)      Create a DNS record for your tenant public API endpoint.
We will need to have a DNS registration for the API. In our case, we have registered “api.systemcenter365.com” and are ready to go.

2)      Log on to your virtual machine running the tenant public API.
In our case, this is the same virtual machine that runs the rest of the internet facing parts, like tenant site and tenant authentication site. This means that we have already registered cloud.systemcenter365.com and cloudauth.systemcenter365.com to this particular server, and now also api.systemcenter365.com.

3)      Change the bindings for the Tenant Public API in IIS
Navigate to IIS and locate the Tenant Public API site. Click Bindings, change the port to 443, register your certificate and type the host name the tenants will use when accessing this API.
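
If you prefer to script this instead of clicking through IIS Manager, something like the sketch below should do the same – I’m assuming the default WAP site name here (“MgmtSvc-TenantPublicAPI”) and a certificate already installed on the server, so treat the names as placeholders and verify them against your own installation:

# Assumed WAP site name and host name - verify against your own IIS configuration
Import-Module WebAdministration
New-WebBinding -Name "MgmtSvc-TenantPublicAPI" -Protocol https -Port 443 -HostHeader "api.systemcenter365.com" -SslFlags 1
# The certificate itself can then be assigned to the new binding, for example from IIS Manager as shown below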



4)      Reconfigure the Tenant Public API with PowerShell
Next, we need to update the configuration for Azure Pack using PowerShell (accessing the admin API).
The following cmdlet will change the Tenant Public API to use port 443 and the host name “api.systemcenter365.com”.

Set-MgmtSvcFqdn -Namespace TenantPublicAPI -FQDN "api.systemcenter365.com" -ConnectionString "Data Source=sqlwap;Initial Catalog=Microsoft.MgmtSvc.Store;User Id=sa;Password=*" -Port 443

That’s it! You are done, and have now made the tenant public API publicly accessible.

Before we proceed, we need to ensure that we have the right tools in place for accessing the API as a tenant.
It might be quite obvious for some, but not for everyone. To be able to manage Azure Pack subscriptions through PowerShell, we basically need the PowerShell module for Microsoft Azure. That is right: we have a bunch of cmdlets in the Azure module for PowerShell that are directly related to Azure Pack.



You can read more about the Azure module and download it by following this link: http://azure.microsoft.com/en-us/documentation/articles/install-configure-powershell/
Or simply search for it if you have Web Platform Installer in place on your machine.

Deploying a virtual machine through the Tenant Public API

Again, if you are familiar with Microsoft Azure and the powershell module, you have probably been hitting the “publishsettings” file a couple of times.

Normally, when logging into Azure or Azure Pack, you go to the portal, get redirected to an authentication site (this can also be ADFS if you are not using the default authentication site in Azure Pack) and are then sent back to the portal again – which in our case is cloud.systemcenter365.com.

The same process takes place if you are trying to access the “publishsettings”. Typing https://cloud.systemcenter365.com/publishsettings in Internet Explorer will first require you to log on, and then you will have access to your publish settings. This downloads a file that contains your secure credentials and additional information about your subscription for use in your WAP environment.



Once downloaded, we can open the file to explore the content and verify the changes we made when making the Tenant Public API publicly accessible at the beginning of this blog post.



Next, we will head over to PowerShell to start exploring the capabilities.

1)      Import the publish settings file using Powershell

Import-WAPackPublishSettingsFile "C:\MVP.Publishsettings"



Make sure the cmdlet fits your environment and points to the file you have downloaded.

2)      Check to see the active subscriptions for the tenant

Get-WAPackSubscription | select SubscriptionName, ServiceEndpoint



3)      Deploy a new virtual machine

To create a new virtual machine, we first need some variables that store information about the template we will use and the virtual network we will connect to, and then we proceed to create the virtual machine.
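
The screenshot below shows the exact cmdlets in my environment; a rough sketch of the same flow could look like this – the template, network and VM names are placeholders, and the New-WAPackVM parameter names are from memory, so check Get-Help New-WAPackVM for the exact parameter set in your module version:

# Hypothetical template and network names - replace with values from your own subscription
$template = Get-WAPackVMTemplate -Name "Windows Server 2012 R2"
$vnet     = Get-WAPackVNet -Name "Tenant Network"
$cred     = Get-Credential   # local administrator credential for the new VM

# Create the virtual machine through the Tenant Public API
# (verify the parameter names with Get-Help New-WAPackVM before running)
New-WAPackVM -Name "MyTenantVM01" -Template $template -VNet $vnet -VMCredential $cred -Verbose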




4)      Going back to the tenant portal, we can see that we are currently provisioning a new virtual machine that we initiated through the Tenant Public API.