
Wednesday, August 19, 2015

Getting started with Nano Server for Compute and Cluster

I assume you have heard the news that Windows Server and System Center 2016 TP3 is publicly available by now.

This means you can download and play around with the bits in order to get some early hands-on experience on the available scenarios and features.

Some of the key scenarios available in this preview are the following:

·         Nano Server (enhanced – and covered in this blog post)
·         Windows Container (new – and very well explained by Aidan Finn at www.aidanfinn.com )
·         Storage Spaces Direct (enhanced – and covered here very soon)
·         Network Controller (new – and covered here in detail very, very soon :) )

So, let us start to talk about Nano Server.

During Ignite earlier this year, Nano Server was introduced by the legend himself, Mr. Snover.
Let us be very clear: Nano Server is not even comparable to Server Core, which Microsoft has been pushing since its release, and where you run a full Windows Server without any graphical user interface. However, some of the concepts are the same and still apply to Nano.

Some of the drivers for Nano Server were based on customer feedback, and you might be familiar with the following statements:

-          Reboots impact my business
Think about Windows Server in general, not just Hyper-V in a cluster context – which more or less deals with reboots.
Very often you would find yourself in a situation where you had to reboot a server due to an update – of a component you in fact weren’t using, or weren’t even aware was installed on the server (that’s a different topic, but you get the point).

-          What’s up with the server image? It’s way too big!
From a WAP standpoint, using VMM as the VM Cloud Provider, you have been doing plenty of VM deployments. You normally have to sit and wait for several minutes just for the data transfer to complete. Then there’s the VM customization if it’s a VM Role, and so on and so forth. Although things have been improving over the last few years with Fast File Copy and support for ODX, the image size is still very big. And don’t forget - this affects backup, restore and DR scenarios too, in addition to the extra cost on our networking fabric infrastructure.

-          Infrastructure requires too many resources
I am running and operating a large datacenter today, where I have effectively been able to standardize on only the server roles and features I need. However, the cost per server is too high when it comes to utilization, and it really makes an impact on the VM density.
Higher VM density lowers my costs and increases my efficiency and margins.

I just want the components I need… and nothing more… please.

So, which components do we really need?

Nano Server is designed for the Cloud, which means it’s effective and goes along with a “Zero-footprint” model. Server Roles and optional features live outside of the Nano Server itself, and we have stand-alone packages that we add to the image by using DISM. More about that later.
Nano Server is a “headless”, 64-bit only, deployment option for Windows Server that according to Microsoft marketing is refactored to focus on “Cloud OS Infrastructure” and “Born-in-the-cloud applications”.

The key roles and features we have today are the following:

-          Hyper-V
Yes, this is (if you ask me) the key – and the flagship when it comes to Nano Server. You might remember the stand-alone Hyper-V Server that was based on the Windows kernel but only ran the Hyper-V role? Well, Nano Server is much smaller and based only on Hyper-V, sharing the exact same architecture as the hypervisor we know from the GUI-based Windows Server edition.

-          Storage (SOFS)
As you probably know already, compute without storage is quite useless, given the fact that virtual machines are nothing but a set of files on a disk :)
With a package for storage, we are able to instantiate several Nano Servers with the storage role to act as storage nodes based on Storage Spaces Direct (shared-nothing storage). This is very cool and will of course qualify for its own blog post in the near future.

-          Clustering
Both Hyper-V and Storage (SOFS) rely (in many situations) on the Windows Failover Clustering feature. Luckily, the cluster feature ships as its own package for Nano Server, and we can effectively enable critical infrastructure roles in an HA configuration using clustering.

-          Windows Container
This is new in TP3 – and I suggest you read Aidan’s blog about the topic. However, you won’t be able to test/verify this package on Nano Server in this TP, as it is missing several of its key requirements and dependencies.

-          Guest Package
Did you think that you had to run Nano Server on your physical servers only? Remember that Nano is designed for the “born-in-the-cloud applications” too, so you can of course run them as virtual machines. However, you would have to add the Guest Package to make them aware that they are running on top of Hyper-V.

In addition, we have packages for OEM Drivers (package of all drivers in Server Core), OneCore ReverseForwarders and Defender.

Remote Management

Nano Server is all about being efficient, leveraging the cloud computing attributes, scaling well and achieving more. In order to do so, we must understand that Nano Server is all about remote management.
With only a subset of Win32 support, PowerShell Core and ASP.NET 5, we aren’t able to use Nano Server for everything. But that is also the point here.

Although Nano is refactored to run on CoreCLR, we have full PowerShell language compatibility and remoting. Examples here are Invoke-Command, New-PSSession, Enter-PSSession etc.
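
For example, a minimal sketch of remote management against the Nano Server host we deploy later in this post (the credential prompt and the remote command are just illustrations):

# Establish a remote session against a Nano Server host
$cred = Get-Credential
$session = New-PSSession -ComputerName nanohosttp3 -Credential $cred

# Run a command remotely...
Invoke-Command -Session $session -ScriptBlock { Get-Process }

# ...or work interactively in the session
Enter-PSSession -Session $session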

Getting started with Nano Server for Compute

Alright, so let us get over to some practical examples on how to get started with Nano Server for Compute, and how to actually do the configuration.

Originally, this blog post was a bit longer than it is now, since Microsoft just published some new content over at TechNet. There you will find good guidance on how to deploy Nano: https://technet.microsoft.com/en-us/library/mt126167.aspx

I must admit that the experience of installing and configuring Nano wasn’t state of the art in TP2.
Now, in TP3, you can see that we have the required scripts and files located on the media itself, which simplifies the process.



1.       Mount the media and dot-source the ‘convert-windowsimage.ps1’ and ‘new-nanoserverimage.ps1’ scripts in a PowerShell ISE session
2.       Next, see the following example on how to create a new image for your Nano Server (this will create a VHD that you could either upload to a WDS server if you want to deploy it on a physical server, or mount to a virtual machine)



3.       By running the cmdlet, you should have a new image

In our example, we uploaded the vhd to our WDS (Thanks Flemming Riis for facilitating this).

If you pay close attention to the $paramHash table, you can see the following:

$paramHash = @{
    MediaPath = 'G:\'
    BasePath = 'C:\nano\new'
    TargetPath = 'C:\Nano\compute'
    AdministratorPassword = $pass
    ComputerName = 'nanohosttp3'
    Compute = $true
    Clustering = $true
    DriversPath = "c:\drivers"
    EnableIPDisplayOnBoot = $True
    EnableRemoteManagementPort = $True
    Language = 'en-us'
    DomainName = 'drinking.azurestack.coffee'
}

Compute = $true and Clustering = $true.
This means that both the compute and the clustering package will be added to the image. In addition, since we are deploying this on a physical server, we learned the hard way (thanks again Flemming) that we needed some HP drivers for the network adapters and the storage controller. We are therefore pointing to the location (DriversPath = "c:\drivers") where we extracted the drivers, so they get added to the image.
Through this process, we are also pre-creating the computer name object in Active Directory as we want to domain join the box to “drinking.azurestack.coffee”.
If you pay attention to the guide at Technet, you can see how you can set a static IP address on your Nano Server. We have simplified the deployment process in our fabric as we are rapidly deploying and decommissioning compute on the fly, so all servers get their IP config from a DHCP server.
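
For completeness, here is a minimal sketch of how the pieces fit together. The script locations on the TP3 media and the New-NanoServerImage function name are assumptions based on step 1 above, and $pass must be set before $paramHash is built:

# Dot-source the scripts from the mounted media (paths are assumptions for the TP3 media)
. 'G:\NanoServer\convert-windowsimage.ps1'
. 'G:\NanoServer\new-nanoserverimage.ps1'

# The administrator password referenced by the hash table (set this before building $paramHash)
$pass = Read-Host -AsSecureString -Prompt 'Administrator password'

# Create the Nano Server image by splatting the parameter hash shown above
New-NanoServerImage @paramHash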

Once the servers were deployed (this took literally under 4 minutes!), we could move forward and verify that everything was as we desired.

1)      Nano Servers were joined to the domain
2)      We had remote access to the Nano Servers



Since Nano Server is all about remote management, we used the following PowerShell cmdlets in order to configure the compute nodes, create the cluster etc.

# Preparing your mgmt server

Install-WindowsFeature -Name RSAT-Hyper-V-Tools, Hyper-V-Tools, Hyper-V-PowerShell, RSAT-Clustering, RSAT-Clustering-MGMT, RSAT-AD-PowerShell -Verbose

# Creating Nano Compute Cluster

$clustername = "nanocltp3"
$nodes = "hvtp301", "hvtp302"
$ip = "10.0.0.50"

New-Cluster -Name $clustername -Node $nodes -StaticAddress $ip -NoStorage -Verbose

# Connecting to storage server and create SMB share with proper permissions

$storage = "nanostor"

Enter-PSSession -ComputerName $storage

MD D:\VMS
ICACLS.EXE D:\VMS --% /Grant drinking\knadm:(CI)(OI)F
ICACLS.EXE D:\VMS --% /Grant drinking\hvtp301$:(CI)(OI)F
ICACLS.EXE D:\VMS --% /Grant drinking\hvtp302$:(CI)(OI)F
ICACLS.EXE D:\VMS /Inheritance:R
New-SmbShare -Name VMS -Path D:\VMS -FullAccess 'drinking\knadm', 'drinking\hvtp301$', 'drinking\hvtp302$'

# Configuring Constrained Delegation

Enable-SmbDelegation -SmbServer $storage -SmbClient hvtp301 -Verbose
Enable-SmbDelegation -SmbServer $storage -SmbClient hvtp302 -Verbose

# Configure Hyper-V settings for Cluster usage

$vmhosts = @("hvtp301", "hvtp302")
$vhdpath = "\\nanostor\vms\"
$vmconfigpath = "\\nanostor\vms\"
$lmsettings = "5"

foreach ($vmhost in $vmhosts)
    {
        Set-VMHost -ComputerName $vmhost -MaximumVirtualMachineMigrations $lmsettings -VirtualHardDiskPath $vhdpath -VirtualMachinePath $vmconfigpath -VirtualMachineMigrationAuthenticationType Kerberos -Verbose
    }

# Create VM based on Nano Image

$vm = "nanovm1"
$nanohost = "hvtp301"

New-VM -ComputerName $nanohost -Name $vm -MemoryStartupBytes 512mb -VHDPath \\nanostor\vms\blank1.vhd -SwitchName VMSwitch -Generation 1 -Verbose

# Make the VM highly available

Add-ClusterVirtualMachineRole -VMName $vm -Cluster $clustername -Verbose

# Start the VM

Start-VM -ComputerName hvtp301 -Name $vm -Verbose

As you can see, we are also creating a virtual machine here, which is obviously based on a VHD with the guest drivers installed. We tested how to do this manually by using DISM on an empty image.

The following example can be used in order to service your Nano vhd.

# Nano servicing

# Create a mountpoint

md mountpoint

# Mount the image into the mountpoint you just created

dism /Mount-Image /ImageFile:.\blank.vhd /Index:1 /MountDir:.\mountpoint

# Add your package. In this example, we will add packages for Storage, Cluster and Virtual Guest Services

dism /Add-Package /PackagePath:G:\NanoServer\Packages\Microsoft-NanoServer-Guest-Package.cab /Image:.\mountpoint

dism /Add-Package /PackagePath:G:\NanoServer\Packages\Microsoft-NanoServer-FailoverCluster-Package.cab /Image:.\mountpoint

dism /Add-Package /PackagePath:G:\NanoServer\Packages\Microsoft-NanoServer-Storage-Package.cab /Image:.\mountpoint

# Commit and dismount

dism /Unmount-Image /MountDir:.\mountpoint /commit

# Copy the vhd over to the smb share for the compute cluster

Copy-Item -Path .\blank.vhd -Destination \\nanostor\vms -Verbose

The following screen shot shows the Nano Cluster that is running a virtual machine with Nano Server installed:



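The same can also be verified from the management server with a few cmdlets, for example:

# Verify the cluster nodes and the clustered VM role
Get-ClusterNode -Cluster nanocltp3
Get-ClusterGroup -Cluster nanocltp3

# Check the virtual machine state on the owning node
Get-VM -ComputerName hvtp301 -Name nanovm1
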
NB: I am aware that my PowerShell cmdlets didn’t configure any VM switch as part of the process. In fact, I have reported that as a bug, as it is not possible to do so using the Hyper-V module. The VM switch was created successfully using the Hyper-V Manager console.

Happy Nano’ing, and I will cover more later.

(I also hope that I will see you during our SCU session on this topic next week)


Tuesday, June 2, 2015

Announcing a new whitepaper!

Hi everyone!

It has been quite quiet here on this blog since last month, but there are of course some reasons for that.
I would like to use this opportunity to give you a heads-up on an upcoming whitepaper that I have been working on together with a few other subject matter experts.

This blog post is not about the specific whitepaper itself; the goal is rather to explain why we are taking this approach – putting a lot of effort into a whitepaper instead of publishing books.

I have personally authored books, both on my own and together with other authors. The experience was interesting to say the least, and also required a lot of my time: not just for the research, testing and writing, but also to meet the deadlines, engage with reviewers and much more.
In short, the flexibility you have to modify – or even change – the subject is very limited when working with books.

The limited flexibility is a showstopper in a business where drastic changes (as in new features and releases) happen at a much faster cadence than ever before.
In order to be able to adopt, learn and apply everything that’s happening, writing whitepapers seems like a better idea than writing books.

At least this is what we think. When discussing this with some of our peers, we often get questions around royalties etc. To be honest, you will never ever get rich by writing a book, unless you are writing fiction about some magic wizard with glasses, or a girl describing her fantasies about a rich man.

So jokes aside, we do this because of the following reasons:

·         We enjoy doing it

This is not a secret at all. Of course we spend a massive amount of time on these projects, and our significant others probably have a grin every now and then. But we enjoy it so much that it is worth the risk and the potential penalty we might get.

·         For our own learning and knowledge

Let us be honest: we dive deep into this to learn it by heart. It is no secret that the technology we will cover will be our bread and butter, so we had better know what we are doing.

·         To share it with the community

Do it once – and do it right. We spend a lot of our time in forums, at conferences and so on, engaging with the community. Being able to point towards a rather comprehensive guide that many can benefit from, instead of supporting 1:1, is beneficial for all of us.

·         Recognition

If you do something good and useful, I can assure you that many people – regardless of whether they know you or not – will appreciate it and give you credit. We’ve heard several times that our previous whitepaper (Hybrid Cloud with NVGRE (Cloud OS)) helped peers, IT pros, engineers, students and CxOs to make a real difference. That alone is probably worth the effort.

So let me introduce you to the upcoming whitepaper that will hit the internet very shortly:

“Cloud Consistency with Azure Resource Manager”

This whitepaper will focus on cloud consistency using Azure Resource Manager in both the public cloud with Azure, as well as the private and hosted clouds with Azure Stack.

I won’t disclose more about the content, structure or the initial thoughts right now, but I encourage you to stay tuned and download it once it is available on the TechNet Gallery.

Thanks for reading!


Friday, May 8, 2015

Microsoft Azure Stack with a strong ARM

How did God manage to create the world in only 6 days?
-          He had no legacy!

With that, I would like to explain what the new Microsoft Azure Stack is all about.

As many of you already know, we have all been part of a journey over the last couple of years where Microsoft is aiming for consistency across their clouds, covering private, service provider and public.
Microsoft Azure has been the leading star, and it is quite clear with a “mobile first, cloud first” strategy that they are putting all their effort into the cloud, and later making bits and bytes available on-prem where it makes sense.
Regarding consistency, I would like to point out that we have had “Windows Azure Services for Windows Server” (v1) and “Windows Azure Pack” (v2) – which brought the tenant experience on-prem with portals and common APIs.

Let us stop there for a bit.
The APIs we got on-prem as part of the service management APIs were common with the ones we had in Azure, but they weren’t consistent or identical.
If you’ve ever played around with the Azure PowerShell module, you have probably noticed that we had different cmdlets when targeting an Azure Pack endpoint compared to Microsoft Azure.

For the portal experience, we got two portals: one for the service provider, where the admin could configure the underlying resource providers, create hosting plans and define settings and quotas through the admin API. These hosting plans were made available to the tenants through subscriptions in the tenant portal, which accessed the resources through the tenant API.

The underlying resource providers were different REST APIs that could contain several different resource types. Take the VM Cloud resource provider for example, which is a combination of System Center Virtual Machine Manager and System Center Service Provider Foundation.

Let us stop here as well, and reflect on what we have just read.

1)      So far, we have had a common set of APIs between Azure Pack and Azure
2)      On-prem, we are relying on System Center in order to bring IaaS into Azure Pack

With cloud consistency in mind, it is about time to point out that to move forward, we have to get the exact same APIs on-prem as we have in Microsoft Azure.
Second, we all know that there are no System Center components managing the hyper-scale cloud in Azure.

Let us take a closer look at the architecture of Microsoft Azure Stack



Starting at the top, we can see that we have the same – consistent browser experience.
The user-facing services consist of hubs, a portal shell site and RP extensions for both admins (service provider) and tenants. This shows that we won’t have two different portals as we have in Azure Pack today; instead, things are differentiated through the extensions.

These components all live on top of something called “Azure Resource Manager”, which is where all the fun and the real consistency is born.
Previously in Azure, we were accessing the Service Management API when interacting with our cloud services.
Now, this has changed and Azure Resource Manager is the new, consistent and powerful API that will be managing all the underlying resource providers, regardless of clouds.

Azure Resource Manager introduces an entirely new way of thinking about your cloud resources.
A challenge with both Azure Pack and the former Azure portal was that once we had several components that made up an application, it was really hard to manage its life-cycle. This has drastically changed with ARM, where we can now imagine a complex service, such as a SharePoint farm, containing many different tiers, instances, scripts and applications. With ARM, we can use a template that creates a resource group (a logical group that lets you control RBAC, life-cycle, billing etc. for the entire group of resources, although you can also specify this at a lower level on the resources themselves) with the resources you need to support the service.
Also, ARM deployments are idempotent and follow a declarative approach. You can already start to imagine how powerful this will be.

In the context of the architecture of Azure Stack as we are looking at right now, this means we can:

1)      Create an Azure Gallery Template (.json)
a.       Deploy the template to Microsoft Azure
or/and
b.      Deploy the template to Microsoft Azure Stack
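
Either way, the deployment itself can be driven from the Azure PowerShell module. Here is a minimal sketch; the resource group name, location and file names are placeholders, and the exact cmdlet names have varied between module releases:

# Create a resource group and deploy a template into it (names and paths are placeholders)
New-AzureRmResourceGroup -Name 'MyApp-RG' -Location 'West Europe'

New-AzureRmResourceGroupDeployment -ResourceGroupName 'MyApp-RG' `
    -TemplateFile '.\azuredeploy.json' `
    -TemplateParameterFile '.\azuredeploy.parameters.json' -Verbose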

It is time to take a break and put a smile on your face.

Now, let us explain the architecture a bit further.

Under the Azure Resource Manager, we will have several Core Management Resource Providers as well as Service Resource Providers.

The Core Management Resource Providers consist of Authorization – which is where all the RBAC settings and policies live. All the services will also share the same Gallery now, instead of having separate galleries for Web, VMs etc. as we have in Azure Pack today. Also, all the events, monitoring and usage related settings live in these core management resource providers. One of the benefits here is that third parties can now plug in their own resource providers and harness the existing architecture of these core RPs.

Further, we currently have Compute, Network and Storage as Service Resource Providers.

If we compare this with what we already have in Azure Pack today through our VM Cloud Resource Provider, we have all of this through a single resource provider (SCVMM/SCSPF) that basically provides us with everything we need to deliver IaaS.
I assume that you have read the entire blog post up to this point, and as I wrote in the beginning, there are no System Center components managing Microsoft Azure today.

So why do we have 3 different resource providers in Azure Stack for compute, network and storage, when we could potentially have everything from the same RP?

In order to leverage the beauty of a cloud, we need the opportunity to have a loosely coupled infrastructure – where the resources and different units can scale separately and independently of each other.

Here’s an example of how you can take advantage of this:

1)      You want to deploy an advanced application to an Azure/Azure Stack cloud, so you create a base template containing the common artifacts, such as image, OS settings etc
2)      Further, you create a separate template for the NIC settings and the storage settings
3)      As part of the deployment, you create references and possibly some “dependsOn” relationships between these templates so that everything is deployed within the same Azure resource group (which shares a common life-cycle, billing, RBAC etc.)
4)      Next, you might want to change – or even replace – some of the components in this resource group. As an example, let us say that you put some effort into the NIC configuration. You can then delete the VM itself (from the Compute RP), but keep the NIC (in the Network RP), as sketched below.
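
A minimal sketch of point 4 with the Azure PowerShell module (the resource and group names are placeholders, and cmdlet names have varied between module releases):

# Remove only the virtual machine; the NIC resource stays behind in the resource group
Remove-AzureRmVM -ResourceGroupName 'MyApp-RG' -Name 'web01' -Force

# The network interface, with all its configuration, is still there and can be reused
Get-AzureRmNetworkInterface -ResourceGroupName 'MyApp-RG' -Name 'web01-nic'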

This gives us much more flexibility compared to what we are used to.

Summary

So, Microsoft is truly bringing Azure services to your datacenter now, as part of the 2016 wave that will ship next year. The solution is called “Microsoft Azure Stack” and won’t “require” System Center – but you can use System Center for management purposes if you want, which is probably a very good idea.

It is an entirely new product for your datacenter – a cloud-optimized application platform, using Azure-based compute, network and storage services.

In the next couple of weeks, I will write more about the underlying resource providers and also how to leverage the ARM capabilities. 

Stay tuned for more info around Azure Stack and Azure Resource Manager.