
Sunday, September 27, 2015

Explaining Windows Server Containers - Part Three

In the last blog post, I talked about the architecture of container images and how you can use them much like our kids use Lego bricks.

Today, I want to shift focus a bit and talk more about managing the container life-cycle using Docker in Windows Server Technical Preview 3.

If you have any challenges or problems in your IT business today and ask me for advice, I would most likely point you to something that adds more abstraction.
Abstraction is key, and is how we have solved big and common challenges so far in this industry.
When we covered the architecture of containers in part 1, we compared it with server virtualization.
Both technologies solve the same challenges; however, they do it at different abstraction layers.

With cloud computing we have had the IaaS service model for a long time already, helping organizations speed up their processes and development by leveraging this service model in a private cloud, a public cloud, or both (a hybrid cloud).
However, being able to spin up new virtual machines isn’t necessarily the answer to all the problems in the world.
Sure, it makes you more agile and lets you utilize your resources far better compared to physical machines, but it is still a machine. A machine requires management at the OS level, such as patching, backup, configuration and more. Since you also have access at the OS level, you might end up in situations where you have to take actions that involve networking as well.

This is very often where it gets complex for organizations with a lot of developers.
They need to learn and adopt new skillsets just to be able to test their applications.

Wouldn't it be nice if they didn't have to care about this jungle of complexity at all, and didn't need to know anything about the environment they will be shipping software into?
Given that different people are involved in developing the software and managing the environment it runs in, the challenges grow together with the organization itself, and scale becomes a problem.

This is where containers come to the rescue – or do they?

Containers take a good approach, since all applications within a container look the same from the outside, from the host environment's perspective.
We can now wrap our software within a container, ship the container image to a shared repository, and avoid dealing with any of the complexity that a managed OS normally requires from us.

I have seen this in action, and here’s an example that normally trigger people’s interest:

1)      A developer creates something new – or simply commits some changes to their version control system (GitHub, VSO etc.).
2)      A new image (Docker in this case) is built with the application.
3)      The new Docker image goes through the entire testing and approval process.
4)      The image is committed to a shared repo.
5)      The new Docker image is deployed into production.

This seems like a well-known story we all have heard in the IaaS world, right?
Please note that no infrastructure was touched from the developer perspective during these steps.
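To make those five steps a bit more concrete, here is a minimal sketch of what they could look like with the Docker client. Treat the image name, registry and test script as illustrative assumptions only; they are not taken from my TP3 environment:

# 1-2) Build a new image from the committed code (expects a Dockerfile in the repo root)
docker build -t myregistry/myapp:1.1 .

# 3) Run the tests inside a throw-away container (the test script is hypothetical)
docker run --rm myregistry/myapp:1.1 powershell -File C:\app\run-tests.ps1

# 4) Commit the approved image to the shared repository
docker push myregistry/myapp:1.1

# 5) Pull and run the new version on the production container host
docker pull myregistry/myapp:1.1
docker run -d --name myapp myregistry/myapp:1.1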

This was just one example of how real-world organizations are using containers today, and I will cover more good use cases as we move forward in this blog series.
It is important that we're honest and admit that new technologies that give us more and more capabilities, features and possibilities will, at the same time, introduce some new challenges as well.

With containers, we can easily end up in a scenario that reminds us a bit of the movie "Inception" ( https://en.wikipedia.org/wiki/Inception ). It can be hard to know exactly where you are when you are working with - and have access to - all the different abstraction layers.

In Technical Preview 3 of Windows Server 2016, Windows Server containers can be managed both with PowerShell and Docker.

What exactly is Docker?

Docker has been around for years and enables automated deployment of applications into containers by providing an additional layer of abstraction and automation of OS-level virtualization on Linux, Mac OS and Windows.
Just as with Windows Server containers, Docker provides resource isolation by using namespaces to allow independent containers to run within a single Linux instance, instead of having the overhead of running and maintaining virtual machines.
Although Linux containers weren't something new (they had been around for years already), Docker made them accessible to the general IT guy by simplifying the tooling and workflows.

In Windows Server 2016 TP3, Windows Server Containers can be deployed through both the Docker APIs and the Docker client, as well as through PowerShell. Later, Hyper-V Containers will be available too.
The important thing to note is that Linux containers will always require Linux APIs from the host kernel, and Windows Server Containers will require Windows APIs from the host Windows kernel. So although you can't run Linux containers on Windows or vice versa, you can manage all of these containers with the same Docker client.

So getting back to the topic here – how to do management of containers?

Since Docker was first, this blog post will focus on the management experience by using Docker in TP3.

Note: In TP3, we are not able to see or manage containers if they were created outside of our preferred management solution. This means that containers created with Docker can only be managed by using Docker, and containers created with PowerShell can only be managed by using PowerShell.
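As a quick illustration of that split (a sketch, assuming the Docker engine and the TP3 Containers PowerShell module are both installed on the container host, and that Get-Container behaves like the other cmdlets from that module shown later in this series):

# Containers created through the Docker engine - only visible to the Docker client
docker ps -a

# Containers created through PowerShell - only visible to the Containers module
Get-Container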

During my testing on TP3, I have run into many issues/bugs while testing management of containers.
Before walking through the following recipe, I would like to point out what has already been done:

1)      I downloaded the image from Microsoft that contains Server Core with the container feature enabled, in addition to Docker
2)      I joined the container host to my AD domain
3)      I enabled the server for remote management and opened some required firewall ports
4)      I learned that everything I wanted to test regarding Docker should be performed on the container host itself, logged on through RDP

Once logged into the container host, I run the following command to see my images:

Docker images

This shows two images.

Next, I run the following command:

Docker ps

This will list the containers currently running on the system (note that Docker is only able to see containers created by Docker).



The next thing I'd like to show off is how to pull an image from the Docker Hub and then run it on my container host. First, I get an overview of the images that are compatible with my system:

Docker search server

I see that Microsoft/iis seems like a good option in my case, so I run the following command to download it:

Docker pull Microsoft/iis

This should first download the image and then extract it.
In the screenshot below, you can see all the steps I have taken so far and their output. Obviously, the last part didn't work as expected, and I wasn't able to pull the image down to my TP3 container host.



So, heading back to basics, let's create a new container based on an existing image.

Docker run -it --name krnesedemo windowsservercore powershell

This will:

1)      Create a new container based on the Windows Server Core image
2)      Name the container “krnesedemo”
3)      Start an interactive PowerShell session, since -it was specified. Note that this is one of the reasons why you have to run this locally on the container host; the command doesn't work remotely

This will literally take seconds, and then my new container is ready with a PowerShell prompt.
Below you can see that I am running some basic cmdlets to verify that I am actually in a container context and not in the container host.
Also note the error I get after installing the Web-Server feature. This is a known issue in TP3 where you have to run some cmdlets several times in order to get the right result. Executing it a second time shows that it went as planned.
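For reference, this is roughly what I ran inside the interactive session. It is only a minimal sketch; the exact checks you use to convince yourself that you are in the container, and not on the host, can of course vary:

# Verify the context: the computer name should differ from the container host
hostname

# Install IIS inside the container (may need to be run twice in TP3, as noted above)
Install-WindowsFeature -Name Web-Server

# Confirm that the feature ended up installed
Get-WindowsFeature -Name Web-Server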



After exiting the session (exit), I am back at the container host's command-line session.
I run the following command to see all the containers on the system:

Docker ps -a

This shows that the newly created container "krnesedemo" was running PowerShell in an interactive session, when it was created, and when I exited it.



Now I want to commit the changes I made (installing Web-Server) and create a new image with the following command:

Docker commit krnesedemo demoimage

In my environment, this command takes a few minutes to complete. I also experienced some issues when the container was still running prior to executing this command, so my advice would be to run "Docker stop <container name>" before committing it.
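In other words, the sequence I ended up with looks roughly like this, using the container and image names from this post:

# Stop the container first, to avoid the commit issues I hit in TP3
docker stop krnesedemo

# Capture the container's changes (the Web-Server role) as a new image
docker commit krnesedemo demoimage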

After verifying that the image has been created (see picture below), I run the following command to create a new container based on the newly created image:

Docker run -it --name demo02 demoimage powershell



We have now successfully created a new container based on our newly created image, and through the interactive session we can also verify that the Web-Server is present.



Next time I will dive more into the PowerShell experience and see how you can leverage your existing skillset to create a good management platform for your Windows Containers.



Monday, September 7, 2015

Explaining Windows Server Containers – Part Two

In Part One, I covered the concept of Containers, compared to server virtualization in a Microsoft context.

Today, I want to highlight the architecture of container images and how you can use them as building blocks to speed up deployment.

Before we start

If you have a background in Server Virtualization, you are probably very familiar with VM templates.
A VM template is a sysprep'd image that is generalized and can be deployed over and over again. It is normally configured with its required components and applications and kept up to date with the latest patches.
A VM template contains the complete operating system (and possibly its associated data disk(s)) and has been used by administrators and developers for years when they want to be able to rapidly test and deploy their applications on top of those VMs.

With Containers, this is a bit different. In the previous blog post I explained that Containers are basically what we call “OS Virtualization” and with Windows Server Containers the kernel is shared between the container host and its containers.
So, a container image is not the same as a VM image.

Container Image

Think of a container image as a snapshot/checkpoint of a running container that can be re-deployed many times, isolated in its own user mode with namespace virtualization.
Since the kernel is shared, there is no need for the container image to contain the OS partition.

When you have a running container, you can either stop and discard the container once you are done with it, or you can stop and capture the state and modifications you have made by transforming it into a container image.

We have two types of container images. A Container OS image is the first layer in potentially many image layers that make up a container. This image contains the OS environment and is also immutable – which means it cannot be modified.
A container image is stored in the local repository so that you can re-use it as many times as you'd like on the container host. It is also possible to store images in a remote repository, making them available to multiple container hosts.

Let us see how the image creation process works with Windows Server Containers

Working with Container Images

In the current release, Windows Server Containers can be managed with the Docker client and with PowerShell.
This blog post will focus on the PowerShell experience and show which cmdlets you need to run in order to build images, just as easily as you would by playing with Lego :-)

First, we will explore the properties of a container image. An image contains a Name, a Publisher and a Version.



We execute the following cmdlet and store the result in a variable: $conimage = Get-ContainerImage -Name "WinSrvCore"


Next, we create a new container based on this image, again storing the result in a variable: $con = New-Container -Name "Demo" -ContainerImage $conimage -SwitchName "VM".


Once the container is deployed, we start it and invoke a command that installs the Web-Server role within this container ( Invoke-Command -ContainerId $con.ContainerId -RunAsAdministrator { Install-WindowsFeature -Name Web-Server } ). The picture below shows that the blue Lego block is now on top of the brown one (as in layers).


As described earlier in this blog post, we can stop the running container and create an image if we want to keep the state. We are doing that by executing New-ContainerImage -Container $con -Name Web -Publisher KRN -Version 1.0


If we now execute Get-ContainerImage, we have two images: one that has only ServerCore, and another one that has ServerCore with the Web-Server role installed.


We will repeat the process and create a new container based on the newly created Container Image.



In this container, we will install a web application too. The grey Lego block on top of the blue shows that this is an additional layer.


We then stop the running container again and create another container image, now containing the web application too.


In the local repository, we now have three different container images in a layered architecture.
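Put together, the whole layering exercise looks roughly like the sketch below. Note that Start-Container and Stop-Container are not shown in the screenshots above, so treat them (and the names used for the second container and the final image) as assumptions based on the rest of the TP3 Containers module:

# Layer 1: start from the OS image
$conimage = Get-ContainerImage -Name "WinSrvCore"
$con = New-Container -Name "Demo" -ContainerImage $conimage -SwitchName "VM"
Start-Container -Container $con

# Layer 2: install the Web-Server role and capture it as a new image
Invoke-Command -ContainerId $con.ContainerId -RunAsAdministrator { Install-WindowsFeature -Name Web-Server }
Stop-Container -Container $con
$webimage = New-ContainerImage -Container $con -Name Web -Publisher KRN -Version 1.0

# Layer 3: build on the Web image, add the web application, and capture that too
$con2 = New-Container -Name "Demo02" -ContainerImage $webimage -SwitchName "VM"
Start-Container -Container $con2
# (install the web application inside this container here)
Stop-Container -Container $con2
New-ContainerImage -Container $con2 -Name WebApp -Publisher KRN -Version 1.0

# Three images in the local repository, one per layer
Get-ContainerImage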



Hopefully you found this useful, and I will soon be back with part three of this blog series.
Perhaps you will see more Lego as well .... :-) 

-kn





Sunday, September 6, 2015

Explaining Windows Server Containers – Part One

You have heard a lot about it lately: Microsoft is speeding up its container investment, and we can see the early beginnings in Windows Server 2016 Technical Preview 3.

But before we go deep into the container technology in TP3, I would like to add some more context so that you can more easily absorb and understand exactly what is going on here.

Server Virtualization

Container technologies belong to the virtualization category, but before we explain the concept and technology that give us "containerization", we will take a few steps back and see where we are coming from.

Server (virtual machine) virtualization is finally mainstream for the majority of the industry by now.
We have been using virtualization in order to provide an isolated environment for guest instances on a host to increase machine density, enable new scenarios, speed up test & development etc.

Server virtualization gave us an abstraction where every virtual machine believed it had its own CPU, I/O resources, memory and networking.
In the Microsoft world, we first started with server virtualization using a type 2 hypervisor, such as Virtual Server and Virtual PC, where all hardware access was emulated through the operating system itself, meaning that the virtualization software ran in user mode, just like every other application on that machine.
A type 2 hypervisor therefore has, in essence, two hardware abstraction layers, which makes it a bad candidate for real-world workloads.

This changed with Hyper-V in Windows Server 2008, where Microsoft introduced their first type 1 hypervisor.
Hyper-V is a microkernelized hypervisor that implements a shared virtualization stack and a distributed driver model that is very flexible and secure.
With this approach, Microsoft finally had a hypervisor that could run workloads considered "always-on", based on the x64 architecture.

I don't have to go through the entire story of Hyper-V, but to summarize: Hyper-V these days reminds you a bit of VMware – only it is better!

As stated earlier, server virtualization is key and a common requirement for cloud computing. In fact, Microsoft wouldn’t have such a good story today if it wasn’t for the investment they made in Hyper-V.
If you look closely, the Cloud OS vision with the entire “cloud consistency” approach derives from the hypervisor itself.

Empowering IaaS

In Azure today, we have many sophisticated offerings around the Infrastructure as a Service delivery model, focusing on core compute, networking and storage capabilities. Microsoft has also taken this a step further with something called VM extensions, so that at provisioning time – or post-deployment – we can interact with the virtual machine's operating system to perform some really advanced tasks. Examples could be the deployment and configuration of a complex LoB application.

Microsoft Azure and Windows Azure Pack (Azure technologies on-premises) have been focusing on IaaS for a long time, and today we have literally everything we need to use any of these cloud environments to rapidly instantiate new test & dev environments, spin up virtual machine instances in isolated networks, and fully leverage the software-defined datacenter model that Microsoft provides.

But what do we do when virtual machines aren't enough? What if we want to be even more agile? What if we don't want to sit down and wait for the VM to be deployed, configured and available before we can verify our test results? What if we want to maximize our investments even further and increase hardware utilization to the maximum?

This is where containers come in handy and provide us with OS virtualization.

OS Virtualization

Many people have already started to compare Windows Server Containers with technologies such as Server App-V and App-V (for desktops).
Neither of these comparisons is really accurate, as Windows Server Containers cover a lot more and have some fundamental differences in architecture and use cases.
The concept, however, might be similar, as the App-V technologies (both for server and desktop) aimed to deliver isolated application environments, each in its own sandbox. Things could either be executed locally or streamed from a server.

Microsoft will give us two options when it comes to container technology:
Windows Server Containers and Hyper-V Containers.

Before you get confused or start to raise questions: You can run both Windows Server Containers and Hyper-V Containers within a VM (where the VM is the container host). However, using Hyper-V Containers would require that Hyper-V is installed.

With Windows Server Containers, the container is a process that executes in its own isolated user mode of the operating system, but the kernel is shared between the container host and all of its containers.
To achieve isolation between the containers and the container host, namespace virtualization is used to provide independent session namespace and kernel object namespace isolation per container.
In addition, each container is isolated behind a network compartment using NAT (meaning that the container host has a Hyper-V Virtual Switch configured and connected to the containers).

For applications executing in a container process, system state is presented as read-only, while all file and registry changes are captured through their respective filter drivers (a file filter driver and a registry filter driver).

Given this architecture, Windows Server Containers are an ideal approach for applications within the same trust boundary, since the host kernel and APIs are shared among the containers. They are also the most optimized option when fast start-up time is important to you.

On the other hand, we also have something called Hyper-V Containers (this is not available in Technical Preview 3).
A Hyper-V Container provides the same capabilities as a Windows Server Container, but has its own (isolated) copy of the Windows kernel and memory assigned directly to it. There are of course pros and cons with every type of technology; with Hyper-V Containers you achieve more isolation and better security, but less efficient start-up and density compared to Windows Server Containers.

The following two pictures show the difference between server virtualization and OS virtualization (Windows Server Containers).

Server Virtualization

OS Virtualization

So, what are the use cases for Windows Server Containers?

It is still early days with Windows Server 2016 Technical Preview 3, so things are subject to change.
However, there are things we need to start thinking about right now when it comes to how to leverage containers.

If you take a closer look at Docker (which has been doing this for a long time already), you might get a hint of what you can achieve using container technology.

Containers aren't necessarily the right solution for every kind of application, scenario or tool you may think of, but they give you a unique opportunity to speed up testing and development, and to effectively enable DevOps scenarios that embrace continuous delivery.

Containers can be spun up in seconds, and we all know that having many new "objects" in our environment leads to a demand for control and management, which in turn introduces a new toolset.

I am eager to share more of my learning of Windows Server Containers with you, and will shortly publish part two of this blog series.



Monday, August 3, 2015

Explaining PowerShell Direct

One of the most frequently asked questions I get from my customers is something like this:

“We have a multi-tenant environment where everything is now software-defined, including the network by using network virtualization. As a result of that, we can no longer provide value added services to these customers, as we don’t have a network path into the environments”.

Last year, I wrote a blog post that talks about “Understanding your service offerings with Azure Pack” – which you can read here: http://kristiannese.blogspot.no/2014/10/understanding-windows-azure-pack-and.html

I won't get into all of those details, but a common misunderstanding nowadays is that both enterprises and service providers expect to be able to manage their customers in the same way as they always have.
The fact that many organizations are now building their cloud infrastructure with several new capabilities, such as network virtualization and self-servicing, makes this very difficult to achieve.

I remember back at TechDays in Barcelona, when I got the chance to talk with one of the finest Program Managers at Microsoft, Mr. Ben Armstrong.
We had a discussion about this, and he was (as always) aware of these challenges and said he had some plans to simplify service management in a multi-tenant environment directly in the platform.

As a result of that, we can now play around with PowerShell Direct in Windows Server 2016 Technical Preview.

Background

Walking down memory lane, we used to have Virtual Server and Virtual PC when we wanted to play around with virtualization in the Microsoft world. Both of these solutions were what we call a "type 2 hypervisor", where all hardware access was emulated through the operating system that was actually running the virtual instances.
With Windows Server 2008, we saw the first version of Hyper-V, which was truly a type 1 hypervisor.
In the architecture of Hyper-V – and this is the reason why I am telling you all of this – we have something called the VMBus.

The VMBus is a communication mechanism (high-speed memory) used for interpartition communication and device enumeration on systems with multiple active virtualized partitions. The VMBus is responsible for the communication between the parent partition (the Hyper-V host) and the child partition(s) (virtual machines with Integration Components installed/enabled).

As you can see, the VMBus is critical for communication between host and virtual machines, and we are able to take advantage of this channel in several ways already.

In Windows Server 2012 R2, we got the following:

·         Copy-VMFile

Copy-VMFile lets you copy file(s) from a source path to a specific virtual machine running on the host. This is all done within the context of the VMBus, so there is no need for network connectivity to the virtual machines at all. For this to work, you need to enable "Guest Services" on the target VMs as part of the integration services.

Here’s an example on how to achieve this using PowerShell:

# Enable guest services
Enable-VMIntegrationService -Name 'Guest Service Interface' -VMName mgmtvm -Verbose

# Copy file to VM via VMBus
Copy-VMFile -Name mgmtvm -SourcePath .\myscript.ps1 -DestinationPath "C:\myscript.ps1" -FileSource Host -Verbose

·         Remote Console via VMBus

Another feature that shipped with Windows Server 2012 R2 was something called "Enhanced Session Mode", which leverages an RDP session via the VMBus.
Using RDP, we could now log on to a virtual machine directly from Hyper-V Manager and even copy files in and out of the virtual machine. In addition, USB and printing were also now possible – without any network connectivity from the host to the virtual machines.

And last but not least, this was the foundation for the Remote Console feature with System Center and Windows Azure Pack – which you can read more about here: http://kristiannese.blogspot.no/2014/02/configuring-remote-console-for-windows.html

And now back to the point. With Windows Server 2016, we will get PowerShell Direct.

With PowerShell Direct, we can now run PowerShell cmdlets and scripts directly inside a virtual machine in an easy and reliable way, without relying on technologies such as PowerShell remoting, RDP or VMConnect.
Leveraging the VMBus architecture, we are literally bypassing all requirements for networking, firewall, remote management and access settings.

However, there are some requirements at the time of writing:

·         You must be connected to a Windows 10 or a Windows Server technical preview host with virtual machines that are running Windows 10 or Windows Server technical preview as the guest operating system
·         You must be logged in with Hyper-V Admin creds on the host
·         You need user credentials for the virtual machine!
·         The virtual machine that you want to connect to must run locally on the host and be booted

Clearly, both the host and the guest need to be on the same OS level. The reason is that the VMBus relies on the virtualization service client in the guest and the virtualization service provider on the host, which need to be the same version.

But what’s interesting to see here is that in order to take advantage of PowerShell Direct, we need to have user credentials for the virtual machine’s guest operating system itself.
Also, if we want to perform something awesome within that guest, we probably need admin permissions too – unless we are able to dance around with JEA, but I haven't been able to test that yet.

Here's an example of what we can do using PowerShell Direct:

# Get credentials to access the guest
$cred = Get-Credential

# Enter an interactive session targeting the VM by name, directly from the Hyper-V host
Enter-PSSession -VMName mgmtvm -Credential $cred

# Running a cmdlet within the guest context
Get-Service | Where-Object {$_.Status -like "*running*" -and $_.name -like "*vm*" }

[mgmtvm]: PS C:\Users\administrator.DRINKING\Documents> Get-Service | Where-Object {$_.Status -like "*running*" -and $_.name -like "*vm*" }

Status   Name               DisplayName                           
------   ----               -----------                          
Running  vmicguestinterface Hyper-V Guest Service Interface      
Running  vmicheartbeat      Hyper-V Heartbeat Service            
Running  vmickvpexchange    Hyper-V Data Exchange Service        
Running  vmicrdv            Hyper-V Remote Desktop Virtualizati...
Running  vmicshutdown       Hyper-V Guest Shutdown Service       
Running  vmictimesync       Hyper-V Time Synchronization Service 
Running  vmicvmsession      Hyper-V VM Session Service           
Running  vmicvss            Hyper-V Volume Shadow Copy Requestor

As you can see, [mgmtvm] shows that the context is the virtual machine and we have successfully listed all the running services related to the integration services.

Although this is very cool and shows that it works, I’d rather show something that might be more useful.

We can enter a PSSession as shown above, but we can also invoke a command directly through Invoke-Command and use -ScriptBlock.

#Invoke command, create and start a DSC configuration on the localhost
Invoke-Command -VMName mgmtvm -Credential (Get-Credential) -ScriptBlock {
# DSC Configuration
Configuration myWeb {
    Node "localhost" {
        WindowsFeature Web {
            Ensure = "Present"
            Name = "Web-Server"
        }
    }
}
# Compile the DSC configuration (this generates the MOF file)
myWeb

# Start and apply the DSC configuration
Start-DscConfiguration .\myWeb -Wait -Force -Verbose }

In the example above, we are actually creating and applying a DSC configuration on the fly, from the host to the virtual machine, using PowerShell Direct.

Here’s the output:

PS C:\Users\knadm> #Invoke command, create and start a DSC configuration on the localhost
Invoke-Command -VMName mgmtvm -Credential (Get-Credential) -ScriptBlock {
# DSC Configuration
Configuration myWeb {
    Node "localhost" {
        WindowsFeature Web {
            Ensure = "Present"
            Name = "Web-Server"
        }
    }
}
# Compile the DSC configuration (this generates the MOF file)
myWeb

# Start and apply the DSC configuration
Start-DscConfiguration .\myWeb -Wait -Force -Verbose }
cmdlet Get-Credential at command pipeline position 1
Supply values for the following parameters:
WARNING: The configuration 'myWeb' is loading one or more built-in resources without explicitly importing associated modules. Add Import-DscResource –ModuleName ’PSDesire
dStateConfiguration’ to your configuration to avoid this message.


    Directory: C:\Users\administrator.DRINKING\Documents\myWeb


Mode                LastWriteTime         Length Name                                                         PSComputerName                                            
----                -------------         ------ ----                                                         --------------                                             
-a----       03-08-2015     11:34           1834 localhost.mof                                                mgmtvm                                                    
VERBOSE: Perform operation 'Invoke CimMethod' with following parameters, ''methodName' = SendConfigurationApply,'className' = MSFT_DSCLocalConfigurationManager,'namespace
Name' = root/Microsoft/Windows/DesiredStateConfiguration'.
VERBOSE: An LCM method call arrived from computer MGMT16 with user sid S-1-5-21-786319967-1790529733-2558778247-500.
VERBOSE: [MGMT16]: LCM:  [ Start  Set      ]
VERBOSE: [MGMT16]: LCM:  [ Start  Resource ]  [[WindowsFeature]Web]
VERBOSE: [MGMT16]: LCM:  [ Start  Test     ]  [[WindowsFeature]Web]
VERBOSE: [MGMT16]:                            [[WindowsFeature]Web] The operation 'Get-WindowsFeature' started: Web-Server
VERBOSE: [MGMT16]:                            [[WindowsFeature]Web] The operation 'Get-WindowsFeature' succeeded: Web-Server
VERBOSE: [MGMT16]: LCM:  [ End    Test     ]  [[WindowsFeature]Web]  in 22.0310 seconds.
VERBOSE: [MGMT16]: LCM:  [ Start  Set      ]  [[WindowsFeature]Web]
VERBOSE: [MGMT16]:                            [[WindowsFeature]Web] Installation started...
VERBOSE: [MGMT16]:                            [[WindowsFeature]Web] Continue with installation?
VERBOSE: [MGMT16]:                            [[WindowsFeature]Web] Prerequisite processing started...
VERBOSE: [MGMT16]:                            [[WindowsFeature]Web] Prerequisite processing succeeded.
WARNING: [MGMT16]:                            [[WindowsFeature]Web] You must restart this server to finish the installation process.
VERBOSE: [MGMT16]:                            [[WindowsFeature]Web] Installation succeeded.
VERBOSE: [MGMT16]:                            [[WindowsFeature]Web] successfully installed the feature Web-Server
VERBOSE: [MGMT16]:                            [[WindowsFeature]Web] The Target machine needs to be restarted.
VERBOSE: [MGMT16]: LCM:  [ End    Set      ]  [[WindowsFeature]Web]  in 89.0570 seconds.
VERBOSE: [MGMT16]: LCM:  [ End    Resource ]  [[WindowsFeature]Web]
VERBOSE: [MGMT16]:                            [] A reboot is required to progress further. Please reboot the system.
WARNING: [MGMT16]:                            [] A reboot is required to progress further. Please reboot the system.
VERBOSE: [MGMT16]: LCM:  [ End    Set      ]
VERBOSE: [MGMT16]: LCM:  [ End    Set      ]    in  113.0260 seconds.
VERBOSE: Operation 'Invoke CimMethod' complete.
VERBOSE: Time taken for configuration job to complete is 115.028 seconds

In this example I am using one of the built-in DSC resources in Windows Server. If I wanted to do more advanced configuration that required custom DSC resources, I would have to copy those resources to the guest using the Copy-VMFile cmdlet first. All in all, I am able to do a lot around VM management with these new capabilities through the VMBus.
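As a sketch of what that could look like (the module name and paths here are purely illustrative assumptions), you could copy a zipped custom DSC resource module into the guest over the VMBus and unpack it into the module path before invoking the configuration:

# Enable guest services and copy the zipped module into the guest (names are hypothetical)
Enable-VMIntegrationService -Name 'Guest Service Interface' -VMName mgmtvm -Verbose
Copy-VMFile -Name mgmtvm -SourcePath '.\MyDscResource.zip' -DestinationPath 'C:\MyDscResource.zip' -FileSource Host -Verbose

# Unpack it into the PowerShell module path inside the guest via PowerShell Direct
Invoke-Command -VMName mgmtvm -Credential (Get-Credential) -ScriptBlock {
    Expand-Archive -Path 'C:\MyDscResource.zip' -DestinationPath 'C:\Program Files\WindowsPowerShell\Modules' -Force
}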

So, what can we expect to see now that we have the opportunity to provide management directly and natively in the compute platform itself?

Let me walk you through a scenario here where the tenant wants to provision a new virtual machine.

In Azure Pack today, we have a VM extension through the VM Role. If we compare it to Azure and its new API through Azure Resource Manager, we have even more extensions to play around with.
These extensions give us an opportunity to do more than just OS provisioning; we can deploy and configure advanced applications just the way we want to.
Before you continue reading, please note that I am not saying that PowerShell Direct is a VM extension, but it is still something useful you can take advantage of in this scenario.

So a tenant provisions a new VM Role in Azure Pack, and the VM Role is designed with a checkbox that says "Enable Managed Services".

Now, depending on how each service provider defines their SLAs etc., the tenant has made it clear that they want managed services for this particular VM Role, and hence needs to share/create credentials so the service provider can interact with the virtual machines.

I’ve already been involved in several engagements in this scope and I am eager to see the end-result once we have the next bits fully released.

Thanks to the Hyper-V team with Ben and Sarah, for delivering value added services and capabilities on an ongoing basis!