Wednesday, August 19, 2015

Getting started with Nano Server for Compute and Cluster

I assume you have heard the news by now that Windows Server and System Center 2016 TP3 are publicly available.

This means you can download and play around with the bits in order to get some early hands-on experience on the available scenarios and features.

The key scenarios available in this preview are the following:

·         Nano Server (enhanced – and covered in this blog post)
·         Windows Container (new – and very well explained by Aidan Finn at )
·         Storage Spaces Direct (enhanced – and covered here very soon)
·         Network Controller (new – and covered here in detail very, very soon :) )

So, let us start to talk about Nano Server.

During Ignite earlier this year, Nano Server was introduced by the legend himself, Mr. Snover.
Let us be very clear: Nano Server is not even comparable to Server Core, the deployment option Microsoft has been pushing since its release, where you run a full Windows Server without any graphical user interface. However, some of the concepts are the same and applicable when it comes to Nano.

Some of the drivers for Nano Server were based on customer feedback, and you might be familiar with the following statements:

-          Reboots impact my business
Think about Windows Server in general, not just Hyper-V in a cluster context – which more or less handles reboots gracefully.
Very often you would find yourself in a situation where you had to reboot a server due to an update of a component you in fact weren't using, or weren't even aware was installed on the server (that's a different topic, but you get the point).

-          What’s up with the server image? It’s way too big!
From a WAP standpoint, using VMM as the VM Cloud Provider, you have been doing plenty of VM deployments. You normally have to sit and wait several minutes just for the data transfer to complete. Then there's the VM customization if it's a VM Role, and so on and so forth. Although things have improved over the last few years with Fast-File-Copy and support for ODX, the image size is very big. And don't forget – this affects backup, restore and DR scenarios too, in addition to the extra cost on our networking fabric infrastructure.

-          Infrastructure requires too many resources
I am running and operating a large datacenter today, where I have effectively been able to standardize on only the server roles and features I need. However, the cost per server is too high when it comes to utilization, and it really makes an impact on the VM density.
Higher VM density lowers my costs and increases my efficiency & margins.

I just want the components I need….and nothing more… please

So, let us talk about which components we really need.

Nano Server is designed for the Cloud, which means it's effective and goes along with a "Zero-footprint" model. Server roles and optional features live outside of the Nano Server itself, and we have stand-alone packages that we add to the image by using DISM. More about that later.
Nano Server is a “headless”, 64-bit only, deployment option for Windows Server that according to Microsoft marketing is refactored to focus on “Cloud OS Infrastructure” and “Born-in-the-cloud applications”.

The key roles and features we have today are the following:

-          Hyper-V
Yes, this is (if you ask me) the key – and the flagship – when it comes to Nano Server. You might remember the stand-alone Hyper-V Server that was based on the Windows kernel but only ran the Hyper-V role? Well, Nano Server is much smaller and is based only on Hyper-V, sharing the exact same architecture as the hypervisor we know from the GUI-based Windows Server edition.

-          Storage (SOFS)
As you probably know already, compute without storage is quite useless, given the fact that virtual machines are nothing but a set of files on a disk :)
With a package for storage, we are able to instantiate several Nano Servers with the storage role to act as storage nodes based on Storage Spaces Direct (shared-nothing storage). This is very cool and will of course qualify for its own blog post in the near future.

-          Clustering
Both Hyper-V and Storage (SOFS) rely (in many situations) on the Windows Failover Clustering feature. Luckily, the cluster feature serves as its own package for Nano Server, and we can effectively enable critical infrastructure roles in an HA configuration using clustering.

-          Windows Container
This is new in TP3 – and I suggest you read Aidan’s blog about the topic. However, you won’t be able to test/verify this package on Nano Server in this TP, as it is missing several of its key requirements and dependencies.

-          Guest Package
Did you think that you had to run Nano Server on your physical servers only? Remember that Nano is designed for the “born-in-the-cloud applications” too, so you can of course run them as virtual machines. However, you would have to add the Guest Package to make them aware that they are running on top of Hyper-V.

In addition, we have packages for OEM Drivers (package of all drivers in Server Core), OneCore ReverseForwarders and Defender.

Remote Management

Nano Server is all about being effective: leveraging the cloud computing attributes, scaling out and achieving more. In order to do so, we must understand that Nano Server is built for remote management.
With only a subset of Win32 support, PowerShell Core and ASP.NET 5, we aren't able to use Nano Server for everything. But that is also the point.

Although Nano is refactored to run on CoreCLR, we have full PowerShell language compatibility and remoting. Examples here are Invoke-Command, New-PSSession, Enter-PSSession etc.
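For example, remoting into a Nano Server works just like remoting into any other Windows Server. A minimal sketch (the server name 'nanohosttp3' is an assumption; the TrustedHosts step is only needed before the box is domain joined):

```powershell
# For workgroup scenarios, trust the Nano Server on the management machine first
Set-Item WSMan:\localhost\Client\TrustedHosts -Value 'nanohosttp3' -Force

# Credentials for the Nano Server
$cred = Get-Credential

# Interactive remoting
Enter-PSSession -ComputerName nanohosttp3 -Credential $cred

# Or run a single command remotely
Invoke-Command -ComputerName nanohosttp3 -Credential $cred -ScriptBlock {
    Get-Process | Sort-Object WS -Descending | Select-Object -First 5
}
```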

Getting started with Nano Server for Compute

Alright, so let us get over to some practical examples on how to get started with Nano Server for Compute, and how to actually do the configuration.

Originally, this blog post was a bit longer than it is now, since Microsoft just published some new content over at TechNet. There you will find good guidance on how to deploy Nano:

I must admit that the experience of installing and configuring Nano wasn't state of the art in TP2.
Now, in TP3, you can see that we have the required scripts and files located on the media itself, which simplifies the process.

1.       Mount the media and dot-source the 'convert-windowsimage.ps1' and 'new-nanoserverimage.ps1' scripts in a PowerShell ISE session
2.       Next, see the following example on how to create a new image for your Nano Server (this will create a VHD that you can either upload to a WDS if you want to deploy it on a physical server, or mount to a virtual machine)

3.       By running the cmdlet, you should end up with a new image

In our example, we uploaded the VHD to our WDS (thanks to Flemming Riis for facilitating this).

If you pay close attention to the $paramHash table, you can see the following:

$paramHash = @{
    MediaPath = 'G:\'
    BasePath = 'C:\nano\new'
    TargetPath = 'C:\Nano\compute'
    AdministratorPassword = $pass
    ComputerName = 'nanohosttp3'
    Compute = $true
    Clustering = $true
    DriversPath = "c:\drivers"
    EnableIPDisplayOnBoot = $True
    EnableRemoteManagementPort = $True
    Language = 'en-us'
    DomainName = ''
}

# Create the image by splatting the parameters to the dot-sourced function
New-NanoServerImage @paramHash
Compute = $true and Clustering = $true.
This means that both the compute and the clustering package will be added to the image. In addition, since we are deploying this on a physical server, we learned the hard way (thanks again Flemming) that we needed some HP drivers for the network adapters and the storage controller. We are therefore pointing to the location (DriversPath = "c:\drivers") where we extracted the drivers, so they get added to the image.
Through this process, we are also pre-creating the computer name object in Active Directory as we want to domain join the box to “”.
If you pay attention to the guide at Technet, you can see how you can set a static IP address on your Nano Server. We have simplified the deployment process in our fabric as we are rapidly deploying and decommissioning compute on the fly, so all servers get their IP config from a DHCP server.

Once the servers were deployed (this literally took under 4 minutes!), we could move forward and verify that everything was as we desired.

1)      The Nano Servers were joined to the domain
2)      We had remote access to the Nano Servers

Since Nano Server is all about remote management, we used the following PowerShell cmdlets in order to configure the compute nodes, create the cluster etc.

# Preparing your mgmt server

Install-WindowsFeature -Name RSAT-Hyper-V-Tools, Hyper-V-Tools, Hyper-V-PowerShell, RSAT-Clustering, RSAT-Clustering-MGMT, RSAT-AD-PowerShell -Verbose

# Creating Nano Compute Cluster

$clustername = "nanocltp3"
$nodes = "hvtp301", "hvtp302"
$ip = ""

New-Cluster -Name $clustername -Node $nodes -StaticAddress $ip -NoStorage -Verbose

# Connecting to storage server and create SMB share with proper permissions

$storage = "nanostor"

Enter-PSSession -ComputerName $storage

ICACLS.EXE D:\VMS --% /Grant drinking\knadm:(CI)(OI)F
ICACLS.EXE D:\VMS --% /Grant drinking\hvtp301$:(CI)(OI)F
ICACLS.EXE D:\VMS --% /Grant drinking\hvtp302$:(CI)(OI)F
ICACLS.EXE D:\VMS /Inheritance:R
New-SmbShare -Name VMS -Path D:\VMS -FullAccess drinking\knadm, drinking\hvtp301$, drinking\hvtp302$
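While still in the remote session, it is worth double-checking that the share and NTFS permissions ended up as intended. A quick sketch, using the share and folder from above:

```powershell
# Show who has access to the SMB share
Get-SmbShareAccess -Name VMS

# And inspect the NTFS ACL on the folder itself
(Get-Acl -Path D:\VMS).Access | Format-Table IdentityReference, FileSystemRights
```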

# Configuring Constrained Delegation

Enable-SmbDelegation -SmbServer $storage -SmbClient hvtp301 -Verbose
Enable-SmbDelegation -SmbServer $storage -SmbClient hvtp302 -Verbose
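To confirm the delegation entries were created, the SmbShare module also has a matching Get cmdlet (note that it requires the Active Directory PowerShell module on the management server):

```powershell
# List the constrained delegation entries configured for the storage server
Get-SmbDelegation -SmbServer $storage
```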

# Configure Hyper-V settings for Cluster usage

$vmhosts = @("hvtp301", "hvtp302")
$vhdpath = "\\nanostor\vms\"
$vmconfigpath = "\\nanostor\vms\"
$lmsettings = 5

foreach ($vmhost in $vmhosts) {
    Set-VMHost -ComputerName $vmhost -MaximumVirtualMachineMigrations $lmsettings -VirtualHardDiskPath $vhdpath -VirtualMachinePath $vmconfigpath -VirtualMachineMigrationAuthenticationType Kerberos -Verbose
}

# Create VM based on Nano Image

$vm = "nanovm1"
$nanohost = "hvtp301"

New-VM -ComputerName $nanohost -Name $vm -MemoryStartupBytes 512mb -VHDPath \\nanostor\vms\blank1.vhd -SwitchName VMSwitch -Generation 1 -Verbose

# Make the VM highly available

Add-ClusterVirtualMachineRole -VMName $vm -Cluster $clustername -Verbose

# Start the VM

Start-VM -ComputerName hvtp301 -Name $vm -Verbose
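With Kerberos migration authentication and constrained delegation in place, a live migration between the two nodes makes a good smoke test. A sketch, using the node and cluster names from above:

```powershell
# Live migrate the clustered VM to the second node, then check ownership
Move-ClusterVirtualMachineRole -Name $vm -Node hvtp302 -Cluster $clustername -MigrationType Live -Verbose
Get-ClusterGroup -Cluster $clustername | Format-Table Name, OwnerNode, State
```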

As you can see, we are also creating a virtual machine here, which is obviously based on a VHD with the guest drivers installed. We tested how to do this manually by using DISM on an empty image.

The following example can be used to service your Nano VHD.

# Nano servicing

# Create a mountpoint

md mountpoint

# Mount the image into the mountpoint you just created

dism /Mount-Image /ImageFile:.\blank.vhd /Index:1 /MountDir:.\mountpoint

# Add your package. In this example, we will add packages for Storage, Cluster and Virtual Guest Services

dism /Add-Package /PackagePath:G:\NanoServer\Packages\ /Image:.\mountpoint

dism /Add-Package /PackagePath:G:\NanoServer\Packages\ /Image:.\mountpoint

dism /Add-Package /PackagePath:G:\NanoServer\Packages\ /Image:.\mountpoint
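Before committing, you can verify that the packages actually landed in the mounted image:

```powershell
# List the packages present in the mounted image
dism /Get-Packages /Image:.\mountpoint
```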

# Commit and dismount

dism /Unmount-Image /MountDir:.\mountpoint /commit

# Copy the vhd over to the smb share for the compute cluster

Copy-Item -Path .\blank.vhd -Destination \\nanostor\vms -Verbose

The following screen shot shows the Nano Cluster that is running a virtual machine with Nano Server installed:

NB: I am aware that my PowerShell cmdlets didn’t configure any VMswitch as part of the process. In fact, I have reported that as a bug as it is not possible to do so using the Hyper-V module. The VM switch was created successfully using the Hyper-V Manager console.

Happy Nano’ing, and I will cover more later.

(I also hope that I will see you during our SCU session on this topic next week)

Monday, August 10, 2015

My Sessions at System Center Universe

If you haven’t signed up for the conference by now, you should really hurry up.
Have a look at the sessions we are about to present during this year's conference here in Europe:

I will have 4 sessions this year, covering a lot of interesting stuff that I want to share with you.

On Monday, we will do a joint session together with Savision (partner) and several industry experts, such as Robert, Thomas and Kevin.
The session title is “Are ITIL and System Center BFF?”

In the modern world where organizations are facing new challenges to be more competitive, they are looking for better ways to improve the quality and efficiency of their IT service delivery using the ITIL framework. Gain valuable insights and best practices on how you can adopt the ITIL framework with Microsoft System Center and OMS from real world experiences together with Savision's Jonas Lenntun, and Microsoft MVPs Robert Hedblom, Kristian Nese, Kevin Greene and Thomas Maurer.

On Tuesday, I will have the “Early Morning Discussion – Microsoft Azure Stack” together with Thomas Maurer.

Bring all your questions and we will answer as many as we can, while consuming a crazy amount of coffee during this hour.
I will also bring my laptop in case we have to show you some live demos.

Immediately after the morning discussion, Thomas and I will take you into the next generation of infrastructure by introducing you to Nano Server.

In this session we will walk you through how Nano Server is changing the fundamental way we look at fabric servers and workloads. Nano Server will change the way we build servers and solve fundamental challenges we have encountered over the past years embracing cloud fundamentals.

I can guarantee you a lot of breathtaking demos during this session.
(Although the expected level of this session is 200, there will definitely be a lot of PowerShell code to cover, since Nano Server is a headless x64 server without any local console).

On Wednesday, I will go solo and talk about “Modern Application Modeling and Configuration for Infrastructure Clouds”.

For more than two decades, the way to manage applications on enterprise distributed systems has followed consistent patterns, and has proven to be very effective. But new paradigms have emerged and are changing how IT is delivering business value, and how IT interacts with business units and end users. Among these new paradigms are: cloud computing (including multi-tenancy and self-service), DevOps, outsourcing, hosting, and more. These paradigms come with different layers and assignments of responsibilities that underlying technologies must implement for the end-to-end process to remain efficient, scalable and flexible. This session goes through these changes, explains how Microsoft solutions are adapting to them, and summarizes the vision for modern application management in infrastructure as a service (whether on-prem, in the public cloud or both).

This should be a very interesting session to follow, where we will walk down memory lane and see where we eventually end up and how to deal with it.

Later on Wednesday, I will do my last session – and I am really looking forward to this one, as it is about a subject that is very close to my heart: “Deep-dive on Azure Resource Manager”.

Join me to take the shortcut on Azure Resource Manager (ARM). ARM will definitely have an impact on your career, and probably has already. Once Azure Stack arrives on-prem, we will have true consistency through ARM that will change the way we are modeling and delivering our services to the clouds. During this session, you will learn how a template is constructed and how to create and deploy your cloud resources.

Please note the following:
The ARM session is level 400 – and also a side session. That means there will only be room for 15 persons.

After the session, I really need to jump into a taxi and get to the airport.

I hope I'll see you in Basel in a few days :)

Monday, August 3, 2015

Explaining PowerShell Direct

One of the most frequently asked questions I get from my customers is something like this:

“We have a multi-tenant environment where everything is now software-defined, including the network by using network virtualization. As a result of that, we can no longer provide value added services to these customers, as we don’t have a network path into the environments”.

Last year, I wrote a blog post that talks about “Understanding your service offerings with Azure Pack” – which you can read here:

I won’t get into all of those details, but a common misunderstanding nowadays is that both enterprises and service providers expect that they will be able to manage their customers in the same way as they always have been doing.
The fact that many organizations are now building their cloud infrastructure with several new capabilities, such as network virtualization and self-servicing, makes this very difficult to achieve.

I remember back at TechDays in Barcelona, when I got the chance to talk with one of the finest Program Manager’s at Microsoft, Mr. Ben Armstrong.
We had a discussion about this and he was (as always) aware of these challenges and said he had some plans to simplify service management in a multi-tenant environment directly in the platform.

As a result of that, we can now play around with PowerShell Direct in Windows Server 2016 Technical Preview.


Walking down memory lane, we used to have Virtual Server and Virtual PC when we wanted to play around with virtualization in the Microsoft world. Both of these solutions were what we call a "type 2 hypervisor", where all hardware access was emulated through the operating system that was actually running the virtual instances.
With Windows Server 2008, we saw the first version of Hyper-V which was truly a type 1 hypervisor.
In the architecture of Hyper-V – and this is also the reason why I am telling you all of this – we have something called the VMBus.

The VMBus is a communication mechanism (high-speed memory) used for interpartition communication and device enumeration on systems with multiple active virtualized partitions. The VMBus is responsible for the communication between the parent partition (the Hyper-V host) and the child partition(s) (virtual machines with Integration Components installed/enabled).

As you can see, the VMBus is critical for communication between host and virtual machines, and we are able to take advantage of this channel in several ways already.

In Windows Server 2012 R2, we got the following:

·         Copy-VMFile

Copy-VMFile lets you copy file(s) from a source path to a specific virtual machine running on the host. This is all done within the context of the VMBus, so there's no need for network connectivity to the virtual machines at all. For this to work, you need to enable "Guest Services" on the target VMs as part of the integration services.

Here's an example of how to achieve this using PowerShell:

# Enable guest services
Enable-VMIntegrationService -Name 'Guest Service Interface' -VMName mgmtvm -Verbose

# Copy file to VM via VMBus
Copy-VMFile -Name mgmtvm -SourcePath .\myscript.ps1 -DestinationPath "C:\myscript.ps1" -FileSource Host -Verbose

·         Remote Console via VMBus

Another feature that shipped with Windows Server 2012 R2 was something called "Enhanced Session Mode". This leverages an RDP session via the VMBus.
Using RDP, we could now log on to a virtual machine directly from Hyper-V Manager and even copy files in and out of the virtual machine. In addition, USB redirection and printing were now possible – without any network connectivity from the host to the virtual machines.

And last but not least, this was the foundation for the Remote Console feature with System Center and Windows Azure Pack – which you can read more about here:

And now back to the point. With Windows Server 2016, we will get PowerShell Direct.

With PowerShell Direct we can now, in an easy and reliable way, run PowerShell cmdlets and scripts directly inside a virtual machine without relying on technologies such as PowerShell remoting, RDP and VMConnect.
Leveraging the VMBus architecture, we are literally bypassing all the requirements for networking, firewall, remote management and access settings.

However, there are some requirements at the time of writing:

·         You must be connected to a Windows 10 or a Windows Server technical preview host with virtual machines that are running Windows 10 or Windows Server technical preview as the guest operating system
·         You must be logged in with Hyper-V Admin creds on the host
·         You need user credentials for the virtual machine!
·         The virtual machine that you want to connect to must run locally on the host and be booted

Clearly, it should be obvious that both the host and the guest need to be on the same OS level. The reason for this is that the VMBus relies on the virtualization service client (VSC) in the guest and the virtualization service provider (VSP) on the host, which need to be the same version.
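The last two requirements are easy to check from the host before you try to connect. A small sketch (the VM name 'mgmtvm' is assumed, matching the examples below):

```powershell
# The VM must be running locally; the integration services version
# hints at whether the guest is on a matching OS level
Get-VM -Name mgmtvm | Format-Table Name, State, IntegrationServicesVersion
```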

But what’s interesting to see here is that in order to take advantage of PowerShell Direct, we need to have user credentials for the virtual machine’s guest operating system itself.
Also, if we want to perform something awesome within that guest, we probably need admin permissions too – unless we are able to dance around with JEA, but I haven't been able to test that yet.

Here's an example of what we can do using PowerShell Direct:

# Get credentials to access the guest
$cred = Get-Credential

# Create a PSSession targeting the VMName from the Hyper-V Host
Enter-PSSession -VMName mgmtvm -Credential $cred

# Running a cmdlet within the guest context
Get-Service | Where-Object {$_.Status -like "*running*" -and $_.Name -like "*vm*" }

[mgmtvm]: PS C:\Users\administrator.DRINKING\Documents> Get-Service | Where-Object {$_.Status -like "*running*" -and $_.Name -like "*vm*" }

Status   Name               DisplayName                           
------   ----               -----------                          
Running  vmicguestinterface Hyper-V Guest Service Interface      
Running  vmicheartbeat      Hyper-V Heartbeat Service            
Running  vmickvpexchange    Hyper-V Data Exchange Service        
Running  vmicrdv            Hyper-V Remote Desktop Virtualizati...
Running  vmicshutdown       Hyper-V Guest Shutdown Service       
Running  vmictimesync       Hyper-V Time Synchronization Service 
Running  vmicvmsession      Hyper-V VM Session Service           
Running  vmicvss            Hyper-V Volume Shadow Copy Requestor

As you can see, [mgmtvm] shows that the context is the virtual machine and we have successfully listed all the running services related to the integration services.

Although this is very cool and shows that it works, I’d rather show something that might be more useful.

We can enter a PSSession as shown above, but we can also invoke a command directly through Invoke-Command and use -ScriptBlock.

#Invoke command, create and start a DSC configuration on the localhost
Invoke-Command -VMName mgmtvm -Credential (Get-Credential) -ScriptBlock {
    # DSC Configuration
    Configuration myWeb {
        Node "localhost" {
            WindowsFeature Web {
                Ensure = "Present"
                Name = "Web-Server"
            }
        }
    }

    # Enact the DSC config (this generates the MOF file)
    myWeb

    # Start and apply the DSC configuration
    Start-DscConfiguration .\myWeb -Wait -Force -Verbose
}

From the example above, we are actually invoking a DSC configuration that we are creating and applying on the fly, from the host to the virtual machine using PowerShell Direct.

Here’s the output:

PS C:\Users\knadm> #Invoke command, create and start a DSC configuration on the localhost
Invoke-Command -VMName mgmtvm -Credential (Get-Credential) -ScriptBlock {
    # DSC Configuration
    Configuration myWeb {
        Node "localhost" {
            WindowsFeature Web {
                Ensure = "Present"
                Name = "Web-Server"
            }
        }
    }

    # Enact the DSC config (this generates the MOF file)
    myWeb

    # Start and apply the DSC configuration
    Start-DscConfiguration .\myWeb -Wait -Force -Verbose
}
cmdlet Get-Credential at command pipeline position 1
Supply values for the following parameters:
WARNING: The configuration 'myWeb' is loading one or more built-in resources without explicitly importing associated modules. Add Import-DscResource -ModuleName 'PSDesiredStateConfiguration' to your configuration to avoid this message.

    Directory: C:\Users\administrator.DRINKING\Documents\myWeb

Mode                LastWriteTime         Length Name                                                         PSComputerName                                            
----                -------------         ------ ----                                                         --------------                                             
-a----       03-08-2015     11:34           1834 localhost.mof                                                mgmtvm                                                    
VERBOSE: Perform operation 'Invoke CimMethod' with following parameters, ''methodName' = SendConfigurationApply,'className' = MSFT_DSCLocalConfigurationManager,'namespace
Name' = root/Microsoft/Windows/DesiredStateConfiguration'.
VERBOSE: An LCM method call arrived from computer MGMT16 with user sid S-1-5-21-786319967-1790529733-2558778247-500.
VERBOSE: [MGMT16]: LCM:  [ Start  Set      ]
VERBOSE: [MGMT16]: LCM:  [ Start  Resource ]  [[WindowsFeature]Web]
VERBOSE: [MGMT16]: LCM:  [ Start  Test     ]  [[WindowsFeature]Web]
VERBOSE: [MGMT16]:                            [[WindowsFeature]Web] The operation 'Get-WindowsFeature' started: Web-Server
VERBOSE: [MGMT16]:                            [[WindowsFeature]Web] The operation 'Get-WindowsFeature' succeeded: Web-Server
VERBOSE: [MGMT16]: LCM:  [ End    Test     ]  [[WindowsFeature]Web]  in 22.0310 seconds.
VERBOSE: [MGMT16]: LCM:  [ Start  Set      ]  [[WindowsFeature]Web]
VERBOSE: [MGMT16]:                            [[WindowsFeature]Web] Installation started...
VERBOSE: [MGMT16]:                            [[WindowsFeature]Web] Continue with installation?
VERBOSE: [MGMT16]:                            [[WindowsFeature]Web] Prerequisite processing started...
VERBOSE: [MGMT16]:                            [[WindowsFeature]Web] Prerequisite processing succeeded.
WARNING: [MGMT16]:                            [[WindowsFeature]Web] You must restart this server to finish the installation process.
VERBOSE: [MGMT16]:                            [[WindowsFeature]Web] Installation succeeded.
VERBOSE: [MGMT16]:                            [[WindowsFeature]Web] successfully installed the feature Web-Server
VERBOSE: [MGMT16]:                            [[WindowsFeature]Web] The Target machine needs to be restarted.
VERBOSE: [MGMT16]: LCM:  [ End    Set      ]  [[WindowsFeature]Web]  in 89.0570 seconds.
VERBOSE: [MGMT16]: LCM:  [ End    Resource ]  [[WindowsFeature]Web]
VERBOSE: [MGMT16]:                            [] A reboot is required to progress further. Please reboot the system.
WARNING: [MGMT16]:                            [] A reboot is required to progress further. Please reboot the system.
VERBOSE: [MGMT16]: LCM:  [ End    Set      ]
VERBOSE: [MGMT16]: LCM:  [ End    Set      ]    in  113.0260 seconds.
VERBOSE: Operation 'Invoke CimMethod' complete.
VERBOSE: Time taken for configuration job to complete is 115.028 seconds

In this example I am using one of the built-in DSC resources in Windows Server. If I wanted to do more advanced configuration that required custom DSC resources, I would have to copy those resources to the guest using the Copy-VMFile cmdlet first. All in all, I am able to do a lot of VM management with the new capabilities through the VMBus.
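That combined workflow could look something like the sketch below. Note that the module name ('xWebAdministration') and all paths are purely illustrative assumptions, not from a tested deployment:

```powershell
# Copy a custom DSC resource module into the guest over the VMBus...
Enable-VMIntegrationService -Name 'Guest Service Interface' -VMName mgmtvm
Copy-VMFile -Name mgmtvm -FileSource Host `
    -SourcePath 'C:\Modules\xWebAdministration.zip' `
    -DestinationPath 'C:\Modules\xWebAdministration.zip'

# ...then use PowerShell Direct to unpack it and confirm the resources are visible
Invoke-Command -VMName mgmtvm -Credential (Get-Credential) -ScriptBlock {
    Expand-Archive -Path 'C:\Modules\xWebAdministration.zip' `
        -DestinationPath 'C:\Program Files\WindowsPowerShell\Modules' -Force
    Get-DscResource -Module xWebAdministration
}
```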

So, what can we expect to see now that we have the opportunity to provide management directly, native in the compute platform itself?

Let me walk you through a scenario here where the tenant wants to provision a new virtual machine.

In Azure Pack today, we have a VM extension through the VM Role. If we compare it to Azure and its new API through Azure Resource Manager, we have even more extensions to play around with.
These extensions give us an opportunity to do more than just OS provisioning. We can deploy and configure advanced applications just the way we want to.
Before you continue to read this, please note that I am not saying that PowerShell Direct is a VM extension, but still something useful you can take advantage of in this scenario.

So a tenant provisions a new VM Role in Azure Pack, and the VM Role is designed with a checkbox that says "Enable Managed Services".

Now, depending on how each service provider would like to define their SLAs, the tenant has made it clear that they want managed services for this particular VM Role, and hence needs to share/create credentials for the service provider to interact with the virtual machines.

I’ve already been involved in several engagements in this scope and I am eager to see the end-result once we have the next bits fully released.

Thanks to the Hyper-V team with Ben and Sarah, for delivering value added services and capabilities on an ongoing basis!