
Wednesday, August 19, 2015

Getting started with Nano Server for Compute and Cluster

I assume you have heard the news that Windows Server and System Center 2016 TP3 are publicly available by now.

This means you can download and play around with the bits in order to get some early hands-on experience on the available scenarios and features.

Some of the key scenarios available in this preview are the following:

·         Nano Server (enhanced – and covered in this blog post)
·         Windows Container (new – and very well explained by Aidan Finn at www.aidanfinn.com )
·         Storage Spaces Direct (enhanced – and covered here very soon)
·         Network Controller (new – and covered here in detail very, very soon)

So, let us start to talk about Nano Server.

During Ignite earlier this year, Nano Server was introduced by the legend himself, Mr. Snover.
Let us be very clear: Nano Server is not even comparable to Server Core, which Microsoft has been pushing since its release, where you run a full Windows Server without any graphical user interface. However, some of the concepts are the same and apply to Nano as well.

Some of the drivers for Nano Server were based on customer feedback, and you might be familiar with the following statements:

-          Reboots impact my business
Think about Windows Server in general, not just Hyper-V in a cluster context – which more or less deals with reboots.
Very often you would find yourself in a situation where you had to reboot a server due to an update – of a component you in fact weren't using, nor even aware was installed on the server (that's a different topic, but you get the point).

-          What’s up with the server image? It’s way too big!
From a WAP standpoint, using VMM as the VM Cloud Provider, you have been doing plenty of VM deployments. You normally have to sit and wait for several minutes just for the data transfer to complete. Then there's the VM customization if it's a VM Role, and so on and so forth. Although things have been improving over the last few years with Fast-File-Copy and support for ODX, the image size is very big. And don't forget - this affects backup, restore and DR scenarios too, in addition to the extra cost on our networking fabric infrastructure.

-          Infrastructure requires too many resources
I am running and operating a large datacenter today, where I have effectively been able to standardize on only the server roles and features I need. However, the cost per server is too high when it comes to utilization, and that really makes an impact on the VM density.
Higher VM density lowers my costs and increases my efficiency & margins.

I just want the components I need….and nothing more… please

So, speaking of which: what components do we really need?

Nano Server is designed for the Cloud, which means it's effective and follows a "Zero-footprint" model. Server Roles and optional features live outside of the Nano Server image itself, as stand-alone packages that we add to the image by using DISM. More about that later.
Nano Server is a “headless”, 64-bit only, deployment option for Windows Server that according to Microsoft marketing is refactored to focus on “Cloud OS Infrastructure” and “Born-in-the-cloud applications”.

The key roles and features we have today are the following:

-          Hyper-V
Yes, this is (if you ask me) the key – and the flagship when it comes to Nano Server. You might remember the stand-alone Hyper-V Server that was based on the Windows kernel but only ran the Hyper-V role? Well, Nano Server is much smaller and based only on Hyper-V, sharing the exact same architecture as the hypervisor we know from the GUI-based Windows Server edition.

-          Storage (SOFS)
As you probably know already, compute without storage is quite useless, given the fact that virtual machines are nothing but a set of files on a disk :)
With a package for storage, we are able to instantiate several Nano Servers with the storage role to act as storage nodes based on Storage Spaces Direct (shared-nothing storage). This is very cool and will of course qualify for its own blog post in the near future.

-          Clustering
Both Hyper-V and Storage (SOFS) rely (in many situations) on the Windows Failover Cluster feature. Luckily, the cluster feature serves as its own package for Nano Server, and we can effectively enable critical infra roles in an HA configuration using clustering.

-          Windows Container
This is new in TP3 – and I suggest you read Aidan’s blog about the topic. However, you won’t be able to test/verify this package on Nano Server in this TP, as it is missing several of its key requirements and dependencies.

-          Guest Package
Did you think that you had to run Nano Server on your physical servers only? Remember that Nano is designed for the "born-in-the-cloud applications" too, so you can of course run it in virtual machines. However, you then have to add the Guest Package to make the VMs aware that they are running on top of Hyper-V.

In addition, we have packages for OEM Drivers (package of all drivers in Server Core), OneCore ReverseForwarders and Defender.

Remote Management

Nano Server is all about being effective: leveraging the cloud computing attributes, scaling well and letting you achieve more. In order to do so, we must understand that Nano Server is all about remote management.
With only a subset of Win32 support, PowerShell Core and ASP.NET 5, we aren't able to use Nano Server for everything. But that is also the point here.

Although Nano is refactored to run on CoreCLR, we have full PowerShell language compatibility and remoting. Examples here are Invoke-Command, New-PSSession, Enter-PSSession etc.
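To give you an idea, this is what a typical remote management session against a Nano Server could look like (a minimal sketch, using the computer name from the deployment later in this post; if the box is not domain joined yet, you may need to add it to your TrustedHosts first):

# Allow the Nano Server as a trusted host if it is not domain joined yet
Set-Item WSMan:\localhost\Client\TrustedHosts -Value 'nanohosttp3' -Force

# Interactive remoting session
Enter-PSSession -ComputerName nanohosttp3 -Credential (Get-Credential)

# Or fire off a single command remotely
Invoke-Command -ComputerName nanohosttp3 -ScriptBlock { Get-Process }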

Getting started with Nano Server for Compute

Alright, so let us get over to some practical examples on how to get started with Nano Server for Compute, and how to actually do the configuration.

Originally, this blog post was a bit longer than it is now, since Microsoft just published some new content over at TechNet. There you will find good guidance on how to deploy Nano: https://technet.microsoft.com/en-us/library/mt126167.aspx

I must admit that the experience of installing and configuring Nano wasn't state of the art in TP2.
Now, in TP3, you can see that we have the required scripts and files located on the media itself, which simplifies the process.



1.       Mount the media and dot-source the 'convert-windowsimage.ps1' and 'new-nanoserverimage.ps1' scripts in a PowerShell ISE session
2.       Next, see the following example of how to create a new image for your Nano Server (this will create a VHD that you can either upload to a WDS server if you want to deploy it on a physical server, or mount to a virtual machine)



3.       By running the cmdlet, you should have a new image

In our example, we uploaded the vhd to our WDS (Thanks Flemming Riis for facilitating this).

If you pay close attention to the $paramHash table, you can see the following:

$paramHash = @{
MediaPath = 'G:\'
BasePath = 'C:\nano\new'
TargetPath = 'C:\Nano\compute'
AdministratorPassword = $pass
ComputerName = 'nanohosttp3'
Compute = $true
Clustering = $true
DriversPath = "c:\drivers"
EnableIPDisplayOnBoot = $True
EnableRemoteManagementPort = $True
Language = 'en-us'
DomainName = 'drinking.azurestack.coffee'
}

Compute = $true and Clustering = $true.
This means that both the compute and the clustering package will be added to the image. In addition, since we are deploying this on a physical server, we learned the hard way (thanks again, Flemming) that we needed some HP drivers for the network and storage controllers. We are therefore pointing to the location (DriversPath = "c:\drivers") where we extracted the drivers, so they get added to the image.
Through this process, we are also pre-creating the computer name object in Active Directory, as we want to domain join the box to "drinking.azurestack.coffee".
If you pay attention to the guide on TechNet, you can see how to set a static IP address on your Nano Server. We have simplified the deployment process in our fabric, as we are rapidly deploying and decommissioning compute on the fly, so all servers get their IP config from a DHCP server.
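For completeness, this is roughly what the dot-sourcing and the image creation itself look like when you put it all together (a sketch; the script locations on your media may differ, and $pass is simply the administrator password as a secure string):

# Dot-source the scripts from the mounted media (G:\ in this example; adjust the paths to your media)
. 'G:\new-nanoserverimage.ps1'
. 'G:\convert-windowsimage.ps1'

# The administrator password referenced in $paramHash
$pass = Read-Host -Prompt 'Administrator password' -AsSecureString

# Create the image by splatting the hash table shown above
New-NanoServerImage @paramHash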

Once the servers were deployed (this literally took under 4 minutes!), we could move forward and verify that everything was as we desired.

1)      The Nano Servers were joined to the domain
2)      We had remote access to the Nano Servers
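Both points are quick to verify from the management server (a small sketch using the host names from this deployment):

# 1) The computer objects exist in the domain (requires the AD PowerShell module)
Get-ADComputer -Identity hvtp301
Get-ADComputer -Identity hvtp302

# 2) WinRM answers on the Nano Servers, so we can manage them remotely
Test-WSMan -ComputerName hvtp301
Test-WSMan -ComputerName hvtp302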



Since Nano Server is all about remote management, we used the following PowerShell cmdlets in order to configure the compute nodes, create the cluster etc.

# Preparing your mgmt server

Install-WindowsFeature -Name RSAT-Hyper-V-Tools, Hyper-V-Tools, Hyper-V-PowerShell, RSAT-Clustering, RSAT-Clustering-MGMT, RSAT-AD-PowerShell -Verbose

# Creating Nano Compute Cluster

$clustername = "nanocltp3"
$nodes = "hvtp301", "hvtp302"   # an array of node names, not a single comma-separated string
$ip = "10.0.0.50"

New-Cluster -Name $clustername -Node $nodes -StaticAddress $ip -NoStorage -Verbose

# Connecting to storage server and create SMB share with proper permissions

$storage = "nanostor"

Enter-PSSession -ComputerName $storage

MD D:\VMS
# Grant the admin account and both compute node computer accounts full control
ICACLS.EXE D:\VMS --% /Grant drinking\knadm:(CI)(OI)F
ICACLS.EXE D:\VMS --% /Grant drinking\hvtp301$:(CI)(OI)F
ICACLS.EXE D:\VMS --% /Grant drinking\hvtp302$:(CI)(OI)F
ICACLS.EXE D:\VMS /Inheritance:R
New-SmbShare -Name VMS -Path D:\VMS -FullAccess drinking\knadm, drinking\hvtp301$, drinking\hvtp302$

# Configuring Constrained Delegation

Enable-SmbDelegation -SmbServer $storage -SmbClient hvtp301 -Verbose
Enable-SmbDelegation -SmbServer $storage -SmbClient hvtp302 -Verbose

# Configure Hyper-V settings for Cluster usage

$vmhosts = @("hvtp301", "hvtp302")
$vhdpath = "\\nanostor\vms\"
$vmconfigpath = "\\nanostor\vms\"
$lmsettings = 5   # maximum simultaneous live migrations

foreach ($vmhost in $vmhosts)
    {
        Set-VMHost -ComputerName $vmhost -MaximumVirtualMachineMigrations $lmsettings -VirtualHardDiskPath $vhdpath -VirtualMachinePath $vmconfigpath -VirtualMachineMigrationAuthenticationType Kerberos -Verbose
    }

# Create VM based on Nano Image

$vm = "nanovm1"
$nanohost = "hvtp301"

New-VM -ComputerName $nanohost -Name $vm -MemoryStartupBytes 512mb -VHDPath \\nanostor\vms\blank1.vhd -SwitchName VMSwitch -Generation 1 -Verbose

# Make the VM highly available

Add-ClusterVirtualMachineRole -VMName $vm -Cluster $clustername -Verbose

# Start the VM

Start-VM -ComputerName $nanohost -Name $vm -Verbose

As you can see, we are also creating a virtual machine here, which is obviously based on a VHD with the guest drivers installed. We tested how to do this manually by using DISM on an empty image.

The following example can be used to service your Nano VHD.

# Nano servicing

# Create a mountpoint

md mountpoint

# Mount the image into the mountpoint you just created

dism /Mount-Image /ImageFile:.\blank.vhd /Index:1 /MountDir:.\mountpoint

# Add your package. In this example, we will add packages for Storage, Cluster and Virtual Guest Services

dism /Add-Package /PackagePath:G:\NanoServer\Packages\Microsoft-NanoServer-Guest-Package.cab /Image:.\mountpoint

dism /Add-Package /PackagePath:G:\NanoServer\Packages\Microsoft-NanoServer-FailoverCluster-Package.cab /Image:.\mountpoint

dism /Add-Package /PackagePath:G:\NanoServer\Packages\Microsoft-NanoServer-Storage-Package.cab /Image:.\mountpoint

# Commit and dismount

dism /Unmount-Image /MountDir:.\mountpoint /commit

# Copy the vhd over to the smb share for the compute cluster

Copy-Item -Path .\blank.vhd -Destination \\nanostor\vms -Verbose

The following screen shot shows the Nano Cluster that is running a virtual machine with Nano Server installed:



NB: I am aware that my PowerShell cmdlets didn't configure any VM switch as part of the process. In fact, I have reported that as a bug, as it is not possible to do so using the Hyper-V module in this preview. The VM switch was created successfully using the Hyper-V Manager console.
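For reference, on a full Windows Server host the equivalent remote call would look something like the sketch below, and this is the kind of call that failed against Nano in TP3 (the adapter name is a placeholder for whatever your physical NIC is called):

# Create an external VM switch remotely (works against full Windows Server, not Nano in TP3)
New-VMSwitch -ComputerName hvtp301 -Name VMSwitch -NetAdapterName 'Ethernet' -AllowManagementOS $true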

Happy Nano’ing, and I will cover more later.

(I also hope that I will see you during our SCU session on this topic next week)


Saturday, March 16, 2013

VMM 2012 SP1 - Real World Example


A real-world example of using the new capabilities in Windows Server 2012, Hyper-V and System Center 2012 SP1.

Let me start this blog post by explaining how glad I am that we are finally here, with Windows Server 2012 and System Center 2012 SP1.

The waiting has been tough, and many customers have been on the edge before implementing Hyper-V without management. However, my experience is that many customers are moving away from VMware and jumping over to Hyper-V and System Center. V2V is probably my closest friend these days, together with a couple of Logical Switches. More on that later in this blog post.

So in this example, I would like to tell you about an enterprise customer who is running major datacenters using VMware with vCenter. They were doing it all in the traditional way, using Fibre Channel from their hosts, connected to some heavy, expensive and noisy storage.

So how did we present a better solution for them, more suited for the future, using technology from Microsoft?

The customer would like to utilize their investments better, and do things more economically and cost-effectively, without losing any performance, functionality, availability or any of the other factors you put into your SLA.

Key elements:

·         Windows Server 2012

·         Hyper-V

o   SMB3.0

o   Scale-Out File Server Role

o   NIC Teaming

o   Network Virtualization

o   Failover Clustering

·         System Center 2012 SP1

o   Virtual Machine Manager

o   Operations Manager

o   Orchestrator

·         Windows Azure Services for Windows Server (Katal)

o   SPF (SC 2012 SP1 – Orchestrator)

Since this is a large environment, designed to scale, the first thing we did was to install Virtual Machine Manager.

In a solution like this, VMM is key to streamlining the configuration of Hyper-V hosts and managing the Fabric (pooled infrastructure resources). So since this would be a very important component, we installed VMM in a failover cluster as an HA role.
 
·         Dedicated cluster for Virtual Machine Manager

·         Two Windows Server 2012 nodes

·         Configuration of Distributed Key Management

·         Connected to dedicated SQL cluster for System Center

·         VMM console installed on a dedicated management server

With this baseline in place, we started to prepare the fabric.

Instead of using the traditional way of delivering shared storage to the hosts, SMB 3.0 was introduced as an alternative. The customer was interested to see the performance of this, and the ability to manage it from Virtual Machine Manager. In the test environment, we set up the hosts with multiple storage options.

Test environment:

·         Two Hyper-V hosts in a cluster managed by Virtual Machine Manager

·         Both hosts connected to shared storage using:

o   Fibre Channel directly to their SAN

o   2 x 10GbE NICs in a NIC team, using dedicated virtual network adapters for SMB 3.0 traffic, accessing a Scale-Out File Server cluster.

 

Overview

After testing this, the results were clear.

1.       The customer had gained the same performance as with Fibre Channel.

2.       In addition, they had now simplified management by using file shares instead of dedicating LUNs to their clusters, leading to better utilization.

3.       Moreover, with better utilization, they were able to scale their clusters in a way they could not before.

4.       By calculating on this for production, they found they could reduce their costs significantly by using Ethernet infrastructure instead of Fibre Channel. And this was key, since they could leverage Ethernet and move away from HBA adapters on their hosts.

The networking part was probably the most interesting in this scenario, because if you think about it, a Hyper-V cluster configuration is all about networking.

And by using NIC teaming, QoS, network virtualization, SMB3.0 and more, it’s important to pay attention to the goal of the design as well as the infrastructure in general.

Every host had 2 x 10GbE modules installed, and the customer wanted load balancing and failover on every network.
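To illustrate what that design looks like under the hood, here is a sketch in plain Windows Server 2012 PowerShell (the adapter, team and switch names are placeholders; in the actual solution this configuration was pushed out through logical switches in VMM, as described below):

# Team the two 10GbE adapters
New-NetLbfoTeam -Name 'Team1' -TeamMembers 'NIC1', 'NIC2' -TeamingMode SwitchIndependent -LoadBalancingAlgorithm HyperVPort

# Hyper-V switch on top of the team, with dedicated virtual adapters per traffic type
New-VMSwitch -Name 'VMSwitch' -NetAdapterName 'Team1' -AllowManagementOS $false
Add-VMNetworkAdapter -ManagementOS -Name 'Management' -SwitchName 'VMSwitch'
Add-VMNetworkAdapter -ManagementOS -Name 'LiveMigration' -SwitchName 'VMSwitch'
Add-VMNetworkAdapter -ManagementOS -Name 'SMB1' -SwitchName 'VMSwitch'
Add-VMNetworkAdapter -ManagementOS -Name 'SMB2' -SwitchName 'VMSwitch'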

We designed the following logical networks in Virtual Machine Manager:
 
·         Management

·         Live Migration

·         Guests

·         SMB1 (on the Scale-Out File Server cluster nodes, we made the SMB networks available for clients and registered the IP addresses in DNS. This is required if you want to use Multi-Channel)

·         SMB2

·         Cluster

Then, we created network sites and IP subnets with associated VLANs.

For each logical network, we created a VM Network associated with it.
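As an illustration, creating one of these logical networks with a network site and an associated VM network from the VMM PowerShell module looks roughly like this (a sketch; the names, subnet and VLAN are examples):

# Logical network with one network site (subnet/VLAN pair) scoped to a host group
$ln = New-SCLogicalNetwork -Name 'Management'
$subnetVlan = New-SCSubnetVLan -Subnet '10.0.10.0/24' -VLanID 10
New-SCLogicalNetworkDefinition -Name 'Management - Site1' -LogicalNetwork $ln -VMHostGroup (Get-SCVMHostGroup -Name 'All Hosts') -SubnetVLan $subnetVlan

# The VM network associated with the logical network
New-SCVMNetwork -Name 'Management' -LogicalNetwork $ln -IsolationType 'NoIsolation'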
For more information about NIC teaming in VMM and Network Virtualization, check these blog posts:
 
 
 
Next, we created Native Port Profiles for the teams and the virtual adapters, and grouped them in Logical Switches.
We prepared the Fabric further by integrating with PXE and WSUS to secure life-cycle management of the resources in the Fabric.

All set. We started to deploy Hyper-V hosts, and streamlined the configuration by putting them into the right host groups, applying logical switches and presenting file shares to them.

By taking a couple of steps back, I can clearly see that VMM is an absolutely necessary framework for a highly available datacenter solution today. Almost every step was performed from the VMM console, and this was highly appreciated by the customer.

The next step was to deploy virtual machines and leverage the flexibility of templates, profiles and services.

Ok, we had a Private Cloud infrastructure up and running, but there was still some work to do.

Migration from VMware to Hyper-V :)

Ok, if you want to perform this operation in bulk, converting many virtual machines at once, then you must either use your PowerShell ninja skills combined with Orchestrator, or some secret tool from Microsoft that also involves Veeam.

But if you want to take this slowly while doing other things simultaneously, then VMM is your friend.

Things to be aware of:

-          Make sure the networks that your VMware virtual machines are connected to, are available on the Hyper-V hosts

-          Make sure you have a supported VMware infrastructure (5.1 is the only version that is supported, but it might work with 5.0 as well).

-          Uninstall VMware tools manually on the VMs you will convert.

-          Power off the VMs afterwards.

-          Add both vCenter and then VMware ESX hosts/clusters in VMM.

-          Run a Virtual-to-Virtual (V2V) conversion in Virtual Machine Manager (see the sketch below).
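For the VMM route, a single conversion from the PowerShell side looks roughly like this (a sketch; the VM, host and path names are examples):

# Grab the powered-off VMware VM (visible in VMM once vCenter/ESX is added) and convert it
$vmwareVM = Get-SCVirtualMachine -Name 'vmware-vm01'
$targetHost = Get-SCVMHost -ComputerName 'hyperv01'
New-SCV2V -VM $vmwareVM -VMHost $targetHost -Path 'C:\ClusterStorage\Volume1' -Name 'vmware-vm01'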

This is an ongoing process and will require some downtime for the VMs. An extra step of converting the VHDs to dynamic VHDX can also be evaluated.

Hopefully this blog post gave you some ideas on how to leverage Windows Server 2012, Hyper-V and Virtual Machine Manager.
Of course, we integrated with Operations Manager as well, to get monitoring in place for our fabric and virtual machines. This is key to ensuring availability and stable operations.
The self-service solution landed on Katal, so that they could expose their Clouds/Stamps/Plans to their customers in a really good-looking UI with lots of functionality. I will cover this in a more detailed blog post later.

Tuesday, January 15, 2013

Hyper-V Replica Broker: Cluster network name resource failed to create its associated computer object in domain


Cluster network name resource failed to create its associated computer object in domain...
 
As the title says, there are some permission issues in Active Directory.
A customer of mine was deploying Hyper-V Replica in their datacenter today, and everything seemed OK during creation.
However, after checking the roles afterwards, the Hyper-V Replica Broker was having some errors.
A closer look at the critical events for this source showed that the cluster account was not able to create child objects in the correct container in Active Directory.

The text for the associated error code is: A constraint violation occurred.

When you enable the Hyper-V Replica Broker role within a Failover Cluster, you must specify a name for the role as well as an IP address. This is because the Hyper-V Replica Broker is an HA role, where the secondary replica servers will target this object during replication. It is just like any other active/passive cluster role.

The Cluster Name Object (hyper-v_cluster_name$) is responsible for the creation of objects, and if this object does not have the required permissions to create child objects in the same container where it lives, you will get this error.

Talk to your domain administrator to sort this out.
Normally the domain administrator will delegate control in Active Directory so that your CNO can create child objects, as sketched below.
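In practice, the fix usually means delegating the "Create Computer objects" right to the CNO on the OU where the cluster objects live. A sketch (the OU path and cluster account are examples):

# Grant the CNO the right to create computer objects in its own OU
dsacls "OU=Clusters,DC=contoso,DC=com" /G "CONTOSO\hyperv_cluster_name$:CC;computer"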

After we fixed this, everything was in order and the Hyper-V Replica Broker role was Running, healthy and smiling.

Wednesday, November 14, 2012

Cluster Shared Volumes (2.0) in Windows Server 2012

Cluster Shared Volumes (CSV) was first introduced in Windows Server 2008 R2, and was almost as popular as sliced bread at the time. A great enhancement, and it was solely meant for Hyper-V virtual machines.

Instead of using a dedicated LUN for each VM (so that you could migrate it between cluster nodes without taking down the other VMs on the same LUN), as in Windows Server 2008, you now had the possibility to store multiple VMs on the same LUN by converting it to CSV.

CSV is a distributed file access solution that lets multiple nodes in a cluster access the same file system simultaneously.

This means that many VMs can share the same volume, while you can fail over, live migrate and move VMs without affecting the other virtual machines. This leads to better utilization of your storage, since you don't have to place VMs on separate disks, and CSVs do not depend on drive letters, so you can scale this configuration out if you'd like.
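Converting a clustered disk to CSV is a one-liner (a sketch; the disk name is an example from the cluster's available storage):

# Convert an available-storage disk to a Cluster Shared Volume
Add-ClusterSharedVolume -Name 'Cluster Disk 2'

# The volume then shows up under C:\ClusterStorage\ on every node
Get-ClusterSharedVolume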

What’s the latest and greatest related to CSV 2.0:

 

·         Windows Server 2012 has brought some changes to the architecture, so there's now a new NTFS-compatible file system, called CSVFS. This means that applications running on a CSV are able to discover this and leverage it. But still, the underlying file system is NTFS.

 

·         BitLocker support is added to the list, which means you can secure your CSVs at a remote location. The Cluster Name Object is used as the identity for decryption, and you should include this in every cluster deployment you are doing, because the performance penalty is less than 1%.

 

·         Direct I/O for data access, which gives enhancements for virtual machine creation and copy operations.

 

·         Support for roles other than virtual machines. There's an entirely new story around SMB in Windows Server 2012, and CSV is also affected by this. You can now put an SMB file share on top of your CSVs, which makes it easier to scale out your cluster storage and to share a single CSV among several clusters, where they will access their shares instead of volumes (see the sketch after this list). Just a reminder: you can run Hyper-V virtual machines from an SMB file share in Windows Server 2012. This requires that both the server and the client are using SMB 3.0.

 

·         The marriage to Active Directory has come to an end. External authentication dependencies, which you would run into if you started your cluster without an available AD, are now removed. This gives us an easier setup of clusters, with less trouble and fewer dependencies.

 

·         File backup, supporting requestors that are running Windows Server 2008 R2 or 2012. You can use application-consistent and crash-consistent VSS snapshots.

 

·         SMB support with Multichannel and Direct. CSV traffic can now stream across multiple networks in the cluster and utilize the performance of your NICs that support RDMA.

 

·         Integration with Storage Spaces (new in Windows Server 2012), so that you can leverage your cheap disks (just a bunch of disks, JBOD) in a cluster environment

 

·         Maintenance by scanning and repairing volumes with no downtime
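Tying back to the SMB bullet above: placing a continuously available share on top of a CSV is straightforward on a Scale-Out File Server node (a sketch; the path and accounts are examples):

# A continuously available SMB share on a CSV, granting the Hyper-V hosts access
New-SmbShare -Name 'VMs' -Path 'C:\ClusterStorage\Volume1\Shares\VMs' -FullAccess 'contoso\hyperv01$', 'contoso\hyperv02$' -ContinuouslyAvailable $true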

 
Although there’s several enhancement for VM mobility in 2012, where you can move VMs without shared storage, there are still significant benefits by clustering your Hyper-V hosts.

Remember: No cluster = no high availability.