
Sunday, September 27, 2015

Explaining Windows Server Containers - Part Three

In the last blog post, I talked about the architecture of container images and how you can use them in much the same way our kids use Lego bricks.

Today, I want to shift focus a bit and talk more about managing the container life cycle with Docker in Windows Server 2016 Technical Preview 3.

If you have any challenges or problems in your IT business today and ask me for advice, I would most likely point you to something that adds more abstraction.
Abstraction is key, and is how we have solved big and common challenges so far in this industry.
When we covered the architecture of containers in part 1, we compared it with server virtualization.
Both technologies solve the same challenges. However, they do it at different abstraction layers.

With cloud computing we have had the IaaS service model for a long time already, helping organizations speed up their processes and development by leveraging this service model in a private cloud, a public cloud or both – a hybrid cloud.
However, being able to spin up new virtual machines isn’t necessarily the answer to all the problems in the world.
Sure, it makes you more agile and lets you utilize your resources far better than physical machines, but it is still a machine. A machine requires management at the OS level, such as patching, backup, configuration and more. Since you also have access at the OS level, you might end up in situations where you have to take actions that involve networking as well.

This is very often where it gets complex for organizations with a lot of developers.
They need to focus on, learn and adopt new skill sets just to be able to test their applications.

Wouldn’t it be nice if they didn’t have to care about this jungle of complexity at all, and could know nothing about the environment they will be shipping software into?
Given that different people are involved in developing the software and in managing the environment that software runs in, the challenges grow together with the organization itself, and scale becomes a problem.

This is where containers come to the rescue – or do they?

Containers take a good approach here, since from the host environment’s perspective every application within a container looks the same from the outside.
We can now wrap our software inside a container, ship the container image to a shared repository and avoid dealing with any of the complexity that a managed OS normally requires from us.

I have seen this in action, and here’s an example that usually triggers people’s interest (I’ll sketch the commands right after the list):

1)      A developer creates something new – or simply commits some changes to the version control system (GitHub, VSO etc.).
2)      A new image (a Docker image in this case) is built with the application.
3)      The new Docker image goes through the entire testing and approval process.
4)      The image is committed to a shared repo.
5)      The new Docker image is deployed into production.
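To make the flow above a little more concrete, here is a minimal sketch of what steps 2–5 could look like from the command line. The registry and image name (myregistry/webapp) are made-up placeholders, and not every one of these operations is fully supported against a Windows container host in TP3 yet:

# 2) Build a new image from the application's Dockerfile
docker build -t myregistry/webapp:1.0 .

# 4) Push the tested and approved image to a shared repository
docker push myregistry/webapp:1.0

# 5) Pull and run the image on a production container host
docker pull myregistry/webapp:1.0
docker run -d --name webapp myregistry/webapp:1.0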

This seems like a well-known story we all have heard in the IaaS world, right?
Please note that no infrastructure was touched from the developer perspective during these steps.

This was just one example of how real-world organizations are using containers today, and I will cover more good use cases as we move forward in this blog series.
It is important to be honest and admit that new technologies that give us more and more capabilities, features and possibilities will, at the same time, introduce some new challenges as well.

With containers, we can easily end up in a scenario that reminds us a bit of the movie “Inception” ( https://en.wikipedia.org/wiki/Inception ). It can be hard to know exactly where you are when you are working with – and have access to – all the different abstraction layers.

In Technical Preview 3 of Windows Server 2016, Windows Server containers can be managed both with PowerShell and Docker.

What exactly is Docker?

Docker has been around for a while and enables automated deployment of applications into containers by providing an additional layer of abstraction and automation of OS-level virtualization on Linux, Mac OS and Windows.
Just as with Windows Server containers, Docker provides resource isolation by using namespaces to allow independent containers to run within a single Linux instance, instead of having the overhead of running and maintaining virtual machines.
Although Linux containers weren’t anything new – they had been around for years already – Docker made them accessible to the average IT pro by simplifying the tooling and workflows.

In Windows Server 2016 TP3, Windows Server Containers can be deployed using both the Docker APIs and the Docker client. Later, Hyper-V Containers will be available too.
The important thing to note is that Linux containers will always require Linux APIs from the host kernel, and Windows Server Containers will require Windows APIs from the host Windows kernel. So although you can’t run Linux containers on Windows or vice versa, you can manage all of these containers with the same Docker client.
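If you want to check which operating system and kernel the Docker engine you are talking to actually runs on, the standard Docker CLI can tell you; these two commands behave the same way whether the daemon sits on a Linux box or on a Windows container host:

docker version
docker info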

So, getting back to the topic here – how do we manage containers?

Since Docker came first, this blog post will focus on the management experience using Docker in TP3.

Note: In TP3, we are not able to see or manage containers if they were created outside of our preferred management solution. This means that containers created with Docker can only be managed with Docker, and containers created with PowerShell can only be managed with PowerShell.

During my testing on TP3, I have run into many issues/bugs around container management.
Before walking through the following recipe, I would like to point out what has already been done (a scripted version of steps 2 and 3 follows the list):

1)      I downloaded the image from Microsoft that contains Server Core with the Containers feature enabled and Docker in place
2)      I joined the container host to my AD domain
3)      I enabled the server for remote management and opened the required firewall ports
4)      I learned that everything I wanted to test with Docker should be performed on the container host itself, logged on through RDP
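For reference, steps 2 and 3 can be scripted with standard Windows PowerShell. The domain name is just a placeholder from my lab, and 2375 is simply the conventional (unencrypted) Docker daemon port – adjust both to your own environment:

# Join the container host to the lab domain (placeholder domain name)
Add-Computer -DomainName "krnese.local" -Credential (Get-Credential) -Restart

# Enable remote management and open an inbound port for the Docker daemon
Enable-PSRemoting -Force
New-NetFirewallRule -DisplayName "Docker daemon" -Direction Inbound -Protocol TCP -LocalPort 2375 -Action Allow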

Once I’ve logged on to the container host, I run the following command to see my images:

docker images

This shows two images.

Next, I run the following command:

docker ps

This will list all the containers on the system (note that Docker is only able to see containers created by Docker).



The next thing I’d like to show is how to pull an image from the Docker Hub and then run it on my container host. First, I get an overview of all the images that are compatible with my system:

docker search server

I see that Microsoft/iis seems like a good option in my case, so I run the following command to download it:

docker pull Microsoft/iis

This should first download the image and then extract it.
In the screenshot below, you can see all the steps I have taken so far and the output. Obviously the last part didn’t work as expected, and I wasn’t able to pull the image down to my TP3 container host.



So, heading back to basics, I create a new container based on an existing image:

docker run -it --name krnesedemo windowsservercore powershell

This will:

1)      Create a new container based on the Windows Server Core image
2)      Name the container “krnesedemo”
3)      Start an interactive PowerShell session, since -it was specified. Note that this is one of the reasons why you have to run this locally on the container host; the command doesn’t work remotely

This will literally take seconds, and then my new container is ready with a PowerShell prompt.
Below you can see that I am running some basic cmdlets to verify that I am actually in a container context and not in the container host.
Also note the error I get after installing the Web-Server feature. This is a known issue in TP3 where you have to run some cmdlets several times in order to get the right result. Executing it a second time shows that it went as planned.
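For reference, the kind of checks I run inside the container session are plain Windows PowerShell along these lines – nothing container-specific about them:

# Confirm that we are in the container context and not on the container host
hostname

# Install the Web-Server (IIS) feature and verify it – in TP3 this sometimes has to be run twice
Install-WindowsFeature -Name Web-Server
Get-WindowsFeature -Name Web-Server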



After exiting the session (exit), I am back at the container host’s command-line session.
I run the following command to list all containers, including those that are no longer running:

docker ps -a

This shows that the newly created container “krnesedemo” ran PowerShell in an interactive session, when it was created and when I exited it.



Now, I want to commit the changes I made (installing the Web-Server feature) and create a new image with the following command:

docker commit krnesedemo demoimage

In my environment, this command takes a few minutes to complete. I also experienced some issues when the container was still running prior to executing this command, so my advice would be to run “docker stop <container name>” before committing it.
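In other words, a sequence along these lines has been the most reliable for me (using the container and image names from this walkthrough):

docker stop krnesedemo
docker commit krnesedemo demoimage
docker images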

After verifying that the image has been created (see the picture below), I run the following command to create a new container based on the newly created image:

docker run -it --name demo02 demoimage powershell



We have now successfully created a new container based on our newly created image, and through the interactive session we can also verify that the Web-Server feature is present.
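Inside the interactive session of demo02, a one-liner like the following is enough to confirm that the feature came along with the image:

Get-WindowsFeature -Name Web-Server | Select-Object Name, InstallState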



Next time I will dive more into the PowerShell experience and see how you can leverage your existing skillset to create a good management platform for your Windows Containers.



Sunday, September 6, 2015

Explaining Windows Server Containers – Part One

You have heard a lot about it lately: Microsoft is speeding up its container investment, and we can see the early beginnings in Windows Server 2016 Technical Preview 3.

But before we go deep into the container technology in TP3, I would like to add some more context so that you can more easily absorb and understand exactly what is going on here.

Server Virtualization

Container technologies belong to the virtualization category, but before we explain the concept and technology that give us “containerization”, we will take a few steps back and see where we are coming from.

Server (virtual machine) virtualization is by now mainstream for the majority of the industry.
We have been using virtualization in order to provide an isolated environment for guest instances on a host to increase machine density, enable new scenarios, speed up test & development etc.

Server virtualization gave us an abstraction where every virtual machine believes it has its own CPU, I/O resources, memory and networking.
In the Microsoft world, we first started with server virtualization using a type 2 hypervisor, such as Virtual Server and Virtual PC – where all hardware access was emulated through the operating system itself, meaning that the virtualization software was running in user mode, just like every other application on that machine.
So a type 2 hypervisor has, in essence, two hardware abstraction layers, which makes it a poor candidate for real-world workloads.

This changed with Hyper-V in Windows Server 2008, where Microsoft introduced their first type 1 hypervisor.
Hyper-V is a microkernelized hypervisor that implements a shared virtualization stack and a distributed driver model that is very flexible and secure.
With this approach, Microsoft finally had a hypervisor that could run workloads considered “always-on”, built on the x64 architecture.

I don’t have to go through the entire story of Hyper-V, but to summarize: Hyper-V these days reminds you a bit of VMware – only it is better!

As stated earlier, server virtualization is key and a common requirement for cloud computing. In fact, Microsoft wouldn’t have such a good story today if it weren’t for the investment they made in Hyper-V.
If you look closely, the Cloud OS vision with the entire “cloud consistency” approach derives from the hypervisor itself.

Empowering IaaS

In Azure today, we have many sophisticated offerings around the Infrastructure as a Service delivery model, focusing on core compute, networking and storage capabilities. Microsoft has also taken this a step further with VM extensions, so that at provisioning time – or post deployment – we can interact with the virtual machine’s operating system to perform some really advanced tasks. Examples include deployment and configuration of a complex LoB application.

Microsoft Azure and Windows Azure Pack (Azure technologies on-prem) have been focusing on IaaS for a long time, and today we have literally everything we need to use either of these cloud environments to rapidly instantiate new test & dev environments, spin up virtual machine instances in isolated networks and fully leverage the software-defined datacenter model that Microsoft provides.

But what do we do when virtual machines aren’t enough? What if we want to be even more agile? What if we don’t want to sit and wait for a VM to be deployed, configured and available before we can verify our test results? What if we want to maximize our investments even further and push hardware utilization to the maximum?

This is where containers come in handy and provide us with OS virtualization.

OS Virtualization

Many people have already started to compare Windows Server Containers with technologies such as Server App-V and App-V (for desktops).
Neither of these comparisons is really accurate, as Windows Server Containers cover a lot more and have some fundamental differences when looking at the architecture and use cases.
The concept, however, might be similar, as the App-V technologies (both for server and desktop) aimed to deliver isolated application environments, each in its own sandbox. Things could either be executed locally or streamed from a server.

Microsoft will give us two options when it comes to container technology:
Windows Server Containers and Hyper-V Containers.

Before you get confused or start to raise questions: you can run both Windows Server Containers and Hyper-V Containers within a VM (where the VM is the container host). However, using Hyper-V Containers would require that Hyper-V is installed.

With Windows Server Containers, each container is a process that executes in its own isolated user mode of the operating system, while the kernel is shared between the container host and all of its containers.
To achieve isolation between the containers and the container host, namespace virtualization is used to provide independent session namespace and kernel object namespace isolation per container.
In addition, each container is isolated behind a network compartment using NAT (meaning that the container host has a Hyper-V Virtual Switch configured, which the containers are connected to).
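If you are curious about what this plumbing looks like on a TP3 container host, and assuming the Hyper-V and NetNat PowerShell modules are available on the host, a couple of read-only cmdlets will show you the virtual switch and the NAT network the containers sit behind:

# Run on the container host, not inside a container
Get-VMSwitch
Get-NetNat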

For applications executing in a container process, all file and registry changes are captured through their respective filter drivers (a file filter driver and a registry filter driver). System state is presented as read-only to the application.

With this architecture, Windows Server Containers are an ideal approach for applications within the same trust boundary, since the host kernel and APIs are shared among the containers. Windows Server Containers are also the most optimized solution when reduced start-up time is important to you.

On the other hand, we also have something called Hyper-V Containers (this is not available in Technical Preview 3).
A Hyper-V Container provides the same capabilities as a Windows Server Container, but has its own (isolated) copy of the Windows kernel and memory assigned directly to it. There are of course pros and cons with every type of technology; with Hyper-V Containers you achieve more isolation and better security, but less efficient start-up and lower density compared to Windows Server Containers.

The following two pictures show the difference between server virtualization and OS virtualization (Windows Server Containers).

Server Virtualization

OS Virtualization

So, what are the use cases for Windows Server Containers?

It is still early days for Windows Server 2016 Technical Preview 3, so things are subject to change.
However, there are things we need to start to think about right now when it comes to how to leverage containers.

If you take a closer look at Docker (which has been doing this for a long time already), you might get a hint of what you can achieve using container technology.

Containers aren’t necessarily the right solution for every kind of application, scenario or tool you can think of, but they give you a unique opportunity to speed up testing and development and to effectively enable DevOps scenarios that embrace continuous delivery.

Containers can be spun up in seconds, and we all know that having many new “objects” in our environment also leads to a demand for control and management – which in turn introduces a new toolset.

I am eager to share more of what I have learned about Windows Server Containers with you, and will shortly publish part two of this blog series.