In the last blog post, I talked about the architecture of container images and how we can use them much like our kids use Lego bricks.
Today, I want to shift focus a bit and talk more about managing the container life-cycle with Docker in Windows Server Technical Preview 3 (TP3).
If you face a challenge or problem in your IT business today and ask me for advice, I will most likely point you to something that adds more abstraction. Abstraction is key, and it is how we have solved the big, common challenges in this industry so far.
When we covered the architecture of containers in part 1, we compared it with server virtualization. Both technologies solve the same challenges, but they do so at different abstraction layers.
With cloud computing we have had the IaaS service model for a long time already, helping organizations speed up their processes and development by leveraging this service model either in a private cloud, a public cloud, or both as a hybrid cloud.
However, being able to spin up new virtual machines isn't necessarily the answer to all the problems in the world. Sure, it makes you more agile and lets you utilize your resources far better than physical machines, but it is still a machine. A machine requires management at the OS level, such as patching, backup, configuration and more. Since you also have access at the OS level, you might end up in situations where you have to take actions that involve networking as well.
This is very often where it gets complex for organizations with a lot of developers. They need to focus on, learn and adopt new skill sets just to be able to test their applications.
Wouldn’t it be nice if they didn’t have to care about
this jungle of complexity at all, knowing nothing about the environment they
will be shipping software into?
Given that different people are involved in developing the software and in managing the environment it runs in, the challenges grow together with the organization itself, and scale becomes a problem.
This is where containers come to the rescue – or do they?
Containers take a good approach here, since from the host environment's perspective all applications within a container look the same from the outside.
We can now wrap our software inside a container, ship the container image to a shared repository, and avoid dealing with any of the complexity that a managed OS normally requires of us.
I have seen this in action, and here's an example that normally triggers people's interest:
1) A developer creates something new, or simply commits some changes to their version control system (GitHub, VSO etc.).
2) A new image (a Docker image in this case) is built with the application.
3) The new Docker image goes through the entire testing and approval process.
4) The image is committed to a shared repo.
5) The new Docker image is deployed into production.
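To make this a bit more concrete, here is a rough sketch of the kind of Docker commands a build pipeline could run for steps 2 through 5. The image name, tag, registry and the run-tests entry point are all made up for illustration, and the commands assume a Linux-based pipeline host:

docker build -t myregistry/myapp:1.0 .          # step 2: build the image from the application source
docker run --rm myregistry/myapp:1.0 run-tests  # step 3: run the automated tests inside a container
docker push myregistry/myapp:1.0                # step 4: push the approved image to the shared repo
docker run -d -p 80:80 myregistry/myapp:1.0     # step 5: deploy the image into production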
This seems like a well-known story we all have heard in
the IaaS world, right?
Please note that, from the developer's perspective, no infrastructure was touched during these steps.
This was just one example of how real-world organizations are using containers today, and I will cover more good use cases as we move forward in this blog series.
It is important that we're honest and admit that new technologies that give us more and more capabilities, features and possibilities will, at the same time, introduce some new challenges as well.
With containers, we can easily end up in a scenario that reminds us a bit of the movie "Inception" (https://en.wikipedia.org/wiki/Inception). When you are working with, and have access to, all the different abstraction layers, it might be hard to know exactly where you are.
In Technical Preview 3 of Windows Server 2016, Windows
Server containers can be managed both with PowerShell and Docker.
What exactly is Docker?
Docker has been around for a few years and enables automated deployment into containers by providing an additional layer of abstraction and automation on top of OS-level virtualization on Linux, Mac OS X and Windows.
Just as with Windows Server containers, Docker provides
resource isolation by using namespaces to allow independent containers to run
within a single Linux instance, instead of having the overhead of running and
maintaining virtual machines.
Although Linux containers weren't something new (they had been around for years already), Docker made them accessible to the general IT pro by simplifying the tooling and workflows.
In Windows Server 2016 TP3, Windows Server Containers can be deployed with both the Docker APIs and the Docker client, as well as with PowerShell. Later, Hyper-V Containers will be available too.
The important thing to note is that Linux containers will (always) require Linux APIs from the host kernel itself, and Windows Server Containers will require Windows APIs from the host Windows kernel. So although you can't run Linux containers on Windows or vice versa, you can manage all of these containers with the same Docker client.
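As a simple illustration of that last point, the Docker client can be pointed at different container hosts with its -H flag. The host names below are placeholders and 2375 is just the conventional unencrypted Docker port, so treat this as a sketch rather than a recommended setup:

docker -H tcp://linuxhost:2375 ps      # list containers on a Linux container host
docker -H tcp://winhost:2375 ps        # list containers on a Windows Server container host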
So, getting back to the topic here: how do we manage containers?
Since Docker came first, this blog post will focus on the management experience using Docker in TP3.
Note: In TP3, we are not able to see or manage containers if they were created outside of our preferred management solution. That means containers created with Docker can only be managed by using Docker, and containers created with PowerShell can only be managed by using PowerShell.
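In other words, and assuming the Containers PowerShell module in TP3 exposes a Get-Container cmdlet (cmdlet names may differ in later previews), each tool only lists its own containers:

docker ps -a     # only shows containers created through Docker
Get-Container    # only shows containers created through PowerShell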
During my testing on TP3, I have run into many issues and bugs while testing container management.
Before we get to the following recipe, I would like to point out that the following has already been done:
1) I downloaded the image from Microsoft that contains the Server Core image with the container feature enabled and Docker installed.
2) I joined the container host to my AD domain.
3) I enabled the server for remote management and opened some required firewall ports (see the sketch after this list).
4) I learned that everything I would like to test regarding Docker should be performed on the container host itself, logged on through RDP.
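For step 3, a minimal sketch of what this could look like from an elevated PowerShell prompt on the container host is shown below; the exact firewall rule groups you need may differ in your environment:

Enable-PSRemoting -Force                                          # enable WinRM-based remote management
Enable-NetFirewallRule -DisplayGroup "Windows Remote Management"  # open the WinRM firewall rules
Enable-NetFirewallRule -DisplayGroup "Remote Desktop"             # allow RDP so I can log on to the host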
Once I’ve logged into the container host, I run the
following cmdlet to see my images:
docker images
This shows two images.
Next, I run the following cmdlet:
docker ps
This lists the running containers on the system (note that Docker is only able to see containers created by Docker).
The next thing I'd like to show off is how to pull an image from the Docker Hub and then run it on my container host. First, I get an overview of the images that are compatible with my system:
docker search server
I see that microsoft/iis seems like a good option in my case, so I run the following cmdlet to download it:
docker pull microsoft/iis
This will first download the image and then extract it.
In the screenshot below, you can see all the steps I have taken so far and their output. Obviously, the last part didn't work as expected, and I wasn't able to pull the image down to my TP3 container host.
So let's head back to basics and create a new container based on an existing image.
docker run -it --name krnesedemo windowsservercore powershell
This will:
1) Create a new container based on the Windows Server Core image.
2) Name the container "krnesedemo".
3) Start an interactive PowerShell session, since -it was specified. Note that this is one of the reasons why you have to run this locally on the container host; the cmdlet doesn't work remotely.
This will literally take seconds, and then my new
container is ready with a PowerShell prompt.
Below you can see that I am running some basic cmdlets to verify that I am actually in a container context and not on the container host.
Also note the error I get after installing the Web-Server feature. This is a known issue in TP3 where you have to run some cmdlets several times in order to get the right result. Executing it a second time shows that it went as planned.
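As a rough sketch, the commands I ran inside the interactive session looked something like the following; treat it as an illustration of the verification steps rather than an exact transcript:

hostname                           # returns the container's own host name, not the container host's name
Install-WindowsFeature Web-Server  # may fail the first time in TP3; run it again if it errors out
Get-WindowsFeature Web-Server      # verify that the feature now shows as installed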
After exiting the session (exit), I am back in the container host's command-line session.
I run the following cmdlet to see all the containers, including the ones that have exited:
docker ps -a
This shows the newly created container "krnesedemo", the interactive PowerShell command it ran, when it was started, and when I exited it.
Now, I want to commit the changes I made (installing Web-Server) and create a new image with the following cmdlet:
docker commit krnesedemo demoimage
In my environment, this cmdlet takes a few minutes to complete. I also experienced some issues when the container was still running prior to executing this command, so my advice would be to run "docker stop <container name>" before committing it.
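Put together, the sequence I ended up using looked roughly like this (a sketch of the workflow described above, run from the container host):

docker stop krnesedemo              # stop the container first to avoid the commit issues mentioned above
docker commit krnesedemo demoimage  # capture the container's changes as a new image
docker images                       # verify that demoimage now shows up in the image list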
After verifying that the image has been created (see
picture below), I run the following cmdlet to create a new container based on
the newly created image:
docker run -it --name demo02 demoimage powershell
We have now successfully created a new container based on our newly created image, and through the interactive session we can also verify that the Web-Server feature is present.
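Inside the demo02 session, that verification can be as simple as checking the feature state again; a one-line sketch:

Get-WindowsFeature Web-Server   # should now report the feature as installed in the new container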
Next time I
will dive more into the PowerShell experience and see how you can leverage your
existing skillset to create a good management platform for your Windows
Containers.
3 comments:
Why are you calling binaries run from cmd.exe cmdlets? Not even all commands inside Powershell can be considered cmdlets.
C:\Users\User> Get-Command | Group-Object commandtype -NoElement
Count Name
----- ----
17 Alias
934 Function
647 Cmdlet
Calling commands outside Powershell cmdlets is just wrong and misleading.
What exactly are you trying to say? Docker doesn't require Powershell, and Powershell is only used in this blog post when installing the Web-Server role into the container.
When you log on to a server core, a cmd prompt is what you get and you can take it from there.
Great explanation on Windows Server Containers!
One question though: is the docker command line the only way to manage things?
Regards,
Matthijs ter Woord