
Wednesday, April 20, 2016

Connecting the dots with OMS and SLACK

In my last blog post, I spent some time trying to explain why OMS is more than you think and how this fits into the next generation of hybrid IT management, with Management-as-a-Service.


Today, I want to highlight something I find particularly interesting: using OMS as the source of information for our operations engineers.

OMS Log Analytics

One of the key aspects of OMS is the Log Analytics workspace. This is where you harvest the data from your hybrid operational environment. As I talked about in my previous blog post, you can have multiple data sources – and even use custom logs to retrieve and centralize the information you are looking for – but also (and perhaps more importantly) the information that you didn’t know you were looking for!

Log Analytics lets you easily search across all of your data, and from there you can truly demonstrate your skillset by connecting the dots into a complete remediation solution, or by plugging into some other system to deliver the data, manipulate it, or both.

With Log Analytics, we are able to:

·         Search for any of our data
·         Save searches and use them together with Dashboards
·         Use saved searches in conjunction with Alerts
·         Get e-mail notifications with detailed information about the alert, the search result and more
·         Connect Alerts with Azure Automation to trigger a Runbook that is either executed in Azure or through a Hybrid Worker
·         Connect Alerts with third-party systems using WebHooks

This blog post will focus on how to use OMS as the foundation for an operations department and centralize the alerts (informational, warning and critical) into Slack.

First, let us quickly get a better understanding of what Slack really is and why it might be useful in this particular scenario.

Many IT organizations use a wide diversity of collaboration channels. Some of them are good, some of them less so. The fact is that many channels come and go over time, which leads to a lack of communication – and especially of transparency – when it comes to critical operational information.
Slack is a messaging application where teams can share files, talk and literally work together. It lets organizations have everything in one place, moving away from the devastating e-mail threads and so on.
With Slack, everything that is shared is automatically indexed, archived and made searchable.

Some of the advantages you get immediately with Slack are transparency in team communication for greater visibility into what other teams are working on, faster feedback and decision making, and much easier discovery of information and documents.
Last but not least – Slack supports a wide range of tools, which means you can integrate your existing apps, systems and so on with Slack to centralize communication and information.

This is where OMS comes into play, together with the webhook integration to Slack.

Ok, I get it. The information from our alerts can have a flow into one or more SLACK channels where our teams can get everything in a single view, but what exactly is a WebHook?

I am glad you asked.

WebHooks are something you have already used if you have been working with Azure Automation – especially together with Alerts in OMS, which leverage webhooks.

The concept of WebHooks is really simple, and by simple I mean it is a simple HTTP POST that occurs when ‘something’ happens.

Using OMS together with Slack, OMS will POST a message to a URL when certain things happen (the Log Analytics search returns a result that triggers the Alert workflow).
WebHooks help us receive valuable information when it happens – instead of constantly polling for the data.

In Slack, you can add an ‘incoming webhook’ to your channel that will accept data from external sources sending a JSON payload over HTTP.
Each channel in Slack is identified by a unique incoming webhook URL to which you can post messages from the outside.

A typical JSON payload will look similar to this:

{
  "text": "This is some random text from Virtualization and some Coffee",
  "channel": "#virtualization",
  "username": "Kristian",
  "icon_emoji": ":KristianDancing"
}
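
For reference, here is a minimal PowerShell sketch of posting such a payload to an incoming webhook with Invoke-RestMethod (the webhook URL below is a placeholder – Slack generates the real one when you add the incoming webhook to your channel):

# Placeholder webhook URL - use the one Slack generates for your channel
$webhookUrl = "https://hooks.slack.com/services/T00000000/B00000000/XXXXXXXXXXXXXXXX"

# Build the message payload
$payload = @{
    text       = "This is some random text from Virtualization and some Coffee"
    channel    = "#virtualization"
    username   = "Kristian"
    icon_emoji = ":KristianDancing:"
}

# POST the JSON payload over HTTP to the incoming webhook
Invoke-RestMethod -Uri $webhookUrl -Method Post -Body ($payload | ConvertTo-Json) -ContentType "application/json"

This is essentially what OMS does on your behalf when an alert with a webhook action fires.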

Once you have added the incoming webhook to your Slack channel, you can take advantage of it when creating alerts in OMS.

Here's an overview of the workflow and architecture



Here’s an example of how to configure an Alert in OMS to use a webhook



And this is an example of how it could look in Slack, where we have different channels for different teams, depending on their area of expertise, responsibility and so on.



Happy integrating!


Thursday, February 18, 2016

Azure and OMS – Better Together


Recently, Microsoft announced an enhancement to both Azure and OMS where you can now simply enable the OMS extension for your virtual machines and they will start to report directly to the OMS Workspace associated with that subscription.

In this blog post, I will walk you through a real-world example on how we integrated OMS with Azure to ensure availability for some Windows Server Containers as part of a project.

Overview

We wanted to be able to rapidly test and deploy Windows Server containers to Azure using Azure Resource Manager templates. This would of course lead to development of one or more ARM templates, leveraging custom script extension to perform the heavy lifting within the virtual machine(s).
If you are familiar with container technology and have followed Microsoft’s investments in this area lately, you have probably heard of the Azure Container Service, which is now in public preview. This is an end-to-end solution that you can spin up in Azure using a highly abstracted template that will instantiate around 23 resources for you.
If you want to achieve the same with Windows Server Containers today, you must rely on your own ARM skills to make this happen, as the current Container Service is Linux-only.

Container technology is an additional layer of abstraction that you can host on a virtual machine, and whatever you put inside a container today should either be considered as stateless, or you should have externalized the state through the application layer.
In our case we were using stateless containers that would go down in case the container host (the virtual machine) went down or had a reboot in Azure.

With the capabilities available in Microsoft OMS today, this was a really good use case for combining resources in Azure and OMS to ensure that if a specific event occurred, the container would be brought back up and responding to requests within minutes.

Understanding the requirements

Windows Containers are part of Windows Server 2016 Technical Preview 4 today, which is an available image for you to use in Azure.
Although there are two supported container runtimes in Windows Server 2016, only Windows Server Containers – and not Hyper-V Containers – are supported in Azure, as the latter requires support for nested virtualization.
Further, the image in Azure is running Server Core – which also applies to the Windows Server Containers you can host there. In other words, there’s no graphical user interface :)

When you use the image in Azure, you get a default, empty Windows Server Core image to use for your container exploration. If you want to add applications, server roles and more to a container, you need to be aware that you should treat your containers as Lego blocks.

In our case, there was a need to test several specific web applications hosted in Windows Server Containers.
This meant we had to build something that would spin up a new container, add the Web-Server role to it, and commit the container to the library as an image, so that we could use that image when deploying the web applications on top of it.

From an ARM template perspective, that would mean that we would add a Custom Script Extension resource and associate it with the virtual machine resource.

The Custom Script Extension would then point to a repository containing the script (a PowerShell script in this case). The PowerShell script would support several input parameters so that the entire ARM template would be reusable for others who would like to deploy something similar in the same fashion.
The script would spin up and create containers, and ensure that the correct firewall settings and NAT rules were applied from the container host to the container(s), so the containers could be publicly accessible from the outside, following the rules defined in the Network Security Group in the template.
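
To give an idea of what the container part of such a script could look like, here is a rough, simplified sketch using the TP4-era container cmdlets (cmdlet names and parameters changed between the previews, and the container names, switch name and NAT name are assumptions – treat this as an illustration rather than the actual script):

# Create a container from the default Windows Server Core image and start it
$container = New-Container -Name "webbase" -ContainerImageName "WindowsServerCore" -SwitchName "Virtual Switch"
Start-Container $container

# Install the Web-Server role inside the container using PowerShell Direct
Invoke-Command -ContainerId $container.ContainerId -RunAsAdministrator -ScriptBlock {
    Install-WindowsFeature -Name Web-Server
}

# Stop the container and commit it to the library as a new image
Stop-Container $container
New-ContainerImage -Container $container -Publisher "Demo" -Name "WindowsServerCoreIIS" -Version "1.0"

# Deploy and start the web application container from the new image
$web = New-Container -Name "webapp" -ContainerImageName "WindowsServerCoreIIS" -SwitchName "Virtual Switch"
Start-Container $web

# NAT port 80 from the container host to the container and open the host firewall
# (assumes a NAT named "ContainerNat" already exists on the container host)
$containerIP = Invoke-Command -ContainerId $web.ContainerId -RunAsAdministrator -ScriptBlock {
    (Get-NetIPAddress -AddressFamily IPv4 | Where-Object { $_.IPAddress -ne "127.0.0.1" }).IPAddress
} | Select-Object -First 1
Add-NetNatStaticMapping -NatName "ContainerNat" -Protocol TCP -ExternalIPAddress 0.0.0.0 -ExternalPort 80 -InternalIPAddress $containerIP -InternalPort 80
New-NetFirewallRule -DisplayName "Container HTTP 80" -Direction Inbound -Protocol TCP -LocalPort 80 -Action Allow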

So far, so great

From a deployment perspective, this should be good.
Since containers aren’t the same as the virtual machines you can run on your local Hyper-V, you will not get anything that resembles Live Migration in the current build. So, for us to increase uptime and availability for the containers on the container host, OMS became very interesting.

With the OMS extension, we could easily associate the virtual machine with an OMS Workspace to retrieve critical information about our containers runtime environment.

Not only do we get insight into our environment in OMS, but we can also leverage the agent to invoke another powerful Azure/OMS resource – Azure Automation.

The goal was now to monitor the virtual machine for specific events, and if the Log Search query returned such a result within a given timeframe, we would link that result to an alert we had created in OMS.

From there, we could do remediation through an Azure Automation Runbook.

Since this was an event that was going on inside the operating system of the virtual machine, a Hybrid Worker was considered as the best solution, so that we could trigger Azure Automation to invoke the runbook within the Hybrid Worker context.

Since the OMS agent is also the same agent you would use to register a Hybrid Worker with Azure Automation, we only had to tell the agent where to register post-deployment.

In order to handle this, another extension (the OMS agent) was added to the ARM template, instructed to deploy before the Custom Script Extension – which would now also be responsible for registering the OMS agent with Azure Automation.
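
For reference, the registration itself is a one-liner once the agent is installed – a minimal sketch, assuming the documented HybridRegistration module path (the version folder and group name are placeholders):

# Import the HybridRegistration module that ships with the Microsoft Monitoring Agent
# (the <version> folder name depends on the installed agent build)
Import-Module "C:\Program Files\Microsoft Monitoring Agent\Agent\AzureAutomation\<version>\HybridRegistration\HybridRegistration.psd1"

# Register this machine as a Hybrid Runbook Worker, using the AAEndpoint and Token template parameters
Add-HybridRunbookWorker -GroupName "ContainerHosts" -EndPoint $AAEndpoint -Token $Token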

OMS

Generally speaking, people seem to be confused when it comes to OMS and its capabilities.
Out of the box, you get an extreme amount of intel that you can leverage to act upon and become predictive in the way you manage your resources, regardless of cloud, operating system and location.

This is brought to you through Log Analytics, which is now a resource within the Azure Resource Manager API. Together with Log Analytics, you can use Azure Automation (tight integration) as well as Azure Site Recovery and Azure Backup – both of which will reach the new portal experience in the near future.

Once you have connected sources to OMS, the data harvest can begin.

You can decide what type of data you will gather, and you can take advantage of existing solutions from a ‘Solution Gallery’ that gives you pre-defined searches, views and insight based on the solution it represents. Examples here are:

·         Change tracking
·         Security and Audit
·         System Update & Assessment
·         SQL Assessment
·         AD Assessment
·         Malware Assessment

These are just a few examples, and by using OMS – which is Management as a Service delivered from the cloud – you can expect the cloud cadence to be applied to these solutions, reducing Time-To-Market and Time-To-Value, which is very good for your business.


The Solution

Azure Resource Manager template

The example Azure Resource Manager Template I will describe here is constructed so that it currently takes input parameters for:

·         Containerhost (name of the virtual machine that will host the container(s))
·         Containername (name of the container to instantiate)
·         vmSize (SKU)
·         adminaccount (administrator account for the container host vm)
·         adminpwd (password for the admin account)
·         vNetName (name of the virtual network to be created)
·         OMSWorkspaceID (the ID for your OMS Workspace)
·         OMSWorkspaceKey (the primary key for your OMS Workspace)
·         AAEndpoint (the endpoint to your Azure Automation account)
·         Token (the primary key for your Azure Automation account)

The resources that will be deployed (in this specific order) are:

·         Storage accounts, public IP address, network security group and availability set are deployed in parallel
·         The virtual network will start deploying as soon as the network security group has completed
·         The virtual network interface will start deploying when the virtual network and public IP address have completed
·         The virtual machine will start deploying once the storage accounts and virtual network interface have completed
·         The virtual machine extension (OMS agent) will start deploying when the virtual machine has completed
·         The virtual machine extension (Custom Script Extension) will start deploying once the OMS extension has completed



·         Once everything has deployed, we should receive an output from the template that gives us the URL to the deployed container, available on port 80
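
As a hedged example of kicking off this deployment with the AzureRM PowerShell module (the resource group, template file name and parameter values below are placeholders):

# Pass the template's input parameters as a hashtable (placeholder values)
$params = @{
    Containerhost   = "contp4"
    Containername   = "webcontainer"
    vmSize          = "Standard_D2"
    adminaccount    = "azureadmin"
    adminpwd        = "<password>"
    vNetName        = "containervnet"
    OMSWorkspaceID  = "<workspace id>"
    OMSWorkspaceKey = "<workspace key>"
    AAEndpoint      = "<automation endpoint>"
    Token           = "<automation token>"
}

# Deploy the template into a resource group
New-AzureRmResourceGroupDeployment -Name "containerdemo" -ResourceGroupName "ContainerRG" -TemplateFile ".\azuredeploy.json" -TemplateParameterObject $params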





OMS Search

For OMS to find relevant information, the following search was used:

Type=SecurityEvent EventID=4608 OR EventID=1100 "4608 - Windows is starting up." contp4 | Select Computer, Activity, TimeGenerated

This search is targeting the specific virtual machine running the containers.

From this search, an alert was created and linked to a runbook I had created to start any containers with State -eq "Off".



The runbook should then be executed by a Hybrid Worker, which would be the container host itself.
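
The remediation logic itself can be tiny. A minimal sketch of such a runbook, again assuming the TP4 container cmdlets and that it runs locally on the container host through the Hybrid Worker:

# PowerShell runbook executed on the Hybrid Worker (the container host itself)
# Find any container that is turned off and start it again
Get-Container | Where-Object { $_.State -eq "Off" } | Start-Container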

Testing

To test and verify that things are working, we would trigger a restart of the virtual machine in Azure.


Once the virtual machine has started, we can see that the following event has been logged to our OMS Workspace


This also results in an e-mail notification according to our configuration, and should also fire the webhook that invokes the remediation runbook created in Azure Automation




By hitting the URL to my container again, I can verify it is responding on port 80 and was brought up again just as expected.


Hopefully you found this blog post useful in showing some of the capabilities of leveraging Microsoft OMS together with Azure services.

In the next blog post, I'll cover the ideal setup for Microsoft OMS for Service Providers.

(The templates, scripts and examples will be live on my github.com/krnese account as soon as they are polished a bit. Check in there later or follow me on twitter @KristianNese to get the latest updates)






Monday, February 1, 2016

Free book! Cloud Consistency with Azure Resource Manager

Finally!

I was able to spend some wife points this weekend to finalize the new book “Cloud Consistency with Azure Resource Manager”.


This book aims to get you started with Azure Resource Manager and covers many examples of how to author templates and use functions, as well as exploring many of the other aspects of Azure Resource Manager.



Here’s a snapshot of the content:

Acknowledgements:
About the authors
Kristian Nese | @KristianNese
Flemming Riis | @FlemmingRiis
Background
Introduction
Microsoft Azure
Microsoft Azure Stack
Cloud Computing and Modern Application Modeling
Step 1 – Service Templates
Step 2 – VM Roles
Step 3 – Azure Resource Manager
Summary
IaaS v2 – Azure Resource Manager API replaces Service Management API
Consistent Management Layer
Azure PowerShell
Azure CLI
Azure Resource Manager Rest API
Azure Portal
Azure Resource Manager Templates
Deploying with Azure Resource Manager
Where can we deploy our Azure Resource Manager Templates
Explaining the template format
Authoring the first Azure Resource Manager Template
Adding parameter file
Visual Studio
PowerShell
Azure Portal
Idempotency with Azure Resource Manager
Resource Explorer
Imperative Deployment with Azure Resource Manager
Advanced Azure Resource Manager Templates
Functions
Extensions
Write once, deploy anywhere

Instead of jumping right into the authoring experience and learning how an ARM template is constructed, we wanted to give you enough context to know what’s going on in the industry, what is changing, and how you should prepare yourself to take advantage of this new way of managing your cloud resources.

If you have been playing around with Azure already, you are probably very familiar with some of the content. If you are new – and especially if you are interested in Microsoft Azure Stack – you should be glad to know that everything you learn in this book applies there as well.

It has been a great experience writing this book, covering some of the most interesting stuff we have available right now, and I have to emphasize that this book will be updated as we move forward, to keep up with all the great things that are happening in the Microsoft Cloud.


I really hope you enjoy it.

Tuesday, January 19, 2016

Azure Site Recovery and Azure Resource Manager

Recently, I was working with the new Azure Site Recovery Resource Provider in Azure Resource Manager.
Since we now have support for this through PowerShell, I wanted to create a solution that would automatically add VMs to the protection group.

Getting VMs protected is quite straightforward, but you want to plan a bit more carefully when you are designing for real-world scenarios.

Planning and Considerations

·         Resource Groups
Everything you create in ARM will belong to a Resource Group. This should be common knowledge by now, but it is worth a friendly reminder to avoid any potential confusion

·         Storage Accounts
For using ASR with Azure as the recovery site, you must also create a storage account that can receive and hold the VHDs for the protected virtual machines. When you power up a virtual machine – either as part of a DR drill (test failover) or perhaps more permanently using planned/unplanned failover – remember that this is where the disks will be located. As a friendly reminder, the storage account must also belong to a Resource Group, and it is important that the storage account is created in the same region as the ASR resource itself.
If you choose to use a storage account created in classic (Azure Service Management API), then the VMs will be visible in the classic portal afterwards. If you use a storage account in the ARM model, you are good to go in the new portal.

·         Network
You want to be able to connect to your virtual machines post-failover. This requires network connectivity – among other things. Where you place the virtual network isn’t important as long as it is within the same region.

·         Virtual Machines
During test/planned/unplanned failover, the virtual machines will have their storage located on the storage account you have created for your ASR configuration. The virtual networks might be in a different resource group as well. This is important to know, as every VM (regardless of test/planned/unplanned failover) will be instantiated in its own, new Resource Group, containing only the virtual machine object and the virtual network interface. All other resources are in different Resource Group(s).

What you need to be aware of

In addition to the design considerations for Resource Groups, storage and networking, you must remember a couple of things. To be able to access virtual machines after a failover, you need to ensure that you have enabled RDP within the guest (for Windows). Next, you must either have a jump-host on the target virtual network where the recovered VM is running, or simply create a Network Security Group with the required rules, associated with either the subnet or the vNICs themselves – as sketched below.
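
A minimal sketch of the NSG part with the AzureRM module (the group, location and rule values are placeholders):

# Allow inbound RDP to the recovered VMs
$rdpRule = New-AzureRmNetworkSecurityRuleConfig -Name "Allow-RDP" -Description "Allow RDP to recovered VMs" -Access Allow -Protocol Tcp -Direction Inbound -Priority 100 -SourceAddressPrefix * -SourcePortRange * -DestinationAddressPrefix * -DestinationPortRange 3389

# Create the NSG in the same region as the target virtual network
$nsg = New-AzureRmNetworkSecurityGroup -Name "ASR-NSG" -ResourceGroupName "ASR-RG" -Location "West Europe" -SecurityRules $rdpRule

The NSG can then be associated with the target subnet (Set-AzureRmVirtualNetworkSubnetConfig) or with the individual network interfaces.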

I have created a PowerShell script that is currently being polished before publishing, where I will share my findings on this topic to make an efficient DR process of your virtual machines.



Monday, January 11, 2016

2016 - The year of Microservices and Containers

This is the first blog post I am writing this year.
I was planning to publish this before Christmas, but I figured it would be better to wait and reflect a bit more on the trends currently taking place in this industry.
So what better way to start the New Year than with something I really think will be one of the big bets for the coming year(s)?

I drink a lot of coffee. In fact, I suspect it might kill me someday. On a positive note, at least I’ll be the one who was controlling it. Jokes aside, I like to drink coffee when I’m thinking out loud about technologies and reflecting on the steps we’ve made so far.

Going back to 2009-10, when I was entering the world of virtualization with Windows Server 2008 R2 and Hyper-V, I couldn’t possibly imagine how things would change in the future.
To this very day, I realize that the things we were doing back then were just the foundation of what we are seeing today.

The same arguments are being used throughout the different layers of the stack.
We need to optimize our resources, increase density, flexibility and provide fault-tolerant, resilient and highly-available solutions to bring our business forward.

That was the approach back then – and that’s also the approach right now.

We have constantly been focusing on the infrastructure layer, trying to solve whatever issues might occur. We have believed that if we put our effort into the infrastructure layer, then the applications we put on top of it will be smiling from ear to ear.

But things change.
The infrastructure is changing, and the applications are changing.

Azure made its debut in 2008, as I remember. Back then, it was all about Platform as a Service offerings.
The offerings were a bit limited, giving us cloud services (web roles and worker roles), caching and messaging systems such as Service Bus, together with SQL and other storage options such as blob, table and queue.

Many organizations really struggled back then to get a good grasp of this approach. It was complex. It was a new way of developing and delivering services, and in almost all cases the application had to be rewritten to be fully functional using the PaaS components in Azure.

People were just getting used to virtual machines and had started to use them frequently, also as part of test and development of new applications. Many customers went deep into virtualization in production as well, and the result was a great demand from customers for the opportunity to host virtual machines in Azure too.
This would simplify any migration of “legacy” applications to the cloud, and more or less solve the well-known challenges we were aware of back then.

During the summer of 2012 (if my memory serves me well), Microsoft announced their support for Infrastructure as a Service in Azure. Finally, they were able to hit the high note!
Now what?
An increased consumption of Azure was the natural result, and the cloud came a bit closer to most of the customers out there. Finally, there was a service model that people could really understand. They were used to virtual machines. The only difference now was the runtime environment, which was hosted in Azure datacenters instead of their own. At the same time, the PaaS offerings in Azure had evolved and grown to become even more sophisticated.

It is common knowledge now, and it was common knowledge back then that PaaS was the optimal service model for applications living in the cloud, compared to IaaS.

At the end of the day, every developer and business around the globe would prefer to host and provide their applications to customers as SaaS, rather than anything else such as traditional client/server applications.

So where are we now?

You probably wonder where the heck I am going with this?
And trust me, I also wondered at some point. I had to get another cup of coffee before I was able to do a further breakdown.

Looking at Microsoft Azure and the services we have there, it is clear to me that the ideal goal for the IaaS platform is to get as close as possible to the PaaS components in regards to scalability, flexibility, automation, resiliency, self-healing and much more.
Those who have been deep into Azure with Azure Resource Manager know that there are some really huge opportunities now to leverage the actual platform to deliver IaaS that you ideally don’t have to touch.

With features such as VM Scale Sets (preview), Azure Container Service (also in preview), and a growing list of extensions to use together with your compute resources, you can potentially instantiate a state-of-the-art infrastructure hosted in Azure without having to touch the infrastructure (of course you can’t touch the Azure infrastructure itself; I am talking about the virtual infrastructure, the one you are basically responsible for).

The IaaS building blocks in Azure are separated in a way that lets you look at them as individual scale units. Compute, storage and networking are all combined to bring you virtual machines. Because these building blocks are loosely coupled, we can also see that they empower many of the PaaS components in Azure itself that live upon the IaaS.

The following graphic shows how the architecture is layered.
Once Microsoft Azure Stack becomes available on-prem, we will have one consistent platform that brings the same capabilities to your own datacenter as you can use in Azure already.

  

Starting at the bottom, IaaS is on the left side while PaaS is on the right-hand side.
Climbing up, you can see that both Azure Stack and the Azure public cloud – which will be consistent – take the same approach. VMs and VM Scale Sets cover both IaaS and PaaS, but VM Scale Sets are placed more to the right-hand side than VMs. This is because VM Scale Sets are considered the powering backbone for the other PaaS services on top of them.

VM Extensions also lean more to the right, as they give us the opportunity to do more than traditional IaaS. We can extend our virtual machines to perform advanced in-guest operations when using extensions, so anything from provisioning of complex applications to configuration management and more can be handled automatically by the Azure platform.

On the left-hand side, on top of VM Extensions, we find cluster orchestration tools such as SCALR, RightScale, Mesos and Swarm. Again, these deal with a lot of infrastructure, but also provide orchestration on top of it.
Batch is a service powered by Azure compute; it is a compute job scheduling service that will start a pool of virtual machines for you, installing applications and staging data, and running jobs with as many tasks as you have.

Going further to the right, we are seeing two very interesting things – which are also the main driver for this entire blog post. Containers and Service Fabric lean more to the PaaS side, and it is not by coincidence that Service Fabric sits to the right-hand side of containers.

Let us try to do a breakdown of containers and Service Fabric

Comparing Containers and Service Fabric

Right now in Azure, we have a new preview service that I encourage everyone who’s interested in container technology to look into. The ACS resource provider basically provides you with a very efficient and low-cost solution to instantiate a complete container environment using a single Azure Resource Manager API call to the underlying resource provider. After the deployment completes, you will be surprised to find 23 resources within a single resource group, containing all the components you need to have a complete container environment up and running.
One important thing to note at this point is that ACS is Linux first and containers first, in comparison to Service Fabric – which is Windows first, and microservices first rather than containers first.

At this time, it is ok to be confused. And perhaps this is a good time for me to explain the difficulties of putting this on paper.

I am now consuming the third cup of coffee.

Azure explains it all

Let us take some steps back to get some more context into the discussion we are entering.
If you want to keep up with everything that comes in Azure nowadays, that is more or less a full-time job. The rapid pace of innovation, releases and new features is next to crazy.
Have you ever wondered how the engineering teams are able to ship solutions this fast – also with this level of quality?

Many of the services we are using today in Azure are actually running on Service Fabric as microservices. This is a new way of doing development and is also the true implementation of DevOps, both as a culture and from a tooling point of view.
Meeting customer expectations isn’t easy. But it is possible when you have a platform that supports and enables it.
As I stated earlier in this blog post, the end goal for any developer would be to deliver their solutions using the SaaS service model.
That is the desired model, which implies continuous delivery, automation through DevOps, and the adoption of automatable, elastic and scalable microservices.

Wait a moment. What exactly is Service Fabric?

Service Fabric provides the complete runtime management for microservices and deals with the things we have been fighting against for decades. Out of the box, we get hyper scale, partitioning, rolling upgrades, rollbacks, health monitoring, load balancing, failover and replication. All of these capabilities are built in, so we can focus on building the applications we want to be scalable, reliable, consistent and available microservices.

Service Fabric provides a model so you can wrap together the code for a collection of related microservices and their related configuration manifests into an application package. The package is then deployed to a Service Fabric cluster (this is actually a cluster that can run on anywhere from one to many thousands of Windows virtual machines – yes, hyper scale). We have two defined programming models in Service Fabric: ‘Reliable Actors’ and ‘Reliable Services’. Both of these models make it possible to write both stateless and stateful applications. This is breaking news.
You can go ahead and develop stateless applications in more or less the same way you have been doing for years, trusting to externalize the state to some queuing system or some other data store, but then again handling the complexity of having a distributed application at scale. Personally, I think the stateful approach in Service Fabric is what makes this so exciting. Being able to write stateful applications that are constantly available, having a primary/replica relationship between their members, is very tempting. We trust Service Fabric itself to deal with all the complexity we have been trying to solve in the infrastructure layer for years, while the stateful microservices keep the logic and data close, so we don’t need queues and caches.

Ok, but what about the container stuff you mentioned?

So Service Fabric provides everything out of the box. You can think of it as a complete way to handle everything from beginning to end, including a defined programming model that even brings an easy way of handling stateful applications.
ACS, on the other hand, provides a core infrastructure which gives significant flexibility, but this comes at a cost when trying to implement stateful services. However, the applications themselves are more portable, since we can run them wherever Docker containers can run, while microservices built on Service Fabric can only run on Service Fabric.

The focus for ACS right now is on open source technologies that can be taken in whole or in part. The orchestration layer and the application layer bring a great level of portability as a result, since you can leverage open source components and deploy them wherever you want.

At the end of the day, Service Fabric has a more restrictive nature but also gives you a more rapid development experience, while ACS provides the most flexibility.

So how exactly do containers compare to microservices on Service Fabric at this point?

What they do have in common is that both are another layer of abstraction on top of the things we are already dealing with. Forget what you know about virtual machines for a moment. Containers and microservices are exactly what engineers and developers are demanding to unlock new business scenarios, especially in a time where IoT, Big Data, insight and analytics are becoming more and more important for businesses worldwide. The cloud itself is the foundation that enables all of this, but the great flexibility that both containers and Service Fabric provide is really speeding up the innovation we’re seeing.

Organizations that have truly been able to adopt the DevOps mindset are harnessing that investment and are capable of shipping quality code at a much more frequent cadence than ever before.

Coffee number 4 and closing notes

First I want to thank you for spending these minutes reading my thoughts around Azure, containers, microservices, Service Fabric and where we’re heading.

2016 is a very exciting year, and things are changing very fast in this industry. We are seeing customers making big bets in certain areas, while others are taking the potential risk of not making any bets at all. I know, at least from my point of view, what the important focus is moving forward. And I will do my best to guide people along the way.

While writing these closing notes, I can only use the opportunity to point to the tenderloin of this blog post:

My background is all about ensuring that the Infrastructure is providing whatever the applications need.
That skillset is far from obsolete; however, I know that the true value belongs to the upper layers.

We are hopefully now realizing that even the infrastructure we have been ever so careful about is turning into a commodity, now handled through an ‘infrastructure as code’ approach more than ever before. We trust that it works and that it empowers the PaaS components – which again bring the world forward while powering SaaS applications.

Container technologies and microservices as part of Service Fabric take that for granted, and from now on, I am doing the same.




Monday, December 21, 2015

Azure Windows Server Container with IIS

A couple of months ago, Microsoft announced their plans for Azure and containers, where they would provide you with a first-class-citizen resource provider in Azure so that you could build, run and manage scalable clusters of host machines onto which containerized applications would be deployed.

What you probably also have noticed is that Microsoft is taking an open approach to container management. In fact, the container service is currently based on and pre-configured with Docker and Apache Mesos, so any tools you would prefer for management “should just work”.
This is a new game for me to play, so I am learning a lot. :)

In the meantime, I am also working a lot with Windows Server Containers in Windows Server Technical Preview 4 – an image that is available in the Azure gallery.
However, I wanted to extend the experience a bit and decided to create my own ARM template that will ‘mimic’ some of the functionality of the Azure Container Service resource provider, and actually instantiate a new container running the IIS Web-Server role, available for requests.

The template will deploy:

·         A vNet
·         Network interface
·         Public IP address with DNS (the DNS will be based on the hostname.region.cloudapp.azure.com and provided as output once the deployment has completed)
·         Storage account
·         Network Security Group to allow RDP to the host – as well as http
·         Virtual machine (based on the TP4 image)
o   A Custom Script Extension – see the sketch after this list for how one can be attached – that will:
§  Spin up a new Windows Server Container based on the existing image (server core)
§  Install Web-Server within the newly created container
§  Stop the container – and create a new container image
§  Deploy a new container based on the newly created container image
§  Create a static NAT rule and a firewall rule to allow traffic on port 80 to the container from the host
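
For reference, wiring up such a Custom Script Extension imperatively (outside of the template) could look like this with the AzureRM module – the resource names, location and script URI are placeholders for whatever your repository holds:

# Attach a Custom Script Extension to the container host VM (placeholder names and URI)
Set-AzureRmVMCustomScriptExtension -ResourceGroupName "ContainerRG" -VMName "containerhost" -Name "containersetup" -Location "West Europe" -FileUri "https://<your repository>/containersetup.ps1" -Run "containersetup.ps1"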


This is a working experiment and I am planning to extend the template with more applicable tasks as we move forward.

The template can be explored and deployed from this GitHub repo: 

https://github.com/krnese/AzureDeploy/tree/master/AzureContainerWeb 


Thursday, December 3, 2015

Getting started with Containers in Azure

Recently, I had a presentation/workshop in Norway at a Docker conference (http://www.code-conf.com/day-of-docker-osl15/program/#knese )


This was quite a new audience for me, and it was great to be the person showing them what Microsoft is doing in the era of container technologies, using both Microsoft Azure and Windows Server 2016 Technical Preview 4.

The big picture 

One of the key things to point out is that containers are “just” a part of the big picture we are seeing these days.
The following graphic shows where we are coming from – and where we’re heading.


Starting at the bottom, the early generations in this industry used to have a lot of physical machines to run their business. We all know that having workloads and applications on physical machines is not where we want to be today, because that is neither flexible nor scalable, and for sure won’t do any good for our demand for utilization.

Above physical machines we find machine virtualization. This should all be quite common now, and we have been very good at virtualizing servers for quite some time. In fact, we are now not only virtualizing servers, but the other infrastructure components too, such as networks and storage.
Machine virtualization in this context shows that we are abstracting the compute resources from the underlying physical machine – which introduces the first stepping stones towards flexibility, scalability and increased utilization.

Further up, we have infrastructure hosting, which can be seen as the early days of cloud, although the exact service model here is not defined. This means that “someone” would make the investment and ensure the required amount of capacity for you as a customer, and you could have your workloads and applications hosted in the hosting datacenter. This was machine virtualization at scale.

The next step is the more familiar service models we can consume from a cloud, such as Infrastructure as a Service, Platform as a Service and Software as a Service. Although these service models are different, they share the same set of attributes, such as elasticity, self-service, broad network access, chargeback/usage and resource pooling. Elasticity and resource pooling in particular describe the level of flexibility, scalability and utilization we can achieve. I expect you as the reader to be quite comfortable with cloud computing in general, so I won’t dive deeper into the definitions at this point.

Next, we are now facing an era where containers are lit up – regardless of whether you are a developer or an IT pro. Containers build on many of the same principles as machine virtualization, where abstraction is key. A container can easily be lifted and shifted to other deployment environments without the same cost, luggage and complexity as a virtual machine, as a comparison.

In the Microsoft world, we have two different runtimes for containers.
Windows Server Containers share the kernel with the container host, which is ideal for scalability, performance and resource utilization.
Hyper-V Containers give you the exact same experience, except that the kernel in this case isn’t shared among the containers. This is something you need to specify at deployment time. Hyper-V Containers give you the level of isolation you require and are ideal when the containers don’t trust each other, nor the container host.
Microsoft has also announced that they will come with their own Azure Container Service in the future, as a first-class-citizen resource provider managed by ARM.

Last but not least, we have something called “microservices” at the top of this graphic. In the context of Microsoft, we are talking about Service Fabric – which is currently a preview feature in Microsoft Azure.
Service Fabric is a distributed systems platform where you can build scalable, reliable and easily managed applications for the cloud. This is where we are really seeing that redundancy, high availability, resiliency and flexibility aren’t built into the infrastructure – but handled at the application level instead.
Service Fabric represents the next-generation middleware platform for building and managing these enterprise-class, tier-1, cloud-scale services.

From a Microsoft Azure standpoint, it is also important to know that “VM Scale Sets” (http://kristiannese.blogspot.no/2015/11/getting-started-with-vm-scale-sets-with.html) is the IaaS layer that enables these PaaS services (Azure Container Service + Service Fabric).
Also, as part of Windows Server 2016 Technical Preview 4, we will be able to leverage Nano Server for containers too, so you can get the optimal experience for your born-in-the-cloud applications.

So, that was me trying to put things into context and why I spent some time that day to have a workshop on Containers using Azure.

Getting started with Containers in Microsoft Azure

The material I used for this workshop can be found in this public GitHub repo: https://github.com/krnese/AzureDeploy/tree/master/AzureContainer


I created an ARM template that will:

·         Create a new storage account
·         Create a new Network Security Group
o   Create a new vNet and associate the new subnet with the NSG
·         Create a new network interface
o   Associate the vNic with a public IP address
o   Associate the vNic with the vNet
·         Create a new virtual machine
o   Associate the VM with the storage account
o   Associate the VM with the network interface
o   Use a Custom Script Extension that will create x amount of Windows Server Containers based on the (count) parameter input – see the sketch below
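
The core of that Custom Script Extension could, as a simplified sketch with the TP4 cmdlets (an illustration rather than the exact script; the container names and switch name are placeholders), look like this:

param(
    [int]$count = 2  # number of containers to create, mapped from the template's (count) parameter
)

# Create and start the requested number of Windows Server Containers
1..$count | ForEach-Object {
    New-Container -Name "democontainer$_" -ContainerImageName "WindowsServerCore" -SwitchName "Virtual Switch"
    Start-Container -Name "democontainer$_"
}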
 
If you deploy this from GitHub and follow the ps1 examples, you should be able to simulate the life-cycle of containers in Windows Server 2016 TP4.