
Containers and Nano Server

Contents

* Understanding application containers
* Containers and Nano Server
* Windows Server containers versus Hyper-V containers
* Docker and Kubernetes
* Working with containers
1. Understanding application containers

* What does it mean to contain an application?


* We have a pretty good concept these days of containing servers, by means of virtualization. Taking physical hardware, turning it into a virtualization host, like Hyper-V, and then running many virtual machines on top of it is a form of containment for those VMs.
* We are essentially tricking them into believing that they
are their own entity, completely unaware that they are
sharing resources and hardware with other VMs running on
that host.
1. Understanding application containers

* Application containers are the same idea, at a different level.
* Where VMs are all about virtualizing hardware, containers are more like virtualizing the operating system.
* Rather than creating VMs to host our applications, we
can create containers, which are much smaller.
* We then run applications inside those containers, and the
applications are tricked into thinking that they are running
on top of a dedicated instance of the operating system.
1. Understanding application containers
* A huge advantage to using containers is the unity that they bring between the development and operations teams.
* We hear the term DevOps all the time these days, which is a combination of development and operations processes, intended to make the entire application-rollout process more efficient.
* The utilization of containers is going to have a huge impact on the DevOps mentality, since developers can now do their job (develop applications) without needing to accommodate for the operations and infrastructure side of things.
* When the application is built, operations can take the container within which the application resides, and simply spin it up inside their container infrastructure, without any worries that the application is going to break servers or have compatibility problems.
1. Understanding application containers

* Let's discuss a few particular benefits that containers bring to the table.
1.1. Sharing resources

* Just like when we are talking about hardware being split up among VMs, application containers mean that we are taking physical chunks of hardware and dividing them up among containers.
* This allows us to run many containers from the same server — whether a physical or virtual server.
1.1. Sharing resources

* Where we really start to see benefits in using containers rather than separate VMs for all of our applications is that all of our containers can share the same base operating system.
* Not only are they spun up from the same base set, which makes it extremely fast to bring new containers online, it also means that they are sharing the same kernel resources.
* Every instance of an operating system has its own set of user
processes, and often it is tricky business to run multiple
applications together on servers because those applications
traditionally have access to the same set of processes, and
have the potential to be negatively affected by those
processes.
1.1. Sharing resources

* The kernel in Windows Server 2019 has been enhanced so that it can handle multiple copies of the user mode processes.
* This means you not only have the ability to run instances
of the same application over many different servers, but it
also means that you can run many different applications,
even if they don't typically like to coexist, on the same
server.
1.2. Isolation

* One of the huge benefits of application containers is that developers can build their applications within a container running on their own workstation!
* When built within this container sandbox, developers will know that their application contains all of the parts, pieces, and dependencies that it needs in order to run properly, and that it runs in a way that doesn't require extra components from the underlying operating system.
* This means the developer can build the application, make sure
it works in their local environment, and then easily slide
that application container over to the hosting servers where
it will be spun up and ready for production use.
1.2. Isolation

* The other aspect of isolation is security.


* This is the same story as multiple virtual machines running on the same host, particularly in a cloud environment. You want security boundaries to exist between those machines; in fact, most of the time you don't want them to be aware of each other in any way.
* You even want isolation and segregation between the
virtual machines and the host operating system, because you
sure don't want your public cloud service provider snooping
around inside your VMs.
* The same idea applies with application containers.
1.2. Isolation

* The processes running inside a container are not visible to the hosting operating system, even though you are consuming resources from that operating system.
* Containers maintain two different forms of isolation:
= There is namespace isolation, which means the containers are confined
to their own filesystem and registry.
= Then there is also resource isolation, meaning that we can define what
specific hardware resources are available to the different containers,
and they are not able to steal from each other.
* Shortly, we will discuss two different categories of containers,
Windows Server Containers and Hyper-V Containers. These two types of
containers handle isolation in different ways, so stay tuned for
more info on that topic.
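* As a concrete sketch of the resource isolation just described, the Docker CLI lets you cap what each container may consume at the moment you start it. The image name and the limit values below are purely illustrative assumptions:

    # Cap one container at 2 CPUs and 2 GB of RAM
    docker run -d --name web1 --cpus 2 --memory 2g mcr.microsoft.com/windows/servercore/iis
    # A second container on the same host gets its own, smaller allowance
    docker run -d --name web2 --cpus 1 --memory 1g mcr.microsoft.com/windows/servercore/iis
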
1.2. Isolation

* We know that containers share resources and are spun up from the
same base image, while still keeping their processes separated
so that the underlying operating system can't negatively affect
the application and also so that the application can't tank the
host operating system.
* But how is the isolation handled from a networking aspect?
* Well, application containers utilize technology from the Hyper-V virtual switch in order to keep everything straight on the networking side.
* In fact, as you start to use containers, you will quickly
see that each container has a unique IP address assigned
to it in order to maintain isolation at this level.
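* Once Docker is installed (covered later in this chapter), you can see that unique address for yourself; the container name web1 here is a hypothetical example:

    # List the container networks Docker created; Windows uses a NAT network by default
    docker network ls
    # Show the IP address assigned to a running container named web1 on the default nat network
    docker inspect -f "{{.NetworkSettings.Networks.nat.IPAddress}}" web1
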
1.3. Scalability

* Think about a web application that you host whose use might fluctuate greatly from day to day.
* Providing enough resources to sustain this application
during the busy times has traditionally meant that we are
overpaying for compute resources when that application is
not being heavily used.
* Cloud technologies are providing dynamic scaling for
these modern kinds of applications, but they are doing so
often by spinning up or down entire virtual machines.
1.3. Scalability
* There are three common struggles with dynamically scaling
applications like this.
= First is the time that it takes to produce additional virtual
machines; even if that process is automated, your application may be
overwhelmed for a period of time while additional resources are
brought online.
= Our second challenge is the struggle that the developer needs to go through in order to make that application so agnostic that it doesn't care if there are inconsistencies between the different machines upon which their application might be running.
= Third is cost. Not only the hardware cost, as each new VM coming online will be consuming an entire set of kernel resources, but monetary costs as well. Spinning virtual machines up and down in your cloud environment can quickly get expensive.
* These are all hurdles that do not exist when you utilize
containers as your method for deploying applications.
1.3. Scalability
Since application containers are using the same underlying
kernel, and the same base image, their time to live is extremely
fast. New containers can be spun up or down very quickly, and in
batches, without having to wait for the boot and kernel mode
processes to start.
Also, since we have provided the developer this isolated
container structure within which to build the application, we
know that our application is going to be able to run
successfully anywhere that we spin up one of these containers.
No more worries about whether or not the new VM that is coming
online is going to be standardized correctly, because containers
for a particular application are always the same, and contain
all of the important dependencies that the application needs,
right inside that container.
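* As a rough illustration of that speed, a short PowerShell loop is enough to bring a batch of containers online or tear them back down again; the mywebapp image name is a placeholder for whatever application image you have built:

    # Spin up five additional instances of a (hypothetical) web application image
    1..5 | ForEach-Object { docker run -d --name "webapp$_" mywebapp:latest }
    # Remove them just as quickly once demand drops off
    1..5 | ForEach-Object { docker rm -f "webapp$_" }
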
2. Containers and Nano Server

* Before discussing the purpose that Nano Server now serves, let's take a quick look at the structure of a Windows-based container. Here is a graphic borrowed from a public slide deck that was part of a Microsoft Ignite presentation:

[Figure: container layers, from top to bottom - application (e.g., eShop), customization (e.g., IIS, PowerShell), and base OS (Windows Server Core or Nano Server)]
2. Containers and Nano Server

* The lowest layer of a container is the base operating system. When spinning up a container, you need a base set of code and kernel from which to build upon. This base operating system can be either Server Core or Nano Server.
* The next layer of a container is the customization layer. This
is where the technologies that will ultimately be used by your
application reside. For example, our containers may include IIS
for hosting a website, PowerShell, or even something such
as .NET. Any of these toolsets reside in this layer.
* Finally, the top slice of the container cake is the application
layer. This, of course, is the specific app that you plan to
host inside this container, which your users are accessing.
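* Those three slices map naturally onto a Dockerfile. The sketch below is only an assumption-laden example (the base tag, the installed feature, and the application files are all placeholders), but it shows the same layering from bottom to top:

    # Base operating system layer
    FROM mcr.microsoft.com/windows/servercore:ltsc2019
    # Customization layer: add the tooling the application needs (IIS in this example)
    RUN powershell -Command Install-WindowsFeature Web-Server
    # Application layer: copy in the (hypothetical) website files
    COPY site/ C:/inetpub/wwwroot/
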
2. Containers and Nano Server

* While Server Core is a great operating system for building small and efficient servers, it is still a heavyweight compared to Nano Server.
* Nano is so incredibly different, and so incredibly tiny,
that it really isn't a comparison. You probably remember
earlier where we installed Server Core and came out with a
hard drive size of around 6 GB.
While that is much smaller than a Desktop Experience
version of Windows Server, think about this. A Nano Server
base image can be less than 500 MB!
2. Containers and Nano Server

* Additionally, updates to Nano Server are expected to be few and far between. This means you won't have to deal with monthly patching and updates on your application containers.
* In fact, since containers include all that they need in order to run the applications hosted on them, it is generally expected that when you need to update a container, you'll just go ahead and build out a new container image, rather than update existing ones.
* If Nano Server gets an update, you'll build out a new container, install and test the application on it, and roll it out.
* Need to make some changes to the application itself? Rather than figuring out how to update the existing container image, it's quick and easy to build out a new one, test it outside of your production environment, and once it is ready, simply start deploying and spinning up the new container image into production, letting the old version fall away.
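* Expressed as Docker commands, that replace-rather-than-patch workflow looks roughly like this; the image and container names are hypothetical:

    # Build a fresh image containing the updated base and application, tagged v2
    docker build -t mywebapp:v2 .
    # Retire the old container and start its replacement from the new image
    docker rm -f webapp-v1
    docker run -d --name webapp-v2 mywebapp:v2
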
2. Containers and Nano Server

* Nano Server is now only used as a base operating system for containers.
* This is a major change since the release of Server
2016, when the scope that Nano was expected to provide
was much larger.
* If you are utilizing Nano Server for workloads outside of
container images, you need to start working on moving
those workloads into more traditional servers, such as
Server Core.
2. Containers and Nano Server

* You may be wondering, "Why would anybody use Server Core as the base for a container image, if Nano Server is available?"
* The easiest answer to that question is application compatibility. Nano Server is incredibly small, and as such it is obviously lacking much of the code that exists inside Server Core. When you start looking into utilizing containers to host your applications, it's a great idea to utilize the smaller Nano Server as a base if possible, but often your applications simply won't be able to run on that platform, and in these cases, you will be using Server Core as the base operating system.
3. Windows Server containers versus Hyper-V
containers

* When spinning up your containers, it is important to know that there are two categories of containers that you can run in Windows Server 2019.
* All aspects of application containers that we have been talking
about so far apply to either Windows Server containers or to
Hyper-V containers.
* Like Windows Server Containers, Hyper-V Containers can run the
same code or images inside of them, while keeping their
strong isolation guarantees to make sure the important stuff
stays separated.
* The decision between using Windows Server Containers or
Hyper-V Containers will likely boil down to what level of
security you need your containers to maintain. Let's discuss
the differences between the two so that you can better
understand the choice you are facing.
3.1. Windows Server Containers

* In the same way that Linux containers share the host operating system kernel files, Windows Server Containers make use of this sharing in order to make the containers efficient.
* What this means, however, is that while namespace,
filesystem, and network isolation is in place to keep the
containers separated from each other, there is some
potential for vulnerability between the different Windows
Server Containers running on a host server.
* For example, if you were to log into the host operating
system on
your container server, you would be able to see the running
processes of each container.
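* One way to observe this, assuming Docker is installed and a (hypothetical) container named web1 hosts IIS, whose worker process is w3wp:

    # Processes as seen from the container's point of view
    docker top web1
    # On the host, the same worker process shows up in the ordinary process list
    Get-Process -Name w3wp -ErrorAction SilentlyContinue
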
3.1. Windows Server Containers

* Windows Server Containers are going to be most useful in circumstances where your container host server and the containers themselves are within the same trust boundary.
* In most cases, this means that Windows Server Containers are
going to be most useful for servers that are company-owned,
and only run containers that are owned and trusted by the
company.
* If you trust both your host server and your containers,
and are okay with those entities trusting each other,
deploying regular Windows Server Containers is the most
efficient use of your hardware resources.
3.2. Hyper-V Containers

* If you're looking for an increased amount of isolation and stronger boundaries, that is where you will foray into Hyper-V Containers.
* Hyper-V Containers are more like a superoptimized version
of a virtual machine. While kernel resources are still
shared by Hyper-V Containers, so they are still much more
performant than full virtual machines, each Hyper-V
Container gets its own dedicated Windows shell within
which a single container can run.
* This means you have isolation between Hyper-V Containers
that is more on par with isolation between VMs, and yet are
still able to spin up new containers at will and very
quickly because the container infrastructure is still in
place underneath.
3.2. Hyper-V Containers

* Hyper-V Containers are going to be more useful in multi-tenant infrastructures, where you want to make sure no code or activity is able to be leaked between the container and host, or between two different containers that might be owned by different entities.
* Earlier, we discussed how the host operating system can
see into the processes running within a Windows Server
Container, but this is not
the case with Hyper-V Containers.
* The host operating system is completely unaware of, and
unable to tap into, those services that are running
within the Hyper-V Containers themselves. These processes
are now invisible.
3.2. Hyper-V Containers

* The availability of Hyper-V Containers means that even if you have an application that must be strongly isolated, you no longer need to dedicate a full Hyper-V VM to this application.
* You can now spin up a Hyper-V Container, run the application in
that container, and have full isolation for the application,
while continuing to share resources and provide a better,
more scalable experience for that application.
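* In practice, choosing between the two container types is simply an option on docker run. The --isolation flag is a real Docker-on-Windows switch, while the base image tag and the keep-alive command here are just illustrative:

    # Windows Server Container: shares the host kernel (process isolation)
    docker run -d --isolation=process mcr.microsoft.com/windows/servercore:ltsc2019 ping -t localhost
    # Hyper-V Container: same image, but wrapped in its own lightweight utility VM
    docker run -d --isolation=hyperv mcr.microsoft.com/windows/servercore:ltsc2019 ping -t localhost
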
4. Docker and Kubernetes

* Docker is an open source project — a toolset, really — that was originally designed to assist with the running of containers on Linux operating systems.
* In Server 2016, Microsoft took some steps to start reinventing
the container wheel, with the inclusion of PowerShell cmdlets that
could be used to spin up and control containers running on your
Windows Server, but the Docker platform has been growing at such
a fast rate that Microsoft really now expects that anyone who
wants to run containers on their Windows machines is going to do
so via the Docker toolset.
* If you want to utilize or even test containers in your
environment, you'll need to get Docker for Windows in order
to get started.
4. Docker and Kubernetes

* Docker is a container platform. This means that it provides the commands and tools needed to download, create, package, distribute, and run containers.
* Docker for Windows is fully supported to run on both
Windows 10 and on Windows Server 2019.
* By installing Docker for Windows, you acquire all of the
tools needed to begin using containers to enhance your
application isolation and scalability.
4. Docker and Kubernetes

* Developers have the ability to use Docker to create an environment on their local workstation that mirrors a live server environment, so that they can develop applications within containers and be assured that they will actually run once those applications are moved to the server.
* Docker is the platform that provides pack, ship, and run
capabilities for your developers. Once finished with development,
the container package can be handed over to the system
administrator, who spins up the container(s) that will be running
the application, and deploys it accordingly.
* The developer doesn't know or care about the container host
infrastructure, and the admin doesn't know or care about the
development process or compatibility with their servers.
4.1. Linux containers

* Earlier, in Server 2016, a Windows container host server could only run Windows-based containers, because Windows Server Containers share the kernel with the host operating system, so there was no way that you could spin up a Linux container on a Windows host.
* Times are a-changing, and we now have some creative new capabilities in Server 2019 to handle scenarios such as Linux containers.
* While these options are still being polished, there are some new features, called Moby VM and LCOW, that are going to enable Linux containers to run side by side with Windows containers on a Windows Server container host!
4.2. Docker Hub

* When you work with containers, you are building container images that are usable on any server instance running the same host operating system — that is the essence of what containers enable you to do.
* When you spin up new instances of containers, you are just
pulling new copies of that exact image, which is all-inclusive.
This kind of standardized imaging mentality lends well to a
shared community of images, a repository of images that people
have built that might benefit others.
* Docker is open source, after all. Does such a sharing resource
exist, one you can visit in order to grab container image files
for testing, or even to upload images that you have created and
share them with the world? Absolutely! It is called Docker Hub,
and is available at https://hub.docker.com.
4.2. Docker Hub

* Visit this site and create a login, and you immediately have access to thousands of container base images that the community has created and uploaded.
* This can be a quick way to get a lab up and running with
containers, and many of these container images could even
be used for production systems, running the applications
that the folks here have pre-installed for you inside these
container images.
* Or you can use Docker Hub to upload and store your own container images.
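* Uploading an image of your own is a short, three-step affair; the account name mydockerid and the image name are placeholders:

    # Authenticate against Docker Hub with your own account
    docker login
    # Tag a local image with your Docker Hub namespace
    docker tag mywebapp:v2 mydockerid/mywebapp:v2
    # Push it up so it can be pulled from anywhere
    docker push mydockerid/mywebapp:v2
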
4.2. Docker Hub

[Screenshot: Docker Hub welcome page - "Here are a few things to get you started": Create a Repository (push container images to Docker Hub) and Create an Organization (manage Docker Hub repositories with your team)]
4.3. Docker Trusted Registry

* Docker Trusted Registry is a container image repository system, similar to Docker Hub, but it's something that you can contain within your network, behind your own firewalls and security systems.
* This gives you a container image-repository system without the risk of sharing sensitive information with the rest of the world.
4.3. Docker Trusted Registry

Docker Trusted Registry

[Screenshot: Docker Trusted Registry - allows you to store and manage your Docker images on-premises]
4.4. Kubernetes

* Kubernetes is a container-orchestration solution. This means that Kubernetes orchestrates, or facilitates, how the containers run.
* It is the tool that enables many containers to run
together, in harmony, as if they were one big
application.
* If you intend to use containers to create scaling applications that have the ability to spin up new containers whenever additional resources are needed, you will absolutely need to have a container orchestrator, and Kubernetes is currently the best and most popular.
4.4. Kubernetes

* Microsoft recognized this popularity, and has taken steps to ensure that Kubernetes is fully supported on Windows Server 2019.
4.4. Kubernetes

* As with any software, Kubernetes is not the only name in the game.
* In fact, Docker has its own orchestration platform, called
Docker Swarm. While it might make sense that Docker and
Docker Swarm would work together better than Docker and any
other orchestrator, the numbers don't lie.
* A recent report shows that 82% of companies using scaling applications in the cloud are utilizing Kubernetes for their container orchestration.
5. Working with containers

* There are a lot of moving pieces that work together to make containers a reality in your environment, but it's really not too difficult to get started.
* Let's walk through the initial setup of turning Windows
Server 2019 into a container-running mega machine.
5.1. Installing the role and feature

* The amount of work that you need to accomplish here depends on whether you want to run Windows Server Containers, Hyper-V Containers, or both.
* The primary feature that you need to make sure that you install is Containers, which can be installed by using either the Add roles and features link from inside Server Manager, or by issuing the following PowerShell command:
* Add-WindowsFeature Containers
5.1. Installing the role and feature

* Additionally, if you intend to run Hyper-V Containers, you need to ensure that the underlying Hyper-V components are also installed onto your container host server.
* To do that, install the Hyper-V role and accompanying
management tools onto this same server.
* As indicated following the role and feature installation,
make sure to restart your server after these changes.
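* If you prefer PowerShell for the whole job, one line from an elevated prompt can install the Containers feature and the Hyper-V role together and then restart the server:

    # Install Containers plus Hyper-V (with management tools), then reboot
    Install-WindowsFeature -Name Containers, Hyper-V -IncludeManagementTools -Restart
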
5.1. Installing the role and feature

* It makes sense that companies would want to deploy containers, but they may also want their container host servers to be VMs, with multiple containers being run within that VM.
* Therefore, nested virtualization was required to make this
possible. If you are running a Windows Server 2019 physical
hypervisor server, and a Windows Server 2019 virtual machine
inside that server, you will now find that you are able to
successfully install the Hyper-V role right onto that VM.
* I told you virtual machines were popular, so much so that they are now being used to run other virtual machines!
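* Nested virtualization is switched on from the physical Hyper-V host while the VM that will become your container host is powered off; the VM name below is a placeholder:

    # Expose the physical CPU's virtualization extensions to the guest VM
    Set-VMProcessor -VMName "ContainerHostVM" -ExposeVirtualizationExtensions $true
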
5.2. Installing Docker for Windows

* Now that our container host server is prepped with the necessary Windows components, we need to grab Docker for Windows from the internet.
* The Docker interface is going to provide us with all of
the commands that are needed in order to start building and
interacting with our containers.
5.2. Installing Docker for Windows

* This is the point where that Docker Hub login becomes important.
* If you are working through this in order to test containers on your own workstation and need to install Docker Desktop for Windows on your Win10 client, the easiest way is to visit Docker Hub, log in, and search for the Docker client software.
* Here is a link to that software (this is the tool you need
to use if you are installing on Windows 10):
https://hub.docker.com/editions/community/docker-ce-desktop-
windows.
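* For a Windows Server 2019 container host (rather than a Windows 10 workstation), the Docker engine has typically been installed from the PowerShell Gallery instead of via Docker Desktop. A sketch of that approach, assuming the DockerMsftProvider module is still published under that name:

    # Install the Microsoft-maintained Docker provider module from the PowerShell Gallery
    Install-Module -Name DockerMsftProvider -Repository PSGallery -Force
    # Use the provider to install the Docker engine package onto the server
    Install-Package -Name docker -ProviderName DockerMsftProvider -Force
    # Reboot so the Containers feature and the Docker engine come up cleanly together
    Restart-Computer -Force
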
5.2. Installing Docker for Windows

* After package installation has finished, Docker is now configured on your server as a service, but that service needs to be started with the following command:
* Start-Service docker
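* A quick sanity check at this point is to ask the engine for its version details; if both client and server information come back, Docker is ready to accept commands:

    # Confirm the service is running and the engine responds
    Get-Service docker
    docker version
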
5.2. Installing Docker for Windows

Docker Commands
* Once Docker is installed on your system, whether you are
working with a Windows Server 2019 or a Windows 10 machine,
you now have the Docker Engine running on your machine and
it is ready to accept some commands in order to begin
working with containers.
* If there is a single word to remember when it comes to
working with containers, it is Docker.
* That is because every command that you issue to interact with containers will begin with the word docker. Let's have a look at some of the common commands that you will be working with.
5.2. Installing Docker for Windows

Docker Commands
* docker --help
* The help function for Docker will generate a list of the
possible docker commands that are available to run. This is
a good reference point as you get started.
5.2. Installing Docker for Windows

Docker Commands
* docker images
* After downloading some container images from a repository (we will do this for ourselves in the next section of this chapter), you can use the docker images command to view all of the images that are available on your local system.
5.2. Installing Docker for Windows

Docker Commands
* docker search
* Utilizing the search function allows you to search the container repositories (such as Docker Hub) for base container images that you might want to utilize in your environment.
* For example, in order to search and find images provided from inside Microsoft's Docker Hub repository, issue the following:
* docker search microsoft
5.2. Installing Docker for Windows
Docker Commands
* docker pull
* We can use docker pull to pull down container images from online repositories.
* There are multiple repositories from which you can get container images. Most often, you will be working with images from Docker Hub, which is where we will pull a container image from shortly.
* However, there are other online repositories from which you can get container images, such as Microsoft's public container registry, known as MCR.
5.2. Installing Docker for Windows

Docker Commands
* Here are some sample docker pull commands showing how to pull container images from Docker Hub, as well as MCR:
* docker pull microsoft/nanoserver
* docker pull microsoft/windowsservercore
* docker image pull mcr.microsoft.com/windows/servercore:1809
* docker image pull mcr.microsoft.com/windows/nanoserver:1809
5.2. Installing Docker for Windows

Docker Commands
* docker run
* This is the command for starting a new container from a base
image.
* You will find that you can retain multiple container
images in your local repository that are all based off
the same container image.
5.2. Installing Docker for Windows
Docker Commands
* docker run
* You may have numerous container images that are all named windowsservercore, for example.
* In this case, container tags become very important, as tags help you to distinguish between different versions of those container images. As an example, here is a command that would start a container based on a windowsservercore image with which I had associated the ltsc2019 tag:
* docker run -it --rm microsoft/windowsservercore:ltsc2019
5.2. Installing Docker for Windows

Docker Commands
* docker ps -a
* You utilize docker ps when you want to view the containers that are currently running on your system; adding the -a flag also lists containers that have stopped.
5.2. Installing Docker for Windows

Docker Commands
* docker info
* This will summarize your Docker environment, including the
number of containers that you have running and additional
information about the host platform itself.
5.3. Downloading a container image

* The first command we will run on our newly-created container host is docker images, which shows us all of the container images that currently reside on our system; there are none:

[Screenshot: docker images output in an Administrator PowerShell window - the list is empty]
5.3. Downloading a container image

* First, we can use docker search to check the current container images that reside inside Microsoft's Docker Hub repository. Once we find the image that we want to download, we use docker pull to download it onto our server:
* docker search microsoft
* docker image pull microsoft/nanoserver
5.3. Downloading a container image

* The preceding command downloaded a copy of the standard Nano Server base image, but we want to make our container do something in the end, so here is a command that will download that .NET sample image as well:
* docker image pull microsoft/dotnet-samples:dotnetapp-nanoserver-1809

* After the downloads are finished, running docker images once again now shows us the newly-available Nano Server container image, as well as the .NET sample image.

[Screenshot: docker images output in an Administrator PowerShell window listing the nanoserver and dotnet-samples images]
5.4. Running a container

* We are so close to having a container running on our host!


* Now that we have installed the service, implemented Docker, imported the Docker module into our PowerShell prompt, and downloaded a base container image, we can finally issue a command to launch a container from that image.
* Let's run the .NET container that we downloaded before:
* docker run microsoft/dotnet-samples:dotnetapp-nanoserver-1809
5.4. Running a container

* The container starts and runs through its included code, and we see some fun output.
5.4. Running a container

* This container showcases that all components necessary for this .NET application to run are included inside the container.
* This container is based on Nano Server, which means it has an incredibly small footprint. In fact, looking back a few pages at the last docker images command that we ran, I can see that this container image is only 417 MB! What a resource saving, when compared with running this application on a traditional IIS web server.
