
An Introduction to Containers and Orchestration for IT Admins
In this e-guide:
- What containers are and how they work
- Get all those containers under wraps
- Details of container operations

Containers are an IT workload hosting option increasingly tapped for major production deployments. Often pushed upon the masses by developers or pilot DevOps teams, containers represent significant deployment, management and security changes for enterprise IT operations.

Arguably, containers are nothing new: they are based on resource partitioning technology from the 1960s. Many IT professionals know containers by what they are not -- namely, VMs. Containers are OS-level virtualization instances, while VMs virtualize the hardware resources of hosts or clusters of hosts. The present form of containers is embodied by Docker, an application containerization technology, and LXC, a method of system containerization.

Over the course of a decade, container adoption has shot up, first on Linux and now also on Windows systems, and many enterprise IT organizations must now decide what technologies to deploy and what standards to enforce for container provisioning and support.

These expert tips provide an introduction to containers and container management technologies for IT operations staff and administrators. Once you have a grasp of the major points, dig into the common questions and concerns -- such as security -- covered in the last section.

What are containers, and how do they work?
Clive Longbottom, Independent Commentator and ITC Industry Analyst

Containers offer an attractive alternative to physical servers and VMs, which has prompted many IT organizations to consider them for application provisioning. But what are containers, and how do they interact with the application and underlying infrastructure?

A basic physical application installation needs a server, storage, network equipment and other physical hardware on which an OS is installed. A software stack -- an application server, a database and more -- enables the application to run. An organization must either provision resources for its maximum workload and potential outages -- and suffer significant waste outside those times -- or, if provisioned resources are set for average workload, expect traffic peaks to lead to performance issues.

VMs get around some of these problems. A VM creates a logical system that sits atop the
physical platform (see Figure 1). A Type 1 hypervisor, such as VMware ESXi or Microsoft
Hyper-V, provides each VM with virtual hardware. The VM runs a guest OS, and the
application software stack interprets everything below it the same as a physical stack.
Virtualization utilizes resources better than physical setups, but the separate OS for each VM
creates significant redundancy in base functionality.


Containers provide greater flexibility than virtual and physical hardware stacks. A basic application container environment, as seen in Figure 2, runs on physical -- or virtual and physical -- hardware, a host OS and a container virtualization layer directly on the OS. Containers share the OS and its functions instead of running individual OS instances, which greatly reduces the resources required per application. Docker, Rkt (a container runtime from CoreOS, which Red Hat acquired), Linux Containers and Windows Server Containers all operate in a generally similar manner.
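To make the application container idea concrete, here is a minimal Dockerfile sketch. The base image, file names and app are hypothetical, not from the article; the point is that the image packages only the application and its libraries, while the kernel comes from the shared host OS:

```dockerfile
# Hypothetical example: package a small Python web app as a container image.
# The image carries the app and its dependencies -- not a kernel; the kernel
# is shared with the host OS, which is what keeps containers lightweight.
FROM python:3.11-slim

WORKDIR /app

# Install only the application's own dependencies into the image.
COPY requirements.txt .
RUN pip install --no-cache-dir -r requirements.txt

# Add the application code itself.
COPY app.py .

# The container runs a single process -- the application.
CMD ["python", "app.py"]
```

Built with `docker build -t myapp .` and started with `docker run myapp`, each running instance shares the host kernel rather than booting its own OS -- which is why one host can typically run far more containers than VMs.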

The benefits have downsides


OS sharing led to problems in early containers. Code that required raised privilege at the OS level exposed businesses to the risk of malicious entities gaining access to, and attacking, the underlying platform to bring down all the containers in the environment. Some organizations ran containers within VMs to combat the issue, but others argued that doing so defeated the point of containers. Modern container environments mitigate these security issues, but many organizations still host containers in VMs for security or management reasons.

The fact that all container applications must use the same underlying OS is a strength, as well as a weakness, of containerized applications. Every application container sharing a Linux OS, for example, must not only be based on Linux, but also on the same version -- and often the same patch level -- of that Linux distribution. That isn't always manageable in reality, as some applications have specific OS requirements.

Containerize the system

System containerization, demonstrated in Figure 3, resolves this tangle. System containers use the shared capabilities of the underlying OS. Where the application needs a certain patch level or functional library that the underlying platform lacks, a proxy namespace captures the call from the application and redirects it to the necessary code or library held within the container itself.

System containerization is available from Virtuozzo. Microsoft offers a similar approach to isolation via its Hyper-V containers.

What are containers enabling?

Applications have evolved from physical hardware through VMs to containerized environments. In turn, containerization is ushering in microservices architecture.

Microservices create single-function entities that offer services to a calling environment. For example, functions such as calendars, email and financial clearing systems can live in individual containers available in the cloud for any system that needs them, in contrast to a collection of disparate systems that each contain internal versions of these capabilities.

Performance benefits from hosting such functions in the cloud. Sharing the underlying
physical resources elastically with other functions minimizes the likelihood of hitting resource
limits.

Microservices also offer flexible, process-based methods to handle business needs in an application architecture. Rather than code that tries to guess at the business process and ends up constraining it, microservices create a composite application of dynamic functions pulled together in real time, which enables a business to respond to market forces more rapidly than monolithic applications can.


As shown in Figure 4, the container doesn't carry physical code around within it, but rather a list of required functions that it pulls together as required. The container manages areas such as technical contracts and process audits.

IT professionals can expect containers to continue evolving from here. Containers exist in a highly dynamic and changing world. They optimize resource utilization and provide much-needed flexibility better than their predecessors did, so enterprise IT organizations should prepare to use them.

Grasp container basics to plan enterprise adoption
Tom Nolle, President

Nobody wants to waste money on underutilized servers. Server power has grown enormously over just the last five years, which could easily exacerbate the problem of idle resources due to dedicated application-to-server relationships. The answer is everything from virtualization to the cloud to containers -- and they're all related.

Virtualization is the most foundational technology of modern IT operations. A virtual entity, such as a virtual server or private network, is an abstraction that represents physical components. It behaves like dedicated resources but actually comprises a portion of shared ones. But containers take virtualization beyond its origins, and enterprise IT shops should know container basics around isolation, portability and resource consumption when they add the technology to IT plans.

Virtualization and cloud computing


Before container basics comes container history. Physical hardware systems have an
inherent risk of inefficient use and the inherent benefit of application isolation. To address
inefficiency, the IT industry adopted multitasking systems, which run several applications at
once, but that simple form of resource sharing doesn't separate the applications enough. One
app can contaminate the performance of other apps if it behaves badly, and attackers may
even be able to breach security from one app to another.

This tradeoff between isolation and efficiency is inherent in virtualization because of shared resources. Perfect security and performance management requires physical isolation on bare metal. Highest efficiency calls for multitasking OSes. Virtualization options fall between these extremes.

Virtual machines (VMs) replicate the server, with a full OS and middleware. Hypervisor software manages and runs these VMs on physical resources. Because VMs are highly separated, they enable multiple users or applications to share a server, even if those workloads come from different organizations or even companies.

Most cloud computing services are based on VMs due to their isolation traits. Applications run
on VMs are largely unaffected by other workloads that share the physical server or cluster of
servers, and a VM can move from one server to another easily because the machine image
that runs in the VM carries everything necessary to run the application. This standardization
means an on-premises data center operator can mimic the setup of public clouds, such as
AWS or Microsoft Azure, and run the same machine images on premises with private cloud
software.

VMs set a standard for isolation, but they only go so far to improve efficiency over physical
hardware. In many implementations, all the VMs on a server run the same OS and
middleware, which is a lot of duplication. VMs also require the same configuration steps as
real servers, which means that they don't always reduce IT operations costs or tasks.


Container basics
Containers are more efficient but less independent than VMs. Containers enable portable
multitasking in IT hosting. The OS creates partitions of resources for each container, which
run applications or services. The OS is shared across all the containers, although middleware
is still packaged with the application. Still, one server usually can host twice as many
containers as VMs.

The downside of all this efficiency is weaker isolation between containerized applications and components; containers are not as secure as VMs. Container security is improving as the hosting technique evolves. There's also a greater risk of a containerized app affecting other apps by hogging resources, or of an app being written improperly for the deployment.

On public clouds, most containers run inside VMs or on bare-metal servers for improved isolation and lower overhead. As container technology improves, so too will container hosting in public cloud.

Uniform deployment in containers
While containers offer greater efficiency than VMs, the technology's value stems from the fact
that containers abstract an application deployment environment, including the application
network. Another key element of container basics is the presumptive deployment structure,
which is imposed -- and, therefore, can be relied upon -- by tools that deploy and redeploy
applications and components. This setup makes container management easier than that for
VMs and physical servers.

Containers bring the virtualization trend away from strict mimicry of a server and toward a new
hosting environment that is closely related to a multitasking slot in an OS. Concurrently,
development and IT organizations are deploying componentized software in a highly
structured framework complete with networking tools so that common orchestration and
lifecycle management techniques meet the requirements for container operations. In the
container management space, reduced complexity enables the higher efficiency of containers
to come with lower operations costs and errors.

Why is Docker's container approach so important?
Stephen Bigelow, Senior Technology Editor

Docker isn't container virtualization itself. Docker's container approach is an open source platform that can help administrators automate application deployment in isolation on a shared OS kernel.

Container isolation tools -- Docker included -- rely on a container layer implemented in Linux through components that include LXC (Linux Containers), libvirt or systemd-nspawn. Docker includes its own containerization library, libcontainer. Other container approaches include cgroups, Checkpoint/Restore in Userspace for Linux, and Kubernetes, which focuses on container orchestration and automation.

Docker's container platform has garnered so much attention across the industry because it provides a single tool that can effectively assemble and manage an application and all of its dependencies into a single package -- a container image, built from a Dockerfile -- that can run on any Linux server, or on Windows Server with Microsoft's container products. The way Docker packages the application enables it to run on premises, in a private cloud and in the public cloud. Containers also are generally less resource-demanding and faster to spin up than VMs. So, Docker provides enormous application flexibility and portability. These attributes have attracted the attention of so many enterprise adopters.

For example, Docker is integrated into major cloud platforms, such as AWS, Google Cloud Platform and Microsoft Azure. It works with leading cloud infrastructure tools, like Cloud Foundry Diego container management, OpenStack Nova provisioning and OpenSVC cluster and configuration management. It's also compatible with configuration automation tools, such as Chef and Puppet, and is integrated into Red Hat's OpenShift platform.

Containers need orchestration and automation. While a platform such as Docker can create and run container images, the sheer number of image files presents a potential nightmare for management. Tools like Kubernetes have evolved substantially to support Docker and other containers so that administrators can automate and manage complex environments.
Although Docker's container platform was
originally only focused on Linux environments,
Microsoft's Windows Server and Hyper-V containers natively run Docker Windows container
images. Projects such as the Open Container Initiative (OCI) aim to create a vendor-neutral
standard that supports multiple OSes. CoreOS Rkt, Apache Mesos and Amazon Elastic
Container Registry are among the projects that support OCI. The idea is to create a single
uniform container environment rather than create multiple competing -- and incompatible --
environments. A software developer should be able to package an application for containers
and know that it will run with Docker, Rkt from CoreOS or other projects, such as the Jetpack
runtime for FreeBSD, Cloud Foundry and Apcera's Kurma container environment. OCI v1.0.0,
released in July 2017, includes an image specification that defines how to create, assemble or
bundle a container image. The standard also includes a runtime specification that stipulates
how to unpack and run a container image file. A certification process is being developed to outline the process and requirements for OCI-based software for multiple OSes and environments.

An alternative to Docker's container approach

The Rkt platform, a competitor to Docker's container approach, appeared in late 2014 and gained some traction as an application container engine intended for cloud-native environments. Red Hat acquired CoreOS in early 2018. The Rkt approach is based on pods, which are collections of apps running in a shared environment, similar to Kubernetes' orchestration scheme. The Rkt platform can execute Docker and OCI containers. In March 2017, CoreOS and Docker proposed adding Rkt and the containerd engine to the Cloud Native Computing Foundation, enabling Rkt and containerd to garner the same attention as Kubernetes and other platforms.
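The pod concept is easiest to picture with a Kubernetes-style manifest. In this hypothetical sketch (names and images are illustrative), two containers are deployed as one shared unit and can reach each other over localhost:

```yaml
# Hypothetical pod manifest: two containers scheduled as one shared unit.
apiVersion: v1
kind: Pod
metadata:
  name: web-with-cache        # illustrative name
spec:
  containers:
    - name: web               # the main application container
      image: example/web:1.0  # hypothetical image
      ports:
        - containerPort: 8080
    - name: cache             # sidecar sharing the pod's network namespace
      image: redis:7          # the web container reaches it at localhost:6379
```

The pod, not the individual container, is the unit that gets scheduled, started and stopped together.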

Containers offer new opportunities for software developers and data center administrators,
but they also pose new challenges. The good news is that, while Docker's container approach
has caused great disruption, container technology is not exclusive. Containers are simply
another tool in the virtualization toolbox. They can coexist with current hypervisor-based
virtualization in the same environment, even stacking containers onto VMs, which gives
administrators and developers freedom to experiment with and embrace containers at a
comfortable pace as new application development and deployment tasks emerge.

Understand how Docker works in the VM-based IT world
Clive Longbottom, Independent Commentator and ITC Industry Analyst

Docker, a household name in IT, is still far from mainstream adoption, although it has gained a degree of traction in enterprises. With mounting acceptance, your organization can't avoid containers and stick to virtual machines in perpetuity.

In simplest terms, Docker is a means to package and provision code so that it can move across different parts of an IT platform. While it may seem unclear how Docker works, it's used for various reasons in enterprise IT. Application containerization optimizes hybrid cloud setups and provides a flexible and responsive IT platform.

Does that mean Docker is used for the same purposes as VMs? Yes and no. Docker operates differently, and that informs where it is used.

How Docker works in contrast to VMs


The basic VM holds everything necessary for the workload to run, such as the OS, app
server, application and any associated databases. That package can transition onto any
platform that supports the VM: VMware VMs operate on any platform that has an ESXi
hypervisor; Microsoft VMs work on any platform with Hyper-V.

A Docker container works differently. It holds only what the application requires to run above a platform. Docker containers are not used for hardware virtualization or complete application workload hosting. The container generally doesn't include the OS, nor do individual containers require an app server, provided the underlying platform has one installed.
Containers aren't the pinnacle of virtualization

Docker containers are not as portable as VMs are: Docker images are OS-dependent, and in some cases, the container might require a specific version and patch level of the OS, although hard versioning is a bad coding practice.

A long-standing user complaint about Docker was that a poorly written container could pass privileged calls from the container through to the underlying platform. Therefore, if a malicious entity hijacked the container, it could compromise the underlying platform and subsequently bring all Docker images to their knees. Later versions of Docker addressed this security issue.

However, these points prompt container newcomers to ask how Docker works if VMs are seemingly more platform-independent and more secure.

Containers are more efficient than VMs


The way in which Docker works gives it both obvious and subtle advantages over server
virtualization with VMs.

Docker containers require fewer resources, both physical and virtual, than comparable VMs.
Because the OS is external and shared among the containers, each instance requires
significantly less storage space, whereas each VM needs resources to run its own OS. For
every VM that runs on a given platform, Docker can run several containers.

The shared OS means that container maintenance can be easier than with VMs -- but this
isn't guaranteed. Admins must touch every single VM to implement an OS patch or upgrade,
but with Docker environments, they simply update the one underlying shared OS.

Maintenance is not so easy with high version- and patch-level sensitivity. In a VM, each isolated workload has its own OS at whatever version and with whatever patches it needs to function; in containers, the underlying platform can only support one OS. Some organizations embed containers inside VMs to circumvent this hurdle, but this isn't a best practice for long-term operations: It introduces unnecessary complexity, along with performance issues.

Additionally, Docker is used for increased granularity in application deployments. It isn't impossible to operate a microservices environment deployed on VMs, but as storage and resource constraints become apparent, organizations find only regret and a thinner wallet. Docker application containerization enables admins to create, deploy and link small functional pieces of code to provide a composite application that is still lightweight.
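A Docker Compose file is one common way to sketch such a composite application. In this hypothetical example (service and image names are illustrative), three small functional pieces -- a front end, an API and a database -- are linked into one lightweight deployment:

```yaml
# Hypothetical docker-compose.yml: three small pieces composed into one app.
version: "3.8"
services:
  frontend:
    image: example/frontend:1.0   # illustrative image
    ports:
      - "80:8080"                 # expose only the front end to the outside
    depends_on:
      - api
  api:
    image: example/api:1.0        # business logic as its own container
    environment:
      DB_HOST: db                 # services reach each other by service name
    depends_on:
      - db
  db:
    image: postgres:15
    volumes:
      - dbdata:/var/lib/postgresql/data   # persist data outside the container
volumes:
  dbdata:
```

Running `docker compose up` starts the three containers on a private network; each piece can be updated, replaced or scaled without rebuilding the rest.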
The way Docker works also gives it advantages over VMs for business continuity and disaster recovery efforts. New instances of a Docker container can be provisioned on different parts of the IT platform easily and quickly: A container that normally runs on premises can be shot into a public cloud environment and provisioned, or brought from cold storage to a live environment, more rapidly than a VM.

In some cases, VMs are still the right model. For example, a static application environment
where the capability to provision it to new hardware is of paramount importance suits VM-
based deployment. VMs won't disappear overnight. Generally speaking, however, Docker is
for the DevOps age: It can handle continuous development and delivery, and its
microservices focus makes it fit for future application architectures. Combine this with either
the Docker support ecosystem -- which is technically complex but competent -- or an
orchestration and management tool, like Kubernetes, and Docker is ready for prime time.

Ten lessons for enterprises deploying containers
Torsten Volk, Managing Research Director

Containers are an IT trend full of high expectations and operations concerns. Containers promise to enable applications to float through the data center and cloud, managed by DevOps teams, without much need for costly IT operations management anymore. But enough technologies have come and gone to keep an industry analyst attuned to the fact that someone deploying containers has to guarantee service-level agreements and ensure security, performance, data locality, availability and scalability.

When they engage in the container discussion and shape a strategy, enterprises should absorb these 10 lessons. The lessons result from a two-month-long research project, the Enterprise Management Associates (EMA) Top 3 Decision Guide. EMA surveyed developers, IT operations and line-of-business staff from 300 U.S.-based enterprises with 500 or more employees.

1. Container management trumps AI, DevOps in enterprises

Container management is the most important IT topic of 2018, far ahead of AI, DevOps pipeline automation and serverless, according to survey respondents. However, the same respondents predicted AI will become the most important topic in 2019.

2. Containers are no magic potion

There's plenty of enthusiasm -- even hype -- over containers, with 83% of respondents indicating that containers will replace VMs entirely within five years and 45% of that group believing this will happen on an accelerated timeline, within 24 months. However, these same study respondents identified a plethora of challenges with deploying containers that often destroy the business case. To attain success with containers, organizations must attack these challenges and learn in the process.

3. Rein in shadow containers

Container deployments not controlled by corporate IT -- so-called shadow containers -- affect 72% of enterprises, according to the survey. Shadow containers and shadow Kubernetes result from several factors. Containers are often seen not as infrastructure but as part of an application; business units argue that IT operations only needs to provide the infrastructure, while developers manage container platforms. Shadow containers stand in the way of corporate IT's ability to ensure service-level agreements (SLAs) and provide support when things go wrong. Therefore, the interface between containers and the hypervisor is the critical integration point between containers and corporate IT.


4. Containers need VMs -- for now


Half of EMA's survey respondents run containers on on-premises VMs, while 32% run on off-premises VMs, adding up to a large majority (85%) that rely on the more conventional abstraction technology. This is the case because container schedulers, such as Kubernetes and Docker swarm mode, are not infrastructure-aware. The deployment, therefore, needs a solid management foundation for compute, storage and networking, provided by conventional virtualization.

5. Container technology is all about integration

Integration with current data center technologies is the No. 1 requirement for enterprises selecting container technologies. Enterprises struggle to enable their current IT operations staff to control security, compliance, availability and performance across bare metal, hypervisors, containers and various other platforms in use.

6. Security and compliance complicate container adoption

Security and compliance are the No. 1 container management pain points, significantly ahead of performance, complexity of container deployment and reskilling staff for the technology. Half of respondents have no solid compliance plan in place, and 58% do not include policy staff in their ongoing DevOps and container management projects and tasks.

7. Apps don't stay the same in containerization

Only 16% of enterprises primarily deploy applications in containers with a lift-and-shift approach. Modification to the application is paired with containerization for 45% of respondents. Behind application modernization, 39% of respondents regard building new cloud-native applications as their key use case.

8. IT operations calls the shots

Corporate IT is the most influential group when it comes to container technology decisions, with software development groups in second place.

Nearly 40% of survey respondents said that corporate IT should make the decisions around deploying containers, while only 20% assigned that role to software developers.

Container decisions come from the ground up, the survey indicated, with the CIO, CTO and CFO falling behind IT and developers.

9. Containers as a service are popular

Amazon Elastic Container Service, Google Kubernetes Engine, Microsoft Azure Container Service and IBM Cloud Container Service are in a close race to host the most container workloads. Enterprises must understand how and when to integrate these containers-as-a-service offerings with traditional and containerized applications that run in the data center today.

10. Kubernetes doesn't stand alone


Kubernetes is an open source container orchestration and scheduling tool and the basis of a
wide array of commercial container management products. Ninety percent of enterprises
surveyed indicated that they require a commercial container management solution to
effectively manage Kubernetes, with 96% reporting that they use external professional
services to manage the container lifecycle.

Container orchestration tools ease distributed system complexity
Tom Nolle, President

Far more than half the businesses that use containers do so with Docker technology. But that's only half the story.

The majority of containers actually deployed for production workloads rely on more than just Docker. Container deployment isn't simple, no matter what tools you use. There is an inherent complexity in the process of creating containers and using them to deploy applications. This difficulty is compounded when there are a lot of applications involved or when containers are hosted on both cloud and on-premises infrastructure, in a cloud-bursting or failover scenario. Large data centers are more complicated than small ones.

IT teams that use basic Docker containerization must resolve complicated problems manually,
which eats up time and can introduce errors.

Kubernetes-based container orchestration tools


Automated container orchestration tools deploy and redeploy apps and handle failures.
They're the next step for organizations that require more than command line-based container
management via Docker.

Kubernetes is the most popular container orchestration technology. More containers deploy via a combination of Docker and Kubernetes than do via Docker alone. This is due in part to the fact that Docker-only container users typically run fewer applications than their orchestrating brethren.

Kubernetes organizes the relationship between applications and resources. The user defines clusters of resources available to host applications. Kubernetes simplifies assignment of containers to hosts, as well as how updated components of complex applications get replaced. It also enhances DevOps processes because it replaces common manual tasks in the IT environment with policy-driven automation and standardizes the way that components and applications integrate into complex workflows.
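As an illustration of that policy-driven model, a minimal Kubernetes Deployment manifest might look like the following sketch; the application name, image and replica count are hypothetical placeholders, not taken from any real deployment:

```yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: web-frontend              # hypothetical application name
spec:
  replicas: 3                     # Kubernetes keeps three copies running
  selector:
    matchLabels:
      app: web-frontend
  template:
    metadata:
      labels:
        app: web-frontend
    spec:
      containers:
      - name: web
        image: example/web-frontend:1.0   # hypothetical image
        ports:
        - containerPort: 8080
```

Applied with kubectl apply -f, a manifest like this is the declared policy: Kubernetes replaces failed or updated Pods on its own to keep the requested replica count running.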

Cloud benefits of container orchestration

Kubernetes is a fixture in large container deployments on private infrastructure, but it shines when combined with the cloud. A user can divide applications between the data center and public cloud and also between public cloud providers in multi-cloud deployments by properly defining Kubernetes clusters. Public cloud providers support Kubernetes within their cloud offerings, as a web service:

 Amazon Elastic Container Service for Kubernetes, called EKS;
 Google Kubernetes Engine, called GKE;
 Microsoft Azure Container Service, called AKS;
 IBM Cloud Container Service; and others.

The broad support for Kubernetes as a Docker-plus-container orchestration tool creates its own confusion, particularly with hybrid and multi-cloud. Should the IT team in charge of the application also set up and manage Kubernetes or use the cloud provider's version of the technology? Generally, large enterprises and teams that change cloud providers often prefer to manage Kubernetes in-house. These organizations can still use public cloud providers in their container strategy but should host Docker and Kubernetes on infrastructure-as-a-service VMs rather than adopt the cloud provider's managed Kubernetes service. The organization can use Kubernetes' capability for resource cluster management and integration to tie things together.

Kubernetes as a service is a good choice if most container deployment occurs in the public cloud or if there is a clear public/private boundary. For example, if front-end components, which are device- and GUI-centric, run in the cloud and back-end applications run in the data center, the managed Kubernetes service of the cloud provider of choice will work well for the organization, because it will orchestrate the two pieces of the applications independently. These application components should not fail over or burst between the public and private resource pools.

Container orchestration options

Docker container orchestration tool choices don't end with Kubernetes. Some container users create highly complex and dynamic applications with components that float among cloud providers and into and out of the data center. A second level of orchestration creates a universal virtual resource pool that's independent of the hosting provider or server technology. Under this model, the hosting resources all look the same and are therefore easier to manage. Tools at this level should be evaluated for cloud bursting, failover and event processing.

The Apache Mesos project and the commercialized Mesosphere DC/OS tool achieve this kind of container orchestration. Mesos and DC/OS create, essentially, a fully distributed OS kernel that spans every cloud and on-premises system on which it's run.

Don't be insecure

The most publicized reason to select containerization tools beyond Docker is security, but Docker security has improved considerably as the platform matured over the course of 2017. Unless you have exceptionally stringent security and compliance requirements for container deployment, Docker should fit the project. Otherwise, evaluate CoreOS rkt as the fundamental container software for the deployment. Orchestration tools used for Docker will also work with rkt.

Mesos and DC/OS can be paired with a kind of super-orchestration tool, Marathon, to create
sophisticated container deployment and operations. Marathon features target high availability.
A user can set deployment policies that limit where containers are hosted in order to meet
security and compliance goals. It also includes APIs so orchestration processes can integrate
with load balancers, management systems and other tools.

Despite the various advanced container orchestration tools available, not all deployments
need to go beyond the Docker platform. The majority of cases that require orchestration are
addressed by Kubernetes. As interest in containers and real production-level deployments
grow, the orchestration demands of users will evolve as well.

Apache Mesos better utilizes resources, improves scalability
Walker Rowe

Created by developers at the University of California, Berkeley and embraced by major enterprises, including Twitter, Apple and Netflix, Apache Mesos is open source software that abstracts storage, CPU and memory across a cluster of machines. One of the major draws of Apache Mesos is that it scales linearly, meaning that as the load level increases, response time increases proportionally -- essentially, scaling without limit. Mesos refers to itself as a "distributed systems kernel" because it takes the core principles of a Linux kernel, but applies them to a different level of abstraction.

Stop wasting VM resources


Data center administrators underutilize VMs all too often, causing organizations to waste
money on resources they aren't using. This issue is usually addressed through partitioning,
which sets aside a specific set of servers to run specific functions. In a traditional
environment, you would use partitioning to define a requirement, such as the need for a
specific number of servers, and then assign VMs and storage accordingly. In a public cloud --
where users are billed on a resource usage basis and don't have access to the cloud OS --
partitioning applies larger or smaller templates to the VM configuration.

While this method is effective enough, it's more practical to colocate services, which is where Apache Mesos enters the picture. Partitioning dedicates a machine to a specific task, such as a database server, and another to run, say, a web server. Colocating is more efficient, because it allows you to run more than one service on one VM or server. It also cuts down costs, because running more than one service on the same server reduces the number of servers you require. Rather than relying on partitioning to run services, Mesos uses colocation to allow the software to take resources on an as-needed basis. Put in technical terms, Mesos replaces whichever resource manager you're using with its own framework and implements scheduling and execution interfaces.

Mesos works with individual software, Docker containers and big data clusters that are
configured to use Mesos as the resource manager. Apache Mesos is not an orchestration
system for VMs. Mesos also uses Linux control groups, also known as cgroups, to limit
resources, prioritize processes and do accounting. This is useful in the public cloud because it
allows vendors to charge customers based on how many resources they use. Cgroups are
helpful in traditional environments as well, because they can limit processes so they don't take
over a machine.

Solve the problem of partitioning with Mesos and YARN

Apache Hadoop Yet Another Resource Negotiator (YARN), the resource manager for Apache Hadoop MapReduce, performs roughly the same function as Mesos. In fact, Myriad, an open source project, enables data centers to use both products at the same time. You would use Mesos and YARN together if, for example, you wanted to run container applications with Mesos, but use YARN to run Hadoop.
If we take a closer look at why Yahoo rewrote Hadoop to add YARN, we can get a better understanding of what both Mesos and YARN do.

Programming the framework for YARN is a complex task, one better suited for engineers of
large software products, like Apache Spark, than end users. The Hadoop configuration makes
it much easier to use YARN. In a clustered environment, you can simply edit a few
configuration files on the name node and then copy the entire Hadoop installation to the data
nodes -- YARN works without any further configuration changes needed.

The major problem with earlier releases of Hadoop was partitioning. With partitioning, you can
designate slots on a machine in a Hadoop cluster to run either map jobs or reduce jobs. Once
you've assigned a slot to run a map job, you can't use it to run a reduce job, and vice versa.
Suppose you've assigned 10 slots to run map jobs and 10 slots to reduce jobs to a machine in
a Hadoop cluster. Now, let's say Hadoop needs to run 11 map jobs -- you'll find yourself in a
bit of a bind because you haven't allocated enough slots for map jobs, and you can't use any
of the additional slots allotted for reduce jobs. This problem made it clear that there needed to
be a better way to colocate services and do away with partitioning. Apache responded by
making it so YARN and Mesos could dole out services.
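The arithmetic behind that bind is easy to sketch. This illustrative Python snippet (the numbers mirror the example above, and the functions are hypothetical, not YARN or Mesos APIs) shows why static slots strand capacity that a shared pool can use:

```python
# Static partitioning: slots are dedicated to one job type.
MAP_SLOTS, REDUCE_SLOTS = 10, 10

def runnable_with_partitioning(map_jobs, reduce_jobs):
    """Jobs that can start now when slots are dedicated by type."""
    return min(map_jobs, MAP_SLOTS) + min(reduce_jobs, REDUCE_SLOTS)

def runnable_with_shared_pool(map_jobs, reduce_jobs):
    """Jobs that can start now when all 20 slots are shared."""
    return min(map_jobs + reduce_jobs, MAP_SLOTS + REDUCE_SLOTS)

# 11 map jobs, 0 reduce jobs: one map job is stuck even though
# all 10 reduce slots sit idle.
print(runnable_with_partitioning(11, 0))  # 10
print(runnable_with_shared_pool(11, 0))   # 11
```

With a shared pool, the 11th map job simply takes one of the idle slots, which is the behavior YARN and Mesos provide.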

The Apache Mesos architecture

The Apache Mesos architecture consists of a master daemon, which manages the agent daemons running on each cluster node. The agent daemons also use cgroups to keep them working within their allocated memory, CPU and storage. Each of these agents uses a Mesos framework to run tasks. This framework is made up of two components: the scheduler, which registers with the master to receive resources, and the executor, which takes these resources from the scheduler and uses them to run the framework's tasks. Essentially, the executor notifies whichever application you're running that resources are available.
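The division of labor between scheduler and executor can be sketched as a toy model. This is illustrative Python only, not the actual Mesos framework API (which Mesos exposes over HTTP and native libraries); the class and task names are hypothetical:

```python
class ToyScheduler:
    """Registers with the master and accepts or declines resource offers."""
    def __init__(self, cpus_needed):
        self.cpus_needed = cpus_needed

    def resource_offer(self, offered_cpus):
        # Accept the offer only if it covers the task's needs;
        # declined resources go back to the master for other frameworks.
        return offered_cpus >= self.cpus_needed

class ToyExecutor:
    """Takes accepted resources and runs the framework's task."""
    def launch(self, task):
        return f"running {task}"

scheduler = ToyScheduler(cpus_needed=2)
executor = ToyExecutor()
if scheduler.resource_offer(offered_cpus=4):  # master offers 4 CPUs
    print(executor.launch("index-build"))     # running index-build
```

The key design point the sketch captures is that the master offers resources, the framework's scheduler decides, and the executor does the work on the agent.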
Mesos and container orchestration

Much like Kubernetes and Docker Swarm, Mesos also performs container orchestration. Mesos works with three types of container technologies: composing, which allows different container technologies to run together; Docker; and Mesos's own containerization, which is the default configuration.

Popular Mesos frameworks for container orchestration

Developed by Twitter to run stateless services, like Java VMs and web servers, Apache Aurora is a framework designed for both long-running and cron jobs. Chronos is an elastic distributed system that expresses dependencies between jobs. Written by Mesosphere, Marathon is a container orchestration system that can scale to thousands of physical servers. Aurora, Chronos and Marathon all interface with Mesos using JSON and a REST API.
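For a sense of what that JSON interface looks like, a Marathon application definition is a small JSON document posted to Marathon's REST API. The id, command and resource sizes below are hypothetical, illustrative values:

```json
{
  "id": "/web-frontend",
  "cmd": "python3 -m http.server 8080",
  "cpus": 0.5,
  "mem": 128,
  "instances": 2
}
```

Marathon then asks Mesos for offers that satisfy the cpus and mem figures and keeps the requested number of instances running.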

Major companies embrace Mesos

As you can imagine, an enterprise as large as Uber has some heavy-duty data processing needs. Uber uses the Apache Cassandra database -- a NoSQL column-oriented database -- to store location data. A column-oriented database writes one row/column combination at a time rather than writing an entire row of columns, so it wastes no space on empty columns. A column-oriented database also keeps columns together for rapid retrieval.
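A toy sketch of that storage layout helps: instead of storing whole rows, a column-oriented store can keep one map per column, so absent values simply never get written. This is illustrative Python with made-up data, not Cassandra's actual storage engine:

```python
# Row-oriented: every row carries every column, even the empty ones.
rows = [
    {"driver_id": 1, "lat": 40.7, "lon": -74.0, "note": None},
    {"driver_id": 2, "lat": 34.1, "lon": -118.2, "note": None},
]

# Column-oriented: one map per column, keyed by row ID. Empty values
# are simply absent, and each column's values sit together on disk.
columns = {
    "lat": {1: 40.7, 2: 34.1},
    "lon": {1: -74.0, 2: -118.2},
    # "note" holds nothing, so it costs nothing.
}

# Retrieving a whole column is a single lookup over adjacent values.
print(columns["lat"])  # {1: 40.7, 2: 34.1}
```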
In addition to relying on Mesos for tracking data, Uber also regularly contributes code to Mesos. In 2016, Uber wrote a framework for Apache Cassandra, which makes it easier to deploy Cassandra on DC/OS, an open source distribution for Apache Mesos. You can run Hadoop, Spark, Cassandra and more on top of DC/OS because it's extendable.

Netflix is also a major user of and contributor to Apache Mesos. Netflix says it runs on
Amazon Elastic Compute Cloud (EC2) and uses Mesos to deliver "fine-grained resource
allocation to tasks of various sizes that can be bin packed to a single EC2 instance." In 2015,
Netflix developed Fenzo, an open source scheduler for Apache Mesos frameworks. Fenzo
manages scheduling and resource assignment for deployments and adds cluster auto scaling
to Mesos.

Is Kubernetes free as open source software?
Stephen Bigelow, Senior Technology Editor

The question of cost often crops up around open source software -- especially when that software has been widely adopted and integrated into other products.

Kubernetes is an open source container orchestration and management tool managed by the vendor-neutral Cloud Native Computing Foundation. While tools such as Docker actually build and drive containers, tools like Kubernetes automate the deployment, scaling and management thereof.

So, is Kubernetes free?


Yes, but also no.

Pure open source Kubernetes is free and can be downloaded from its repository on GitHub.
Administrators must build and deploy the Kubernetes release to a local system or cluster or to
a system or cluster in a public cloud, such as AWS, Google Cloud Platform (GCP) or
Microsoft Azure.

While the pure Kubernetes distribution is free to download, there are always costs involved with open source software. Without professional support, Kubernetes adopters need to pay in-house staff for help or contract someone knowledgeable. The Kubernetes admin needs a detailed working knowledge of Kubernetes software build creation and deployment within a Linux environment.

In effect, users need to know what they're getting into before they adopt open source software in the enterprise.

When isn't Kubernetes free?

Kubernetes isn't just a do-it-yourself proposition and can be obtained from numerous other sources that aren't necessarily free. In most cases, Kubernetes is integrated into hosted cloud services because containers are well-suited to application deployments in the cloud.

For example, Kubernetes is integrated into Red Hat OpenShift, a container application platform built with default registry, networking and other setup options, along with automation and a service catalog to take away some of the complexity of container operations. The dedicated version, which provides a high availability version of OpenShift as a virtual private cloud, starts at $48,000 per year. Red Hat OpenShift Online supports up to 10 projects starting at $50 per month.

Paid Kubernetes options abound. VMware Pivotal Container Service supports Kubernetes
alongside its other container provisioning and management features. Platform9 delivers
Kubernetes as a service, supporting hybrid cloud across public clouds and local, on-premises
server infrastructure. IBM Cloud Kubernetes Service brings cluster management, container
security and isolation capabilities to container environments often deployed for other IBM
services, such as Watson, IoT and big data projects. The exact pricing for these services
requires a detailed quote directly from the specific vendor.

While it is possible to deploy Kubernetes in a public cloud instance without vendor support,
many public cloud providers have done this for users by providing Kubernetes as a public
cloud service. For example, Azure Kubernetes Service (AKS), Google Kubernetes Engine
(GKE) and Amazon Elastic Container Service for Kubernetes (EKS) all provide fully
managed Kubernetes container orchestration. AKS and GKE are free. Azure and Google
users pay only for the costs of compute, storage, monitoring and other services used to
architect the cloud application deployment. Amazon EKS currently charges $0.20 per hour for
each cluster, in addition to the cost of compute, storage, monitoring and other services used
in the AWS cloud.
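At the quoted rate, the cluster fee alone is easy to estimate; compute, storage and monitoring charges come on top and vary by deployment, so this back-of-envelope figure is only the fixed portion:

```python
# Amazon EKS cluster fee at the rate quoted above: $0.20 per cluster-hour.
rate_per_hour = 0.20
hours_per_month = 730            # common billing approximation (365*24/12)

monthly_cluster_fee = rate_per_hour * hours_per_month
print(f"${monthly_cluster_fee:.2f} per cluster per month")  # $146.00 per cluster per month
```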

Kubernetes Pods and Nodes map out container relationships for production
Tom Nolle, President

In increasingly abstracted compositions of applications and infrastructure, relationships are more important than ever. How a DevOps deployment abstracts both goals and resources is critical.

One of the strengths of container orchestration system Kubernetes is that it abstracts deployment with Nodes and Pods. These elements work in the context of clusters, Deployments and Services. Admins must understand the relationships between them to fully understand Kubernetes Pods and Nodes.

An application comprises a set of related components threaded together via workflows. Each
of these components is critical to the application and must be deployed on infrastructure that
matches its hosting and connection requirements. The application won't run when deployed
and can't be restored if its workflow-based component relationships are lost. Kubernetes Pods
capture this relationship information.

Kubernetes Pods are virtual structures that hold a set of colocated containers. From a systems management perspective, each Pod looks like a single server, with an IP address and one set of ports, hosting multiple Linux containers. The number of containers within a Pod is not visible from this outside perspective, as the Pod abstracts the application at only one level. A Pod is a complete unit of deployment, as each contains all the environment variables that the components it represents need.
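A minimal Pod manifest illustrates that "single server" view: one IP, one set of ports, multiple containers riding together. The names, images and environment values below are hypothetical placeholders:

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: order-service          # hypothetical Pod name
spec:
  containers:
  - name: app                  # main application container
    image: example/orders:1.2
    ports:
    - containerPort: 8080
  - name: log-shipper          # colocated helper sharing the Pod's network
    image: example/log-shipper:0.4
    env:
    - name: LOG_TARGET         # environment travels with the Pod
      value: logs.internal.example
```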

The mechanics to deploy Kubernetes Pods

Kubernetes Pods deploy on Nodes, a kind of logical machine. A Node is a hosting point for a container and may be a VM or bare metal. One or many Pods can deploy per Node. Nodes are typically grouped into clusters that represent pools of resources that cooperate to support applications. Kubernetes redeploys Pods within a cluster if a Node breaks. Nodes contain a Kubernetes runtime agent that manages the container orchestrator's tasks and also a runtime for the container system, such as Docker or rkt.
operations
A Kubernetes deployment starts with clusters of Nodes. Each cluster is a resource pool for a
set of cooperative applications: They exchange information via workflow or shared databases,
or they support a cohesive set of business functions. Cluster servers are akin to virtualization
or cloud resource pools. They consist of Nodes that are mapped to Pods at deployment. Each
cluster has a master Node that coordinates Kubernetes processes across the cluster.

Deployment is the process of assigning Pods to Nodes in a cluster. Policies determine things
like how Nodes are shared and how many replicas of a Pod Kubernetes creates for
redundancy. Nodes in a cluster are general resources for the applications associated with it,
but it's possible to reserve -- or, in Kubernetes parlance, taint -- a Node to limit its use to a
particular set of Pods. After Kubernetes Deployment assigns Pods to Nodes, it activates the
Node container management system to deploy based on the parameters and then waits. The
result isn't yet functional.

When Deployment occurs, Kubernetes Pods are there but invisible, not exposed yet to the
outside world. Kubernetes and the container management system recognize the Pods, but
there's no address mapping for the IP addresses and ports. Services provide that exposure.
A Service is a collection of Pods within a cluster that includes instructions on how the Pods' functionality is accessed within the production deployment. Service definitions can expose functionality within a cluster only -- the default -- or externally on a virtual private network (VPN) or the public internet. The administrator can also define the Service to specify an external load balancer that divides work across a set of Pods.
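A sketch of a Service definition shows those choices: the selector picks the Pods, and the type controls exposure. The names and ports here are hypothetical:

```yaml
apiVersion: v1
kind: Service
metadata:
  name: order-service
spec:
  selector:
    app: orders                # matches the label on the target Pods
  ports:
  - port: 80                   # port the Service exposes
    targetPort: 8080           # port the Pods listen on
  type: ClusterIP              # default: reachable inside the cluster only
  # type: LoadBalancer would request an external load balancer instead
```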

Pods like containers

Kubernetes Pods are pools of containers mapped to an application. Don't put containers that aren't closely related to each other into the same Pod or put Nodes that support unrelated applications into the same cluster. Discipline in relationship assignments is critical to making Kubernetes work in production.

A Pod represents the containers that should be hosted as a unit. If anything in a Pod breaks,
administrators will recover the entire Pod. If two unrelated applications share the same Pod, a
failure could break both. Similarly, replicating a Pod replicates everything in it.

Containers normally have host-private IP addresses, which restrict container communication to the single shared machine: a Pod, in Kubernetes. Kubernetes gives every Pod its own cluster-private IP address so that all Pods within a cluster communicate by default. Users can alternatively map some cluster-private addresses to public IP addresses, such as the corporate VPN. To define microservices or components that will be shared among applications, define them as their own cluster and Service, and expose them as public addresses.

Pods and Nodes do not define Kubernetes. Kubernetes is really about clusters and Services,
so think of Pods and Nodes with clusters and Services in mind.

Kubernetes books put container deployments on the right track
Meredith Courtemanche, Executive Editor

When enterprise IT teams start out with Kubernetes, a lot can go wrong. In two Kubernetes books, the authors take an incremental approach to help readers absorb application deployment and architecture lessons that ease the transition to containers.

"The stuff that we build is really important, but it can be complicated at times," said Brendan Burns, Microsoft distinguished engineer and co-author on both Kubernetes books. "People try to master the whole thing all at once," he added, and the concepts can be opaque until you start to use the tool.

The abstraction that Kubernetes provides -- decoupling the infrastructure from the application -- enables a DevOps approach to code deployment, Burns said. New projects are easier to fit into the Docker containers and Kubernetes orchestration model, but repeatability, code review, immutability and standardized practices aren't limited to greenfield apps. "Even for companies with a lot of code, efficiency is worth [transitioning to containers]," Burns said. "It's 100% fine to run a fat, heavy container." Organizations should host monoliths in containers orchestrated by Kubernetes and use this model to slowly slice that monolith apart with distributed services implemented with the original app.

While Kubernetes simplifies container provisioning and operation, users face manifold decisions around application efficiency and scalability. For example, Burns said, Kubernetes has specific objects for replication, stateful and other deployments; external or internal load balancer options; and debugging and inspection tools that provide multiple ways to troubleshoot incorrect operations.

Burns, Kelsey Hightower and Joe Beda wrote Kubernetes: Up and Running: Dive Into the Future of Infrastructure to help DevOps and other IT teams use Kubernetes and build apps on top of it; the book assumes a reliable Kubernetes deployment. Kubernetes: Up and Running is available from O'Reilly Media.

Brendan Burns

This book brings readers from their first container build and Kubernetes cluster through common kubectl commands and API objects and then the bulk of deployment considerations: pod operations, services and replicas, storage and more. The final chapter walks through three examples that run the gamut of real-world app deployments. "We didn't want to write an app from scratch because it will be a little toy that is just for demo purposes," Burns said. Instead, the examples are popular scenarios that readers can relate to their own jobs.

Users learn the power of Kubernetes once they get into objects, such as ReplicaSets. "Once
you can run one pod well, you want to run multiple for resiliency or to scale out," Burns said.
ReplicaSets act like a cookie cutter, creating identical copies of a pod, as many as the user
specifies. The next consideration is how to allocate and balance the load across these copies.
Chapter 4, "Common kubectl Commands," delves into API objects:

Everything contained in Kubernetes is represented by a RESTful resource. Throughout this book, we refer to these resources as Kubernetes objects. Each Kubernetes object exists at a unique HTTP path; for example, https://your-k8s.com/api/v1/namespaces/default/pods/my-pod leads to the representation of a pod in the default namespace named my-pod. The kubectl command makes HTTP requests to these URLs to access the Kubernetes objects that reside at these paths.
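The path scheme in that excerpt is mechanical enough to sketch. This illustrative helper is not part of any Kubernetes client library; it simply rebuilds the example URL from its parts:

```python
def pod_path(host, namespace, name):
    """Build the RESTful resource path for a pod, per the excerpt above."""
    return f"https://{host}/api/v1/namespaces/{namespace}/pods/{name}"

print(pod_path("your-k8s.com", "default", "my-pod"))
# https://your-k8s.com/api/v1/namespaces/default/pods/my-pod
```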

Managing Kubernetes: Operating Kubernetes Clusters in the Real World, an upcoming Kubernetes book that Burns coauthored with Craig Tracey, will cover how to keep the tool working, its architecture, upgrade decisions, how to handle backup and disaster recovery, and other operational concerns around Kubernetes. Managing Kubernetes comes out in November 2018; O'Reilly Media publishes both Kubernetes books.

Build a secure Docker host environment on Linux systems
Stuart Burns, Virtualization and Linux expert

A secure Docker setup, broadly speaking, relies on how IT operations manages the Linux host OS, the Docker environment and its containers.

The following guidelines are general, not OS-specific, and help ensure safe container operations in diverse environments. Check with the specific OS vendor for best practices around security, including access control, up-to-date patches, audits and isolation for the OS version used with containers.

Start at the host layer


Run the latest stable OS release and patches on container hosts. Unlike VMs, containers
share host OS resources and files, so a security issue could affect the entire Docker estate.
OS management isn't difficult for enterprise IT teams, but approach with caution -- review all
documentation prior to committing an update for Docker hosting systems. Virtual snapshots
are a useful tool for this process, providing a log of changes and a rollback target if needed.

Application security is only as good as what's on the stack below it. Assess the security
settings on the host in question. Anyone with administrator-level access to the OS can
manipulate the containers in the default configuration. Administrators should use keys for
remote login to increase the environment's security level. In addition, implement a firewall,
and restrict access to only trusted networks. Keep the attack surface to a minimum.

Audits work hand in hand with security. Don't ignore system audits until the information is needed; by then, it won't be available, because it was never recorded. Engage in a strong log monitoring and management process that terminates in a dedicated log storage host, with restricted access.
In the same vein, Docker host systems should run only Docker containers. A host should run as few services as possible. Non-Docker services can, if necessary, be converted to containers to abstract them away from the host OS and other containers on the system.
Create a secure Docker environment

Administrators can take simple steps to stabilize container operations. Keep the /var/lib/docker directory partitioned within the system. This separation ensures that any storage space issues within the Docker environment won't crash the OS and consequently take out all the containers on that host. Use logical volume management to ease storage allocation. It virtualizes storage partitioning to share resources effectively across multiple workloads.
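One way to achieve that separation, assuming a dedicated logical volume named docker-lv in a volume group vg0 (both hypothetical names, and xfs as one common filesystem choice), is an /etc/fstab entry that mounts it at /var/lib/docker:

```
# /etc/fstab -- keep Docker's data on its own logical volume
/dev/vg0/docker-lv  /var/lib/docker  xfs  defaults  0 2
```

If the volume fills up, only the Docker data area is affected; the root filesystem, and the OS with it, stays healthy.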

A secure Docker setup also depends on vigilance regarding images, which are the static
packages of code, dependencies and libraries needed for a container to run. Not all Docker
images are to be trusted. Build Docker images in-house to ensure that developers and users
get only what the image is meant to have. This assuages security concerns -- and paranoia --
and provides the opportunity for image optimization because the organization is the only
target user.

Security at the next level

Third-party container security tools provide more fine-grained control than admin-led measures to secure Docker, but the benefits might not balance out the price tag or training investment. Organizations that do seek out third-party tools should consider startups and free tools, established IT vendors that have added container security capabilities through acquisition and vendors building up the capabilities natively.
A sampling of container security tools includes:

 Alert Logic Cloud Defender and Threat Manager
 Aqua Security
 CSPi's ARIA Software Defined Security
 Deepfence
 Docker Enterprise Edition
 FlawCheck from Tenable Network Security
 NeuVector
 Qualys Container Security
 Twistlock

Additionally, cloud vendors provide container security services and best practices as part of
their offerings, such as Google Kubernetes Engine.
If custom image builds are out of your organization's wheelhouse, or not always needed, use
official builds only. For example, download the Ubuntu Docker image from the official
repository. Unofficial builds aren't necessarily wrong but might not meet the user's
expectations and reliability demands -- and some third-party images are intentionally
fraudulent or malicious.

Avoid infiltration of erroneous or malicious container images through a global content trust
requirement. Trusted images come from a verified source -- such as Docker's official
repository on Docker Hub -- and can be built upon for finely tuned control over the
environment. Enable the content trust flag globally to prevent potentially dangerous images
from sneaking in uninvited.
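Enabling that flag is a one-line environment setting; shown here for a single shell session (adding the same line to a profile script would make it apply at every login):

```shell
# Turn on Docker Content Trust for this session: docker pull, run and build
# will then refuse images that lack a valid signature.
export DOCKER_CONTENT_TRUST=1

# Confirm the flag is set before pulling anything:
echo "DOCKER_CONTENT_TRUST=${DOCKER_CONTENT_TRUST}"
# prints: DOCKER_CONTENT_TRUST=1
```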

Many aspects of basic Docker security boil down to good computer hygiene and common sense: Tighten down the virtual hatches, run only the minimum necessary services and applications and restrict host access to only those users who need it. And always be careful what you download.

Master the Docker command line for container ops
Alan Earls, Contributor

Developers are most often the ones to bring Docker into an organization, but once containers deploy into production, IT ops and admins need to manage the stack with effective commands.

Systems administrators use the Docker command line to control containers and the resources they use. This command-line interface (CLI) is included in Docker Engine.

Getting to know the most useful and common Docker commands is a good way to ensure that
containers become a good fit in the organization. We took nominations from a wide range of
IT pros for this Docker command list, which ranges from simple to more complex. It includes a
bonus at number 20: the least helpful command.

Editor's note: Other vendors in the Docker ecosystem have created alternatives to the official
Docker command line, such as Dry, but these CLIs are not within the scope of this article.

1. docker exec -it [container-id] bash
or docker exec -it $(docker ps -l -q) bash
Neel Somani, founder of web and mobile development startup Apptic LLC and a computer
science student at the University of California, Berkeley, likes to use these two commands to enter into a Docker container that is already running. From there, an engineer or admin can execute arbitrary bash commands, he said.

 What containers are and 2. docker system prune


The docker system prune command rescues systems low on disk space because of frequent image updates, said Benjamin Waldher, DevOps department head at Wildebeest Design & Development, a web and software studio located in Marina del Rey, Calif. Zombie containers, while more lightweight than VMs, can starve active containers of resources and contribute to virtual sprawl. The Docker command prune cleans up images and containers that are no longer in use -- if you're on a newer version of the platform.
For the many people and organizations running older versions of Docker, prune won't work,
according to Alex Ough, senior software engineer at Sungard Availability Services of Wayne,
Pa. They need granular commands to clean up unused resources; numbers three through six
show Ough's recommendations to address this.

3. docker rmi $(docker images -f dangling=true -q)
This Docker command removes untagged and dangling (<none>) images, as Ough
recommends for users that run a version of Docker older than 1.13. The command rmi is
shorthand for remove images.

4. docker rm $(docker ps -a -f status=exited -q)
This command removes all exited containers. Similar to rmi, rm removes one or more
containers.

 What containers are and 5. docker volume rm $(docker volume ls -f


how they work
dangling=true -q)
Need to remove dangling volumes? This is how. The rm command returns for another example, referencing volumes rather than containers in this instance.
6. docker network ls | awk '$3 == "bridge" && $2 != "bridge" { print $1 }'
An administrator can remove nondefault bridge networks with this command, which also applies to those using versions prior to Docker 1.13. Here, ls represents the list action -- in this case, listing networks rather than containers. The command awk appears in many Linux systems administration scripts, performing a pattern match function with text files.
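To see what the awk filter does without touching a Docker daemon, it can be run against a literal snippet shaped like docker network ls output (the IDs and the network name below are made up for illustration):

```shell
# Sample `docker network ls` output, captured as a literal so the awk filter
# can be tried without a Docker daemon (IDs and the network name are made up):
sample_output='NETWORK ID     NAME        DRIVER    SCOPE
a1b2c3d4e5f6   bridge      bridge    local
f6e5d4c3b2a1   myapp_net   bridge    local
0123456789ab   host        host      local'

# Keep rows whose driver (field 3) is bridge but whose name (field 2) is not
# the default "bridge" network, and print their IDs (field 1):
echo "$sample_output" | awk '$3 == "bridge" && $2 != "bridge" { print $1 }'
# prints: f6e5d4c3b2a1
```

Against a live daemon, the printed IDs would typically be piped to xargs docker network rm to actually delete the nondefault networks.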

7. docker cp container_name:/var/log/file.log /tmp/file.log
This command, which relies on the Unix-like cp, is useful to pull out log files with contents that, Waldher explained, "for some reason you aren't sending to stdout." The cp command copies files and folders between a container and the local file system, and this is only one example of its usefulness.

 What containers are and 8. docker exec -ti container_name sh


IT operations and DevOps teams often need to "poke around" in a running container, whether for troubleshooting or optimization plans. This command is a way to temporarily access the Docker container as if it were its own machine, Waldher said.

9. An immutable servicename trio: rm, pull and run

Three Docker commands -- docker rm servicename and docker pull servicename, as well as docker run --restart=always servicename -- control any service files, according to Waldher.

"Running these commands will ensure that your container's behavior is immutable," he said.
Every restart of the service running this Docker container will remove old containers,
preventing a state from building up with a container that is reused over a long period of time,
and ensure that containers are up-to-date, he explained.

In addition, he noted, using --restart=always will cause the Docker daemon to handle
any container crashes itself, rather than relying on the init script. Waldher recommends it as
a faster way to restart a service when it crashes, minimizing downtime.

10. docker ps
This Docker command lists the running containers in a given deployment.

"It's my go-to when I log onto a machine, [and] I want to know exactly what's running," said
Maryum Styles, a back-end engineer working with containers at New Relic in San Francisco,
Calif., which uses Docker in its platform. The docker ps command is just the start to
troubleshoot a production container environment, as seen in the next three commands.
11. docker ps -a
This variant on docker ps lists all containers, not just the ones that are running.
"Right after I see what containers are running, I want to know what containers have recently failed and why," Styles said.
12. docker logs
Once you know which container failed, you need clues about the cause of death -- or worse,
thrashing, wherein the container restarts and fails constantly. This docker logs command
shows the administrator all the logs for a given container, tracking what's happened and
when.

13. docker rm <container_id>


If you see something that indicates that a particular container is not acting as it should or if
that container should not run anymore for some reason, return to the rm command described
above to take it down.

14. docker images

This is a great way to see the names and versions of containers on the infrastructure,
according to Styles. Container-based IT organizations often also follow DevOps objectives,
which encourage collaboration and sharing.
"I build a lot of Docker images to share with other people, so seeing the images I have is very helpful," Styles said. Also, docker images displays the size of each image, a useful resource planning stat. For a small host machine, operations must keep track of hosted images and balance them for performance.

15. docker rmi <image_name>
Once you have a list of the images in use, implement the rmi command, this time to remove
unneeded images by name.

16. docker tag


Styles recommends docker tag -- it gives administrators a way to categorize images,
essentially creating a versioning scheme for containers. When a wider audience works with
images, IT teams should represent bug fixes and new features in the clearest possible format. Thanks to this structure, "users know what version they're using and whether
it's the latest [one]," she explained. In production, organizations will test newer image versions
to ensure nothing in the update breaks the existing app or workflow, causing an outage,
Styles said.

A semantic number version scheme -- i.e., x.y.z -- avoids the easily misunderstood latest
tag that Docker assigns to any untagged images, whether they're the latest or not.
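The practical payoff of x.y.z tags is that they sort meaningfully. A quick illustration with GNU sort and three hypothetical version numbers:

```shell
# sort -V compares version strings numerically, so 1.10.1 correctly outranks
# 1.2.0 -- a plain lexical sort would rank 1.2.0 higher than 1.10.1.
latest=$(printf '%s\n' 1.0.0 1.2.0 1.10.1 | sort -V | tail -n 1)
echo "$latest"
# prints: 1.10.1
```

A script can use the same trick to pick the newest semantic tag from an image list instead of trusting whatever happens to carry the latest tag.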

17. docker start

The team at WhiteSource, an Israeli company that helps developers identify known vulnerabilities in open source components, shared three essential Docker commands. The list starts with start, a command that gets one or more stopped containers going.

18. docker stop
This command stops one or more running containers, gracefully. When everything executes
correctly, the stop command allows processes time to clean up and exit.
19. docker kill
This command stops one or more running containers, forcefully. The kill command is
immediate and can disrupt processes rather than allowing them time to exit.

These last two commands, docker stop and docker kill, are similar but can have
differing effects on the production deployment. Take time to analyze the intent and desired
effects before you fire off any of these common Docker commands -- then you'll get the
results you want.
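The difference can be sketched without Docker at all, using plain process signals: docker stop sends SIGTERM first (trappable, so the process can clean up) and falls back to SIGKILL after a grace period, while docker kill sends SIGKILL immediately, which no handler can intercept. The stand-in process below traps TERM the way a well-behaved container entrypoint would:

```shell
# A stand-in "container process" that traps SIGTERM and cleans up on exit;
# a SIGKILL (what docker kill sends) could never trigger this handler.
sh -c 'trap "echo cleanup ran; exit 0" TERM; sleep 30 >/dev/null 2>&1 & wait' &
victim=$!

sleep 1               # give the trap a moment to be installed
kill -TERM "$victim"  # analogous to docker stop: the handler fires
wait "$victim"        # the stand-in prints "cleanup ran" and exits 0
```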

20. A useless command


Sure, this long command is pointless to actually control a Docker container -- but it disguises its true helpfulness:

docker run -ti
--privileged
--net=host --pid=host --ipc=host
--volume /:/host
busybox
chroot /host
The command's instruction to bypass the network and other namespaces, privilege status for the user and other attributes creates a Docker container that runs as root in the host's file system, network, process table and so on. Creator Ian Miell, lead OpenShift architect at Barclays in London, says this command, while having no purpose as written, is an instructive starting point from which to make your own network or process checkers for given namespaces.

Select the best container monitoring tools for your environment
Sander van Vugt, Independent Trainer and Consultant

IT should monitor everything in production, and containers are no exception.

Many IT organizations already have network, server and application monitoring tools in place to observe critical assets in the data center. They can also evaluate dedicated container monitoring tools.
Container monitoring tools
For the most detailed information available about containers, talk to the container engine
itself. The container engine, which builds and deploys containers from images, knows exactly
where those containers are running and what resources they consume. On Docker, for
example, docker stats returns a fair amount of critical data about the containers. And
some tools specifically monitor containers.

cAdvisor shows essential container properties graphically on a real-time monitoring dashboard, but without any option to go back and track what has happened in the past. For monitoring trend analysis, cAdvisor can export data for another application.

Prometheus is a promising open source monitoring technology from the container world. Prometheus works with data that tools, such as cAdvisor, export, as well as with Docker containers directly. This makes Prometheus a good option for containers, even if other solutions are available as well.

As most container platforms use an open API, there's no need to stick within that platform. Any monitoring application can get information from the API directly. If your company already trusts a good monitoring tool for its networks, you should integrate container monitoring into it.
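Pairing the two tools just mentioned, a minimal Prometheus scrape job pointed at a cAdvisor endpoint might look like the following fragment (the target address and interval are assumptions for a single-host setup):

```yaml
# prometheus.yml fragment: scrape the container metrics cAdvisor exposes.
# cAdvisor serves metrics on port 8080 by default; adjust the target
# if it runs elsewhere.
scrape_configs:
  - job_name: 'cadvisor'
    scrape_interval: 15s
    static_configs:
      - targets: ['localhost:8080']
```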

Network and IT asset monitoring tools


Zabbix, Nagios and Zenoss are all established IT monitoring tools, used to observe critical
routers, switches, servers, workstations and other assets. Each includes support to monitor
containers through agents installed on the container platform.

Zenoss comes as pure open source or in an enterprise supported tool to monitor servers,
applications, cloud and other IT assets. Zenoss Core, the open source option, lacks the
reporting and analytics capabilities of the paid product, Zenoss Service Dynamics. Both
options support ZenPack extensions, such as the Docker ZenPack, which discovers
containers and monitors critical parameters on them.

Nagios similarly monitors assets throughout the application stack and is available in
supported, augmented versions or upstream open source form. Nagios Remote Plugin
Executor, commonly called NRPE, is a sort of agent that is started on the container host and accesses the container engine API to retrieve any type of container information that is exposed through the API.

Finally, Zabbix is an open source monitoring tool with paid support choices. In addition to the network, it monitors VMs, services, cloud and other areas. The Zabbix Docker module offers native support to monitor Docker containers, as well as some other container types, such as Linux Containers. Turn to network-focused tools if the container monitoring should be part of the bigger picture, integrated with overall corporate IT management.
What are the best container monitoring tools?
These and many more options are available to monitor containers, so selection of container
monitoring tools revolves around which technology is best for the environment. It depends. In
an IT organization or group within the organization that mainly uses containerized services,
evaluate tools designed with containers in mind, such as Prometheus. However, if containers
are just a part of a bigger network infrastructure or if network admins also want to monitor
containers alongside the container-specific team, look for a network monitoring platform that
supports integration.

Can container communication cross over to noncontainerized apps?
Chris Moyer, VP of Technology

Containers resemble VMs in that they get full network access and can connect to any other service on the internet as long as the containers are configured to do so.

For example, containers on the AWS Fargate compute engine for containers and Kubernetes can have their own IP addresses and be used to network with any service that is internet-enabled. Containers can also exist behind virtual private clouds (VPCs) -- which are walled-off
dedicated resources on AWS or another public cloud host -- or other network firewalls that
isolate and protect them from outside connections. Container communication isn't much
different than other application deployment strategies, as long as you understand the options.

In addition to containers' ability to connect to anything outside of their environment through traditional networks, they can also use Docker's bridge networks to connect to other Docker containers running on the same physical system. They can also employ overlay networks to connect to services running on other Docker hosts.

Although no longer recommended, Docker links can also join multiple containers together to
provide one service. Consider how a Python container can run SQL queries against a Docker
container that's running Microsoft SQL, and publish those results to an Apache web server in
another container.

Editor's note: Docker Inc. states that the functionality of Docker links was incorporated into
Docker networks, adding capabilities and integrating with the network options for better
security, multihost overlay networking, DNS, automatic load balancing and simpler
configuration. Docker links is still available as a legacy feature but not recommended by the
company.
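As a sketch of the networks-instead-of-links approach, a Docker Compose file can place the three containers from the earlier example on one user-defined bridge network. The service and image choices here are illustrative, not from the article:

```yaml
# docker-compose.yml: the Python app, SQL Server and Apache containers share
# a user-defined bridge network and reach each other by service name.
version: "3.8"
services:
  app:
    image: python:3.11-slim
    networks: [backend]
  db:
    image: mcr.microsoft.com/mssql/server:2022-latest
    networks: [backend]
  web:
    image: httpd:2.4
    networks: [backend]
networks:
  backend:
    driver: bridge
```

With this in place, the app container can reach the database at the hostname db -- the embedded DNS of user-defined networks is what replaces legacy links.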

To make much of this container communication in networking easier, services like AWS Fargate enable developers to run multiple containers within the same task and automatically link them.

When the workload moves around as described, security for container communication is not different from when it stays within the container deployment. The only real concern is if the workload gets passed over the open web -- just like with any other app. However, if the deployment uses a VPC -- or the containers communicate through Docker links -- security is not really an issue.

There's a wide range of networking options for different container communication setups with Docker. Some allow just connecting to outside services, and others create a private internal connection to other Docker containers. In general, anything you can do with a traditional VM can be done with a Docker container. You can make networking with Docker as complex, or simple, as the application requires.
Integrate DevOps and containers with simple tool adjustments
Kurt Marko, Consultant

Organizations adopt DevOps methodologies to reduce application development time and improve code quality, consistency and security. They adopt containers for many of the same reasons.

A typical implementation pulls together multiple tools for development and deployment tasks in a chain. It must be flexible enough to accommodate:

 different programming languages, such as C, Python and Go;
 application targets, both mobile and web; and
 diverse deployment platforms, including virtual servers, cloud services and containers.

Given the ever-increasing popularity of containers as a runtime environment for server applications, IT organizations often combine DevOps and containers. DevOps methodologies appeal to organizations and developers on the leading edge of technology -- a natural fit with container usage and the often open source software to support them.
Automation and containers: A perfect match
The key decision in a container strategy is the choice of orchestration and cluster
management software; for the runtime engine, Docker is the de facto standard. Increasingly,
container orchestration is narrowing down to a single technological choice, Kubernetes,
particularly for large enterprises and online businesses that need scalable capacity.
Kubernetes isn't the right fit for everyone because it's often complicated to set up and difficult
to manage. Kubernetes is the upstream technology for many commercial orchestration
products that aim to smooth over these challenges. Some organizations prefer a packaged
container management system that builds on existing virtual infrastructure, such as VMware
vSphere Integrated Containers or Windows Server Containers. Other companies that implement DevOps and containers prefer a structured platform as a service (PaaS) stack, such as Cloud Foundry or Google App Engine, that uses containers as the runtime environment by default.

Because a toolchain decouples components -- with different software for code version control, module integration, regression testing and application deployment -- the user can mix and match tools to suit an exact environment. When the application developers swap out traditional VMs for containers, they can update rather than replace the entire DevOps toolchain.

Adapt the pipeline


Several steps constitute a basic DevOps process: a code repository with version
management, build automation, CI/CD, configuration management and infrastructure
deployment.

Moving from VMs to containers only requires changes to configuration management and
infrastructure deployment practices, with some possible customization to the CI/CD pipeline.
IT operations members of the DevOps team are affected, while developers continue to use
existing version control and build systems. The container management system handles image
packaging, resource configuration and deployment. Organizations that migrate entirely to containers might eliminate infrastructure configuration and automation tools, but most enterprises
run applications in production with various deployment methods.

An example organization runs the Docker suite for container-based applications. The DevOps
team keeps private container images in the Docker Hub repository. The DevOps toolchain
connects to the developers' Git code management systems and CI/CD tools using webhooks,
such that new code commits that pass integration tests are automatically built to a Docker
how they work image and posted to Docker Hub. Docker Hub feeds image instances to Docker's swarm
orchestration system. The system then either automatically deploys new images to the
container cluster via swarm or Kubernetes or sends an alert to operations that code has been
delivered and is ready for production.
Editor's note: As of October 2017, Docker has added native Kubernetes integration as an
option, in addition to its internal swarm orchestration tool.
Add cloud services
It's a similar scenario to deploy containers on cloud services in a DevOps model; the
difference is whether the toolchain itself is based in the cloud or if only the runtime
environment resides there.

Hybrid toolchain implementations appeal to DevOps teams that already use build automation
and CI/CD tools but seek to change the runtime deployment target. When deployment moves
from VMs to cloud-based containers, the image repository and deployment tools are the only
elements that change. Cloud vendors can help: Google Cloud Platform (GCP) documents
how to integrate its Container Registry and Kubernetes Engine with 10 popular CI/CD tools;
Microsoft Azure has similar information for using Jenkins with its container service. The CI/CD
tool handles the build and test cycle and pushes new code to either a local image repository
or cloud service. To deploy the code to production, it uses cloud RESTful APIs or command-
line scripts to install the updated image onto a container cluster. For example, a DevOps team
uses the Codefresh CI/CD tool as the pipeline. It provides a script for the gcloud and kubectl command-line tools, along with a YAML deployment template that pushes successfully tested images to a GCP Kubernetes cluster.
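The deployment template in such a pipeline is typically a Kubernetes manifest. A minimal sketch might look like the following, with the image tag standing in for whatever the CI tool just built -- all names here are hypothetical:

```yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: myapp
spec:
  replicas: 3
  selector:
    matchLabels:
      app: myapp
  template:
    metadata:
      labels:
        app: myapp
    spec:
      containers:
        - name: myapp
          image: gcr.io/my-project/myapp:1.2.3   # CI substitutes the tested tag
          ports:
            - containerPort: 8080
```

A pipeline step would apply this with kubectl apply -f; changing only the image tag in the template is what triggers a rolling update of the running Pods.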

Organizations that want to run the entire DevOps pipeline in the cloud should evaluate the various developer services on AWS, Azure and GCP. For example, in the AWS suite, CodePipeline detects new submissions to the CodeCommit code repository and triggers CodeBuild to create a Docker image; after tests return successful, CodeBuild pushes the image to Elastic Container Registry. CodePipeline can then trigger a Lambda function that prompts Kubernetes to pull the new image from the repository and start a rolling update to all container instances in its Pod. This is one example of how DevOps and containers combine to enable rapid, automated application updates without major production changes.

The same process flow and integrations work with a PaaS, such as Azure App Service or Google
App Engine. These platforms handle the container configuration, deployment and scaling
automatically, obviating the need for an ops admin to manage a Kubernetes cluster. Cloud
Foundry, a container-based private PaaS, works with popular developer tools, such as private
Git servers or hosted GitHub source code management services. However, some users report
shortcomings with other CI pipeline software and instead developed Concourse CI as an
alternative. Due to the tight integration, Cloud Foundry users are advised to use Concourse to
automate the code-to-image-to-deployment pipeline.
CI/CD-based DevOps toolchains and containers are a natural combination for Agile
development and deployments. Due to the portability of container images and cross-platform
support, an automated pipeline makes it easier to implement a hybrid or multi-cloud
deployment strategy, whether it relies on Kubernetes clusters managed by IT operations or a cross-platform PaaS.
