

Advanced DevOps Toolchain // eMag Issue 28 - May 2015

Virtual Panel on Immutable Infrastructure
InfoQ reached out to experienced ops engineers to ask about their definition and borders of immutable infrastructure, as well as its benefits and drawbacks, in particular when compared to desired-state configuration management solutions. Is it a step forward or backwards in effective infrastructure management?

Docker: Present and Future
Chris Swan presents an overview of the Docker journey so far and where it is headed, along with its growing ecosystem of tools for orchestration, composition and scaling. This article provides both a business and a technical point of view on Docker and separates the hype from the reality.

How I Built a Self-Service Automation Platform
The current buzz in the industry is how DevOps is changing the world and how the traditional role of operations is changing. Part of these discussions leads to building self-service automation tools which enable developers to actually deploy and support their own code. Brian Carpio describes his journey towards an infrastructure that development teams could deploy with the click of a mouse or an API call.

Managing Build Jobs for Continuous Integration
The number of jobs in a continuous integration tool can range from a few to several thousand, all performing various functions. This article presents an approach to managing these jobs in a more efficient manner.

Service Discovery
Building microservices using immutable infrastructure means that your services need to be reconfigured with the location of the other services they need to connect to. This process is called service discovery. This article compares the approaches of several key contenders, from ZooKeeper to Consul, etcd, Eureka, and rolling your own.



THE EDITOR
Manuel Pais is InfoQ's DevOps Lead Editor and an enthusiast of Continuous Delivery and Agile practices. He tweets @manupaisable.

This edition follows in the footsteps of the previous DevOps Toolchain for Beginners eMag. Once an organization has acquired the essential skill set to provision and manage infrastructure, set up delivery pipelines and analyze logs and metrics, it will then gradually face more complex decisions. Decisions that depend much more on the type of systems architecture and organizational structure than on individual tool choices.

In this eMag we provide both implementation examples and comparisons of different possible approaches on a range of topics, from immutable infrastructure to self-service ops platforms and service discovery. In addition, we talk about the Docker ecosystem and the different aspects to consider when moving to this increasingly popular system for shipping and running applications.

In the immutable-infrastructure virtual panel we learn from both advocates and skeptics the pros and cons of this disposable approach to infrastructure changes. Instead of trying to converge machines to a desired state, outdated machines are simply destroyed and replaced by a brand new machine with the new changes applied.

Chris Swan gives us a rundown of the increasingly popular Docker tool, outlining the strengths and weaknesses of containers when compared to the more established virtual machines. He also highlights the expanding ecosystem around Docker, including tools and libraries for networking, composition and orchestration of containers.

Many companies have taken the route of automating delivery of traditional ops services such as provisioning infrastructure, deploying to production environments, adding metrics to monitor or creating test environments as similar as possible to production. This decision often comes from reaching a bottleneck in operations workload as development teams are empowered to deploy and own their applications through their entire lifecycle in a DevOps mindset. Brian Carpio describes his technical journey into creating such a self-service platform.

When teams reap the benefits of continuous delivery, they want more of it. On every project. Every component. The number of jobs in the continuous-integration/delivery server explodes and a maintenance nightmare ensues if care is not taken. Automated job creation and update becomes crucial to keep pace and avoid development teams getting bogged down by build and deployment issues. Martin Peston provides an overview and practical examples on this topic.

When adopting immutable infrastructure, the need to (re)connect services dynamically becomes a priority. This process is called service discovery. Microservice architectures further exacerbate the importance of reliably discovering and connecting an increasing number of standalone services. The Simplicity Itself team analyzed the main solutions available, as well as the pros and cons of (trying to) do this on your own.



Virtual Panel on Immutable Infrastructure

By Manuel Pais

Chad Fowler is an internationally known software developer, trainer, manager, speaker, and musician. Over the past decade, he has worked with some of the world's largest companies and most admired software developers. Chad is CTO of 6 Wunderkinder. He is the author or co-author of a number of popular software books, including Rails Recipes and The Passionate Programmer: Creating a Remarkable Career in Software Development.

Mark Burgess is the CTO and founder of CFEngine, formerly professor of network and system administration at Oslo University College, and the principal author of the CFEngine software. He's the author of numerous books and papers on topics from physics to network and system administration to fiction.

Mitchell Hashimoto is best known as the creator of Vagrant and founder of HashiCorp. He is also an O'Reilly author and professional speaker. He is one of the top GitHub users by followers, activity, and contributions. Automation obsessed, Mitchell strives to build elegant, powerful DevOps tools at HashiCorp that automate anything and everything. Mitchell is probably the only person in the world with deep knowledge of most virtualization hypervisors.


Servers accumulate change, for better or worse, over time: new applications, upgrades, configuration changes, scheduled tasks, and in-the-moment fixes for issues. One thing is certain: the longer a server has been provisioned and running, the more likely it is in an unknown state. Immutable servers solve the problem of being certain about server state by creating them anew for each change described previously. Immutable servers make infrastructure scalable and reliable in spite of change. However, this can require fundamentally new views of systems, patterns, deployments, application code and team structure, says panel member Chad Fowler.
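The replace-rather-than-mutate cycle described here can be sketched in a few lines of Python. This is a minimal illustration with invented class names, not code from any of the panelists' tools: a deploy builds fresh servers from the new image and retires the old ones, rather than patching anything in place.

```python
import uuid

class Server:
    """A server whose configuration is fixed at build time and never modified."""
    def __init__(self, image_version):
        self.id = uuid.uuid4().hex[:8]     # every instance gets a fresh identity
        self.image_version = image_version

class Fleet:
    """Deploying a change replaces servers instead of mutating them."""
    def __init__(self, image_version, size):
        self.servers = [Server(image_version) for _ in range(size)]

    def deploy(self, new_version):
        # Build replacements from the new image, then retire the old instances.
        retired = self.servers
        self.servers = [Server(new_version) for _ in retired]
        return retired  # retired servers are destroyed, never patched

fleet = Fleet("v1", size=3)
old = fleet.deploy("v2")
print({s.image_version for s in fleet.servers})  # prints {'v2'}
```

The point of the sketch is that no code path exists for editing a running server: the only way to change the fleet is to build new instances, which is exactly the property the panelists debate.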

Mitchell Hashimoto, another panelist, sees immutable infrastructure as an enabler for repeatable environments (through configuration management tools), extremely fast deployment times (through pre-built static images), and service orchestration times (through decentralized orchestration tools).

This idea has been deemed utopian by Mark Burgess, our third panelist, due to the large number of external dependencies in any system today. Burgess criticizes the abuse of the notion of immutability.

InfoQ reached out to them to ask about their definition and borders of immutable infrastructure as well as its benefits and drawbacks, in particular when compared to desired-state configuration management solutions. Is it a step forward or backwards in effective infrastructure management?

InfoQ: Can you briefly explain your definition of immutable infrastructure?

Chad: I wrote about this on my blog several months ago in detail, but to me the high-level essence of immutable infrastructure shares the same qualities that immutable data structures in functional programming have. Infrastructure components, like in-memory data structures, are running components that can be accessed concurrently. So the same problems of shared state exist.

In my definition of immutable infrastructure, servers (or whatever) are deployed once and not changed. If they are changed for some reason, they are marked for garbage collection. Software is never upgraded on an existing server. Instead, the server is replaced with a new, functionally equivalent server.

Mitchell: Immutable infrastructure is treating the various components of your infrastructure as immutable, or unchangeable. Rather than changing any component, you create a new one with the changes and remove the old one. As such, you can be certain (or more confident) that if your infrastructure is already functioning, a change to config management, for example, won't break what's already not broken.

Immutable infrastructure is also sometimes referred to as "phoenix servers" but I find that term to be less general, since immutability can also apply at the service level, rather than just the server level.

Mark: The term "immutable" is really a misnomer (if infrastructure were really immutable, it would not be much use to anyone: it would be frozen and nothing would happen). What the term tries to capture is the idea that one should try to predetermine as many of the details as possible of a server configuration at the disk-image stage, so that no configuration is needed. Then the idea supposes that this will make it fast and reliable to spin up machines. When something goes wrong, you just dispose of the machine and rebuild a new one from scratch. That is my understanding of what people are trying to say with this phrase.

I believe there are a number of things wrong with this argument, though. The idea of what it means to fix the configuration is left very unclear, and this makes the proposal unnecessarily contentious. First of all, predetermining everything about a host is not possible. Proper configuration management deals with dynamic as well as static host state. Take the IP address and networking config, for instance: this has to be set after the machine is spun up. What about executing new programs on demand? What if programs crash, or fail to start? Where do we draw the line between what can and can't be done after the machine has been spun up? Can we add a package? Should we allow changes from DHCP but not from CFEngine or Puppet? Why? In my view, this is


all change. So no machine can be immutable in any meaningful sense of the word.

The real issue we should focus on is what behaviours we want hosts to exhibit on a continuous basis. Or, in my language, what promises should the infrastructure be able to keep?

InfoQ: Can immutable infrastructure help prevent systems divergence while still coping with the need for regular configuration updates?

Chad: Absolutely. It's just a different model for updates. Rather than update an existing system, you replace it. Ultimately, I think this is a question of granularity of what you call a component. Fifteen years ago, if I wanted to update a software component on a UNIX system, I upgraded the software package and its dependencies. Now I tend to view running server instances as components. If you need to upgrade the OS or some package on the system, just replace the server with one that's updated. If you need to upgrade your own application code, create new server instances with the new code and replace the old servers with it.

Mitchell: Actually, immutable infrastructure makes things a bit worse for divergence if you don't practice it properly. With mutable infrastructure, the idea is that configuration management constantly runs (on some interval) to keep the system in a convergent state. With immutable infrastructure, you run the risk of deploying different versions of immutable pieces, resulting in a highly divergent environment.

This mistake, however, is the result of not properly embracing or adopting immutable infrastructure. With immutable infrastructure, you should never be afraid of destroying a component, so when a new version is available, you should be able to eventually replace every component. Therefore, your infrastructure is highly convergent. However, this is mostly a discipline-and-process thing, which is sometimes difficult to enforce in an organization.

Mark: I don't believe immutable infrastructure helps prevent systems divergence. Trying to freeze configuration up front leads to a "microwave dinner" mentality. Just throw your prebaked package in the oven and suffer through it. It might be okay for some people, but then you have two problems: either you can't get exactly what you need or you have a new problem of making and managing sufficient variety of prepackaged stuff. The latter is a harder problem to solve than just using fast model-based configuration management, because it's much harder to see into prepackaged images or microwave dinners to see what's in them. So you'd better get your packaging exactly right.

Moreover, what happens if there is something slightly wrong? Do you really want to go back to the factory and repackage everything just to follow the dream of the microwave meal? It is false that prepackaging is the only way to achieve consistency. Configuration tools have proven that. You don't need to destroy the entire machine to make small repeatable changes cheaply. Would you buy a new car because you have a flat tyre, or because it runs out of fuel?

What prevents divergence of systems is having a clear model of the outcome you intend, not the way you package the starting point from which you diverge.

InfoQ: Is immutable infrastructure a better way to handle infrastructure than desired-state convergence? If yes, what are its main advantages? If not, what are its main disadvantages?

Chad: I think so. Of course, there are tradeoffs involved and you have to weigh the options in every scenario, but I think immutable infrastructure is a better default answer than desired-state convergence.

Immutable servers are easier to reason about. They hold up better in the face of concurrency. They are easier to audit. They are easier to reproduce, since the initial state is maintained.

Mitchell: It has its benefits and it has its downsides. Overall, I believe it to be a stronger choice and the right way forward, but it is important to understand it is no silver bullet, and it will introduce problems you didn't have before (while fixing others).

The advantages are deployment speed, running stability, development testability, versioning, and the ability to roll back.

With immutable, because everything is precompiled into an image, deployment is extremely fast. You launch a server, and it is running. There may be some basic configuration that happens afterwards, but the slow parts are done: compiling software, installing packages, etc.

And because everything is immutable, once something is running, you can be confident that an external force won't be as likely to affect stability. For example, a broken configuration-management run cannot accidentally corrupt configuration.

Immutable infrastructure is incredibly easy to test, and the test results very accurately show what will actually happen at run time. An analogy I like to make


is that immutable infrastructure is to configuration management what a compiled application is to source code. You can unit-test your source code, but when you go to compile it, there is no guarantee that some library versions that could ruin your build didn't change. Likewise, with configuration management, you can run it over and over, but you can't guarantee that if it succeeds it'll still succeed months down the road. But with a compiled application, or a prebuilt server, all the dependencies are already satisfied and baked in; the surface area of problems that can happen when you go to launch that server is much, much smaller.

Versioning is much simpler and clearer because you can tag a specific image with the configuration-management revision that is baked into it, the revision of an application, the versions of all dependencies, etc. With mutable servers, it's harder to be certain what versions or revisions of what exist on each server.

Finally, you get rollback capability! There are many people who think rollback is a lie, and at some point it is. But if you practice gradual, incremental changes to your infrastructure, rolling back with immutable infrastructure to a recent previous version is cheap and easy: you just replace the new servers with servers launched from a previous image. This has definitely saved us some serious problems a few times, and is very hard to achieve with desired-state configurations.

Mark: The way to get infrastructure right is to have a continuous knowledge relationship with system specifications (what I call "promises" in CFEngine language). To be fit for purpose, and to support business continuity, you must know that you can deliver continuous consistency. Changing the starting point cannot be the answer unless every transaction is designed to be carried out in separately built infrastructure. That's a political oversimplification, not a technical one, and it adds overhead.

I would say that the immutable paradigm is generally worse than one that balances a planned start image with documented convergent adaptation. The disadvantage of a fixed image is a lack of transparency and immediacy. Advocates of it would probably argue that if they know what the disk-image version is, they have a better idea of what the state of the machine is. The trouble with that is that system state is not just what you feed in at the start: it also depends on everything that happens to it after it is running. It promotes a naive view of state.

At a certain scale, pre-caching some decision logic as a fixed image might save you a few seconds in deployment, but you could easily lose those seconds (and a lot more business continuity) by having to redeploy machines instead of adapting and repairing simple issues. If there is something wrong with your car, it gets recalled for a patch; you don't get a new car, else the manufacturers would be out of business.

Caching data can certainly make sense for optimizing effort, as part of an economy of scale, but we should not turn this into a false dichotomy by claiming it is the only way. In general, an approach based on partial disk images, with configuration management for the last-mile changes, makes much more business sense.

In practice, immutability (again, poor terminology) means disposability. Disposability emerges often in a society when resources seem plentiful. Eventually resources become less plentiful, and the need to handle the waste returns. At that stage, we start to discover that the disposable scheme was actually toxic to the environment (what are the side effects of all this waste?), and we wonder why we were not thinking ahead. We are currently in an age of apparent plenty, with cloud environments hiding the costs of waste, so that developers don't have to think about them. But I wonder when the margins will shrink to the point where we change our minds.

InfoQ: What are the borders for immutable infrastructure, i.e., for which kind of changes would it make sense to replace a server vs. updating its configuration/state?

Chad: If there are borders and some changes are done on an existing server, the server isn't immutable. Idealistically, I don't think we should allow borders. "Immutable" isn't a buzzword. It has meaning. We should either maintain the meaning or stop using the word. An in-between is dangerous and may provide a false sense of security in the perceived benefits of immutability.

That said, systems do need to be quickly fixable, and the methods we're currently using for replacing infrastructure are slower than hot-fixing an existing server. So there needs to be some kind of acceptable hybrid which maintains the benefits of immutability. My current plan for Wunderlist is to implement a hotfix with a self-destruct built in. So if you have to hot-fix a server, it gets marked to be replaced automatically. We haven't done it automatically yet, but we've manually done this and it works well. I see this as an ugly optimization rather than a good design approach.
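Fowler's hotfix-with-a-self-destruct can be sketched in a few lines of Python. The names here are illustrative, not from Wunderlist's actual tooling: an emergency in-place change flags the server, and a later reaper pass replaces every flagged server with a fresh instance built from the current image.

```python
class Server:
    def __init__(self, image_version):
        self.image_version = image_version
        self.hotfixed = False  # set whenever the server is changed in place

    def hotfix(self):
        # An emergency in-place change breaks immutability, so the server
        # marks itself for automatic replacement.
        self.hotfixed = True

def reap(servers, current_image):
    """Replace every hot-fixed server with a fresh one from the current image."""
    return [Server(current_image) if s.hotfixed else s for s in servers]

fleet = [Server("v7") for _ in range(3)]
fleet[1].hotfix()            # emergency patch on one server
fleet = reap(fleet, "v7")    # the patched server is replaced; the rest survive
```

This keeps the benefit Fowler describes: a hot fix buys time, but the mutated server never lives long enough for its drifted state to matter.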


Mitchell: The borders of immutable infrastructure for me break down to where you want to be able to change things rapidly: small configuration changes, application deploys, etc.

But wanting to be able to deploy an application on an immutable server doesn't make that server immutable. Instead, you should think of the immutability of a server like an onion: it has layers. The base layer of the server (the OS and some configuration and packages) is immutable. The application itself is its own immutable component: a hopefully precompiled binary being deployed to the server. So while you do perhaps have an arguably mutable component in your server, it itself is another versioned immutable component.

What you don't want to be doing for application deploys on immutable infrastructure is compiling live on an immutable server. The compilation might fail, breaking your application and perhaps the functionality of the server.

Mark: When a particular configuration reaches the end of its useful life, it should probably be replaced. That ought to be a business judgment, not a technical one. The judgment can be made on economic grounds, related to what would be lost and gained by making a change. But be careful of the hidden costs if your processes are not transparent and your applications are mission critical.

Anytime you have to bring down a system for some reason, it could be an opportunity to replace it entirely without unnecessary interruption, as long as you have a sufficient model of the requirements to replace it with minimum impact, and a hands-free approach to automation for creating that environment. Today, it is getting easy to deploy multiple versions in parallel as separate branches for some kinds of applications. But we have to remember that the whole world is not in the cloud. Planes, ships, rockets, mobile devices are all fragile to change and mission critical. There are thus still embedded devices that spend much of their time largely offline, or with low-rate communications. They cannot be re-imaged and replaced safely or conveniently, but they can be patched and managed by something like CFEngine that doesn't even need a network connection to function.

InfoQ: Is it possible to treat every server of a given infrastructure as immutable or are there some server roles that cannot be treated that way?

Chad: We have what I call "cheats" with immutable infrastructure. Relational databases are a good example. I think it's possible to work with them immutably, but so far it hasn't been worth the effort for us. If we were an infrastructure vendor, I would be applying some effort here, but since we're in the business of making applications for our customers, we have been content to outsource more and more to managed services such as Amazon RDS.

My goal is that our entire infrastructure consists of either pieces we don't manage directly or components that are completely replaceable. We're almost there, and so far it's been a very positive experience.

Mitchell: It is possible, but there are roles that are much easier to treat as immutable. Stateless servers are extremely easy to make immutable. Stateful servers such as databases are much trickier, because if you destroy the server, you might be destroying the state as well, which is usually unacceptable.

Mark: Mission-critical servers running monolithic applications cannot generally be managed in this disposable manner. As I understand it, the principal argument for this pattern of working is one of trust. Some people would rather trust an image than a configuration engine. One would like to allow developers to manage and maintain their own infrastructure requirements increasingly. However, if you force everyone to make changes only through disk images, you are tying their hands with regard to making dynamic changes, such as adjusting the number of parallel instances of a server to handle latency and tuning other aspects of performance. Disregarding those concerns is a business decision. Developers often don't have the right experience to understand scalability and performance, and certainly not in advance of deployment.

InfoQ: What are the main conceptual differences between tooling for immutable infrastructure and desired-state convergence?

Chad: Desired-state convergence is in a different realm of complexity to implement. It's a fascinating idea, but at least for my use cases it's outdated. The longer a system lives, the more afraid of it I become. I can't be 100% sure that it is configured the way I want and that it has exactly the right software. Thousands upon thousands of developer hours have gone into solving this problem.

In the world of immutable infrastructure, I think of servers as replaceable building blocks, like parts in an automobile. You


don't update a part. You just replace it. The system is the sum of its parts. The parts are predictable since they don't change. It's conceptually very simple.

Mitchell: The tooling doesn't change much! Actually, all tools used for desired-state convergence are equally useful for immutable infrastructure. Instead, immutable infrastructure adds a compilation step to servers or applications that didn't exist before. For example, instead of launching a server and running Chef, you now use Packer as a compilation tool to launch a server, run Chef, and turn it into an image.

One thing that does change is a mindset difference: immutable tools know they can only run once to build an image, whereas desired-state convergence tools expect that they can run multiple times to achieve convergence. In practice, this doesn't cause many problems because you can just run the desired-state convergence tool multiple times when building an image. However, the tools built for immutability tend to be much more reliable in achieving their intended purpose the first time.

Mark: If you want to freeze the configuration of a system to a predefined image, you have to have all the relevant information about its environment up front, and then you are saying that you won't try to adapt down the line. You will kill a host to repair the smallest headache. It's an overtly discontinuous approach to change, as opposed to one based on preservation and continuity. If you think of biology, it's like tissue, where you can lose a few cells from your skin because there are plenty more to do the job. It can only work if resources are plentiful and redundant.

With desired-state convergence, you can make a system completely predictable with repairs and even simple changes in real time, and respond to business problems on the fly, at pretty much any scale, making only minimal interventions. This is like the role of cellular DNA in biology. There are repair processes ongoing because there is no redundancy at the intracellular level.

Bulk information is easier to manage from a desired-state model than from piles of bulk data because it exploits patterns to good advantage. You can easily track changes to the state (for compliance and auditing purposes) because a model defines your standard of measurement over many versions. Imagine compliance auditing like PCI or HIPAA. How do you prove to an auditor that your system is compliant? If you don't have a model with desired outcome, that becomes a process of digging around in files and looking for version strings. It's very costly and time wasting.

InfoQ: Is the choice and maturity of today's tools and the immutable-infrastructure pattern itself enough to become mainstream?

Chad: Probably not. The foundations are getting better and better with both hosted and internal cloud providers and frameworks, but creating a solid immutable architecture is not currently the path of least resistance. I think most of us will move in this direction over time, but it's currently far from mainstream.

Mitchell: Not yet, but they're leaps and bounds better than they were a year ago, and they're only going to continue to become more mature and solve the various problems early adopters of immutable infrastructure may be having.

All the tools are definitely mature enough to begin experimenting with and testing for some aspects of your infrastructure, though.

Mark: I don't believe it can become mainstream unless all software becomes written in a completely stateless way, which would then be fragile to communication faults in a distributed world. Even then, I don't think it is desirable. Do we really want to argue that it is better for the whole world to eat microwave dinners, or to force chefs to package things in plastic before eating it? If history has taught us anything, it is that people crave freedom. We have to understand that disposability is a large-scale economic strategy that is just not suitable at all scales.

InfoQ: Is immutable infrastructure applicable for organizations running their services on physical infrastructure (typically for performance reasons)? If so, how?

Chad: Sure. While perhaps the servers themselves would run longer and probably require in-place upgrades to run efficiently, with the many options available for virtualization, everything on top of that is fair game. I suppose it would be possible to take the approach further down the stack, but I haven't had to do it and I don't want to speculate.

Mitchell: Yes, but it does require a bit more disciplinary work. The organization needs to have in place some sort of well-automated process for disposing of and re-imaging physical machines. Unfortunately, many organizations do not have this, which is somewhat of a prerequisite to making immutable infrastructure very useful.


"One thing is certain, the longer a server has been provisioned and running, the more likely it is in an unknown state. Immutable servers solve the problem of being certain about server state by creating them anew for each change described previously."

For example, what you really want is something like Mesos or Borg for physical hardware.

Mark: The immutable-infrastructure idea is not tied specifically to virtualization. The same method could be applied to physical infrastructure, but the level of service discontinuity would be larger.

Today, immutability is often being mixed up with arguments for continuous delivery in the cloud, but I believe that disposable computing could easily be contrary to the goals of continuous delivery because it adds additional hoops to jump through to deploy change, and makes the result less transparent.

InfoQ: With respect to an application's architecture and implementation, what factors need to be taken into account when targeting an immutable infrastructure?

Chad: Infrastructure and services need to be discoverable. Whatever you're using to register services needs to be programmable via an API. You need to have intelligent monitoring and measuring in place, and your monitoring needs to be focused less on the raw infrastructure and more on the end purpose of the service than you're probably used to.

Everything needs to be stateless where possible.

As I mentioned previously, we have cheats like managed database systems. I'm not going to comment on how you have to change architecture to have your own immutable, disposable database systems, since it's thankfully not a problem I've needed or wanted to solve yet.

Mitchell: Nothing has to change in order to target immutable infrastructure, but some parts of architecting and developing an application become much easier. Developers in an immutable infrastructure have to constantly keep in mind that any service they talk to can die at any moment (and hopefully can be replaced rather quickly). This mindset alone results in developers generally building much more robust and failure-friendly applications.

With strongly coupled, mutable infrastructures, it isn't uncommon to interrupt a dependent service of an application and have that application be completely broken until it is restarted with the dependent service up. While keeping immutable infrastructure in mind, applications are much more resilient. As an example from our own infrastructure managing Vagrant Cloud, we were able to replace and upgrade every server (our entire infrastructure) without any perceivable downtime and without touching the web front ends during the replacement process. The web applications just retried some connects over time and eventually came back online. The only negative experience was that for some people their requests were queued a bit longer than usual!

Mark: The aim should not be to make applications work around an immutable infrastructure. You don't pick the job to fit the tools. The immutable infrastructure is usually motivated as a way of working around the needs of application developers. The key question is how do you optimize a continuous-delivery pipeline?

InfoQ: Does immutable infrastructure promote weak root-cause analysis as it becomes so easy to trash a broken server to repair service instead of fixing it? If so, is that a problem or just a new modus operandi?

Chad: I don't think so. In the worst case, it doesn't change how we do root-cause analysis, because replacing and rebooting are sort of the same last-ditch effort when something is wrong. In the best case, it makes it easier to experiment with possible solutions, tweak variables one at a time, bring up temporary test servers, swap production servers in and out, etc.

I see the point you're hinting at in the question, though. There may be a class of problems that is all but eliminated (read: obscured to the point of essentially not existing) if the average lifespan of a server is less than one day. It may also be harder to pinpoint these problems if they do occasionally pop up without knowing to try server longevity as a variable. Maybe that's okay.

Mitchell: Because it is very likely that the server you're replacing it with will one day see that same issue, immutable infrastructure doesn't promote any weaker root-cause analysis. It may be easier to ignore for a longer period of time, but most engineering organizations will care to fix it properly at some point.

Actually, I would say the root-cause analysis becomes much stronger. Since the component is immutable and likely to exhibit the same problems under the same conditions, it is easier to reproduce, identify, fix, and finally to deploy your change across your entire infrastructure.

Additionally, desired-state configuration has a high chance of making the problem worse: a scheduled run of the configuration-management system may mask the real underlying issue, causing the ops team to spend more time trying to find it or even to just detect it.

Mark: Disposal of causal evidence potentially makes understanding the environment harder. Without model-based configuration, a lot of decisions get pushed down into inscrutable scripts and pre-templated files, where the reason for the decisions only lives in some developer's head. That process might work to some extent if the developer is the only one responsible for making it, but it makes reproducibility very challenging. What happens when the person with that knowledge leaves the organization? In psychology, one knows that humans cannot remember more than a small number of things without assistance. The question is how do you create a knowledge-oriented framework where intent and outcome are transparent and quickly reproducible with a minimum of repeated effort. This is what configuration management was designed for, and I still believe that it is the best approach to managing large parts of the infrastructure configuration. The key to managing infrastructure is in separating and adapting different behaviours at relevant scales in time and space. I have written a lot about this in my book In Search of Certainty: The Science of Our Information Infrastructure.

"Immutable servers are easier to reason about. They hold up better in the face of concurrency. They are easier to audit. They are easier to reproduce, since the initial state is known."
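The contrast the panelists draw between desired-state convergence and immutable replacement can be sketched in a few lines of illustrative Python (the Server model and image names are hypothetical, not any particular tool's API):

```python
from dataclasses import dataclass, replace as _replace

@dataclass(frozen=True)
class Server:
    """Hypothetical model of a server: a base image plus accumulated drift."""
    image: str
    config_drift: int = 0  # undocumented manual changes since provisioning

def converge(server: Server, desired_image: str) -> Server:
    # Desired-state convergence: patch the running server in place.
    # Anything the configuration model does not describe (drift) survives.
    return _replace(server, image=desired_image)

def rebuild(desired_image: str) -> Server:
    # Immutable approach: recreate the server from the new image,
    # so drift cannot carry over.
    return Server(image=desired_image)

old = Server(image="app-v1", config_drift=3)
assert converge(old, "app-v2").config_drift == 3  # drift survives convergence
assert rebuild("app-v2").config_drift == 0        # replacement starts clean
```

The point of the sketch is only that convergence carries unmodeled state forward, while rebuilding from an image discards it; it is this property that makes immutable servers easier to reason about and to reproduce.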


Docker: Present and Future

Chris Swan is CTO at CohesiveFT, a provider of cloud networking software. Previously, he spent a dozen years in financial services as a technologist to bankers and banker to technologists. He spent most of that time at large Swiss banks wrangling infrastructure for app servers, compute grids, security, mobile, and cloud. Chris also enjoys tinkering with the Internet of Things, including a number of Raspberry Pi projects.

Docker is a toolset for Linux containers designed to build, ship, and run distributed applications, first released as an open-source project by dotCloud in March 2013. The project quickly became popular, leading dotCloud to rebrand as Docker, the company (and ultimately sell off their original PaaS business). The company released Docker 1.0 in June 2014 and has since sustained the monthly release cadence that led up to the June release.

The 1.0 release marked the point where the company considered the platform sufficiently mature for use in production (with the company and partners selling support options). The monthly release of point updates shows that the project is still evolving quickly, adding new features, and addressing issues as they are found. The project has successfully decoupled ship from run, so images sourced from any version of Docker can be used with any other version (with both forward and backward compatibility), something that provides a stable foundation for Docker use despite rapid change.

The growth of Docker into one of the most popular open-source projects could be perceived as hype, but there is a great deal of substance. Docker has attracted support from the industry, including Amazon, Google, and Microsoft. It is almost ubiquitously available wherever Linux can be found. In addition to the big names, many startups are growing up around Docker or changing direction to better align with Docker. Those partnerships (large and small) are helping to drive rapid evolution of the core project and its surrounding ecosystem.

A brief technical overview of Docker


Docker makes use of Linux-kernel facilities such as cgroups, namespaces, and SELinux to provide isolation between containers. At first, Docker was a front end for the LXC container-management subsystem, but release 0.9 introduced libcontainer, which is a native Go-language library that provides the interface between user space and the kernel.

Containers sit on top of a union file system, such as AUFS, which allows for the sharing of components such as operating-system images and installed libraries across multiple containers. The layering approach in the file system is also exploited by the Dockerfile DevOps tool, which is able to cache operations that have already successfully completed. This can greatly speed up test cycles by eliminating the wait time usually taken to install operating systems and application dependencies. Shared libraries between containers can also reduce RAM footprint.

A container is started from an image, which may be locally created, cached locally, or downloaded from a registry. Docker, Inc. operates the Docker Hub Registry, which hosts official repositories for a variety of operating systems, middleware, and databases. Organisations and individuals can host public repositories for images at Docker Hub, and there are also subscription services for hosting private repositories. Since an uploaded image could contain almost anything, the Docker Hub Registry provides an automated build facility (previously called "trusted build") where images are constructed from a Dockerfile that serves as a manifest for the contents of the image.

Containers versus VMs
Containers are potentially much more efficient than VMs because they're able to share a single kernel and application libraries. This can lead to substantially smaller RAM footprints even when compared to virtualisation systems that can make use of RAM over-commitment. Storage footprints can also be reduced when deployed containers share underlying image layers. IBM's Boden Russel has benchmarked these differences.

Containers also present a lower systems overhead than VMs, so the performance of an application inside a container will generally be the same or better than the same application running within a VM. A team of IBM researchers have published a performance comparison of virtual machines and Linux containers.

One area where containers are weaker than VMs is isolation. VMs can take advantage of ring -1 hardware isolation such as that provided by Intel's VT-d and VT-x technologies. Such isolation prevents VMs from breaking out and interfering with each other. Containers don't yet have any form of hardware isolation, which makes them susceptible to exploits. A proof-of-concept attack named Shocker showed that Docker versions prior to 1.0 were vulnerable. Although Docker 1.0 fixed the particular issue exploited by Shocker, Docker CTO Solomon Hykes said, "When we feel comfortable saying that Docker out of the box can safely contain untrusted uid0 programs, we will say so clearly." Hykes's statement acknowledges that other exploits and associated risks remain, and that more work will need to be done before containers can become trustworthy.

For many use cases, the choice of containers or VMs is a false dichotomy. Docker works well within a VM, which allows it to be used on existing virtual infrastructure, private clouds, and public clouds. It's also possible to run VMs inside containers, which is something that Google uses as part of its cloud platform. Given the widespread availability of infrastructure as a service (IaaS) that provides VMs on demand, it's reasonable to expect that containers and VMs will be used together for years to come. It's also possible that container-management and virtualisation technologies might be brought together to provide a best-of-both-worlds approach: a hardware trust-anchored micro-virtualisation implementation behind libcontainer could integrate with the Docker tool chain and ecosystem at the front end, but use a different back end that provides better isolation. Micro virtualisation (such as Bromium vSentry and VMware's Project Fargo) is already used in desktop environments to provide hardware-based isolation between applications, so similar approaches could be used along with libcontainer as an alternative to the container mechanisms in the Linux kernel.

Dockerizing applications
Pretty much any Linux application can run inside a Docker container. There are no limitations on choice of languages or frameworks. The only practical limitation is what a container is allowed to do from an operating-system perspective. Even that bar can be lowered by running containers in privileged mode, which substantially reduces controls (and correspondingly increases the risk of the containerised application being able to cause damage to the host operating system).

Containers are started from images, and images can be made from running containers. There are essentially two ways to get applications into containers: manually and with a Dockerfile.
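As a concrete sketch of the layering and caching behaviour described above, a minimal Dockerfile might look like this (the base image tag, package, and file names are illustrative, not a recommendation):

```dockerfile
# Each instruction below produces a cached layer; on a rebuild, Docker
# reuses the cached layers up to the first changed instruction, so only
# the steps after a change are re-run.
FROM ubuntu:14.04
RUN apt-get update && apt-get install -y nginx
COPY index.html /usr/share/nginx/html/
EXPOSE 80
CMD ["nginx", "-g", "daemon off;"]
```

Editing only index.html and rebuilding skips the slow apt-get layer entirely, which is what makes iterative test cycles fast.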


Manual builds
A manual build starts by launching a container with a base operating-system image. An interactive terminal can then be used to install applications and dependencies using the package manager offered by the chosen flavour of Linux. Zef Hemel provides a walk-through of the process in his InfoQ article "Docker: Using Linux Containers to Support Portable Application Deployment". Once the application is installed, the container can be pushed to a registry (such as Docker Hub) or exported into a tar file.

Dockerfile
Dockerfile is a system for scripting the construction of Docker containers. Each Dockerfile specifies the base image to start from and then a series of commands that are run in the container and/or files that are added to the container. The Dockerfile can also specify ports to be exposed, the working directory when a container is started, and the default command on start up. Containers built with Dockerfiles can be pushed or exported just like manual builds. Dockerfiles can also be used in Docker Hub's automated build system so that images are built from scratch in a system under the control of Docker, Inc., with the source of that image visible to anybody that might use it.

One process?
Whether images are built manually or with a Dockerfile, a key consideration is that only a single process is invoked when the container is launched. For a container serving a single purpose, such as running an application server, running a single process isn't an issue (and some argue that containers should only have a single process). For situations where it's desirable to have multiple processes running inside a container, a supervisor process must be launched that can then spawn the other desired processes. There is no init system within containers, so anything that relies on systemd, upstart, or similar won't work without modification.

Containers and microservices
A full description of the philosophy and benefits of using a microservices architecture is beyond the scope of this article (and is well covered in the InfoQ e-mag Microservices). Containers are, however, a convenient way to bundle and deploy instances of microservices.

Whilst most practical examples of large-scale microservices deployments to date have been on top of (large numbers of) VMs, containers offer the opportunity to deploy at a smaller scale. The ability for containers to have a shared RAM and disk footprint for operating systems, libraries, and common application code also means that deploying multiple versions of services side by side can be made very efficient.

Connecting containers
Small applications will fit inside a single container, but in many cases an application will be spread across multiple containers. Docker's success has spawned a flurry of new application compositing tools, orchestration tools, and platform as a service (PaaS) implementations. Behind most of these efforts is a desire to simplify the process of constructing an application from a set of interconnected containers. Many tools also help with scaling, fault tolerance, performance management, and version control of deployed assets.

Connectivity
Docker's networking capabilities are fairly primitive. Services within containers can be made accessible to other containers on the same host, and Docker can also map ports onto the host operating system to make services available across a network. The officially sponsored approach to connectivity is libchan, which is a library that provides Go-like channels over the network. Until libchan finds its way into applications, there's room for third parties to provide complementary network services. For example, Flocker has taken a proxy-based approach to make services portable across hosts (along with their underlying storage).

Compositing
Docker has native mechanisms for linking containers together by which metadata about a dependency can be passed into the dependent container and consumed within as environment variables and hosts entries. Application compositing tools like Fig and geard express the dependency graph inside a single file so that multiple containers can be brought together into a coherent system. CenturyLink's Panamax compositing tool takes a similar underlying approach to Fig and geard, but adds a web-based user interface and integrates directly with GitHub so that applications can be shared.

Orchestration
Orchestration systems like decking, New Relic's Centurion, and Google's Kubernetes all aim to help with the deployment and lifecycle management of containers. There are also numerous examples (such as Mesosphere) of Apache Mesos (and particularly its Marathon framework for long-running applications)


being used along with Docker. By providing an abstraction between the application needs (e.g. expressed as a requirement for CPU cores and memory) and the underlying infrastructure, the orchestration tools provide decoupling that's designed to simplify both application development and datacentre operations. There is such a variety of orchestration systems because many have emerged from internal systems previously developed to manage large-scale deployments of containers; for example, Kubernetes is based on Google's Omega system that's used to manage containers across the Google estate.

Whilst there is some degree of functional overlap between the compositing tools and the orchestration tools, there are also ways that they can complement each other. For example, Fig might be used to describe how containers interact functionally whilst Kubernetes pods might be used to provide monitoring and scaling.

Platforms (as a service)
A number of Docker-native PaaS implementations such as Deis and Flynn have emerged to take advantage of the fact that Linux containers provide a great degree of developer flexibility (rather than being opinionated about a given set of languages and frameworks). Other platforms such as Cloud Foundry, OpenShift, and Apcera's Continuum have taken the route of integrating Docker-based functionality into their existing systems, so that applications based on Docker images (or the Dockerfiles that make them) can be deployed and managed alongside apps using previously supported languages and frameworks.

All the clouds
Since Docker can run in any Linux VM with a reasonably up-to-date kernel, it can run in pretty much every cloud that offers IaaS. Many of the major cloud providers have announced additional support for Docker and its ecosystem.

Amazon has introduced Docker in its Elastic Beanstalk system (which is an orchestration service over underlying IaaS). Google has Docker-enabled "managed VMs", which provide a halfway house between the PaaS of App Engine and the IaaS of Compute Engine. Microsoft and IBM have both announced services based on Kubernetes so that multi-container applications can be deployed and managed on their clouds.

To provide a consistent interface to the wide variety of back ends now available, the Docker team introduced libswarm (now Docker Swarm) to integrate with a multitude of clouds and resource-management systems. One of the stated aims of libswarm is to avoid vendor lock-in by "swapping any service out with another". This is accomplished by presenting a consistent set of services (with associated APIs) that attach to implementation-specific back ends. For example, the Docker server service presents the Docker remote API to a local Docker command-line tool so that containers can be managed on an array of service providers.

New service types based on Docker are still in their infancy. London-based Orchard offered a Docker hosting service, but Docker, Inc. said that the service wouldn't be a priority after acquiring Orchard. Docker, Inc. has also sold its previous dotCloud PaaS business to cloudControl. Services based on older container-management systems such as OpenVZ are already commonplace, so to a certain extent Docker needs to prove its worth to hosting providers.

Docker and the distros
Docker has already become a standard feature of major Linux distributions like Ubuntu, Red Hat Enterprise Linux (RHEL), and CentOS. Unfortunately, the distributions move at a different pace to the Docker project, so the versions found in a distribution can be well behind the latest available. For example, Ubuntu 14.04 was released with Docker 0.9.1, and that didn't change on the point-release upgrade to Ubuntu 14.04.1 (by which time Docker was at 1.1.2). There are also namespace issues in official repositories, since "docker" was also the name of a KDE system tray, so with Ubuntu 14.04 the package name and command-line tool are both docker.io.

Things aren't much different in the enterprise Linux world. CentOS 7 comes with Docker 0.11.1, a development release that precedes Docker, Inc.'s announcement of production readiness with Docker 1.0. Linux distribution users that want the latest version for promised stability, performance, and security would be better off following the installation instructions and using repositories hosted by Docker, Inc. rather than taking the version included in their distribution.

The arrival of Docker has spawned new Linux distributions such as CoreOS and Red Hat's Project Atomic that are designed to be a minimal environment for running containers. These distributions come with newer kernels and Docker versions than the traditional distributions. They also have lower memory and disk footprints. The new distributions also come with new tools for managing large-scale deployments, such as fleet, a distributed init system, and etcd for metadata management. There are also new mechanisms for updating the distribution itself


so that the latest versions of the kernel and Docker can be used. This acknowledges that one of the effects of using Docker is that it pushes attention away from the distribution and its package-management solution, making the Linux kernel (and the Docker subsystem using it) more important.

New distributions might be the best way to run Docker, but traditional distributions and their package managers remain very important within containers. Docker Hub hosts official images for Debian, Ubuntu, and CentOS. There's also a semi-official repository for Fedora images. RHEL images aren't available in Docker Hub, as they're distributed directly from Red Hat. This means that the automated build mechanism on Docker Hub is only available to those using pure open-source distributions (and willing to trust the provenance of the base images curated by the Docker team).

Whilst Docker Hub integrates with source-control systems such as GitHub and Bitbucket for automated builds, the package managers used during the build process create a complex relationship between a build specification (in a Dockerfile) and the image resulting from a build. Non-deterministic results from the build process aren't specifically a Docker problem; it's a result of how package managers work. A build done one day will get a given version, and a build done another time may get a later version, which is why package managers have upgrade facilities. The container abstraction (caring less about the contents of a container) along with container proliferation (because of lightweight resource utilisation) is, however, likely to make this a pain point that gets associated with Docker.

The future of Docker
Docker, Inc. has set a clear path to the development of core capabilities (libcontainer), cross-service management (libswarm), and messaging between containers (libchan). Meanwhile, the company has already shown a willingness to consume its own ecosystem with the Orchard acquisition. There is however more to Docker than Docker, Inc., with contributions to the project coming from big names like Google, IBM, and Red Hat. With a benevolent dictator in the shape of CTO Solomon Hykes at the helm, there is a clear nexus of technical leadership for both the company and the project. Over its first 18 months, the project has shown an ability to move fast by using its own output, and there are no signs of that abating.

Many investors are looking at the features matrix for VMware's ESX/vSphere platform from a decade ago and figuring out where the gaps (and opportunities) lie between enterprise expectations driven by the popularity of VMs and the existing Docker ecosystem. Areas like networking, storage, and fine-grained version management (for the contents of containers) are presently underserved by the existing Docker ecosystem, and provide opportunities for both startups and incumbents.

Over time, it's likely that the distinction between VMs and containers (the "run" part of Docker) will become less important, which will push attention to the "build" and "ship" aspects. The changes here will make the question of what happens to Docker much less important than what happens to the IT industry as a result of Docker.

"For many use cases the choice of containers or VMs is a false dichotomy. Docker works well within a VM, which allows it to be used on existing virtual infrastructure, private clouds and public clouds."
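To make the compositing section earlier in this article concrete, a minimal Fig file (service names and images here are illustrative) expresses a two-container dependency graph in a single fig.yml; Fig passes the db dependency into the web container as environment variables and a hosts entry:

```yaml
web:
  build: .          # built from the Dockerfile in the current directory
  ports:
    - "80:8000"     # host:container port mapping
  links:
    - db            # db's address is injected into web at start-up
db:
  image: postgres   # pulled from the Docker Hub official repository
```

A single `fig up` then brings both containers up together, starting db before the web container that links to it.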



How I Built a Self-Service Automation Platform

Brian Carpio is a technical leader in the IT space with over 15 years of experience. Brian has designed and supported solutions for multiple Fortune 500 and 100 companies. His specialty is in large-scale environments where automation is a must. Brian loves working with new technology like NoSQL and AWS.

The current buzz in the industry is how DevOps is changing the world and how the traditional role of operations is changing. These discussions lead to building self-service automation tools that let developers deploy and support their own code. I want to discuss some of the work I did at a previous company and how it changed the way the company did business.

AWS architecture
We used AWS as the foundation of our platform, but using AWS isn't the be-all and end-all; if anything, it presented its own challenges to the self-service platform we ended up designing.

Cloud-agnostic
Amazon provides a wide range of services from basic EC2 features like instance creation to full-blown DBaaS (database as a service). One of our most heated discussions was over what AWS services we actually wanted to support. I was a huge advocate of keeping it simple and sticking strictly to EC2-related services (compute, storage, load balancing, and public IPs). The reason was simple: I knew that eventually our AWS expense would grow so much that it might eventually make sense to build our own private cloud using OpenStack or other open standards. Initially, this wasn't a well-accepted viewpoint because many of our developers wanted to play with as many AWS services as they could, but this paid off in the long run.

Environments
AWS doesn't provide an environments concept, so the first architectural decision was on environments. Having spent the last 15 years working in operations, I


understood that it was critical to design an EC2 infrastructure that was identical among development, staging, and production. So we made the decision that we would create separate VPCs for each of these environments. Not only did the VPC design allow for network and routing segregation between each environment, it also allowed developers to have a full production-like environment for development purposes.

Security groups
The next decision was how security groups would work. I've seen complex designs where databases, webservers, product lines, etc. all have their own security groups, and while I don't fault anyone for taking such an approach, it didn't really lend itself to the type of automation platform I wanted to design. I wasn't willing to give development teams the ability to manage their own security groups, so I decided there would only be two. I simply created a public security group for instances that lived in the public subnets and a private security group for instances that lived in the private subnets.

This allowed for strict sets of rules regarding which ports (80/443) were open to the Internet and which ports would be open from the public to the private zone. I wanted to force all of the applications to listen on a specific port on the private side.

An amazing thing happens when you tell a developer, "You can push this button and get a Tomcat server, and if your app runs on port 8080 everything will work. However, if you want to run your app on a non-standard port, you need to get security approval and wait for the network team to open the port." The amazing thing is that quickly every application in your environment begins to run on standard ports, creating an environment of consistency.

Puppet
We next decided which configuration-management system to use. This was easy: since I wrote the prototype application and no one else in my team existed at the time, I picked Puppet. Puppet is mature and it allowed me to create a team of engineers who strictly wrote Puppet modules, which we ultimately made available via the self-service REST API.

Puppet ENC
At the time we built our platform, Hiera was still a bolt-on product, and we needed a way to programmatically add nodes to Puppet as our self-service automation platform created them. Nothing really stood out as a solution, so we decided to roll our own Puppet ENC, which can be found here: http://www.…/puppet-enc-with-mongodb-backend/

By using the ENC we created, which supported inheritance, we were able to define a default set of classes for all nodes and then allow specific nodes to have optional Puppet modules. For instance, if a developer wanted a NodeJS server, they could simply pass the NodeJS module, but by using the inheritance feature of our ENC, we were able to enforce all the other modules like monitoring, security, base, etc.

Puppet environments
We decided to use Puppet environments entirely differently than how Puppet Labs intended them to be used. We used them to tag specific instances to specific versions of our Puppet manifest. By using a branch strategy for Puppetfile and r10k, we were able to create stable releases of our Puppet manifest. For instance, we had a v3 branch of our Puppetfile, which represented a specific set of Puppet modules at a specific version; this allowed us to work on a v4 branch of Puppetfile that introduced breaking changes to the environment without impacting existing nodes. A node in our ENC would look like this:

    classes:
      base:
      nptclient:
      nagiosclient:
      backupagent:
      mongodb:
    environment: v3
    parameters:

This allowed our development teams to decide when they actually wanted to promote existing nodes to our new Puppet manifest or simply redeploy nodes with the new features.

Command line to REST API
Our self-service platform started with my working with a single team who wanted to go to the cloud. Initially, I wrote a simple Python module that used boto and Fabric to do the following:
- Enforce a naming convention.
- Create an EC2 instance.
- Assign the instance to the correct environment (dev, stg, prd).
- Assign the correct security group.
- Assign the instance to a public or private subnet.
- Tag the instance with the application name and environment.


- Add the node and correct Puppet modules to the Puppet ENC.
- Add the node to Amazon Route 53.
- Register the node with Zabbix.
- Configure Logstash for Kibana.
By using Puppet, the node would spin up, connect to the Puppet master, and install the default set of classes plus the classes passed by our development teams. In the steps above, Puppet was responsible for configuring Zabbix, registering with the Zabbix server, and configuring Logstash for Kibana.

Rapid adoption
Once development teams were able to deploy systems quickly, bypassing traditional processes, adoption spread like wildfire. After about four different teams began using this simple Python module, it became clear that we needed to create a full-blown REST API.
The good thing about starting with a simple Python module was that the business logic for interacting with the system was defined for a team of real developers tasked with building a REST API. The prototype provided the information they needed to create the infrastructure across the environments, and interact with Zabbix, Kibana, Route 53, etc.
It wasn't long till we were able to implement a simple authentication system that used Crowd for authentication and provided team-level ownership of everything from instances to DNS records.
Over time, the number of features grew. Here is the list of features that finally seemed to provide the most value across 70 development teams around the world:
- Instance creation and management
- Database deployments (MongoDB, Cassandra, ElasticSearch, MySQL)
- Haproxy with A/B deployment capabilities using Consul
- Elastic load-balancer management
- Data encryption at rest
- DNS management
- CDN package uploading
- Multi-tenant user-authentication system

Conclusion
While the original idea was simple, it revolutionized the way my company delivered software to market. By centrally managing the entire infrastructure with Puppet, we had an easy way to push out patches and security updates. By creating an easy-to-use API, we let development teams deploy code quickly and consistently from development through production. We were able to maintain infrastructure best practices with backups, monitoring, etc., all built into the infrastructure that development teams could deploy with the click of a mouse or an API call.



Managing Build Jobs for Continuous Delivery

Martin Peston is a build and continuous-integration manager for Gamesys, the UK's leading online and mobile-gaming company, owners of the popular bingo brand Jackpotjoy and social gaming success Friendzys on Facebook.
Martin has 15 years of cross-sector IT experience, including work in aerospace, defence, health, and gambling. He has dedicated around half his career to build management and is passionate about bringing continuous-delivery best practices to software-development companies.
In his spare time, Martin is a keen amateur astronomer and is author of A User's Guide to the Meade LXD55 and LXD75 Telescopes, published by Springer in 2007.

Continuous delivery (CD) takes an evolving product from development through to production. Continuous integration (CI) plays an important part in CD and defines the software development at the beginning of the process.

There is a wealth of information describing CI processes but there is little information for the many different CI tools and build jobs that compile and test the code, which are central to the CI process.
In typical CI processes, developers manually build and test source code on their own machines. They then commit their modifications to a source-control-management system. A build tool will subsequently run jobs that compile and test the code. The built artifact is then uploaded to a central repository ready for further deployment and testing.
Hence, it is the role of jobs in the CI tool to manage the continuous modifications of the source code, run tests, and manage the logistics of artifact transportation through the deployment pipeline.
The number of jobs in a build tool can range from just a few to perhaps several thousand, all performing various functions. Fortunately, there is a way to manage all of these jobs in a more efficient manner.

Automatic creation of build jobs
So why should we set up a facility to automate the creation of other jobs?


Build-tool user guides describe how to create build jobs but they do not often explain how to manage them in any great detail, despite the tool's ability to do so.
Normally, the developer will know how the application is built and so takes sole responsibility for the creation and configuration of the build job. However, this process has some disadvantages:
1. Software architectural constraints: Applications are likely to be built differently from one another, depending upon architecture. One developer may interpret how an application is built differently from another developer and so the build configuration will slightly differ from one job to another. Hence, a large number of jobs become unmaintainable if they are all configured differently.
2. Human factor: Manually creating a job introduces the risk of making mistakes, especially if a new job is created by copying an existing job and then modifying that job.
3. Job timeline control: There is generally no record kept of job configuration for every build, so if a job configuration is changed, it could break previous builds.
Because of the points above, if developers are left to create build jobs without a consistent method for creating jobs, then they are going to end up with a poorly maintained CI build system and a headache to manage. This defeats the purpose of a CI system: being lean and maintainable.
Most build tools can create, maintain, and back up build jobs automatically, usually via API scripts. However, these features, despite mention in the build-tool user manual, are often overlooked and not implemented.

Advantages of automating the creation of build jobs
There are several advantages of using a master build job to automatically create multiple jobs in order to facilitate the CI build process:
1. All build-job configurations are created from a set of job templates or scripts. These can be placed under configuration control. Creation of a new job is then a simple matter of using the template or running the scripts. This can cut job-creation time significantly (many minutes to a few seconds). Job-configuration changes are made easier; configuration changes to the build-job template ensure that all subsequent new jobs will inherit the changes.
2. Consistency with all existing build-job configurations implies that globally updating all configurations via tools or scripts is more straightforward than with a cluttered, inconsistent system.
3. The developer does not necessarily need detailed knowledge of the build tool to create a build job.
4. The ability to automatically create and tear down build jobs is part of an agile, CI-build lifecycle.
Let's discuss these points.

Point 1: Configuration control of job configuration
It is good practice that every component of a CI system be placed under configuration control. That includes the underlying build systems and the build jobs, not just the source code.
Most build tools store build-job configuration as a file on the master system. These files are set to a common XML format and are normally accessible via the front-end interface directly through a REST API of some kind. Some build tools have a built-in feature that can save job configuration as a file to any location.
Since the job configuration can be saved as a file, it can be stored in a configuration-management (CM) system. Any change in job configuration can be recorded either by modifying the file directly and then uploading the file to the build tool or by modifying the job configuration from the build tool, saving the file, and uploading it manually to the CM system. The method carried out depends on how easy it is to access the job-configuration file directly from the build tool.
Another important reason for storing job configuration in a CM system is that in the event of a catastrophic failure that loses all job configurations, the build system can recover relatively quickly and restore all jobs, build logs, and job history to their last known good states.

Point 2: Job maintenance
Maintaining a thousand jobs of different configurations would be a major headache, which is why it is so important to standardise job configurations. If a required change impacts a large number of similar jobs, then writing scripts or creating special tasks to modify these jobs would be straightforward.
However, having a thousand jobs in a build system is not the best strategy for CI. We will discuss this further under Point 4.

Point 3: Developer control
Although developers are encouraged to work autonomously, building their applications using a centralised build system, they need to work within guidelines outlined by the build process.
A developer will expect fast feedback on their code changes and will not want to waste time messing around with build tools and job configuration. Providing developers with a one-button self-service-style solution to automatically set up build jobs will help them achieve their development task objectives more quickly.

Point 4: A lean CI system
Although software-development teams may adopt agile methods, the build tools they work with are not necessarily used in the same way. A properly configured build tool should not contain thousands of long-lived build jobs. Ideally, jobs should be created on demand as part of a CI lifecycle. As long as there is the means to recreate any build job from any historical job configuration and build relics are retained (i.e. logs, artefacts, test reports, etc.), only a handful of jobs should exist in the build tool.
Of course, an on-demand job creation/teardown solution may not be an achievable goal, but the point is that the number of jobs in a build tool should be kept to a manageable level.

Day-to-day development
Build management governs the tools and processes necessary to assist developers in day-to-day activities with regards to build and CI. Utilities can be set up to help development teams to create build jobs and provide flexibility in job configuration while remaining governed by software-development best practices.

The CI job suite
Consider typical tasks that developers carry out during day-to-day development:
- Build a baseline.
- Release the baseline.
- Build a release branch (often combined with releasing the baseline).
- Create a development branch and build the branch.
- Run integration tests.
Best practices aside (i.e. develop on baseline as much as possible, hence few or no development branches), software-development teams generally follow the tasks outlined above by running a suite of build jobs in the build tool.
This is where automatic creation of jobs adds huge value to the software-development process. If a developer wanted to manually create four or five jobs to manage all of the development tasks in day-to-day operations, it would take a significant amount of time and effort. Compare this to automatically creating jobs in a fraction of the time at a push of a button.

When to implement a self-service job-creation solution
Implementing a self-service solution for creating build jobs is applicable in the following cases:
1. Initialisation of new greenfield projects (those that have never been built before).
2. Projects that already exist but never had any build jobs set up for them, i.e. built manually from a developer's own machine.
There are also cases where implementing self-service job creation may not apply:
1. If a product only has a few jobs, such as with large, monolithic applications, then it may be considered not of value to the business to spend considerable time and effort to design, test, and roll out an automated create-job process for just those few projects. However, as the business expands, the product architecture will become more complex as more development streams are set up to manage the extra workload, hence the number


of jobs in the build tool will inevitably increase.
2. Projects with existing jobs may have issues. Consider two scenarios of dealing with existing jobs in the build tool:
- Delete the existing jobs and recreate them with the job-creation utility, thereby losing all historical build information and logs.
- Keep the existing jobs and attempt to modify their configuration to match the job configuration created by the job-creation utility.
In any case, there is always going to be some overhead with maintaining legacy build jobs.

Diagram 1: Hudson job-creation workflow.

Setting up self-service job creation in a build tool
We have talked about the advantages of automatically creating build jobs, so now let's discuss how we can implement such a feature in a build tool. The principle should be the same for building applications for any language, such as Java, .NET, etc.

Introducing the job-creation build form
The build form is a means to pass information from the build tool directly to another feature or back-end scripts that perform the job-creation tasks.
Ideally, as much project information as possible should be provided up front. This will reduce any later effort required to add extra configuration to newly created build jobs. Information such as project title, location of source code, and specific build switches is normally required. A push of the Build button creates the jobs just a few moments later.

Build-tool implementation
Two tools with which I have set up automated job creation are Hudson and AnthillPro 3.

Hudson/Jenkins
For clarity, I will talk only about Hudson but the information is equally relevant for Jenkins.
The Hudson CI tool has become popular among build managers looking to minimise their development resource budget and who do not want to be tied down with expensive licences. Hudson defines its build-job configuration in XML format. Typically, this XML file is accessible from the top-level page of the job URL, i.e. http://hostname/job-name/config.xml
Most of the sections in the config file are self-explanatory. Sections are added by plugins in the config file only when they are used in the Hudson GUI.
These config files can be made into templates by replacing specific project information in them with tokens. Several templates can be used to carry out different tasks, e.g. checkout


Figure 1: Hudson create-job form.

Figure 2: Hudson job view.

Figure 3: AnthillPro job view.

of source code from different SCMs such as Subversion or Git.
Now we need to link the create-job build form to the scripts. Diagram 1 shows the workflow. The build form passes the information to the scripts, which substitute the tokens in the templates with the relevant information. The config files are then uploaded to the Hudson tool via an API. More details about the API can be found in Hudson by adding the string "api" to the URL in the browser, i.e. http://hostname/job-name/api.
An example of a Hudson API create-job upload command is shown below (see Figure 1):

curl -H Content-Type:application/xml -s --data @config.xml ${HUDSONURL}/createItem?name=hudsonjobname

The jobs can be manually added to a new view (see Figure 2).
There are a few points to note:
1. If the Hudson security section is configured, ensure that the anonymous user has create-job privileges, otherwise the upload will fail authentication.
2. If Hudson tabs are used to view specific groups of jobs, it will not be possible to automatically place any of the newly created jobs under those tabs, except for the generic "all" tab (or any tab that uses a regex expression). The Hudson master-configuration file will require updating as well as a manual restart of the Hudson interface. Once the jobs are created, they can


<project name="createjob" default="createHudsonjobsConfigs">

    <!-- Ant script to update hudson job config.xml templates to create new jobs in hudson -->

    <!-- Get external properties from Hudson build form -->
    <property name="hudsonjobname" value="${HUDSON.JOB.NAME}" />
    <property name="scmpath" value="${SCM.PATH}" />
    <property name="mvngoals" value="${MVN.GOALS}" />

    <!-- same for rest of properties from the hudson form -->
    ...
    ...
    <property name="hudson.createItemUrl"
              value="http://hudson.server.location/createItem?name=" />

    <!-- Include ant task extensions -->
    <taskdef resource="net/sf/antcontrib/antlib.xml"/>

    <target name="createHudsonjobsConfigs" description="creates new config.xml file from input parameters">

        <mkdir dir="${hudsonjobname}"/>

        <!-- loop through each job template file replacing tokens in the job-templates with properties from Hudson -->
        <for list="CI-build-trunk,RC-build-branch" param="jobName">
            <sequential>
                <delete file="${hudsonjobname}/${configFile}" failonerror="false"></delete>
                <copy file="../job-templates/@{jobName}-config.xml"
                      tofile="${hudsonjobname}/${configFile}"/>
                <replace file="${hudsonjobname}/${configFile}" token="$HUDSON.JOB.NAME" value="${hudsonjobname}"/>
                <replace file="${hudsonjobname}/${configFile}" token="$SCM.PATH" value="${scmpath}"/>
                <!-- same for rest of tokens in the job template -->
                ...
                ...
                <antcall target="configXMLUpload">
                    <param name="job" value="@{jobName}"></param>
                </antcall>
            </sequential>
        </for>
    </target>

    <!-- construct the job config upload command -->
    <target name="configXMLUpload">
        <echo>curl -H Content-Type:application/xml -s --data @config.xml ${hudson.createItemUrl}${hudsonjobname}-${job}</echo>
        <exec executable="curl" dir="${hudsonjobname}">
            <arg value="-H" />
            <arg value="Content-Type:application/xml"/>
            <arg value="-s" />
            <arg value="--data" />
            <arg value="@config.xml" />
            <arg value="${hudson.createItemUrl}${hudsonjobname}-${job}"/>
        </exec>
    </target>
</project>
Code 1


<?xml version="1.0" encoding="UTF-8"?>
<project>
    <actions/>
    <description>Builds $POM.ARTIFACTID</description>
    ...
    ...
    <scm class="hudson.scm.SubversionSCM">
        <locations>
            <hudson.scm.SubversionSCM_-ModuleLocation>
                <remote>$SCM.PATH/trunk</remote>
                <local>.</local>
                <depthOption>infinity</depthOption>
                <ignoreExternalsOption>false</ignoreExternalsOption>
            </hudson.scm.SubversionSCM_-ModuleLocation>
        </locations>
    </scm>
    <assignedNode>$BUILD.FARM</assignedNode>
    ...
    ...
    <builders>
        <hudson.tasks.Maven>
            <targets>$MVN.GOALS</targets>
            <mavenName>$MVN.VERSION</mavenName>
            <usePrivateRepository>false</usePrivateRepository>
        </hudson.tasks.Maven>
    </builders>
    ...
    ...
</project>

Code 2: CI-build-trunk-config.xml Hudson job template.
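As an illustration of the templating step, after token substitution the elements above might read as follows. The SCM path, node label, and Maven values here are invented examples, not values from the article:

```xml
<remote>http://svn.example.com/myapp/trunk</remote>
...
<assignedNode>linux-build-farm</assignedNode>
...
<targets>clean install</targets>
<mavenName>maven-3.0.5</mavenName>
```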

Figure 4: AnthillPro create-workflow form.


then be manually added to the relevant tab
if so desired.
Scripts can be written in any language, such as
Ant, Perl, Groovy, etc., so it is a relatively straight-
forward set of tasks to create scripts in order to
carry out the steps outlined above.
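As an illustration in another language, the following minimal Python sketch performs the same two steps: substituting tokens in a job template and uploading the result to Hudson's createItem API. The helper names and the example SCM URL are hypothetical.

```python
# Hypothetical Python sketch: render a tokenised Hudson job template and
# upload it to the createItem API, mirroring the Ant/curl approach.
from urllib import request

def render_template(template, tokens):
    """Replace each $TOKEN marker in the template with its value."""
    for token, value in tokens.items():
        template = template.replace(token, value)
    return template

def upload_job(hudson_url, job_name, config_xml):
    """POST the rendered config.xml to Hudson, as the curl command does."""
    req = request.Request(
        "%s/createItem?name=%s" % (hudson_url, job_name),
        data=config_xml.encode("utf-8"),
        headers={"Content-Type": "application/xml"},
    )
    return request.urlopen(req)

# Render a fragment of a job template with an assumed SCM path.
config = render_template(
    "<remote>$SCM.PATH/trunk</remote>",
    {"$SCM.PATH": "http://svn.example.com/myapp"},
)
```

The same substitute-then-POST shape applies whatever scripting language is chosen.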
A typical Ant script is shown in Code 1.
Code 2 is a typical tokenised Hudson job template.
AnthillPro
AnthillPro is a licensed build and CI tool from UrbanCode (now part of IBM) that provides the user almost complete custom control of build jobs via a GUI and an extensive API library.
The latest incarnation from UrbanCode is uBuild, which is essentially a cut-down version of AnthillPro and is geared primarily towards the CI building of products, whereas its parent product was originally designed to manage an entire CD pipeline. uBuild does not have the API capability that AnthillPro has but it is possible to create a plugin that can do a similar thing.
Build jobs in AnthillPro are called workflows, which essentially are pipelines of individual job tasks executed in a specific order (in series or parallel).
AnthillPro's API-scripting language, BeanShell, is based on standard Java methods. BeanShell scripts are stored within the AnthillPro graphic user interface and called from within a job task. The latest version of AnthillPro allows developers to custom-build plugins with any programming language.
An AnthillPro job task is created using API calls that essentially construct all the necessary functions of that workflow. The script can be particularly long, as every aspect of the workflow configuration has to be populated with data. Just like Hudson, a build form collects build-job information and executes a master workflow, calling on the BeanShell scripts to create the relevant workflow. (Figures 3 and 4)
A typical Java BeanShell script is shown in Code 3.
Note: It is also possible to create a self-contained package of Java BeanShell scripts, such as a jar file, which is placed in a folder on the AnthillPro application server. Direct calls to classes within the jar file can then be made from a shell task within the application. This works well, especially because the scripts can be unit-tested before being applied in a live development environment.

private static Project createProject(User user) throws Exception {
    // get the values from the buildlife properties
    String groupId = getAnthillProperty("GROUPID");
    String artifactId = getAnthillProperty("ARTIFACTID");
    String build_farm_env_name = "linux-build-farm";

    String branchName = "branchName";

    Workflow[] workflows = new Workflow[3];
    // Set up Project
    Project project = new Project(artifactId + "_" + branchName);

    // determine whether the project already exists and is active
    boolean isProjectActive;
    try {
        Project projectTest =
            getProjectFactoryInstance(artifactId + "_" + branchName);
        isProjectActive = projectTest.isActive();
    } catch (Exception err) {
        isProjectActive = false;
    }

    if (!isProjectActive) {
        project.setFolder(getFolder(groupId, artifactId));
        setLifeCycleModel(project);
        setEnvironmentGroup(project);
        setQuietConfiguration(project);

        String applications = getAnthillProperty("APPS");
        // create project properties
        createProjectProperty(project, artifactId, artifactId, false, false);
        // set the server group for the workflow and add environment properties
        addPropertyToServerGroup(project, build_farm_env_name, applications);

        ...
        ...

        // Create the CI Build workflow
        String workflowName = "CI-build";
        workflows[0] = createWorkflow(
            project,
            workflowName,
            perforceClientFor(branchName, groupId, artifactId, workflowName) + buildLifeId,
            perforceTemplateFor(branchName, groupId, artifactId,
                "${stampContext:maven_version}"),
            build_farm_env_name,
            "CI-build Workflow Definition"
        );

        // add trigger url to commit-build branch
        addRepositoryTrigger(workflows[0]);

        // Create the Branch workflow
        workflowName = "Branch";
        workflows[1] = createWorkflow(
            project,
            workflowName,
            perforceClientFor(branchName, groupId, artifactId, workflowName) + buildLifeId,
            perforceTemplateFor(branchName, groupId, artifactId,
                "From ${stampContext:maven_version}"),
            build_farm_env_name,
            "Branch Workflow Definition"
        );

        // Create the Release workflow
        workflowName = "Release";
        workflows[2] = createWorkflow(
            project,
            workflowName,
            perforceClientFor(branchName, groupId, artifactId, workflowName) + buildLifeId,
            perforceTemplateFor(branchName, groupId, artifactId,
                "${stampContext:maven_release_version}"),
            build_farm_env_name,
            "Release Workflow Definition"
        );

        ...
        ...

    } else {
        // project already exists
        String msg = "Project " + artifactId + "_" + branchName + " already exists - check project name and pom";
        throw new RuntimeException(msg);
    }
    return project;
}

Code 3: Sample CreateProject Java BeanShell script.

Other tools
As long as there exists the ability to create jobs via an API or a template feature, the implementation described here can be similarly applied to other build tools such as Go, TeamCity, and Bamboo, to name but a few.

Ultimate build-tool agility: implementing the lean build-tool philosophy
An advantage mentioned earlier in this article is automatically creating on-demand CI projects and tearing them down after completion. This would significantly reduce the number of build jobs in the build tool.
I have yet to implement an agile, lean method to automatically create and tear down jobs on a build tool, so I cannot detail the technical challenges of implementing such a task. However, everything that has been discussed here can lead to implementing such a solution. In fact, Atlassian's Bamboo build tool can detect branch creation from a main baseline and automatically create an appropriate job to build it.

Summary
We have discussed how to set up facilities to create jobs that form part of the continuous-delivery strategy for a business:
- An automated approach to creating jobs in a build tool should be adopted as part of everyday software development.
- Automatic creation of jobs saves time and allows the developer to get on with more important tasks.
- Consistent build-job configuration leads to easier job maintenance and drives consistent development practices across a company.
- Automatic job creation can ultimately lead to a true agile adoption of software development right down to the build-tool level.



Service Discovery

David Dawson is CEO of Simplicity Itself, a UK-based consultancy that specialises in understanding and building simple software that thrives on change. After leading Simplicity Itself in the building of several large microservices systems and applying DevOps in their construction and design, David now applies his experience to the development of Muon, a new open-source programming model for microservices.

When building microservices, you have to distribute your application around a network. It is almost always the case that you are building in a cloud environment, and often using immutable infrastructure. Ironically, this means that your virtual machines or containers are created and destroyed much more often than normal, as this immutable nature means that you don't maintain them.

These properties together mean that your services need to be reconfigured with the location of the other services they need to connect to. This reconfiguration needs to be able to happen on the fly, so that when a new service instance is created, the rest of the network can automatically find it and start to communicate with it. This process is called service discovery.
This concept is one of the key underpinnings of microservice architecture. Attempting to create microservices without a service-discovery system will lead to pain and misery, as you will in effect be working as a manual replacement.
As well as the standalone solutions presented here, most platforms, whether full PaaS or the more minimal container managers, have some form of service discovery.
There are currently several key contenders to choose from: ZooKeeper, Consul, etcd, Eureka, and rolling your own solution.

ZooKeeper
ZooKeeper is an Apache project that provides a distributed, consistent, hierarchical configuration store.


ZooKeeper originated in the world of Hadoop, where it was built to help in the maintenance of the various components in a Hadoop cluster. It is not a service-discovery system per se, but is instead a distributed configuration store that provides notifications to registered clients. With this, it is possible to build a service-discovery infrastructure, although every service must explicitly register with ZooKeeper, and the clients must then check the configuration.
Netflix has invested a lot of time and resources in ZooKeeper, so a significant number of Netflix OSS projects have some ZooKeeper integration.
ZooKeeper is a well-understood clustered system. It is a consistent configuration store, so a network partition will cause the smaller side of the partition to shut down. For that reason, you must choose whether consistency or availability is more important to you.
If you do choose a consistent system such as ZooKeeper for service discovery, you need to understand the implications for your services. You have tied them to the lifecycle of the discovery system and have exposed them to any failure conditions it may have. You should not assume that consistent means free from failure. ZooKeeper is among the older cluster managers, and consensus (pun intended) is that its implementation of master selection is robust and well behaved.
If you do choose to use ZooKeeper, investigate the Netflix OSS projects, starting with Curator, and only use bare ZooKeeper if they don't fit your needs.
Since ZooKeeper is mature and established, there is a large ecosystem of good-quality (mostly!) clients and libraries to enrich your projects.

Consul
Consul is a peer-to-peer, strongly consistent data store that uses a gossip protocol to communicate and form dynamic clusters. It is based on the serf library. It provides a hierarchical key-value store in which you can place data and register watches to be notified when something changes within a particular key space. In this, it is similar to ZooKeeper.
As opposed to ZooKeeper and etcd, however, Consul implements a full service-discovery system in the library, so you don't need to implement your own or use a third-party library. This includes health checks on both nodes and services.
It implements a DNS server interface, allowing you to perform service lookups using the DNS protocol. It also allows clients to run as independent processes and can register/monitor services on their behalf. This removes the need to add explicit Consul support to your applications. This is similar in concept to the Netflix OSS Sidecar concept that allows services with no ZooKeeper support to be registered and be discoverable in ZooKeeper.
Deployment-wise, Consul agents are deployed on the systems that services are running on, and not in a centralised fashion.
This is a newer product and one that Simplicity Itself likes a lot. We recommend it if you are able to adopt it.
As with the other strongly consistent systems in the list, care must be taken that you understand the implications of adopting it, including understanding its potential failure modes.
If you would like something similar that chooses availability rather than consistency, investigate the related serf project. The serf library serves as the basis of Consul, but has chosen different data guarantees. It is nowhere near as fully featured, but can handily survive a split-brain scenario and reform afterwards without any ill effects.

etcd
As an HTTP-accessible key-value store, etcd is similar in concept to ZooKeeper and the key-value portion of Consul. It functions as a distributed, hierarchical configuration system, and can be used to build a service-discovery system.
It grew out of the CoreOS project, which maintains it. The tool recently achieved a stable major release.
If you are primarily using HTTP as your communication


mechanism, etcd can't be easily beaten. It provides a well-distributed, fast, HTTP-based system, and has query and push notifications on change via long polling.

Eureka

Eureka is a mid-tier load balancer built by Netflix and released as open source. It is designed to allow services to register with a Eureka server and then to locate each other via that server.

Eureka has several capabilities beyond the other solutions presented here. It contains a built-in load balancer that, although fairly simple, certainly does its job. Netflix states that they have a secondary load-balancer implementation internally that uses Eureka as a data source and is much more fully featured. This hasn't been released.

If you use Spring for your projects, Spring Cloud is an interesting project to look into. It automatically registers and resolves services in Eureka.

Like all Netflix OSS projects, Eureka was written to run on the AWS infrastructure first and foremost. While other Netflix OSS projects have been extended to run in other environments, Eureka does not appear to be moving in that direction.

We at Simplicity Itself very much like the relation between client and server in the Eureka design, as it leaves clients with the ability to continue working if the service-discovery infrastructure fails.

Server-wise, Eureka has also chosen availability rather than consistency. You must also be aware of the implications of this choice, as it directly affects your application. Primarily, this manifests as a potentially stale or partial view of the full data set. This is discussed in the Eureka documentation.

Eureka is now the quickest to get started with for Spring projects, because of the investment the Spring team has made in adopting the Netflix OSS components via the Spring Cloud subprojects.

Roll your own

One major point to note in the above systems is that they all require some extra service-discovery infrastructure. If you can accommodate that, it is better to adopt one or more of the systems above for service discovery. If you can't adapt to that, you will have to create your own discovery solution within your existing infrastructure. The basis of this will be that:

• Services must be able to notify each other of their availability and supply connection information.
• Periodic updates to the records should strip out stale information.
• It must easily integrate with your application infrastructure, often using a standard protocol such as HTTP or DNS.
• It must notify you of services starting and stopping.

Building your own discovery service should not be taken lightly. If you need to do it, we at Simplicity Itself recommend building a system that values availability rather than consistency. These are significantly easier to build, and make it more likely that you will build something functional.

We recommend that you use some existing message infrastructure and broadcast notifications on service status. Each service caches the latest information from the broadcasts and uses that as a local set of service-discovery data. This has the potential to become stale, but we've found that this approach scales reasonably well and is easy to implement.

If you do require consistency, using some consistent data store could serve as the basis for a distributed configuration system that can be used to build service discovery. You will also want to emit notifications on status changes. You should realise, though, that building a consistent, distributed system is exceptionally hard to get right, and very easy to get subtly wrong.

Overall, we do not recommend this strategy, but it is certainly possible.
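The broadcast-and-cache approach recommended above can be sketched in a few lines. This is an illustrative sketch only, not any real library's API: it assumes status broadcasts arrive via some message infrastructure (modelled here as a plain method call) and shows the local cache plus TTL-based stripping of stale records. All class and method names are hypothetical.

```python
import time


class DiscoveryCache:
    """Local, availability-first view of service locations.

    Each service broadcasts its availability periodically; every
    consumer caches the broadcasts and strips out stale records,
    as described in the article. Names here are illustrative.
    """

    def __init__(self, ttl_seconds=30, clock=time.monotonic):
        self.ttl = ttl_seconds
        self.clock = clock              # injectable so tests can fake time
        self._seen = {}                 # (service, address) -> last heartbeat

    def on_broadcast(self, service, address):
        """Handle a status broadcast delivered by the message bus."""
        self._seen[(service, address)] = self.clock()

    def lookup(self, service):
        """Return fresh addresses for a service; drop stale records."""
        now = self.clock()
        self._seen = {k: t for k, t in self._seen.items()
                      if now - t <= self.ttl}
        return sorted(addr for (name, addr) in self._seen if name == service)


# Usage: a fake clock stands in for real time passing.
t = [0.0]
cache = DiscoveryCache(ttl_seconds=30, clock=lambda: t[0])
cache.on_broadcast("billing", "10.0.1.5:8080")
cache.on_broadcast("billing", "10.0.1.6:8080")
print(cache.lookup("billing"))   # ['10.0.1.5:8080', '10.0.1.6:8080']
t[0] = 60.0                      # 60 s later, no heartbeats heard
print(cache.lookup("billing"))   # records have gone stale -> []
```

Note how staleness is accepted by design: a consumer may briefly see an instance that has just died, which is exactly the availability-over-consistency trade-off the article describes.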

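If you do build on a consistent store instead, a common layout is a hierarchical key space with one key per service instance, plus change notifications. The sketch below fakes the store with a dict and watcher callbacks; in practice the same shape maps onto etcd, ZooKeeper, or the key-value portion of Consul, whose clients supply real put/delete/watch primitives. Every name here is hypothetical.

```python
class ToyConsistentStore:
    """Stand-in for a consistent, hierarchical configuration store
    (etcd, ZooKeeper, or Consul's KV portion). Hypothetical API:
    a real deployment would use the store's own client."""

    def __init__(self):
        self._kv = {}
        self._watchers = []

    def watch(self, callback):
        # Roughly analogous to change notifications (etcd's long
        # polling, ZooKeeper watches): called on every status change.
        self._watchers.append(callback)

    def put(self, key, value):
        self._kv[key] = value
        for cb in self._watchers:
            cb("set", key, value)

    def delete(self, key):
        if self._kv.pop(key, None) is not None:
            for cb in self._watchers:
                cb("delete", key, None)

    def list_prefix(self, prefix):
        return {k: v for k, v in self._kv.items() if k.startswith(prefix)}


# One key per service instance, grouped under the service name.
def register(store, service, instance, address):
    store.put(f"/services/{service}/{instance}", address)


def deregister(store, service, instance):
    store.delete(f"/services/{service}/{instance}")


def resolve(store, service):
    return sorted(store.list_prefix(f"/services/{service}/").values())


store = ToyConsistentStore()
events = []
store.watch(lambda op, key, val: events.append((op, key)))
register(store, "billing", "i-1", "10.0.1.5:8080")
register(store, "billing", "i-2", "10.0.1.6:8080")
print(resolve(store, "billing"))   # ['10.0.1.5:8080', '10.0.1.6:8080']
deregister(store, "billing", "i-1")
print(resolve(store, "billing"))   # ['10.0.1.6:8080']
```

The hard part the toy omits is exactly what the article warns about: making `put`, `delete`, and `watch` behave correctly across node failures and network partitions, which is why the authors recommend adopting an existing store rather than building one.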

