
HOW TO WIN AT CI/CD AND INFLUENCE LEADERS

A Guide to Leading the CI/CD Revolution in Your Windows-Based Organization

Alex Papadimoulis
Copyright © 2019 Alex Papadimoulis

All rights reserved.

No part of this publication may be reproduced in whole or in
part, or stored in a retrieval system, or transmitted in any form
or by any means, electronic, mechanical, photocopying,
recording, or otherwise, without written permission of the
publisher.

Table of Contents
I <> Continuous Integration + Continuous Delivery .. 1
Is Linux a prerequisite for CI/CD? ............................................ 2
Can you switch to Linux?......................................................... 3
Implement CI/CD on Windows, with Windows ........................ 4
Technical Differences Between Linux and Windows CI/CD....... 5
Design Principle: Modular vs. Integrated ................................. 6
Different Software Delivery .................................................. 7
Stepping on the wrong gas pedal.......................................... 8
User Experience: Learned vs. Intuitive..................................... 9
Learn by Using ..................................................................... 10
Linux tools for the Linux mindset ........................................ 11
Cultural differences are important ........................................ 12
Toxic Cultures Create Toxic Results .................................... 13
Like Oil and Vinegar ............................................................ 13
Love the One You’re With ................................................... 13
CI/CD Basics.......................................................................... 14
Seamless Automation ........................................................... 15
The CI/CD Value Proposition ................................................. 16
The Right CI/CD Tool ............................................................. 17
Talking Points ...................................................................... 19
Demonstrating Value: The Proof of Concept.......................... 20
Choose Wisely ..................................................................... 21
A Question of Metrics ......................................................... 22
Manual Releases are for Quitters .......................................... 23
II <> Proof in Practice ................................................ 25
Placeholders ......................................................................... 25
CI/CD Process ....................................................................... 26
Artifacts ............................................................................... 27
Option #1: Build Artifacts within your CI/CD tool ...............28
Option #2: Importing Artifacts from CI Servers ..................30
The MSI Antipattern ............................................................ 32
Option #3: Drop Folders/Paths ........................................... 35
It Doesn't Matter Where You Start ........................................ 36

Best Practice Note: Bypass Roadblocks by Working Asynchronously ............... 36
III <> Deployment Plans ........................................... 39
The Fundamental Deployment Process ................................. 39
What is a deployment plan? ................................................. 40
Operations: Building blocks of Deployment Plans ..............41
The Server Context.............................................................. 43
One Deployment Plan for Multiple Environments ................. 44
Grouping Servers: Environments and Server Roles ................ 45
Variables and Deployment Plans ........................................... 47
Configuring Variables .......................................................... 48
OtterScript for Deployment Plans ......................................... 49
Which is Better: Visual or Text mode? ................................ 51
PowerShell + OtterScript ..................................................... 52
Manual Tasks: The Ultimate Placeholder............................... 53
Environments and Permissions ........................................... 54
Best Practice: Use Configuration Files for Different Configuration ............... 54
What are configuration files?.............................................. 55
Configuration Files and CI/CD ............................................. 57
IV <> Issue Tracking and CI/CD ............................. 61
Issue/Project Tracking: The Rickety Bridge to Business .......... 61
Proper Issue Tracking Process ............................................... 62
The "Bug Tracker" Anti-pattern .......................................... 64
Automation and Issue Tracking ............................................. 65
BuildMaster and Issues/Project Tracking............................... 67
Transitioning Jira Issues ...................................................... 67
Beyond Operations: BuildMaster and Issues ......................68
Issue Visualization & Deployment Blocking ........................68
Issue Tracking: Next Steps..................................................... 69
V <> Integrating the Human Factor ......................... 71
Three Layers of CI/CD ........................................................... 71
Process Automation: Bridging the Technical and Nontechnical ............... 72
Pipelines: Your Automated Process ....................................... 74
Effectively Using Pipeline Stages ......................................... 74
Deployment Windows and Approvals................................. 77
Avoiding the Bob Anti-Pattern .............................................. 78

Processes, Security, and Auditing .......................................... 79
Placeholders for Process ..................................................... 80
VI <> How to Rollout CI/CD to Everyone ............... 83
Secret to Buy-In: Incremental and Parallel............................. 84
The More You Know ............................................................. 85
VII <> Advanced Topics ............................................ 87
Blue / Green Deployments .................................................... 87
Benefits of Blue/Green Deployments ................................. 89
How to Perform Blue-Green Deployments in BuildMaster ............... 90
Initial BuildMaster Configuration ........................................ 91
Appendix A <> How to Get Support from Inedo ...... 95
Integrations and the Inedo Den............................................. 97
Expanding into Inedo's Other Tools ....................................... 98

I
Continuous Integration
+ Continuous Delivery

Organizations of all sizes, from the leanest startup to the
stodgiest enterprise, have been using CI/CD practices to
improve their ability to produce and deliver software for
business stakeholders. Nearly all of these organizations
have one thing in common: they’re Linux shops that
predominantly use Linux servers and Linux-based
technologies to produce and deliver software.

If you're a Windows shop, then the solution is not simple,
but it is worth pursuing. While I've had a lot of personal
success implementing CI/CD practices at Windows
shops, the CI/CD industry is still very Linux-biased. For
the past decade, I’ve engaged engineers at DevOps and
CI/CD conferences and asked how they would do it.

"I wouldn't," is a common answer. Also, "I'd never work
for a company that used Windows."

"Okay," I prod them, "but hypothetically speaking, let's
say I paid you infinity billion dollars to lead a CI/CD
transformation at a Windows shop. What would you do?”

“Simple, I would migrate to Linux, then bring in CI/CD.”

Is Linux a prerequisite for CI/CD?


Short answer: No!

CI/CD was originally developed by Linux sysadmins
using Linux-based technologies to produce and deliver
their Linux-based solutions. These sysadmins shared their
success stories with other Linux sysadmins, and before too
long, other Linux-based organizations adopted CI/CD
practices.

The consequence of this history is that most people with
real-world experience implementing CI/CD practices are
from Linux shops, and that experience doesn’t necessarily
translate to Windows shops. In fact, it could actually make
things worse.

It is a truism that using the wrong tool for a job is a bad
idea. The same is true with CI/CD implementation.
Bringing in the wrong experience means that you'll not
only waste valuable effort doing the wrong thing, but
you'll never solve the underlying problem. It could take
years to recover from a bad implementation, costing you
the very value a CI/CD process is supposed to deliver.

Can you switch to Linux?


Short answer: Not easily.

Migrating an enterprise to Linux is technically feasible. It's
been tried, but at a great cost and with loads of bugs and
problems. Ultimately, the business value of switching to
Linux doesn’t justify the cost. The reason is much more
than technical: it’s also a personnel challenge.

It's hard to find a talented sysadmin, let alone a Linux
sysadmin who's willing to join a Windows shop and has
the appropriate Windows CI/CD implementation
expertise. It is also unrealistic to expect your Windows
sysadmins to switch to Linux. This is not like having your
baker switch from cups and ounces to milliliters and
grams. It would be like expecting your baker to become a
sushi chef.

Sure, they can learn. Given enough time and a ton of failed
sushi rolls, you might just get your bakery to start making
quality sushi. Or, you could just improve how you make
bread.

Implement CI/CD on Windows, with Windows


You do not have to be a Linux shop nor must you switch
to Linux to implement CI/CD and benefit from its
practice. You simply need to implement CI/CD on
Windows, with Windows.

This book will help your existing staff learn how to switch
effectively and Inedo engineers are always available to help
guide the process. You will not need to hire any Linux
CI/CD experts to implement a process that will help your
organization build, test, and release quality software at the
speed required. Other Windows shops have successfully
made this transition, and you can too. Throughout the
book, I will cover teachable moments from others'
experience so you can avoid known pitfalls.

Remember, the vast majority of CI/CD users are Linux
shops, so be careful when following CI/CD advice. It may
not apply to you. It’s critically important to understand
the differences between Windows-based and Linux-based
CI/CD so you properly apply the right expertise in your
shop.

Technical Differences Between Linux and Windows CI/CD
Understanding the technical differences between Linux
and Windows-based CI/CD will help you deliver business
value for your Windows shop. The fundamental
differences between the two platforms lie in their design
(modular vs. integrated) and in their user experience.

Understanding these differences will let you relate to the
engineers who have successfully used this technology and
understand which technical solutions work best on
Windows and which work best on Linux. Ultimately, this
understanding will enable you to dramatically increase
release efficiency by adopting CI/CD practices.


Design Principle: Modular vs. Integrated


In and of itself, Linux is just an operating system kernel.
Various third parties build the dozens of components that
make up a usable operating system, from secure remote
access to a basic text editor. There are mountains of
options to choose from for each type of component, and
you can even install multiple implementations of the same
component. For example, it's not uncommon to have
both a "Bash" and a "Bourne" command-line shell on the
same server.

Different third parties, such as Red Hat, Debian, and
Ubuntu, bundle these components into system packages
and include a set of those packages in a "distribution";
sysadmins can then add, update, or remove packages as
needed.

Windows, on the other hand, is a full-fledged operating
system. At its heart is the Windows NT kernel, but
Windows is bundled with almost everything most
organizations will need – from a web server (IIS) to
scheduled tasks to DNS – and it’s rare to replace a
Microsoft-built component with a third-party alternative.

Sysadmins can enable and disable components, but most
can't be upgraded without replacing the entire operating
system.

This is a serious challenge, but one that Microsoft has well
in hand through backwards compatibility. For example, if
your application ran on Windows Server 2003, it's likely
running on Windows Server 2019, and will run on
Windows Server 2030. This is why, in the Windows
world, applications don't need to be closely tied to server
configuration.

Different Software Delivery

This design difference leads to very different approaches
to producing and delivering business software.

Linux application developers must specify which
operating system packages need to be installed (and what
versions), and the specific configuration of those packages
for their applications to function. The development teams
are often responsible for making these choices, and
sysadmins for validating those choices.

Windows application developers must specify which
components should be enabled for their applications to
function, but they have very few choices. Most Windows
application servers will already have the expected
components (like IIS) enabled, and the developers must
build their applications against those components.

Stepping on the wrong gas pedal

Speeding up software delivery on Linux often involves
giving developers more choices and more control over the
operating system’s configuration. This doesn’t pose too
much of a problem, as Linux developers are accustomed to
recommending these sorts of choices.

Windows application developers, on the other hand, are
used to working with what was provided to them.
Expecting them to make operating system configuration
choices may not only slow down delivery of the software,
but may also lead to lost productivity, frustration, and
ultimately production failure.

For example, Microsoft's IIS server provides a dizzying
array of configuration options; understanding exactly how
these options work and what impact they have requires
research and patience.

8
Alex Papadimoulis

When developers face tight deadlines and find themselves


with curiously broken software, they may frantically fiddle
with this advanced configuration until it works.
Unfortunately, the configuration changes they make will
likely have nothing do with the underlying problem, and
may introduce more, unknown problems later.

Instead of providing developers with more configuration


choices on Windows, show them exactly how the
production environments will be configured. This way, if
it’s not working in their environment, they can be certain
it’s not configuration-related.

User Experience: Learned vs. Intuitive


When you log in to a Linux server, you’re presented with
a blinking command line. You need to know which
commands to type in and how to instruct the server to do
the things you need it to. There are plenty of books and
training courses available, but learning Linux is a
prerequisite to using it.

Because Linux has so many different components, built by
so many different third parties, this learning journey
never ends. Linux users are accustomed to learning how to
use a tool before they start using it.

When you log in to a Windows server, on the other hand,
you're presented with a "server configuration wizard", and
with just a few clicks of the mouse, you can learn about all
the things that a Windows server can do, and then use
more wizards to configure the server.

Learn by Using

This design choice means that Windows sysadmins have
become accustomed to learning how to use a tool by using
it. They won't start by reading the manual; instead, they'll
just expect the tool to "make sense" to them. If it looks like
a hammer, and they need to pound in a nail, they’ll just
grab it and start swinging away.

For nearly all consumer products, this approach works just
fine. For example, you'll have to go out of your way to
make some irreversible mistakes on a new iPhone. You’ll
get a lot of scary warnings before your iPhone even lets you
do something like a factory reset.

But for other products, trying to learn by using may cause
insurmountable problems. Even if you know the basics of
flying, you can't just sit in a 747's cockpit and safely
operate the plane. That doesn't mean 747s have a bad user
experience; it just means you need to learn how to use one
before trying.

Linux tools for the Linux mindset

Many of the popular DevOps and CI/CD tools were not
only built for Linux technology, but for Linux sysadmins.
This means the tools will generally require a “learn first”
mentality, and they don’t pester the user with lots of scary
warnings. As a result, it’s easy to make mistakes if you
don’t know what you’re doing.

These mistakes usually add up to lost productivity. For
example, typing a "d" instead of a "c" means you chose
"delete" instead of "commit." This equates to a net loss in
productive hours while you redo all the work you
accidentally deleted. But sometimes, these mistakes can lead
to serious production outages. As sysadmins scramble to
fix those outages, they tend to make even more mistakes
that quickly cascade into costly downtime.


Cultural differences are important


The sheer popularity of Windows means that it’s the butt
of a lot of jokes, like this:

Q: What do you call Windows Multitasking?

A: Screwing up several things at once!

When you add in the fact that Linux sysadmins prefer a
modular operating system with a learned user experience,
you end up with a general, often hostile, resentment of
Windows.

Of course, the feeling goes both ways. Even before Linux,
sysadmins created the Unix Haters Handbook, a highly
technical piece that lampooned and berated nearly all of
the design decisions that went into Unix, the predecessor
to Linux.

A lot of the jokes are in good fun and both Windows and
Linux sysadmins will often engage in good-natured
ribbing over beers. However, be warned that these
attitudes can quickly devolve into toxic environments in
the workplace. Engineers are passionate about the

technologies they've mastered, and attacking those
technologies could be taken as a personal attack.

Toxic Cultures Create Toxic Results

No one thrives in a toxic work environment, which is
one where a job carries stress beyond what is considered
normal. A toxic work environment causes problems for
employees' personal lives and physical health. Symptoms
of a toxic workplace range from sleepless nights to panic
attacks. The cost to an organization's bottom line is
obvious: more sick days, higher turnover, and infighting
instead of healthy collaboration.

Like Oil and Vinegar

Treat Linux and Windows sysadmins like oil and vinegar.
They don't mix very well on their own, but they can work
together to create great results and produce a lot of
business value. In fact, that’s exactly what your emulsifier
needs to be: a focus on business value, not technical
prowess.

Love the One You’re With

Ultimately, both Linux and Windows sysadmins will
become equally productive. Linux sysadmins must
become experts at typing in the command line, while
Windows sysadmins must master GUI navigation and
keyboard shortcuts. No matter what system you’re using,
CI/CD implementation will make your software delivery
process better.

CI/CD Basics
CI/CD is the combination of two time-saving practices
called Continuous Integration (CI) and Continuous
Delivery (CD) that development teams use to produce,
test, and deliver software faster, better, and with fewer bugs.
When CI/CD is implemented well, it delivers:

• Better quality of software because fewer defects
make it to production.
• Faster delivery of business ideas to market by
enabling faster release cycles that allow software to
be changed in days or weeks rather than months or
quarters.
• Cheaper implementation across the entire
lifecycle, including less time spent coding,
deploying, and testing software changes.

If you currently use a tool such as TeamCity or Jenkins,
then you are already using CI, and that's great because you
will understand the value. If not, then here is a quick
explanation. Software is built in stages, and as developers
write code, they have two options. They can merge and
test the new code as soon as it is written, or they can wait
until a designated time. Developers utilizing CI don't
wait. The benefit of continuously merging and testing
new code is that it prevents the integration nightmare
that happens when a large volume of untested code is
merged for the first time.

CD is an extension of CI, and it means the code is ready
to deliver at a moment's notice, even multiple times per
day. The code is in a ready-state all the time. CD is more
than using a deployment script since deployment is only a
part of Continuous Delivery.

Seamless Automation
Nearly all organizations have a less-than-ideal software
delivery process, which leads to less-than-ideal releases. A
poor process can only result in delays, bugs, and/or a
product that does not match the vision. In today's

marketplace, the pressure to deliver releases faster and
faster is intense. Therefore, the process must be seamless.
Pipelines are your automated CI/CD process; I will cover
them in more detail in Chapter V: Integrating the Human
Factor. For now, just know that a simple pipeline moves a
build through a linear series of stages, from integration
and testing through to production.

The CI/CD Value Proposition


It should be obvious. CI/CD is a game-changing shift in
how software is tested and delivered and everyone should
be using it. And yet, people in the position to approve
CI/CD adoption resist the improvement. What is wildly
obvious to software engineers is opaque and confusing to
their non-developer bosses.

The developer community is understandably frustrated.
Like any technologically advanced group, developers are
in a knowledge silo with its own language that does not
translate well outside of the silo. Developers have a
difficult time explaining CI/CD in a way that makes sense
to those not immersed in the tech.

This book teaches how to get CI/CD paid for and fully
implemented at your company. You will learn how to speak
to the higher-ups in non-developer terms so they understand
what you already know: CI/CD is a must-have to remain
competitive.

I have included valuable insights that I've learned from our
clients at every level (developers, managers, and
executives). But the most important thing you will get
from this book is how to create your Proof of Concept and
enjoy the rewards of having your company buy in to your
improvement efforts.

The Right CI/CD Tool


In order to fully implement CI/CD, it will be necessary to
choose the right tool. For Windows shops, there are two
broad categories of tools:

• Windows-first: these tools were built from the
ground up with Windows in mind, both from a
technical perspective and a cultural perspective, and
offer a learn-by-doing experience.
• Windows-retrofitted: these tools came from the
Linux world, and were adapted to have limited
Windows support, often by installing Linux-based
tooling or scripting.

Ultimately, either tool can get the job done, but using a
retrofitted tool may feel awkward, like driving a car that’s
had its gas tank ripped out and replaced with thousands of
D batteries duct taped together.

Throughout the book, I use Inedo's CI/CD platform,
BuildMaster, for illustration purposes, but the advice and
guidance in this book can be used with any CI/CD
platform. At a minimum, the tool you choose should have
the following features:

• Ability to perform continuous integration and import
from other CI servers.
• Deployment plans that allow you to execute
scripts like PowerShell.
• Variables that allow for reusability of these
deployment plans.
• Pipelines that model the release process and show how
far a build has progressed.
• User approvals so that not everything is automatic or
done by an engineer.
• Manual tasks/operations so that you can integrate
un-automatable tasks.
• Issue tracking integration so you can see what changed
in your builds through the pipeline.
• Infrastructure features that can communicate with
other servers and group them into environments and
roles.

Talking Points

The best way to explain CI/CD to non-technical people
is to focus on the value in time saved. Here are some
talking points to use when introducing the concept.

CI/CD is an automation practice that combines two
time-saving practices called Continuous Integration (CI)
and Continuous Delivery (CD). Using a CI/CD tool is
ideal because it provides a unified view from build through
release.

Continuous Integration is the practice of continuously
merging and testing new code automatically. This
prevents the inevitable integration nightmare and
resulting time sink that happens when people wait until
the last minute to merge their changes into the final
release.

Continuous Delivery means the tested code is ready to
deliver at a moment's notice, even multiple times per day.
No one has to wait for the code to be deployed – it’s in a
ready-state all the time.

Demonstrating Value: The Proof of Concept


Once you get the green light to spend some time on
process automation, the best way to fully implement
CI/CD at your company is to prove its value with a Proof
of Concept (POC). The POC needs to demonstrate all of
the following:

• Benefits of releasing application changes sooner and
more reliably.
• Time savings compared with manual or other release
processes.
• Ease of automating other applications.
• Advantages of sharing the status of deployments with
others, inside your team and beyond.
• How smart you are for using CI/CD!

20
Alex Papadimoulis

Choose Wisely

Your POC will need to demonstrate the technical aspects


of CI/CD to your colleagues, but also the benefits to the
business. Getting buy-in from both parties is equally
important, and both parties will need to understand the
limitations of a POC.

The first application you choose to automate is extremely
important. The right application can prove to
management that CI/CD is a no-brainer investment; the
wrong one could derail your entire improvement plan.
Here are the Proof of Concept Guidelines to steer you in
the right direction:

1. Pick an application that solves a business problem,
something that has demonstrable value to the business
if it's released sooner and more reliably. An internal
tool or library probably won't convince the business
that there's real value in your efforts.
2. Pick the Goldilocks problem to solve. If you pick
something too trivial, like a basic marketing website,
it's not likely to demonstrate to your colleagues or
higher-ups the real value in your efforts. But don't go
to the other extreme and pick that monstrous monolith
of an application as your first effort. You can't run a
marathon without training.
3. Take your time. It's better to deliver a strong POC,
than to rush through the process and deliver a weak
one sooner.

A Question of Metrics

Once you choose the perfect application to automate, take
a moment to consider how you will prove the POC
demonstrates a superior method for your software
releases.

• Measuring Benefit. Why is CI/CD better for this
process? How much time did I save? How much time
did Management save? What risks were mitigated?
How long did the CI/CD process take versus our
current process? What are other benefits achieved by
the POC?
• Scalability. Can we "translate" the CI/CD process to
other applications? How quickly can we move to full
adoption? How much time will we save with a full
implementation? What applications can easily be
moved to CI/CD? Who should maintain the process,
and how will they learn how to do it?
• Management. Who needs to see the results of the
POC? Who is the decision-maker on committing to
full CI/CD implementation?

Manual Releases are for Quitters


Far too often, people give up on CI/CD before they learn
how to use it.

We've all been in the situation where we believe we are too
busy to implement a system that will ultimately save us
time. At Inedo, I have witnessed our clients losing
precious time on long adoption rates, including some that
took over five years to fully automate. Just think about
that — five years in real time but more than five years in
lost productivity.

Can your organization wait five years for better CI/CD?

You know it can't. Now, you just have to prove it to the
higher-ups and, possibly, your colleagues. In the next
chapter, you will learn practical skills that will make it
easier to create your POC, and before you know it, your
company will be delivering releases faster and more
reliably than ever before. Trust the process.

II
Proof in Practice

Placeholders
The best way to help the business side of your organization
understand the benefits of CI/CD is by using
"placeholders" as part of the process. Things like
deployment windows and approvals may not impress your
technical colleagues, but the business people will love it —
even if the placeholders are just something you added as
an educated guess, modeled on the existing process.

It is easy to get started with CI/CD, but to avoid an
extended adoption timeline, don't get stalled by simple
technical hurdles such as "the wrong files are being
deployed" or "the registry key isn't being set." While that's
obviously an issue, it's not actually a roadblock that needs
to be fixed today.

25
How To Win At CI/CD And Influence Leaders

Ultimately, this is a known problem that you will need to


fix someday, but what's even more important is
discovering unknown problems. This is precisely what
your POC will help identify.

So, let's start by defining a release process.

CI/CD Process
No matter what kind of application you've picked for your
POC, the release process should be repeatable. That is to
say, even though every release has different code and
different changes, it will still follow the same, templated
process. All release processes look something like this:

1. Build or Import Artifacts: these are the files that you
will deploy throughout all stages of the process.
2. Deploy to test environment(s): this involves
deploying the artifacts to a series of pre-production
environments, and notifying the appropriate parties
that they are ready for testing.
3. Receive approvals: once testing is complete, someone
(or something) needs to approve the deployment for
production.

4. Deploy to production: similar to the test
deployments, but is obviously riskier and will require
preparing for a rollback if something unexpected
happens.

Every step in this process is equally important, and all steps
(including the deployment process) need to be thought
out and configured in order to have a successful POC.

Artifacts
Artifacts are the mechanism designed to capture build
output into a file, typically with the intention of being
deployed in the future. Artifacts are associated with a
build and may be created or deleted at any point during
the lifecycle of an active build. Artifact files themselves are
not limited to build output, they commonly also contain:

• Documentation files
• Release notes
• Archived source code
• "Frozen" dependencies

Think of an artifact as simply a zip file; it's not anything
complicated like an MSI (Microsoft Installer). In fact,
with CI/CD, you don't need MSI files at all, as I'll explain
later in The MSI Antipattern.

Using artifacts creates a more stable and reliable
deployment process by ensuring each stage of deployment
receives the same output.

Artifacts are the files you will be deploying, and they come
from three main sources: you can build them inside of your
CI/CD tool, import them from a dedicated CI server, or
pull them from a drop path. You can also manually upload
them.

Option #1: Build Artifacts within your CI/CD tool

This is usually the simplest choice, as it allows you to have
a single tool for building and deploying.

It's pretty straightforward:

• Retrieve source code: You can pull the latest code, or
code on a particular branch, or you can use a variable
to determine which to use at runtime. I'll talk more
about variables in Chapter III.

• Compile/build code: This is performed by a
command-line build tool like MSBuild or Ant, both of
which take a build definition/script file that you write
and run whatever steps you specify.
• Create artifact: This involves pointing to the build
tool's output path, and capturing whichever files are
needed to run the application or component.

If you are using BuildMaster, check out our Building and
Deploying a .NET Web Application Using BuildMaster
tutorial for step-by-step instructions.
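To make this placeholder concrete, here is a minimal OtterScript sketch of a build-and-capture plan. The solution name, output path, and msbuild invocation are illustrative assumptions (your real build command, paths, and the exact Create-Artifact options will differ), and it assumes the source code has already been retrieved by your tool's source control integration.

psexec >>
    # hypothetical solution name and output path; adjust for your project
    msbuild ProfitCalc.sln /p:Configuration=Release /p:OutputPath=C:\BuildOutput\ProfitCalc
>>;

# capture the build output as a deployable artifact
# (the From parameter name is an assumption; check your tool's documentation)
Create-Artifact ProfitCalc
(
    From: C:\BuildOutput\ProfitCalc
);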

Note that there are some pitfalls here, especially if your
project hasn't been automated before. If you're already an
expert in MSBuild or Ant, you can probably work past
these hurdles quickly. It can involve anything from a
mistyped configuration (anyCPU vs. anyCpu) to needing
to rewrite your whole build definition/script.

Remember, this is a placeholder step. You can always
manually upload your files, and plan to automate the build
later.

Option #2: Importing Artifacts from CI Servers

In many organizations, the development teams have
already set up continuous integration tools like Jenkins or
TeamCity, and these tools are already creating build
artifacts automatically. These developers typically prefer
to have full control of their CI tools. If you currently use a
CI tool, sometimes it's just easier to work with what's
already there.

There are generally two approaches you can take with
importing artifacts within your CI/CD tool.

1. Import the artifacts from a specific build (or perhaps
the latest successful build); you can even do this on an
automatic or recurring basis, to have builds
automatically deployed.
2. Trigger or queue a build, then import the build
artifacts from that build if successful; this will allow
you to pass in variables to do things like make special,
release-quality builds.

For example, if you are using Jenkins and BuildMaster,
you can use a combination of the Jenkins::Queue-Build
operation and the Jenkins::Import-BuildArtifact
operation to create either of these workflows.
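As a rough sketch of the second approach, the two operations might be chained like this. The job name and the parameter shown are assumptions for illustration only, not the documented syntax, so consult the Jenkins integration documentation for the exact options.

# queue a new build of a (hypothetical) Jenkins job
Jenkins::Queue-Build
(
    Job: ProfitCalc-Release
);

# then pull that build's artifact into BuildMaster for deployment
Jenkins::Import-BuildArtifact
(
    Job: ProfitCalc-Release
);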

How NOT to Deploy Artifacts


Over the years, I've seen a lot of deployment anti-patterns,
which are the opposite of the patterns you should be using.
Here are the worst anti-patterns for deploying artifacts:

• Piecemeal (cherry-pick) Deployment: this involves
looking at a set of files for an application, picking the
one or two files that have the changes you want to
deploy, then deploying only those files.
• Branch-based Deployment: this involves
maintaining multiple branches in source control (Dev,
Test, Prod), merging "good" code changes between
branches, then building and directly deploying from
that branch.

Piecemeal deployments are less popular these days,
but branch-based deployments have seen a comeback with
Git. (This is not what Gitflow means, and it's not a good
way to deploy artifacts.)

Both of these anti-patterns start with good intentions: to
create smaller and easier-to-deploy releases more
frequently. But they both have at least one major
problem. In each environment, you're using different
software than the previous environment.

The MSI Antipattern

I’ve seen a lot of Windows shops fall into the trap of the
MSI (Microsoft Installer) Antipattern for deployment.
This involves having the development or release team
building an MSI file containing the application or
components, and then handing it off to the operations
team to install on the target servers.

While this might seem simple at first, it causes the
following problems that should be avoided:

• Hidden complexity: MSI is the wrong tool for the job
because it was originally designed by Microsoft for
their operating systems team to patch Windows.
There is a tremendous amount of underlying
complexity with MSI, so much that even other teams
at Microsoft stopped using it to install their software.
• Lack of visibility: an MSI is effectively opaque, and
there's no effective way for operations teams to know
what's happening when they install it, which means
they will have a very difficult time supporting the
applications on their servers.
• Productivity killer: an MSI can't do everything that's
needed to deploy an application, which means the
operations team must complete many steps, not just
running the installer component. This is more than
inconvenient; it kills productivity.

Build Once, Deploy Often


If you think about it, the whole point of having multiple
environments for an application is to have multiple levels
of testing for that application. And what you're testing is
that changes to the application work as expected before
moving on to the next step. But to properly test, you need

to test the exact same software. When you do Piecemeal or
Branch-based deployments, you're essentially creating
new software for each environment.

This is where artifacts come in. An artifact is just a zip file
that contains the entire application or component that
will be deployed to each environment. All of the files! Not
just the ones that changed.

When you deploy an application or component, deploy
the entire artifact. Not just the "changed files."

Optimize Deployments
In a world of manual deployments, the file-by-file
technique made sense. Deploying all of the files took
longer and seemed unnecessary. Why deploy the entire
application if all that changed was a logo file?

With CI/CD, that's no longer your problem. Now you
will deploy the entire artifact every time there is a change.
The automation tool detects unchanged files and doesn’t
waste time transferring or writing those file bytes.

If you're concerned about disk space, you can always
configure your artifacts to use differential storage, so that
actual disk space usage is much less than you might expect.
And you could always create scripts to purge old artifacts,
or use a feature built in to your CI/CD tool, like
BuildMaster's retention policies, to define rules that
automate this process.

Basically, there's no reason to do anything but deploy the
entire application to each environment in your pipeline.
Be like a Marine; never leave a file behind.

Option #3: Drop Folders/Paths

Many release processes involve developers creating their
release, and then "dropping" it in some sort of network
drive or share. This is actually a great starting point for
automation, even though it starts manually.

Different CI/CD tools will approach this differently, but
in BuildMaster you can simply point the Create-Artifact
operation to any folder you'd like; it will seem just like you
built or imported artifacts from an automated process.

While this isn't an ideal long-term solution for fast-paced
releases, it's fine for getting past the roadblock of getting
artifacts into your CI/CD process. It will strongly
reinforce the "build once, deploy often" rule, and it may

also encourage the development team to move towards
automating the process of creating these artifacts.

At a minimum, you need to Create an Artifact from the
drop folder location. You can also use a variable and
release template to determine from where to pull the
artifact.
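For example, a drop-folder placeholder might look something like this in OtterScript; the UNC path, the $DropPath variable, and the From parameter are assumptions you would replace with your own values.

# $DropPath could be a release-template variable,
# e.g. \\fileserver\releases\ProfitCalc\3.2.1
Create-Artifact ProfitCalc
(
    From: $DropPath
);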

It Doesn't Matter Where You Start


In the end, it doesn't matter how you get your artifacts.
You can always change it later. Remember, this is just an
input into the release process, and if you're struggling, just
move on and focus on releasing those artifacts.

Best Practice Note: Bypass Roadblocks by Working Asynchronously

As far as the overall release process is concerned, the
contents of the artifacts are immaterial, as are where the
files came from. If your artifact contains the wrong files,
you can still deploy those files and define the rest of the
process. Or, you can zip up the right files, delete the
existing artifact, and manually upload the new artifact.

If you're having a hard time getting your CI/CD tool to
integrate with source control or compile your application,
don't worry about that step just yet. You can manually
upload the artifact for now, or import from your existing
CI server.

If you're struggling to get your CI/CD tool to
communicate with another server to deploy files to it, then
just work around that temporarily.

For example, in BuildMaster you would normally connect
to another server using the Inedo Agent, SSH, etc.; but
instead you can use the "Local Agent" (which connects to
the same server BuildMaster is running on) and then
name that server the same way you would a real server.

By creating "fake servers", you can still configure your
environment and "fake deploy" until the connectivity
problem is fixed.

Nervous Boss? Try this: If the business is nervous about
a real production deployment, then plan to deploy to a
"fake" production server or even a different folder on a real
production server. This way, you can give them
confidence that the process works in every test
environment, and you just need to "flip a small switch" to
go to production. If you get stuck, ask your vendor for
support or check with the community. Inedo's support
options are in Appendix A.

III
Deployment Plans

This chapter covers deployments in a larger context, and
how you can use variables, server groupings, and
configuration files in a deployment plan.

The Fundamental Deployment Process


Deployments should follow the same basic process and, at
a minimum, they should look like this:

1. Stop Application/Services: You cannot overwrite
code that is currently executing, at least not without
causing headaches later. This could simply mean
stopping an IIS application pool, a Windows service,
or something like that.

2. Deploy Files: Copy the new version of the code on
top of the old version, deleting any files that aren't
used.
3. Start Application/Services: follow the reverse of the
first step, and restart the code.

There are advanced techniques that include more than
these three steps, like "Blue/Green Deployments" (a
continuous delivery term), that involve multiple servers
and more complexity. I go through a real-world example
of how the "Blue/Green Deployments" process works in
practice, using BuildMaster as the backdrop, in Chapter
VII: Advanced Topics.

What is a deployment plan?


A deployment plan is a series of discrete steps that perform
specific tasks, usually on remote servers. Think of your
deployment plan as another placeholder. It doesn't need to
be perfect (or even complete) to continue your progress in
automating your releases. In fact, it's better to first
understand the bigger picture of where and how these
deployment plans execute, or the deployment context.

Deployment plans are run against a multitude of things:
server, target application, pipeline, environment, release,
build, and so on. All of these things combined are called
the deployment context. Understanding how to use and
reference this context will let you make plans simpler and
more reusable, particularly across different environments.

Reusability is the goal. You can create deployment plans
that you can reuse across environments, and this ensures
that the process you use in Production has been tested just
like the software. An important note: some CI/CD tools
will create different deployment plans for different
environments, which causes problems when you deploy in
production using an untested deployment plan.

Operations: Building blocks of Deployment Plans

Each CI/CD tool will use different words for the discrete
steps and specific tasks in a deployment plan. BuildMaster
uses the term "Operations", which includes everything
from sending an email to getting code from a git
repository.

Operations are typically grouped together in logical
structures that provide control flow (if/else) or error
handling. This is different from regular scripting or
programming because Operations are not general-purpose;
they are specifically designed for deployment.

To perform general purpose tasks, use an Operation to
execute a script (like PowerShell) instead.

Just about everything in a deployment plan is
accomplished with Operations, and CI/CD tools ship
with a large variety of them. Users can also write their own
Operations using an SDK.

Here are a few common types of Operations included in
BuildMaster.

• Deploy Artifact Operation: This is the most
important Operation in BuildMaster, as it's used to
transfer the files to target servers. This is done
differentially, which means only the files that changed
actually get copied over the wire.
• Stop/Start for Service/AppPool Operation: A set
of four different Operations used to stop and start
Windows Services and IIS Application Pools.

• Ensure Operations: A broad category of Operations
that are used to ensure that an application pool, file,
or service is configured exactly as you require.

The Server Context

When you write a deployment plan, keep in mind that it
will be running against a target server. This doesn't mean
that code literally runs on a server, but that there is a
specific server targeted (this is called "in context"), and
that the operations will run against that server.

For example, in BuildMaster, if you say "IIS::Start-AppPool
ProfitCalcAppPool", the ProfitCalcAppPool will be
started on whatever server is in context.

You can explicitly set the server in context at any time in a
deployment plan using the "for server" statement. As an
example, if you wanted to start the application pool on
devwebsv1, you could do this:

for server devwebsv1
{
    IIS::Start-AppPool ProfitCalcAppPool;
}

You can also use loops and variables to set server context,
but the ideal place to do so is your deployment pipeline. I'll
talk about that more in Chapter V, but for now it's fine to
explicitly set the server context to whichever server you
need to deploy to.
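For instance, here is a minimal sketch of using a variable for the server context; $TargetServer is a hypothetical variable that you would define yourself, per release or per pipeline stage.

# $TargetServer is assumed to be a configuration or release-template variable
for server $TargetServer
{
    IIS::Start-AppPool ProfitCalcAppPool;
}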

One Deployment Plan for Multiple Environments
Even though every release has different code and different
changes, it should still follow the same, templated process.
The same holds true for deploying your application to
different environments. Even though each deployment
goes to different servers with different configurations, it
should still follow the same, templated process.

Create your deployment plan such that it can be run
against any server. This saves time and is more efficient
since you don't have to create a new deployment plan
every time. Here is an example of how this works.

Just create a deployment plan called deploy-ProfitCalc-app,
and then have your CI/CD tool deploy a release
by running:


• deploy-ProfitCalc-app against intwebsv1


• deploy-ProfitCalc-app against testwebsv1
• deploy-ProfitCalc-app against prodwebsv1 +
prodwebsv2

Note that you can create a deployment plan that's
intended for multiple applications (these are called global
deployment plans in BuildMaster and are a huge time
saver), but this requires that all of the applications be built
and deployed in the same way. It can get a bit complicated
so consider doing that after your POC.

Grouping Servers: Environments and Server Roles
Although your applications – and all software – are
eventually run by a specific server, you may wish to use a
higher level of abstraction when you start having loads of
named servers that look and work the same.

This is where environments and server roles come in.
They are a way to group servers into a logical unit, which
allows you to think of them as a set.

1. Environments group servers vertically, and they are
intended to describe a particular location or quality,
such as UAT, Integration, or Production. Servers
should only be in a single environment.
2. Server Roles group servers horizontally, and they are
intended to describe the software that's running on
them, such as accounts-web-app or hdars-indexer.
Servers often have more than one role, particularly in
lower environments.

Once you've grouped servers, you can simply deploy to a
"role + environment" combination without worrying
about the actual server names. For example, you could
instruct BuildMaster to run:

• deploy-profitcalc-app against Integration servers with
the profitcalc-web role
• deploy-profitcalc-app against Production servers with
the profitcalc-web role

While this configuration is certainly more complicated, it
allows you to scale quite trivially by adding servers to those
roles and environments. If you are using BuildMaster, you
can synchronize its infrastructure configuration with
Otter (our server provisioning and configuration
management tool), and let Otter further manage the
configuration of these roles.

The important thing to remember is that you have the
flexibility to make deployment plans that can run in
multiple contexts.

Treat your deployment plans as placeholders that can be
run in any environment.

Variables and Deployment Plans


You can define configuration variables (i.e. a key/value
pair) on a number of different things: environments,
servers, applications, releases, pipelines, builds, etc. These
are called scopes.

When a deployment plan runs in a particular context, the
key/value pairs that are in scope will be available to your
deployment plans as configuration variables. In
BuildMaster you can define the same key on different
scopes, and the values will "cascade" to use the closest
scope.

For example, let's say you created two different
configuration variables:

• an application-scoped variable,
$DeployToHttps=false
• a pipeline-scoped variable at the Production stage,
$DeployToHttps=true

In this case, when the deploy-ProfitCalc-app deployment
plan runs, the value of $DeployToHttps will be false in all
deployment stages except Production (where it will be
true).

This is a simple example, and there are a lot of other ways
you can use configuration variables.

Configuring Variables

Typically, unique environments will have different
configurations. And having different configurations often
means that you'll need to deploy differently. For example,
if your existing process dictates that production
applications run over HTTPS but your testing applications
use HTTP, then you'll need to deploy to a different folder.
You can use a variable with an If/Else block to automate
the process like this:
if $DeployToHttps
{
    Deploy-Artifact AccountsWeb
    (
        To: C:\SecureWebsites\Accounts
    );
}
else
{
    Deploy-Artifact AccountsWeb
    (
        To: C:\Websites\Accounts
    );
}

OtterScript for Deployment Plans


Although you can apply the Fundamental Deployment
Process to deployment plans in any CI/CD tool, the way
you do that will be tool-dependent. The above example
uses OtterScript, which is the domain-specific language
that BuildMaster uses.

The language is pretty straightforward, and you can
reasonably understand what's happening simply by
reading the code. What's unique about OtterScript is that
you can switch back-and-forth between visual- and text-
modes.

In visual mode, the same plan appears as a series of
drag-and-drop statement blocks. Here it is in text mode:

IIS::Ensure-AppPool ProfitCalcAppPool
(
    State: Stopped
);

Deploy-Artifact ProfitCalcAppPool
(
    To: c:\Websites\ProfitCalc
);

IIS::Start-AppPool ProfitCalcAppPool;

We'll use only text mode throughout this book, because
it's pretty easy to follow.

Which is Better: Visual or Text mode?

We have found that about half of users prefer using text-
mode, and the other half prefers using visual-mode. This
may seem strange at first, but there's a good reason for this:
everyone learns differently.

• Learn by Doing – you can just drag-and-drop
statements using visual-mode, and quickly discover
what options are available to you; this is common for
Windows users.
• Learn by Reading – you can read a lot of documents
on OtterScript usage and syntax, and it's easy to pick
up if you already know the basics of coding.

With OtterScript, your team has both options. Everyone's
happy!

PowerShell + OtterScript

Because PowerShell is the standard for automating
configuration on Windows servers, OtterScript was
designed to seamlessly integrate with it — whether that
means running your existing scripts across dozens of
servers, leveraging scripts built by the community, or any
customization needed.

Here is an example of inline script execution, which lets
you "drop down" to PowerShell to execute anything on the
server in context:

psexec >>
    # delete all but the latest 3 logs in the log directory
    Get-ChildItem "E:\Site\Logs\$ApplicationName" |
        Sort-Object CreationTime -Descending |
        Select-Object -Skip 3 |
        Remove-Item
>>;

There's a lot more to see in the PowerShell & Shell
Scripting section of our docs, but you can also store scripts
in BuildMaster and run them with PSCall, or even just
evaluate a PowerShell literal expression, integrating with
your own OtterScript variables:

set $milliseconds = $PSEval($minutes * 60 * 1000);

Manual Tasks: The Ultimate Placeholder


Placeholders are life-savers, and there is a great one built
in to BuildMaster called the Manual Operation, an
invaluable tool when automation is impractical. For
example:

• Updating a version number on a WordPress page as a
new release is being published.
• Enabling a VPN to a customer site for a short time
during a deployment.
• Running an ancient GUI-based deployment for part
of an application.

It may seem strange, but it's simple. When executed in a
deployment plan, the execution will halt until an assigned
person indicates that a specified task has been completed.

Manual Operations are also great for debugging, and serve
as a sort of breakpoint to allow you to inspect the state of
a server.

Environments and Permissions

Environments also serve another important purpose.
They are used in security and access controls to permit and
restrict users from performing various tasks. For example,
you could permit "QA Users" to deploy applications to the
Testing environment, while restricting them from
deploying to the Production environment.

You can also define access controls by application, so that
some users can deploy some applications to Production,
but not others.

Best Practice: Use Configuration Files for Different Configuration
The idea of running the same application in different
environments is a decades-old concept.

In the Unix/Linux world, this is generally managed
through the operating system's "environment variables,"
which are a set of key-value pairs that system
administrators configure on a server. The modular nature
of Linux means that environment variables can get very
complex, and thus a big part of CI/CD on Linux is
managing this type of variable.

Windows supports environment variables, but they're largely an MS-DOS relic. The Windows equivalent is the
Registry, which is a massive configuration database that’s
used for everything from centralized component
registration to web browser settings to a list of recently-
opened Word documents.

Both environment variables and the Windows registry


tend to be opaque and overly complex to manage. Instead,
configuration files are a much better option.

What are configuration files?

A configuration file contains environment-specific configuration parameters – for example, a database connection string, a web service URL, or other settings – and the application reads those values from the file at runtime.

Configuration files are human readable, which makes


reading and changing this environment-configuration


simple. And, since all of the modern platforms (.NET,


Java) have libraries to access this configuration, it's trivial
to program.

.NET’s Conflated Configuration Files


One of .NET’s major design oversights is that it
encourages developers to conflate application host
configuration and environment-specific configuration.

Application-host configuration tells the application


host (such as IIS) how your application will behave. For
example, it designates the type of authentication your
application uses or how the application will respond to
error messages. These are “configurable” by a developer,
but it’s effectively code, and should be treated as such.
This means, it must be the same in every environment.

Environment-specific configuration are the things that


are different between environments, and what operations
and release engineers need to be concerned about.

In a .NET application, both types of configuration will


often be conflated in a single file like web.config. Out of
the hundreds of different configuration settings in this


file, only one or two might be actual environment-specific


configuration.

When moving towards CI/CD, it’s critical to use .NET’s


built-in “external file” feature of configuration files. This
allows you to externalize sections of configuration so that
you can have two files (web.config and
web_appSettings.config), one for application host
configuration and the other for environment-specific
configuration.

<appSettings file="Web_appSettings.config">
  <add key="ValidationSettings:UnobtrusiveValidationMode" value="None" />
</appSettings>
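For illustration, the external file itself then holds only the environment-specific values; the keys and values below are hypothetical stand-ins for whatever your application actually reads:

<!-- Web_appSettings.config: deployed per environment, kept out of source control -->
<appSettings>
  <add key="Database:ConnectionString"
       value="Server=TESTSQL01;Database=ProfitCalc;Integrated Security=true" />
  <add key="PaymentService:Url" value="https://test-payments.example.com/api" />
</appSettings>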

Configuration Files and CI/CD

Managing configuration files within CI/CD can get a bit


tricky. First, these configuration files need to be managed
independently from other files in the artifact, as the
contents will change from environment to environment.
But they also contain sensitive data, such as connection

strings, third-party service URLs, API keys, etc., so these


files can't be stored in source control.

There are two general strategies to managing


configuration files in CI/CD.

1. Keep configuration files in your CI/CD tool, and


deploy them right after deploying artifacts.
2. Keep configuration files in your build artifact, and
use a text templating/replacement tool to edit the
files after deploying the artifact.

Storing Configuration Files in your CI/CD Tool


This strategy is particularly useful if tracking configuration files is a big part of your release, you need everyone to have visibility into what changes are made, and you may need to edit configuration files outside of the normal release/code change cycles.

Another advantage to this approach is that you can use


environments to restrict which instances of your
configuration file (Dev, Test, Prod, etc.) can be read,
written, and deployed by which users. This further allows
you to secure sensitive values.


In BuildMaster, these are called Configuration File Assets, and they are designed to store an entire configuration file or just a portion of it. Like source control, changes are tracked, as well as information about when and where the file was deployed. These files can be deployed as part of a deployment plan or independently (manually) from the UI.
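In text mode, deploying one of these assets is a single step in the plan. The operation and parameter names below are assumptions rather than exact syntax, so treat this as a sketch and check the Configuration Files documentation for your version:

# Sketch only: deploy the asset's instance that matches the current environment.
Deploy-ConfigFile
(
    ConfigFileName: Web_appSettings.config,
    Instance: $EnvironmentName,
    To: c:\Websites\ProfitCalc
);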

Text Templates and Source Control


This strategy uses a text replacement or templating engine.
It is designed to let you store the entire configuration file
in source control and use configuration variables to
replace the values in that file. In BuildMaster, this uses the
text-templating operations of the Inedo Execution
Engine.

This tends to be simpler to use and manage (particularly


for developers), but it doesn't give as much visibility or
traceability into changes. You also can't easily secure the
values or restrict deployments of the file.
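As a rough illustration of this second strategy, the copy of the file kept in source control might look like the following, with OtterScript-style variables standing in for the real values; the key and variable names here are made up for this example:

<!-- Web_appSettings.config as stored in source control; the $... values
     are replaced by the text-templating operation at deploy time. -->
<appSettings>
  <add key="Database:ConnectionString" value="$DatabaseConnectionString" />
  <add key="PaymentService:Url" value="$PaymentServiceUrl" />
</appSettings>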

Ultimately, it doesn't really matter which process you use,


just so long as you're aware of how you can add value to
the release process by managing configuration files. But


for now, you can think of this step as – you guessed it – a


placeholder.

IV
Issue Tracking and CI/CD

Issues are important to identify because an open issue could prevent a build from moving forward in your pipeline and bring the whole process to a standstill.

Issue/Project Tracking: The Rickety Bridge to


Business
Issue and project tracking tools like Atlassian's Jira have
become mainstream for software development. Even the
smallest of development teams are not only regularly using
these tools, but have made them a critical part of their
release process. And for good reason: issue trackers allow
anyone and everyone to track the changes that are
included in a release.

In many of the organizations I've worked with, both


development and business teams use issue tracking tools
to some extent, but their usage is somewhat of a

compromise. Because these different teams have different


requirements, their issue tracking process tends to become
obnoxious and bureaucratic, and ultimately ends up being
a task tracker that no one actually wants to use.

If you integrate issues with your CI/CD process, this


doesn't have to be the case. You can establish a well-
designed issue tracking process that allows the business to
request and see the status of changes through a release
cycle. This will deliver a huge value to the business,
especially if they're not yet using issue tracking tools to
their fullest extent.

Proper Issue Tracking Process


If your organization is like most, then your issue tracking
processes were built around manual release, and are thus
sub-optimal for CI/CD. Before talking about how to
upgrade the process, let's review the basics.

What exactly is an "issue"? An issue describes a change to


your software, which can be a fix, an improvement, a
feature, or some other arbitrary word to describe which
type of change it is.


• Issue Types. The different types don't really matter


from a CI/CD standpoint, but they can be very
important. Maybe customers only have to pay half rate
for "bug fix" issues, or maybe "improvement" issues
need a different type of testing.
• Issues and Releases. At the end of a release — i.e. once
the software finally makes its way to production —
you will have a list of closed issues associated with that
release. Thus, a list of issues essentially describes what
changed in a release. The list changes throughout the
release lifecycle as issues are added, removed, opened, and
closed.
• Issue States. Issues have at least two distinct states: open and closed. A status field on the issue tracker shows which state the issue is in. But note that the meanings are not universal. Sometimes closed means that it passed testing, and sometimes closed means that it was deployed to production. This is fine, because different teams follow different processes, and most issue tracking tools let you define different states or status types.


The "Bug Tracker" Anti-pattern

This anti-pattern exists in many companies, and it goes


something like this:

After committing your code to source control, you go to


your issue tracker and check off Issue #1841 (New
Feature: Multi-Management) as complete. Then you go
on your merry way, and work on the next issue.

Weeks later, the business finally gets around to testing the


release, but they notice that the Multi-Management
feature doesn't quite work as expected. Maybe it's a bug,
maybe it's an unclear spec, but ultimately it means that
more code changes are needed. So, someone creates Issue
#1923 (New Feature Fix: Multi-Management), and the
cycle continues.

This is an anti-pattern because it defeats the whole


purpose of change tracking. If someone were to look at a
list of issues at the end of a release (i.e. the release notes),
they'd see something nonsensical like this:

• [1841] New Feature: Multi-Management


• [1923] New Feature Fix: Multi-Management
• [1948] New Feature Fix 2: Multi-Management


That means no one's going to use the issue tracker as


intended, and they will find a manual workaround
instead. The proper way to handle this is to re-open the
broken change and then start the process over.

Once the release goes to production, don't reopen the


issue. Even if it's fundamentally broken, the software has
already been changed, and you should open a new issue to
track a new change.

Automation and Issue Tracking


At first, it may not seem like there's much to automate
with issue tracking. After all, issues are for people, and
until Skynet comes online, what does automation have to
do with issues?

Because a lot of issue tracking processes are built on top of manual release processes, they're designed for people and tend to be optimized for efficiency and accuracy instead of robustness. But with automation, you gain robustness as well as even more efficiency and accuracy.


Continuing the earlier example, what if, instead of using only the basic "open" and "closed" states for issues, you had a set of issue states that represented an issue's lifecycle, like this: Open, Checked In, Deployed to Integration, Ready for Production, and Closed?

By using these states, it's pretty easy to set up some basic


processes and procedures:

• Unless all Issues are "Ready for Production", the


release cannot be deployed to production.
• Immediately after a successful deployment to
Integration, all issues in "Checked In" should be
changed to "Deployed to Integration".

It would be incredibly bothersome to update all of these


statuses, but with automation it's trivial.


BuildMaster and Issues/Project Tracking


BuildMaster has several integrations for popular issue
trackers, including Jira, YouTrack, and Azure DevOps
(formerly Visual Studio Team Services). Most issue
tracking tool extensions have operations that can be used
in a deployment plan to create, query, and modify issues.

Each issue tracking tool is different, but there are generally


four operations available:

1. Create New Issue


2. Change Status/State
3. Append Comment/Note
4. Query/Find Issues

Transitioning Jira Issues

For example, let's say that you implemented a workflow in


Jira like the one above. Once a successful deployment to
Integration occurs, all issues that are "Checked In" should
be changed to "Deployed to Integration".

This can be done with a single operation:

Transition-Issues
(
    Credentials: MyJira,
    Project: ProfitCalc,
    From: Checked In,
    To: Deployed to Integration,
    FixFor: $ReleaseNumber
);

Beyond Operations: BuildMaster and Issues

BuildMaster has an entire feature dedicated to project and


issue tracking, and you can access it on the "Issues" tab.
This will take you to an internal issue store that is
periodically synchronized with external issue tracking
tools that you've configured.

This issue store has two key use cases:

• Prevent deployment to certain stages until all related


issues are resolved/closed.
• Visualize relevant issues on the release dashboard in
BuildMaster, so you can see all information about your
release in one place.

Issue Visualization & Deployment Blocking

There are two automatic approvals you can configure in a


pipeline that will prevent release packages from being


deployed to a particular stage unless all issues are closed, or


have a certain status such as "Resolved". Prior to
deployment, these approvals will evaluate each issue
associated with the release, and block the deployment if
the approval is not met.

• Issues in Status: if you specify specific status texts (such as Resolved and Complete), any issue that doesn't have one of those statuses will cause the approval not to be met.
• Require Issues Closed: any non-closed issue will cause this approval not to be met.

Like all automatic approvals, a release package may be


forced into a stage even if issues aren't closed or in the
appropriate status. However, this requires a special action
and a specific permission.

Issue Tracking: Next Steps


There's a lot you can do with the issue tracking integration
in BuildMaster. One advantage to setting this up is that it
helps you get the business more involved in the release
process, and ultimately in the work you're doing.


You should consider demonstrating it in your POC


because this is exactly what the business will need to
understand CI/CD. If you need a placeholder, you can use
the built-in issue tracker to create trivial issues based on
actual issues, and demonstrate how it works.

V
Integrating the
Human Factor

Three Layers of CI/CD


You may not have realized it before, but CI/CD has three
distinct layers. You're already familiar with two of them:

• Build Automation Layer - this layer is often handled


by continuous integration servers (but can be done in
BuildMaster), and its purpose is to automate the
creation of a build artifact.
• Deployment Automation Layer - this layer will
automate the deployment of these build artifacts to a
set of target servers, and involves using a deployment
plan with a deployment context to make the plans
more reusable.


The top-most layer is process automation, and it's one of


the most important layers to get right. Your technical
colleagues already understand automation and will most
certainly see the value in build and deployment
automation. But nontechnical folks don't think in terms
of automation, and because they are not hands-on with
builds and deployments, they aren't going to see the value
that you created until you show it to them.

Expect resistance, though. Nontechnical people may prefer manual processes. In a strange sense, manual deployments are more flexible than automated ones because any step can be changed during the process. But you need to show them that whatever flexibility is sacrificed is replaced with precision and speed. It's a worthwhile exchange.

Process Automation: Bridging the Technical


and Nontechnical
Once you've implemented the deployment automation
layer, you will effectively have a one-click deployment
system that lets you deploy a build to a target set of servers.


Or in BuildMaster terms, "run deployment plan A against


servers X and Y."

But who will click the deployment button and how will
that person decide when the right time to click is?

Typically, this comes in the form of an email or ticket


from someone else. "Hi Ops Team," it will often open
with, "please deploy HDars to production between 4 PM
and 5 PM. Thanks, Bob."

But how did Bob decide when to request deployment?


And who decided that Bob is the decider of such things?

What happens when Bob isn't available? Can someone


else decide, and how?

Answering these questions will inherently define a


repeatable release process, and it's something your
organization most certainly already has even if it's an
undocumented process.

If you are using a CI/CD tool, then the tool will help you
document, automate, and streamline.


Pipelines: Your Automated Process


Pipelines let you build a repeatable release process by
defining the servers and environments that your build
artifacts will be deployed to, as well as the manual and
automatic approvals required at each stage of the process.

Here's what a simple pipeline might look like: a couple of stages (say, Testing and Production), each deploying the build artifact to its corresponding environment.

Pipelines can be simple or complex. In fact, a basic web


application might use a pipeline with only two stages
(testing and production), and simply deploy to a different
folder on the same server. Another application may
require a dozen stages, each with multiple targets that go
to different environments, and all sorts of automatic and
human approvals to meet compliance requirements.

Effectively Using Pipeline Stages

You've probably got a pretty decent idea of which


environments and servers your applications are deployed
to, and how those might line up to stages in a pipeline. But


this may not be how your nontechnical colleagues


understand the release process.

A stage is intended to represent a stage of the testing


process, and it shows up visually on various diagrams and
dashboards throughout the software. Stages should model
the release process that the business is already using.

For example, suppose an application has two distinct types


of testing: Functional and User Acceptance. Testing is
performed by two different groups of testers using the
same servers, in an environment called Test. Once the
Functional testing is complete, the testing manager lets
the end user know that they should verify the changes, and
after the end user verifies the changes, the release is
deployed to a staging environment.

This is the exact process you should model in your CI/CD tool: a Functional Testing stage and a User Acceptance Testing stage (both targeting the Test environment), followed by a Staging stage.

Even though the User Acceptance Testing stage doesn't deploy anything, this process visualization is key. Now


everyone can see that a particular release is being held up


by User Acceptance Testing, and take appropriate action.

Stages are all about giving technical and nontechnical


people the ability to see and interact with the process.

Stages and Targets


A pipeline is effectively a set of stages, and a stage, in turn,
is a collection of targets. The target describes how and
where the release is deployed, and has a few components:

• Deployment Plan – the discrete steps and tasks that will be executed.
• Server Targeting – either a server role, or just a list of
servers that the plan will be run against.
• Environment – determines which servers in the targeted role the plan will be run against.

The environment is also used for defining permissions. If


a pipeline stage specifies an environment, and you don't
have permission to deploy into that environment, you
won't be able to click the deploy button for deployments
to that stage.


Most of the time, you'll only need to specify a single target


per stage. But in some cases, you may want to run multiple
deployment plans and/or deploy to multiple roles. For
example, a load-balanced deployment may use the "rolling
pattern", where different sets of servers are deployed
sequentially, and this can be accomplished in BuildMaster
with multiple targets under a single stage.

Deployment Windows and Approvals

Before someone is able to deploy to a particular stage, you


can require that certain conditions be met.

• Deployment Windows let you define the days and


times that a deployment may occur.
• Manual Approvals define the users that must "sign
off", as well as the reason they should sign off.
• Automatic Approvals are things like variable checks
($ReleaseCandidate=True, for example), issue
tracking status tests, and so on, that are automatically
verified.

Of course, you can always bypass these conditions by


forcing a deployment. This requires a special permission
that is usually only given to administrators.


Deploying on Full-Auto Mode


You can configure a stage to automatically deploy to the
next stage once all of these conditions have been met, and
there was a successful deployment to the previous stage.
Using this, you can set up a fully-automated deployment
pipeline that deploys through a series of stages after end
users approve it.

Avoiding the Bob Anti-Pattern


Over the years, I've seen a lot of folks set up automation
using what I refer to as "the Bob Anti-Pattern." It looks
something like this:

• Bob approved the build for deployment.

"Bob's Approval" is not a process; in fact, it's not even a


good placeholder. Bob is not some sort of "brew master"
who's using his impressive sense of smell and years of
tasting experience to make sure the product meets some
arbitrary quality standard.

Instead, Bob is likely following an arbitrary testing


process, based on the assumption that an approval should


be based on someone certifying that the testing process is


complete.

There is a better way that will make everyone's life easier.


Just show your Bob-equivalent that he or she can simply
log-in to your CI/CD tool and check off "tests certified"
whenever testing is finished. Not only will this make life
easier on Bob but will allow someone else to step in when
Bob is out of the office.

Processes, Security, and Auditing


Your CI/CD tool will play a key role in implementing
compliance-driven process requirements, such as SOX,
HIPAA, and other regulatory frameworks.

There's not a lot that you have to do. Automatic and


manual approvals are already recorded as part of a release
history. As long as you set up a process that aligns with IT
Governance, you'll have the audit trail to prove that a
proper process was followed.

This goes back to the Bob anti-pattern. An auditor doesn't


want to see "Bob approved"; an auditor wants to see "no
changes made that will impact PCI certification."


Auditors also want to see separation of duties, which is where permissions come in. You can restrict who is allowed to deploy (or force a deployment) to which environments. This will give governance confidence that you will pass audits with ease.

Placeholders for Process

Mapping out the release process in your organization may


be challenging, particularly if it's not well-documented. In
addition, you may not yet have the buy-in to use CI/CD
to automate these processes. That's ok because you already
know a good technique: placeholders.

It's not uncommon to see an operations team that simply


doesn't want anything touching their production servers.
They may do deployment automation to production
someday, but for now, they require an email with a
location for where to download build artifacts, and steps
for how to install those artifacts.

You can and should use CI/CD to do all of this. Simply


model the steps as stages in your pipeline:

• Notify Operations – this stage has a deployment plan


with a single operation, Send Email (see the sketch after this list). It sends a

templated email to the Operations Team with


deployment instructions, including links to download
the artifacts and instructions on how to click "Deploy"
once they are ready.
• Manual Deploy – this stage is "deployed" to the
Operations team, and it has a plan that sends an email
notifying everyone that the deployment is finished.
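For example, the Notify Operations plan might be nothing more than a single step. This is only a sketch: the address is made up, and the operation's exact parameter names may differ in your BuildMaster version, so grab the Send Email statement from the visual editor to confirm them.

Send-Email
(
    To: "ops-team@example.com",
    Subject: "HDars $ReleaseNumber is ready for manual deployment",
    Text: "Download the build artifacts from BuildMaster, follow the runbook, and click Deploy on the Manual Deploy stage when you are finished."
);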

It won't be too long before everyone starts seeing the value


in automating this last step.

VI
How to Rollout CI/CD to
Everyone

CI/CD is not just a technical solution to a technical


problem; it's equally a business solution. It addresses many
of the underlying challenges and problems that your
organization is facing. CI/CD helps both technical and
nontechnical folks by bringing them together with a
common language, common goals, and common
dashboards.

Your technical colleagues may want to get more technical,


and that's fine. No matter what CI/CD tool you use,
you'll have ample technical options — and if you use
BuildMaster, OtterScript will provide an endless source of
technical options. But don't lose focus on showing the
business value. Until your CI/CD is up and running,


you'll need to be the bridge between technical and non-technical folks.

The real challenge is change. People don't realize that their


existing process is sub-optimal. They will resist fixing what
is slow but not necessarily broken. Expect to hear things
like, "we already have Jenkins do everything," and "I have a
PowerShell script that deploys to any server," and be
prepared to respond.

Secret to Buy-In: Incremental and Parallel


You already know that CI/CD is a vast improvement, and
it will just take a little convincing to get others on board.

With the power of placeholders, you've learned that you


don't actually have to introduce risk into the process.
There's really no risk in deploying to test but even if there's
a perceived risk, you can just use a placeholder to show what
deploying to test would be like. This means teams can use
CI/CD in parallel with their existing release process,
without the risk of making things worse.

But just showing that it's not worse isn't enough. You'll
need to show that CI/CD is better and faster. The easiest


way to do this is by dogfooding, or showing them your


successes. And to do that, you'll need to measure what you
were doing, and prove how "the new way" is better and
faster.

The metrics you documented in the Proof of Concept will


tell the story and sell the higher ups and your colleagues on
the benefits of CI/CD.

Best Practice Note: don't make it too complicated. If


they can't understand how your use of CI/CD has
improved your teams' lives, then you may need to rethink
your metrics from chapter one. But, don't lose heart if you
don't get immediate buy-in. Full adoption may be a long-
term, potentially multi-year campaign.

The More You Know


Thank you for reading this book; it shows that you are
willing to invest your time to learn how to improve things
at your organization, even if the payoff is down the road.
Reading a book may not seem like a "difficult" task, but
your coworkers probably aren't making the same
investment.


At least, most of them aren't. They're not going to read


books like this, nor are they going to seek out other ways to improve the organization unless it benefits them directly today. You have to be able to talk about the immediate benefits of CI/CD and how it can help them.

But in order to do that, you need two things:

• Insight into CI/CD – Check! If you've read this far,


you've got this.
• Insight into your Organization and what motivates the people running it – This takes time, but you have already started.

Thank you again for taking the time to learn. I am proud


of our CI/CD tool BuildMaster, and the fact that you
invested time into reading about it means you have a leg
up on your counterparts. Congratulations and welcome to
a whole new world with CI/CD.

VII
Advanced Topics

Blue / Green Deployments


Blue-green deployments are a Continuous Delivery
pattern designed to effectively eliminate deployment
downtime and make rollbacks nearly instantaneous. This
is accomplished by maintaining two nearly identical
production environments: Blue and Green.

Martin Fowler explains this in more detail:

One of the challenges with automating deployment is the


cut-over itself, taking software from the final stage of
testing to live production. You usually need to do this
quickly in order to minimize downtime. The blue-green
deployment approach does this by ensuring you have two
production environments, as identical as possible. At any
time one of them, let's say blue for the example, is live. As
you prepare a new release of your software you do your


final stage of testing in the green environment. Once the


software is working in the green environment, you switch
the router so that all incoming requests go to the green
environment - the blue one is now idle.

From an end-user's perspective, there really is no Blue or Green environment: just a single Production environment. A router or load balancer sits in front of both environments and directs all production traffic to whichever one is currently live.

From IT's perspective, there are two environments (Production-Blue and Production-Green), and different releases will be deployed to each of them.


Benefits of Blue/Green Deployments

The benefits to using Blue/Green deployments are


numerous:

• Zero downtime!
• No need for a dedicated staging environment.
Your Blue and Green environments will serve as a
rotating staging and production environment, so
you won't need to worry about errors arising from
differences between production and staging
environments. They will both be considered
production.
• Instantaneous Rollback after Problems. If you
discover a problem after going live (i.e., swapping
your Blue and Green environments), you can


rollback by simply performing another


environment swap. The old code will already be up
and running on the opposite environment.
• Simple Disaster Recovery. You will already have
two nearly identical environments, and you can
simply use one of those as a disaster recovery. After
you've decided that you don't need to rollback, you
can simply deploy the new release to the other
environment. This will give you a standby
environment in case of disaster.

How to Perform Blue-Green Deployments in


BuildMaster

I created a blue/green sample application (HDars-


BlueGreen) that you are free to navigate. It's also available
as an Importable Application for your own instance. This
application lets you visualize which release is Blue, Green,
or undecided, as well as what's in Blue and Green.


These steps will be based off of this application, but


BuildMaster is very flexible, so you can customize this
approach to your environment's requirements.

Initial BuildMaster Configuration

• Create a standard application with a normal Build and


Deploy plan.

• Create Production-Green and Production-Blue as child


environments of Production.
• Create a SwapBlueGreen plan that uses a variable called
$SwapTo to determine whether to swap Blue to Green,

or Green to Blue.
o This plan will issue instructions to your router or load-balancer using PowerShell or another mechanism (see the sketch after this list).
• Create two pipelines:
o Both will start with your standard pre-production
stages (Build, Integration, Test, etc.).
o Blue will deploy to a "Blue" stage, then a "Swap"
stage.
o Green will deploy to a "Green" stage, then a "Swap"
stage.


o Use a $SwapTo pipeline stage variable to indicate


whether to swap to Green or Blue.
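For example, the SwapBlueGreen plan might drop down to PowerShell to flip the active pool. This is only a sketch: the router URL and API shape are hypothetical placeholders for whatever your load balancer actually exposes.

psexec >>
    # Hypothetical router API; replace with your load balancer's real
    # CLI or REST interface. $SwapTo is expanded by OtterScript.
    Invoke-RestMethod -Method Post `
        -Uri "https://router.internal.example/api/active-pool" `
        -ContentType "application/json" `
        -Body (ConvertTo-Json @{ activePool = "$SwapTo" })
>>;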

Option 1: Create Blue- and Green-targeted Releases


Once you've prepared your application as described above,
you can just create a new release and select the Blue or
Green pipeline as appropriate.

Option 2: Decide Blue or Green Later


You may not know whether a release will target "Blue" or
"Green" until fairly close to release date. In this case, using
a "Blue" or "Green" pipeline will be misleading, because it
will imply that the target has already been decided.

While you can always change the release's pipeline at


any time, you should consider creating a third pipeline
("BlueGreen") that implies that decision hasn't been made
yet. This pipeline should be configured as follows:

• Use the exact same pre-production (Build,


Integration, Test, etc.) stages,
• Create a final stage called "Staging", and
• Disable the pipeline option to deploy releases once it
reaches the final stage.


Once a build reaches "Staging", you can simply swap the


releases pipeline to "Blue" or "Green" and continue
deploying.

You can even simplify this process and give the user a
choice when they deploy to "Staging.” To do this:

• Create a release template called "Choose Blue or


Green",
• Configure a Deployment Variable Template for the
"Staging" stage (List: Blue, Green), and
• Create a plan that changes the release's pipeline to the
selected variable.

The HDars-BlueGreen application has this release template, pipeline, and plan already configured for you to see.

How to Roll Back Blue-Green Deployments


If anything goes wrong after deploying, blue/green
deployments make it very easy to quickly roll back. Instead
of having to redeploy the entire build and all of the
artifacts, you can just perform another environment swap.

Because the last stage of both the Blue and the Green
pipelines simply swap environments, you can just navigate


to the build that was deployed in the opposite


environment and redeploy it.

For example, suppose a Green-targeted build was just deployed. To roll back, simply navigate to the build in Production-Blue (1.0.1 - Build 1), and redeploy.
This will execute the environment swap and immediately
restore the previous release.

Using Blue-Green in Disaster Recovery Scenarios


Because your Blue and Green environments will have
nearly identical hardware, they can also serve as a
production and disaster recovery pair. After deploying to
one environment (say Blue), and verifying that it's stable
and won't be rolled back, you can simply deploy the same
release to the other environment (Green).

This simply involves adding another stage to each of the


pipelines after the "Swap" stage.

Appendix A:
How to Get Support
from Inedo

We're constantly working on expanding our BuildMaster


documentation, but if you can't find what you need, please
contact us. Here is how we can help:

• Post a Question to Support Q&A: this is our public-


facing Q&A forum and is frequented by community
and Inedo engineers alike; it's a great way to create a
public-dialog about a question or issue, so that others
may find it later.
• Submit a Ticket: these provide private, one-on-one
support with Inedo engineers. Note: this option is
only available for licensed users. If you're evaluating for
purchase, please consider requesting a trial license.
• Join Our Slack Workspace: Slack is a collaboration
tool used for project announcements, document


sharing, and generally speaking, facilitating chat


conversations. It is organized by workspaces (typically
a company), channels (typically a team, project, or
client), and threads within those channels.

BuildMaster and the Inedo Execution Engine have very


tight integrations with PowerShell and shell scripting.
You may find answers by searching more broadly, like
"how to publish an azure package with PowerShell"
instead of "how to publish an azure package with
BuildMaster.”

Check out The Daily WTF, a website created by our Founder and CEO, Alex Papadimoulis; it's the definitive how-not-to guide for software development. Also, Alex is not the usual software company CEO. He is directly available to clients.

• Alex's email is apapadimoulis@inedo.com and phone


is +1 (440) 391-1845.

• Follow Alex? He posts about his adventures in gaming,


food, and Japan on Facebook and Instagram.

• In Cleveland, OH? Set up a meeting and enjoy a


glorious scotch with the CEO.


Integrations and the Inedo Den


BuildMaster has many integrations with other tools in the Inedo Den. There is a good balance of first-class, Inedo-supported extensions that you can download and use right away, and community extensions that are actively maintained by other users.

But honestly, many extensions – including our Docker and Kubernetes extensions – mostly wrap some sort of command-line tool. Of course, there's a lot of value in that, because it provides a UI and simplifies things, but if you can't find an extension that does exactly what you need, you can follow this same approach.

If someone asks, "does BuildMaster integrate with X," you


can confidently say "yes," so long as "X" has either a
command-line interface or a PowerShell script. You may
build an extension around those, but as a placeholder,
using the Execute Command-line Operation will suffice
for most everything.


Expanding into Inedo's Other Tools


We make a variety of automation tools that work well
together, or with other tools.

• Otter is a server provisioning and configuration


management tool that lets you maintain a "desired
state of configuration" on your servers and detects
when that configuration changes. You may find that
other technical folks – such as your operations
counterparts – are looking for a tool like Otter, and
thanks to BuildMaster's Infrastructure Sync Feature,
they can share the exact same set of agents and servers.
• ProGet helps you package your applications and
components, and hosts third-party packages like
NuGet, Chocolatey, and npm. Developers are likely
using it for the latter purpose, but you may find
Universal Packages to be a great way to package your
own applications. BuildMaster can then easily deploy
these packages with an Operation.

All of our tools share the same execution engine, agents,


and extensions. They also have a common user experience,


which will make it a lot easier to learn and expand into our
other tools.
