
The Twelve Factors

I. Codebase (Git): One codebase tracked in revision control, many deploys
II. Dependencies (Maven): Explicitly declare and isolate dependencies
III. Config (Config plan): Store config in the environment
IV. Backing services (EIS (any DB/JMS)): Treat backing services as attached resources
V. Build, release, run (CI / Jenkins): Strictly separate build and run stages
VI. Processes (Stateless REST): Execute the app as one or more stateless processes
VII. Port binding (Non-HTTP WSDL): Export services via port binding
VIII. Concurrency (Scale web 3, db 2): Scale out via the process model
IX. Disposability (Pod replicas (2, 3)): Maximize robustness with fast startup and graceful shutdown
X. Dev/prod parity (Same DEV/UAT/PRD): Keep development, staging, and production as similar as possible
XI. Logs (ELK): Treat logs as event streams
XII. Admin processes (No manual admin process): Run admin/management tasks as one-off processes
I. Codebase
A codebase is a collection of code which is used to build a particular application.

Modern software should always be kept in a revision control system. Revision control, also known as source
control, allows developers to:

○ Keep track of changes
○ Enable more than one change at a time
○ Undo mistakes
Revision control also assists in keeping track of what code has been deployed and where. For example, before
deploying code to production, you want to test it in a staging environment. Revision control software will allow you
to mark or tag files that go into a release.

With good revision control practices, many developers can work on the same code without stepping on each
other's toes. When a developer wants to share code changes with the rest of the team, he or she will commit
those changes to revision control.

As an application grows to the point where you are scaling out different systems, consider each of the
components as a Twelve Factor application in its own right.

While it is OK—even desirable—to share code between applications, each application should have its own
codebase. Break the shared code into a separate repository, which multiple applications can share. This shared
code is also called a library. This also leads us into the next factor, called dependencies.

This is one principle that Cloud Foundry does not implement itself, since revision control is so important that any
software company is expected to practice it already.

II. Dependencies
The additional code libraries your application uses are known as dependencies. These dependencies might be an
image processing library, a database, or even a command-line tool.

Most languages these days have some form of dependency or package management tool. Dependency managers
allow you to specify which libraries your application relies on. Usually, you can specify particular versions as well.
Use a dependency manager rather than relying on system-wide packages being available.

Some examples of dependency managers:

○ Ruby has bundler
○ Perl has CPAN
○ Node has npm
○ Python has pip
In Cloud Foundry, buildpacks manage these dependencies.
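In Python, for example, dependencies are commonly declared in a requirements file that pins exact versions, so every deploy installs the same libraries (the package names and versions below are only illustrative):

```text
# requirements.txt -- illustrative only
requests==2.31.0
Pillow==10.2.0
```

Running `pip install -r requirements.txt` then reproduces the exact same dependency set on any machine, rather than relying on whatever happens to be installed system-wide.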
III. Configuration Data
Configuration data is everything that varies between deployment environments. Each deployment environment
(production, staging, QA, development, etc.) will have some unique data.

How you talk to your database should always remain constant; however, which database you use will most likely
be different on each system. This configuration data could include hostnames, credentials, log levels, and more.

Do not store this configuration data in your code repository. The code repository is the one thing that remains
constant across all environments. Instead, a good practice is to keep an example file that highlights how to set
these variables.

Anything that is sensitive should also be kept out of the repository. A good litmus test for whether you are
following these rules is:

"Can we open source this code repository without exposing anything sensitive or critical?"

Configuration files, like the config/database.yml in a Ruby on Rails application, are vulnerable to
accidental inclusion in the repository. Always set up your revision control tool to ignore sensitive files.

Store your configuration in environment variables. For any process running in a container, an environment
variable is a named value that can be set in the environment in which the process runs.

The name for the variables should not be environment-related either. In the bad examples below the
deployment environment is a prefix to the environment variable name. In the good examples, it is clear what the
variable does, regardless of where it is deployed.

Good examples:

○ DB_USER, DB_PASS
○ APP_LOG_LEVEL, DB_LOG_LEVEL
○ AB_FEATURE_LARGE_AVATAR
Bad examples:

○ PROD_DB_USER, STAGING_DB_PASS
○ STAGING_LOG_LEVEL
○ DEV_FEATURE_LARGE_AVATAR
In Cloud Foundry, there are various ways to set the environment variables, such as binding services to
applications, using the cf set-env command, or using the cf push manifest file.
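As a sketch, a Python application might read such settings from the environment at startup. The variable names follow the good examples above; the default values are only illustrative:

```python
import os

# Deployment-specific settings come from the environment,
# not from files checked into the code repository.
db_user = os.environ.get("DB_USER", "app")
db_pass = os.environ.get("DB_PASS", "")
log_level = os.environ.get("APP_LOG_LEVEL", "INFO")
```

The same code then runs unchanged in development, staging, and production; only the values set in each environment differ.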
IV. Backing Services
A backing service is anything the application consumes over the network for normal operation.

Here are some examples:

Locally:
■ Datastores (e.g. MySQL, Redis)
■ Caching services (e.g. Memcached)

Externally:
■ Asset services (e.g. Amazon S3)
■ Logging (e.g. Loggly, New Relic)
■ Mail services (e.g. Postmark)
A Twelve Factor application should be able to swap out the local resources for remote ones with no code
changes. There is no distinction between local and remote resources.

As an example, let's say your database is too small and you need to upgrade it. You set up a larger server in
preparation and import the database to it. Now, to use the new server, you change the database URL in your
configuration and restart the application.

In Cloud Foundry there are comprehensive API endpoints that manage how applications and services connect to
each other.
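A minimal sketch in Python of the idea above: if the attached resource is identified only by a URL read from the environment, pointing the application at the new, larger server is purely a configuration change (the URL format and variable name are illustrative):

```python
import os
from urllib.parse import urlparse

# The backing service is identified only by a URL in the environment;
# swapping a local database for a remote one changes no code.
url = urlparse(
    os.environ.get("DATABASE_URL", "postgres://user:secret@localhost:5432/app")
)
host, port, dbname = url.hostname, url.port, url.path.lstrip("/")
```

Changing DATABASE_URL and restarting the process is the whole migration, as far as the application is concerned.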

V. Build, Release, Run


The deployment process transforms the code into a running application through three stages:

○ Build transforms your code into an executable form.
○ Release combines the executable with the available configuration.
○ Run is where the release package gets deployed and executed.
This principle is exactly what Cloud Foundry buildpacks and the Diego architecture implement. We will discuss
Diego in more detail in chapter 8.
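The three stages can be sketched as data: a release is nothing more than an immutable pairing of a build artifact with the configuration captured at release time. This is only an illustration; the names are made up:

```python
from types import MappingProxyType

def make_release(build_artifact, config):
    """Combine an executable build with configuration into an immutable release."""
    return MappingProxyType({
        "artifact": build_artifact,                 # output of the build stage
        "config": MappingProxyType(dict(config)),   # captured at release time
    })

release = make_release("app-v42.zip", {"DB_USER": "app"})
```

Because the release is read-only, changing either the code or the configuration means cutting a new release rather than editing a running one, which is what keeps build and run strictly separate.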
VI. Processes
Your application needs to be built so it can run on multiple servers. To make this possible, there are some
rules you must follow. Web applications are stateless: every request to the web server is seen as a new request.
The web server has no concept of a chain of events; it simply takes a request for a URL and gives a response:

○ Backing services hold state, your application process does not. Use a database, a cache or some
other storage to hold that state.
○ You can't store anything that needs to persist on the filesystem of the web application; use a
backing service.
○ Cookies, user assets, data, etc. Put all of these in backing services.
Let's say you have a three-step sign up process for your site. Each time the user hits next in that sign up process,
the web server sees that as a fresh request. The state of that user's signup should be held in a backing service
with an identifier that is passed in each page request.

Cloud Foundry makes it easy to follow this principle, since a push will create your processes and attach your
services automatically if you have already bound them.
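The sign-up flow above can be sketched in Python. A plain dict stands in here for a real backing service such as Redis; the token-passing and step names are illustrative:

```python
import uuid

# A dict stands in for a backing service such as Redis; the web
# process itself holds no signup state between requests.
store = {}

def handle_signup_step(token, step, data):
    """Each request carries a token; the signup state lives in the backing store."""
    if token is None:
        token = str(uuid.uuid4())      # first request: start a new signup
    state = store.setdefault(token, {})
    state[step] = data
    return token

token = handle_signup_step(None, "step1", {"email": "a@example.com"})
token = handle_signup_step(token, "step2", {"plan": "basic"})
```

Because each request carries the token and all state sits in the store, any process instance can serve any step of the signup.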

VII. Port Binding


Your application process should expose itself over a URL scheme, bound to a port. Specific port numbers are
often used to identify specific services. Here is an example of an HTTP URL scheme used for the local
development port for a Sinatra application.

○ http://localhost:4567/
As computers have become capable of running more than one program at once, ports have become necessary on
modern networks.

Since Twelve Factor applications are intended to have all logic happen within the application, the requests just
need to know where to go. Think of the URL scheme and port as the address to the application. All the requests
know is how to get to the application because they have this address.
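As a sketch using Python's standard library, a self-contained process can answer HTTP itself on whatever port it binds. Port 0 asks the operating system for any free port; a real deployment would instead read the port to bind from the environment:

```python
from http.server import BaseHTTPRequestHandler, HTTPServer

class Hello(BaseHTTPRequestHandler):
    def do_GET(self):
        # The process exports its service itself over its bound port;
        # no external web server is injected at runtime.
        self.send_response(200)
        self.end_headers()
        self.wfile.write(b"hello")

# Port 0 = let the OS choose; in production, read the port from config.
server = HTTPServer(("127.0.0.1", 0), Hello)
bound_port = server.server_address[1]
```

The URL scheme plus the bound port together form the address through which all requests reach the application.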

This is what Cloud Foundry does through its HTTP and TCP routes, which we will learn about in the Routes and
Domains, chapter 11.

VIII. Concurrency
Because we build our application with the sixth-factor principle of running it as a stateless process, we can scale
easily. And if we have split a larger application into smaller, separate processes handling specific tasks, we can
scale each independently.

Let's say your application calls out to a separate image processing service you created. You can scale up the
image processing service, independently of the main application, rather than scale everything together. Each
component is scaled to the required size and no larger.

With Cloud Foundry, scaling up or down is quick and easy, so you do not have to commit resources until you
know you actually need to. This principle is implemented through the cf scale command.
IX. Disposability
When your application starts up faster, it also speeds up your deployment, scaling, and recovery. This allows
your team to fix bugs more quickly and deploy more code each day.

Graceful shutdown reduces errors and improves user experience any time you scale down your application or
deploy a new version.

Cloud Foundry supports this principle through the cf push command. Also, Cloud Foundry supports
non-disruptive deployments, which we will learn about in the Zero Downtime Deployment chapter.

X. Dev/Prod Parity
Any difference between environments is an opportunity to be surprised when code which was working fine
suddenly breaks in a new environment. In some respects, it may not be possible to keep environments
identical—if your production application runs on 300 servers, you probably cannot afford to give 300 servers to
each developer—but the fewer differences you create, the easier your life will be. Resist the urge to create
differences between different deployments of your application and always be on the lookout for ways to remove
the differences you have.

This applies to the libraries you use, configuration management, databases, and any other part of your
application.

Cloud Foundry supports this principle through its release-management tool, BOSH.

XI. Logs
Rather than logging directly to a file or to a specific service, your application should send logs to its standard
output streams. This allows you to change logging tools without changing your code, and allows Cloud Foundry
to gather your logs for you. By centralizing logs into a single location, you can track down errors without logging
into each application instance individually.

Cloud Foundry logging systems are event streams.
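In Python, for example, routing logs to standard output rather than a file takes a single handler; the logger name is illustrative:

```python
import logging
import sys

# Emit logs to stdout as an event stream; the platform, not the
# application, decides where the stream is routed and stored.
logger = logging.getLogger("app")
logger.setLevel(logging.INFO)
logger.addHandler(logging.StreamHandler(sys.stdout))

logger.info("user signed up")
```

Because the application never names a log file or log service, swapping ELK for another aggregator requires no code change.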

XII. Admin Processes


Throughout the lifetime of your application you will need to run one-off processes to migrate your database,
clean up data, or run a console for introspection. These should be run in an environment identical to that of the
long-running processes, ensuring that no issues arise from environment differences.

These one-off processes run against a release, using the same codebase and the same configuration files as your
application. Do not run migrations directly against a database, for instance.

Cloud Foundry supports this principle either through BOSH errands or through one-off CF web applications.
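A one-off admin task can be sketched as a separate process launched from the same codebase with the same environment as the long-running application. The task and the variable name here are illustrative:

```python
import os
import subprocess
import sys

# The one-off process inherits the application's exact configuration
# by passing along the same environment variables.
env = dict(os.environ, APP_LOG_LEVEL="INFO")
result = subprocess.run(
    [sys.executable, "-c", "import os; print(os.environ['APP_LOG_LEVEL'])"],
    env=env,
    capture_output=True,
    text=True,
)
```

Because the one-off process sees the same code and the same configuration as the app, a migration it runs behaves exactly as it would inside the running release.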
