Today, there is a buzz all around about containerization and Docker. What exactly is Docker, and how is it related to containerization? What are the top benefits of using Docker? Why did it become so popular? And what are the statistics and successful case studies related to Docker? In this article, I will answer all these questions.
Docker makes it easier to create, deploy, and run applications by using containers. Containers allow a developer to package up an application with all of the parts it needs, such as libraries and other dependencies, and ship it all out as one package. By doing so, the developer can be assured that the application will run on any other Linux machine, regardless of any customized settings that machine might have that could differ from the machine used for writing and testing the code.
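As a sketch of what this packaging looks like in practice, here is a minimal Dockerfile for a hypothetical Python service (the file names app.py and requirements.txt are illustrative, not from the article):

```dockerfile
# Start from a pinned base image so every environment sees the same OS layer.
FROM python:3.11-slim

WORKDIR /app

# Bake the application's dependencies into the image itself,
# so the target machine needs nothing beyond Docker.
COPY requirements.txt .
RUN pip install --no-cache-dir -r requirements.txt

# Add the application code last so code changes don't invalidate
# the cached dependency layer.
COPY app.py .

CMD ["python", "app.py"]
```

Building this with `docker build -t myapp .` produces a single shippable artifact that runs the same way on any Linux host with Docker installed.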
Docker can also deliver significant cost savings by dramatically reducing infrastructure requirements: by its nature, Docker needs fewer resources to run the same application. Because of these reduced infrastructure requirements, organizations are able to save on everything from server costs to the employees needed to maintain those servers. Docker allows engineering teams to be smaller and more effective.
Docker also lets you commit changes to your Docker images and version control them. For example, if you perform a component upgrade that breaks your whole environment, it is very easy to roll back to a previous version of your Docker image, and the whole process can be tested in a few minutes. Docker is fast, allowing you to quickly make replications and achieve redundancy. Launching a Docker image is as fast as starting a machine process.
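A sketch of that rollback flow with the Docker CLI (the image and container names here are hypothetical):

```shell
# Tag every build so earlier versions stay addressable.
docker build -t myapp:1.0 .
# ...after the component upgrade:
docker build -t myapp:1.1 .

# Deploy the new version.
docker run -d --name myapp myapp:1.1

# If the upgrade breaks the environment, rolling back is just
# replacing the container with the previously tagged image.
docker rm -f myapp
docker run -d --name myapp myapp:1.0
```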
CI Efficiency
Docker enables you to build a container image and use that same image across every step of the deployment process. A huge benefit of this is the ability to separate non-dependent steps and run them in parallel. The length of time it takes from build to production can be sped up notably.
Rapid Deployment
Docker reduces deployment to seconds. This is because it creates a container for every process and does not boot an OS. Containers can be created and destroyed without worrying that the cost of bringing them back up will be prohibitively high.
If you need to perform an upgrade during a product's release cycle, you can easily make the necessary changes to Docker containers, test them, and implement the same changes to your existing containers. This sort of flexibility is another key advantage of using Docker. Docker allows you to build, test, and release images that can be deployed across multiple servers. Even if a new security patch is available, the process remains the same. You can apply the patch, test it, and release it to production.
Multi-Cloud Platforms
One of Docker's greatest benefits is portability. Over the last few years, all major cloud computing providers, including Amazon Web Services (AWS) and Google Cloud Platform (GCP), have embraced Docker and added individual support for it. Docker containers can be run inside an Amazon EC2 instance, a Google Compute Engine instance, a Rackspace server, or VirtualBox, provided that the host OS supports Docker. When this is the case, a container running on an Amazon EC2 instance can easily be ported between environments, for example to VirtualBox, with similar consistency and functionality. Docker also works very well with other providers like Microsoft Azure and OpenStack, and can be used with various configuration managers like Chef, Puppet, and Ansible.
Isolation
Docker ensures your applications and resources are isolated and segregated. Each container has its own resources, isolated from other containers, and you can have various containers for separate applications running completely different stacks. Docker also helps you ensure clean app removal, since each application runs in its own container: if you no longer need an application, you can simply delete its container, and it won't leave any temporary or configuration files behind on your host OS.
On top of these benefits, Docker also ensures that each application only uses the resources that have been assigned to it. A particular application won't consume all of your available resources, which would normally lead to performance degradation or complete downtime for other applications.
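As an illustration, `docker run` exposes flags that cap a container's share of the host (the container and image names below are hypothetical):

```shell
# Cap the container at half a CPU core and 256 MB of memory so a
# misbehaving application cannot starve its neighbors.
docker run -d --name api --cpus="0.5" --memory="256m" myapp:1.0

# Removing the application removes its container and writable
# filesystem with it; nothing is left behind on the host.
docker rm -f api
```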
Security
The last of these benefits of using Docker is security. From a security point of view, Docker ensures that applications running in containers are completely segregated and isolated from each other, granting you complete control over traffic flow and management. No Docker container can look into processes running inside another container. From an architectural point of view, each container gets its own set of resources, ranging from processing to network stacks.
ADP
ADP is one of the companies using Docker to better manage its application infrastructure. ADP is the largest global provider of cloud-based human resources services. From payroll to benefits, ADP handles HR for more than 600,000 clients, which creates challenges in terms of security and scalability.
To solve the security issue, ADP uses Docker Datacenter. Docker Content Trust enables their IT ops team to sign images and ensure that only signed binaries run in production. They also perform automated container scanning. Using multiple Docker Trusted Registries enables them to build a progressive trust workflow for their application development process.
To solve the scalability issue, the company relies on Universal Control Plane/Swarm. Swarm gives their team the ability to start small, with each application made up of many small Docker engine swarms instead of one swarm per application. The swarms will then merge over time, becoming larger until, in the end, each application has its own swarm. One day, a swarm could potentially span public and private infrastructure and multiple applications, enabling the business to make the best financial decision for the company. With Docker containers, ADP plans to containerize the most dynamic parts of their applications first, making them easier to change and re-deploy going forward, while leaving the other areas of the applications for later. Containerizing with Docker enables ADP to pursue a hybrid strategy: a mix of big and small containers for any application, which creates an evolutionary path forward to microservices.
The vision and goal of ADP is to get to microservices, but the reality is that no company will get there overnight. Not all applications will be refactored at the same rate, and the platform needs to be flexible to accommodate a variety of application architectures. Now, by slowly isolating services into separate containers, ADP is able to grow into a microservices architecture using Docker, rather than doing it all at once.
Spotify
A digital music service with millions of users, Spotify is running a microservices architecture with as many as 300
servers for every engineer on staff. The biggest pain point Spotify experienced managing such a large number of
microservices was the deployment pipeline. With Docker, Spotify was able to pass the same container all the way
through their CI/CD pipeline.
From build to test to production, they were able to ensure that the container that passed the build and test process was
the exact same container that was in production.
Now the company can guarantee that all of their services remain up and running, providing a great user experience for
their customers. They also built a new platform called Helios, based on Docker containers, to deploy containers across their entire fleet of servers and maintain their development ecosystem.
ING
As one of the top ten financial services companies in the world, ING operates on a global scale. The IT organization in the Netherlands alone comprises 1,800 people, which created unique challenges in coordinating change across large groups of people, processes, and technology, and led to poor-quality software.
Now, ING is able to move faster with their CD pipeline running in Docker containers. Key areas accelerated are provisioning build servers, provisioning and publishing tests, deployment automation, and functional integration testing across their 180 teams. Additionally, the increasing levels of automation were starting to strain their infrastructure resources, and Docker helped to greatly reduce that utilization and, ultimately, hard costs, especially within some of their biggest development efforts.
To conclude, Docker containers share the host operating system's kernel and run as isolated processes, regardless of the underlying infrastructure. As Docker proudly states, this means that its containers can "run on any computer, on any infrastructure, and in any cloud." The portability, flexibility, and simplicity this enables is a key reason why Docker has been able to generate such strong momentum. We are big fans of using Docker at Apiumhub, and we believe that it will continue growing.
Published at DZone with permission of Ekaterina Novoseltseva . See the original article here.
To understand the current and future state of DevSecOps, I gathered insights from 29 IT professionals across 27 companies.
The most important elements of a successful DevSecOps implementation are automation, shifting left, and collaboration. Eliminate as many manual steps as possible so security becomes a first-class citizen of DevOps workflows. The goal is to embed security early on into every phase of the development and deployment lifecycle.
By designing a strategy with automation in mind, security is no longer an afterthought. This ensures security is ingrained at the speed and agility of DevOps without slowing business outcomes.
Shift security left in the engineering lifecycle. Security should be a design mentality instead of an afterthought. If you don't make DevSecOps part of the lifecycle from the planning, design, code, and rollout perspective, you will miss opportunities to catch vulnerabilities that could expose the business to risk.
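As one hedged example of shifting left, a dependency scan can run on every push so findings surface at commit time rather than in production. The pipeline below is a sketch using GitHub Actions and pip-audit; both tool choices are assumptions, and any vulnerability scanner wired into any CI system serves the same purpose:

```yaml
# Hypothetical CI job: fail the build if known-vulnerable
# dependencies are introduced.
name: shift-left-scan
on: [push]
jobs:
  dependency-scan:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v4
      - uses: actions/setup-python@v5
      - run: pip install pip-audit
      # A non-zero exit on any known vulnerability blocks the merge.
      - run: pip-audit -r requirements.txt
```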
A successful DevSecOps implementation starts with a strong alignment between development and security teams.
Engage the security side throughout the value stream. Have more collaboration with shared goals around security.
While each team has its own unique needs and constraints, collaborating enables the groups to leverage their
respective strengths to work around constraints and achieve business goals.
The attributes of a successful DevSecOps culture are security ingrained throughout the organization, greater collaboration, and the use of metrics to determine progress. Everyone becomes concerned about security and sees it as their own responsibility, not someone else's problem. There should be proactive security across every team, with InfoSec embedded in the process and good practices in place so architects and development teams can produce good code.
There is a focus on partnership rather than blame. Everyone from executives to project teams is on the same page regarding the importance of security. There is trust and shared responsibility for security outcomes, with a culture of visibility and transparency so everyone involved can learn what works and what doesn't.
There are clear security-driven metrics. Security is measured at each stage of the engineering lifecycle. The culture
is governed by shared security KPIs and by bringing in the right people.
DevSecOps solves problems around velocity, risk, security consciousness, and software quality. Embracing DevSecOps maintains innovation velocity that translates to business goals without skimping on security. It helps ensure that security is integrated into the fast-moving environment. With security integrated into developer workflows, there are faster, more secure releases without stifling developer innovation. It accelerates bringing secure applications to market, while significantly reducing the time to respond to increasing threats.
Risk is reduced by designing with security baked in from the beginning; systems built this way are inherently less risky. Vulnerabilities will be detected earlier, making them easier to fix and reducing the chance of a vulnerability escaping into production and exposing the organization and its clients to risk.
Developers become more security conscious and this results in better software. DevSecOps ensures security is a norm
and not an afterthought. Security becomes part of the ongoing engineering process and results in better software that’s
easy to operate and provides a better user and customer experience (CX).
1. Compliance is one of the more frequent use cases for DevSecOps. DevSecOps helps comply with security standards and auditing systems – anywhere an audit trail is needed. It makes it easier to meet the standards for security assessments, system authorization, and implementation of cloud services for departments across the federal government. Customers are applying security, operational, and compliance policies to the Kubernetes desired-state model.
2. The most common DevSecOps fails are related to culture, collaboration, and adoption/change. Changing the culture and mindset is not easy. The executive team needs to create a built-in security culture, realizing that lack of security is a predictor of business failure. The security team must move past being the department of "no" by embracing DevOps and adding value without adding friction. Lack of collaboration is one of the biggest impediments to DevSecOps. There are cultural and organizational barriers to collaboration between security, development, and operations that must be overcome to build a continuous security mindset. Finger-pointing and a lack of enthusiasm for the team's common goals are generally early warning signs that the DevSecOps initiative is not going well. There needs to be a mindset change throughout the organization; otherwise, policies around security are instituted but viewed as a burden. If the policies are not part of the core way the operation runs, people will take shortcuts. Be open to new ways of doing things. Institutionalize security as part of the process.
3. Concerns around the current state of DevSecOps are the culture and the term itself. There is little discussion
about collaboration, shared ownership, KPIs for governance, and the cultural change necessary for successful
DevSecOps. More discussion will lead to more mutual understanding, collaboration, and change. The fact there’s a
name for DevOps plus security shows that security is still an outsider to DevOps. We need to reach a point where
developers truly accept that security can’t be separated from their work, and where security professionals accept
that they have to be part of the solution. A failure of leadership is often the root cause of the issue. Many
organizations still have not embedded security throughout their deployment pipelines. The space is still very
immature with a small percentage of teams having implemented DevSecOps. Automated security is only being
done by a select group (5%) of advanced companies.
4. The future of DevSecOps is greater adoption, security being integrated into the enterprise and its culture, and AI/ML being used to automate and improve the security posture of the enterprise. We'll see DevOps shift to become DevSecOps. Business leaders will come to view DevSecOps as a fundamental requirement to operate in the digital world. Successful implementations will detect and address security threats at greater speed and with less human intervention. Security will become an implicit element of DevOps. Security will gain more influence as it is integrated with product teams. Metrics will drive effectiveness and efficiency. Security and compliance controls will be embedded earlier in the DevOps lifecycle. Security will create less friction and serve as a catalyst for innovation in existing security tools. Automation will be key to success. AI-driven applications will use machine learning to improve DevOps, indicating where to focus vulnerability-management time. AI-based solutions will predict and identify patterns to unearth security vulnerabilities before they are found by hackers.
5. With regards to DevSecOps, developers need to consider their productivity, the OWASP Top 10, education,
processes, and best practices. An established DevSecOps methodology and culture means less work for
developers. A DevOps team working with security spends 50% less time going back to remediate security issues. Developers trained in cybersecurity are rare, and therefore more valuable. Incorporating security throughout development and deployment is less disruptive and yields better results than ignoring security until the end. Start with OWASP to learn security best practices. Understand how your code can be vulnerable. Learn how to defend against attackers. Check out e-learning resources and read. Secure coding is rarely taught at university. If
developers really care, they need to take it upon themselves to learn how to code securely. It may be
uncompensated time to understand SQL injection, code injection, and the OWASP Top 10, but it will save a lot of
time and make you more money over the course of your career. Follow a process and embrace it. Take security
seriously for your own development as a professional. Keeping a good mind about security goes hand-in-hand
with other development best practices. Learn the best practices, get metrics, improve, get the playbook for how
other teams are developing secure code. Developers should play a role in ensuring their enterprise’s security
solution adapts to changes in the application it protects.
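To make the SQL injection point above concrete, here is a minimal, self-contained sketch using Python's built-in sqlite3 module (the table and function names are illustrative), contrasting string-spliced SQL with a parameterized query:

```python
import sqlite3

# In-memory database with a single users table for demonstration.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE users (name TEXT, role TEXT)")
conn.execute("INSERT INTO users VALUES ('alice', 'admin')")

def find_user_unsafe(name):
    # Vulnerable: user input is spliced directly into the SQL string.
    # Input like "' OR '1'='1" rewrites the query and returns every row.
    return conn.execute(
        f"SELECT * FROM users WHERE name = '{name}'"
    ).fetchall()

def find_user_safe(name):
    # Safe: a parameterized query treats the input as data, not SQL.
    return conn.execute(
        "SELECT * FROM users WHERE name = ?", (name,)
    ).fetchall()
```

Calling `find_user_unsafe("' OR '1'='1")` leaks the whole table, while the parameterized version returns nothing for the same malicious input.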
Further Reading