BORIS ZAIKIN
SOFTWARE AND CLOUD ARCHITECT AT NORDCLOUD GMBH
Are containers insecure? Not at all. Features like process isolation with user namespaces, resource encapsulation with cgroups, immutable images, and shipping minimal software and dependencies reduce the attack vector by providing a great deal of protection.

Container security tools are becoming hot topics in the modern IT space as early adoption fever evolves into a mature ecosystem. Security is an unavoidable subject to address when planning to change how we architect our infrastructure.

In this Refcard, we will introduce three OWASP rules that should be implemented in projects that operate via Docker containers, specifically:

1. Add the --no-new-privileges flag
2. Limit capabilities (grant only the specific capabilities a container needs)
3. Keep the host and Docker up to date
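Two of the OWASP rules above (--no-new-privileges and limited capabilities) can be applied directly at docker run time. A minimal sketch; the image name and the granted capability are illustrative, not from the original text:

```shell
# Rule 1: --security-opt=no-new-privileges stops processes from gaining
# additional privileges (e.g., through setuid binaries).
# Rule 2: drop every capability, then grant back only what the app needs.
# my-backend:1.0 and NET_BIND_SERVICE are placeholders for illustration.
docker run -d \
  --security-opt=no-new-privileges \
  --cap-drop=ALL \
  --cap-add=NET_BIND_SERVICE \
  my-backend:1.0
```

Rule 3 is operational rather than a flag: apply OS and Docker Engine updates through your normal patch process.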
This Refcard will lay out the basics of container security, assess key challenges, provide hands-on experience with basic security options, and introduce some more advanced workflows.

CI/CD AND PRE-DEPLOYMENT SECURITY

Pre-deployment security in CI/CD is quite essential. You can detect vulnerabilities before they end up in your application with the following:
REFCARD | DOCKER SECURITY
Without using any trust and authenticity mechanism, you are basically running arbitrary software on your systems.

• Enforce mandatory signature verification for any image that is going to be pulled or run on your systems.
• Deploy a container-centric trust server using some of the Docker registry servers.

You can also use the docker scan command to scan your local images. Run the following example to see the standard report:

docker scan backend-app:1.1.3

However, if you need an extended report, you can additionally pass your Dockerfile to docker scan with the --file flag.

Example: Following is a trivial Dockerfile:

# cat Dockerfile
FROM alpine:latest

Build the image:

# docker build -t <youruser>/alpineunsigned .

Log into your Docker Hub account and submit the image:

# docker login
[…]
# docker push <youruser>/alpineunsigned:latest

Enable Docker Content Trust enforcement:

# export DOCKER_CONTENT_TRUST=1

Now, try to retrieve the image you just uploaded:

# docker pull <youruser>/alpineunsigned

You should receive the following error message:

Using default tag: latest
Error: remote trust data does not exist for docker.io/<youruser>/alpineunsigned:
notary.docker.io does not have trust data for docker.io/<youruser>/alpineunsigned

Now that DOCKER_CONTENT_TRUST is enabled, you can build the container again and it will be signed by default:

# docker build --disable-content-trust=false -t <youruser>/alpinesigned:latest .

Now, you should be able to push and pull the signed container without any security warning. The first time you push a trusted image, Docker will create a root key for you. You will also need a repository key for the image. Both will prompt you for a user-defined password.

One analysis of public images found that "Actually, 7% of the images contained at least one secret."

• Deploy a Docker credentials management software if your deployments get complex enough. Do not attempt to create your own "secrets storage" (curl-ing from a secrets server, mounting volumes, etc.) unless you really know what you are doing.

Examples: First, let's see how to capture an environment variable:

# docker run -it -e password='S3cr3tp4ssw0rd' alpine sh
/ # env | grep pass
password=S3cr3tp4ssw0rd

It's that simple, even if you are a regular user:

/ # su user
/ $ env | grep pass
password=S3cr3tp4ssw0rd

Nowadays, container orchestration systems offer some basic secrets management. For example, Kubernetes has the Secrets resource, and Docker Swarm has its own secrets feature.
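The Swarm secrets flow can be sketched as follows; a minimal example assuming the local node can initialize a single-node swarm (the secret name, service name, and nginx image are illustrative):

```shell
# Initialize a single-node swarm (skip if the node is already in a swarm).
docker swarm init

# Create a secret from stdin; Swarm stores it encrypted in its Raft log.
printf 'S3cr3tp4ssw0rd' | docker secret create db_password -

# Grant a service access to the secret: the value is exposed as an
# in-memory file at /run/secrets/db_password inside the container,
# not as an environment variable.
docker service create --name web --secret db_password nginx:alpine
```

Unlike the -e example above, the value does not show up in the container's env output.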
This is a minimal proof of concept. At the very least, now your secrets are properly stored and can be revoked or rotated from a central point of authority.

HASHICORP VAULT EXAMPLE

You can use different secret providers. HashiCorp Vault allows you to store SSL certificates, passwords, and tokens, and it supports both symmetric and asymmetric keys. To use it in Docker, you need to pull the official HashiCorp Vault image from Docker Hub. Then you can run the Vault development server using the following command:

$ docker run --cap-add=IPC_LOCK -d --name=dev-vault vault

To run Vault in server mode for non-dev environments, you can use the following command:

$ docker run --cap-add=IPC_LOCK -d --name=vault -e 'VAULT_LOCAL_CONFIG={"backend": {"file": {"path": "/vault/file"}}}' vault server

As an alternative to HashiCorp Vault, you can use Azure Key Vault and environment variables to pass secrets to the Docker container. In AWS, you can use Secrets Manager; to pass parameters to a Docker container, you can use the AWS SDK or CLI. For example:

aws --profile <YOUR PROFILE FROM ~/.aws/credentials> --region <REGION eg. us-east-1> secretsmanager get-secret-value --secret-id <YOUR SECRET NAME>

For example, a Kubernetes Deployment that runs the container might look like this:

apiVersion: apps/v1
kind: Deployment
metadata:
  name: back-end-project
spec:
  replicas: 1
  selector:
    matchLabels:
      app: back-end-project
  template:
    metadata:
      labels:
        app: back-end-project
    spec:
      containers:
        - name: back-end-project
          image: <path to docker container>
          ports:
            - containerPort: 8080  # example port

CONTAINER RESOURCE ABUSE

Containers are much more numerous than virtual machines on average. They are lightweight, and you can spawn big clusters of them on modest hardware. Software bugs, design miscalculations, or a deliberate malware attack can easily cause a Denial of Service if you don't properly configure resource limits.

To add to the problem, there are several different resources to safeguard: CPU, main memory, storage capacity, network bandwidth, I/O bandwidth, swapping, etc. Some kernel resources are not so evident, and even more obscure resources such as user IDs (UIDs) exist.

CORE PRACTICES:

Limits on these resources are disabled by default on most containerization systems; configuring them before deploying to production is basically a must. There are three fundamental steps:

1. Use the resource limitation features bundled with the Linux kernel and/or the containerization solution.
2. Try to replicate the production loads on pre-production. Some people use synthetic stress tests, and others choose to "replay" the actual real-time production traffic. Load testing is vital to knowing where the physical limits are and where your normal range of operations is.

Example: Control groups, or cgroups, are a feature of the Linux kernel that allow you to limit the access processes and containers have to system resources. We can configure some limits directly from the Docker command line, like so:

# docker run -it --memory=2G --memory-swap=3G ubuntu bash

This will limit the container to 2GB of main memory and 3GB total (main + swap). To check that this is working, we can run a load simulator; for example, the stress program present in the Ubuntu repositories:

root@e05a311b401e:/# stress -m 4 --vm-bytes 8G

You will see a "FAILED" notification from the stress output. If you tail the syslog on the hosting machine, you will see the kernel's out-of-memory killer terminating the offending processes.

A flaw may have been fixed long ago upstream, but not in your local image. This kind of problem can go unnoticed for a long time if you don't take the appropriate measures.
• Use a vulnerability scanner. There are plenty out there, both free and commercial. Try to stay up to date on the security issues of the software you use by subscribing to mailing lists, alert services, etc.
• Integrate the vulnerability scanner as a mandatory step of your CI/CD pipeline and automate where possible; don't rely on manual checks alone.

Example: There are multiple Docker image registry services that offer image scanning. For this example, we decided to use CoreOS Quay, which uses the open-source Docker security image scanner Clair. While Quay is a commercial platform, some services are free to use. You can create a personal trial account by following these instructions.

Now, from the command line, we log into the Quay registry and push a local image:

# docker login quay.io
# docker push quay.io/<your_quay_user>/<your_quay_image>:<tag>

Once the image is uploaded into the repo, you can click on its ID and inspect the image security scan, ordered by severity, with the associated CVE report link and upstream patched package versions.

If you use Kubernetes, you can combine one of these tools with kube-bench. That way, you cover not only the container side of things but Kubernetes configuration security as well.

READ-ONLY MODE FOR FILESYSTEM AND VOLUMES

You should enable read-only mode to restrict usage of the filesystem and volumes. To achieve this, you can use the --read-only flag. For example:

docker run --read-only back-end

You can also temporarily enable write access if your app needs to save some data. Use the --tmpfs flag to mount a writable in-memory filesystem:

docker run --read-only --tmpfs /tmp back-end

CORE PRACTICES:

• Do not use runtime protection as a replacement for any other static, up-front security practices: Attack prevention is always preferable to attack detection. Use it as an extra layer of peace of mind.
• Having generous logs and events from your services and hosts (correctly stored, easily searchable, and correlated with any change you make) will help a lot when you have to do a post-mortem analysis.
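The logging practice above can start with Docker's own logging options. A sketch; the driver, size limits, and image are illustrative defaults rather than values from the original text:

```shell
# Keep container logs bounded and machine-readable so they can later be
# shipped to a searchable store for correlation and post-mortems.
docker run -d \
  --log-driver=json-file \
  --log-opt max-size=10m \
  --log-opt max-file=5 \
  nginx:alpine

# Inspect recent log lines with timestamps for correlation.
docker logs --timestamps --tail 100 <container-id>
```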
There are several tools that can help you to prevent this issue. For example, Trivy scans container images, filesystems, and IaC definitions (e.g., Terraform) for vulnerabilities. Trivy is quite easy to use: install the binary and run the following command:

$ trivy image back-end-app:1.2.2

To install the prerequisites on a Debian-based system, perform the following actions:

sudo apt-get update
sudo apt-get install curl apt-transport-https lsb-release gnupg2

You can find additional detail in Lab 2, "Push the example container…".

Azure has a service called Azure Defender. It can scan containers in the Azure Container Registry and in Azure Kubernetes Service. Azure Defender for Docker containers in the Azure registries has several triggers; for example, containers can be scanned when Docker images are pushed to your registry. Below, you can see an image with the Azure Defender analysis report.

DETECTING AND BLOCKING ATTACKS

The example setup uses the following components:

• Kubernetes
• Infrastructure as Code
• Network

First, you should install Minikube (or use cloud services like AKS) and Istio. Then follow this step-by-step guide from the IBM example repository.
INCIDENT RESPONSE

Incident response becomes harder in container environments because of their ephemeral nature. A bad actor can delete a container after an attack to remove all traces of any file access or network activity.

POST-MORTEM ANALYSIS

CORE PRACTICES:

• Capture network, file, system call, and user data from inside the container around any policy violation.
• Set up additional security policies with the admission controller.

CONCLUSION
In this Refcard, we've walked through the core practices of security before containers even enter production environments, all the way up to the more complex analysis of a rootkit installation. We've used components baked into the Docker runtime, as well as open-source tools like Quay, Forensics-toolkit, and Istio, to analyze real-world use cases such as greedy containers or mapping network communication.

As you can see, Docker security can start very simply yet grow more complex as you take containers into production. Get experience early, and then grow your security sophistication to what your environment requires.

Copyright © 2022 DZone, Inc. All rights reserved. No part of this publication may be reproduced, stored in a retrieval system, or transmitted, in any form or by means of electronic, mechanical, photocopying, or otherwise, without prior written permission of the publisher.