
 

Docker 
Friday, May 11th, 2018 

 
 
I believe that my total number of hours working with Docker has recently surpassed 600, both at work and just having fun with Docker at home.
 
In my free time - when I'm not elbow deep in our own code - I am currently working on my own build of a Minecraft server, as well as a fork of a containerized NES emulator
from the supreme overlord of containerizing anything, Jessie Frazelle.
 
When I first interviewed for my internship I was asked for a brief explanation of Docker. I said, "Think about going on a trip. What do you need and what can you not afford to
forget in the packing process? Toothbrush, comb, clothes, etc. Now think about your horror when you get to your hotel and there is nothing but an empty suitcase. Even worse,
think about how you would react if you forgot the charger for your phone or laptop but only brought the cable. If you had your mom packing for you, none of that would
happen. Simply put, Docker is mom. Docker will allow us to take all of what an application needs to run right out of the box (container in this case) without issues." 
 
 
Overview 
Docker is a computer program that performs operating-system-level virtualization, also known as containerization. It is developed by Docker, Inc.
 
Docker is primarily developed for Linux, where it uses the resource isolation features of the Linux kernel such as cgroups (control groups) and kernel namespaces [2], and a
union-capable file system [1] such as OverlayFS [3] and others, to allow independent "containers" to run within a single Linux instance, avoiding the overhead of starting and
maintaining virtual machines (VMs). The Linux kernel's support for namespaces mostly isolates an application's view of the operating environment, including process trees,
network, user IDs and mounted file systems, while the kernel's cgroups provide resource limiting for memory and CPU.
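 
To make that concrete, here is a quick sketch you can try from any shell (standard docker run flags; the alpine image is just a convenient, tiny example):
 
    # PID namespace: inside the container, ps sees only the container's own processes.
    $ docker run --rm alpine ps aux
 
    # cgroups: cap the container at 256 MB of RAM and half a CPU core.
    $ docker run --rm --memory=256m --cpus=0.5 alpine echo "hello, resource limits"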
 
A very limited Windows version of Docker is also available -- hence the reason we don't use Windows for anything other than a scapegoat to tease and make fun of.
 
"To understand is to know" 
To know about Docker means we have a responsibility to know where and how it began. 
 
History 
 
Solomon Hykes (the master of Linux containers - seriously, I challenge you to watch a video of this guy and then try and tell me that his brain isn't incredible!) started Docker in
France as an internal project within dotCloud, a platform-as-a-service company, with initial contributions by other dotCloud engineers including Andrea Luzzardi and Francois-
Xavier Bourlet. Docker represents an evolution of dotCloud's proprietary technology, which is itself built on earlier open-source projects such as Cloudlets. 
The software debuted to the public in Santa Clara at PyCon in 2013. 
 
Docker was released as open source in March 2013. On March 13, 2014, with the release of version 0.9, Docker dropped LXC as the default execution environment and replaced
it with its own libcontainer library written in the Go programming language. 
 
A January 2017 analysis of LinkedIn profile mentions showed Docker presence grew by 160% in 2016. The software has been downloaded more than 13 billion times as of
2017. 
 
Operation 
 

 
 
[Figure: Docker can use different interfaces to access virtualization features of the Linux kernel.]
 
As actions are done to a Docker base image, union file-system layers are created and documented, such that each layer fully describes how to recreate an action. This strategy
enables Docker's lightweight images, as only layer updates need to be propagated (compared to full VMs, for example). 
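 
You can see these layers on any image you have pulled (standard CLI commands; nginx is just an example image):
 
    # Each line of output is one recorded layer / build instruction.
    $ docker history nginx
 
    # Or dump the raw layer digests that make up the image.
    $ docker image inspect --format '{{json .RootFS.Layers}}' nginx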
 
According to a Linux.com article: 
 
Docker is a tool that can package an application and its dependencies in a virtual container that can run on any Linux server. This helps enable flexibility and portability on where
the application can run, whether on premises, public cloud, private cloud, bare metal, etc. 
 
 
[1] Union mounting is a way of combining multiple directories into one that appears to contain their combined contents.

[2] Namespaces are a feature of the Linux kernel that partition kernel resources such that one set of processes sees one set of resources while another set of processes sees a
different set of resources.

[3] OverlayFS is a union mount filesystem implementation for Linux.
 
 
Docker is one of the first tools within our field that I legitimately fell in love with. There is so much you can do with a container! From creating a home server all the way to
disguising or hiding your online presence. It gives access to Linux servers to people who usually would not have it. Specific to us, we do all our testing locally, so we are fully
dependent on Docker to containerize Mongo, ElasticSearch, S4, among others. The beauty of Docker is that, since it is built layer by layer, a user can always revert to the last
working state of an application (granted they have committed their container image up to that point!).
 
Take me for example:
The term "Pulling a Jeff" did not come from scoring "own-goals" in foosball, but instead from using Docker to fully break my system through experimentation. On more
than one occasion. But from "pulling a Jeff" came what our boss likes to refer to as "opportunities". Trust me, as the resident "Docker Guy", I've had days of nothing but
"opportunities". Out of that, I have developed a system to use Docker as quickly and efficiently as possible. I have made myself countless cheat sheets and alias files to speed
up my own workflow.
________________________________________________________________________________________________________________________________________________
 
USING DOCKER: 
 
Before anything, Docker needs root access to run properly! 
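 
If you would rather not prefix every command with sudo, you can add your user to the docker group and log back in. Just know that docker-group membership is effectively
root access, so treat it accordingly:
 
- $ sudo usermod -aG docker $USER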
 
To run the Kadira container from its image, use [docker-compose up]
 
To run the meteor app, cd into proper directory and type [meteor] 
 
To attach to a running container, use [docker attach <CONTAINER_ID>]
- Example for docker attach:
- $ sudo docker attach 665b4a1e17b6 #by ID
OR
- $ sudo docker attach loving_heisenberg #by Name
- be careful when leaving the container: Ctrl-C will kill the container process. To detach without killing it, use Ctrl-P followed by Ctrl-Q.
- to get an interactive shell (bash) inside the container instead, use docker exec:
- $ sudo docker exec -i -t 665b4a1e17b6 /bin/bash #by ID
OR
- $ sudo docker exec -i -t loving_heisenberg /bin/bash #by Name
 
If you wish to run a container with port forwarding, you must give it a name each time you run it.
- Example:
- $ sudo docker run --name kadira-nginx -p 80:80 nginx
OR
- $ sudo docker run --name k3-nginx -p 80:80 nginx
- with this method you will need to give a new name each time you launch a container from the same image (see the cleanup tip below).
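 
Since container names must be unique, re-running with the same name will fail until the old container is gone. A quick way to free up a name (standard docker rm usage):
- $ sudo docker rm -f kadira-nginx  #force-stops and removes the old container
- $ sudo docker run --name kadira-nginx -p 80:80 nginx  #the name is free again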
 
If you are trying to figure out a container's origin but it has no user-given name,
use docker history to scan through its build commands.
- Example 1: 
- $ sudo docker history 31f92e1e07b3 (will return the following) 
IMAGE               CREATED             CREATED BY                                      SIZE
31f92e1e07b3        25 hours ago        /bin/sh -c #(nop)  ENTRYPOINT ["/bin/sh" "...   0B                   
485abebe2806        25 hours ago        /bin/sh -c #(nop)  EXPOSE 27017/tcp             0B                   
ee8214a1a977        25 hours ago        /bin/sh -c mkdir -p /data/db                    0B                   
fa9174ca2aa7        25 hours ago        /bin/sh -c apt-get update && apt-get insta...   326MB                
cc75a3fcf62f        25 hours ago        /bin/sh -c echo 'deb http://downloads-dist...   71B                  
928cee99ef20        25 hours ago        /bin/sh -c apt-key adv --keyserver hkp://k...   26.3kB               
a728f34e1621        25 hours ago        /bin/sh -c #(nop)  MAINTAINER Jeffrey D'Si...   0B                   
14f60031763d        13 days ago         /bin/sh -c #(nop)  CMD ["/bin/bash"]            0B                   
<missing>           13 days ago         /bin/sh -c mkdir -p /run/systemd && echo '...   7B                   
<missing>           13 days ago         /bin/sh -c sed -i 's/^#\s*\(deb.*universe\...   2.76kB               
<missing>           13 days ago         /bin/sh -c rm -rf /var/lib/apt/lists/*          0B                   
<missing>           13 days ago         /bin/sh -c set -xe   && echo '#!/bin/sh' >...   745B                 
<missing>           13 days ago         /bin/sh -c #(nop) ADD file:96db69a1ba6c80f...   120MB     
 
- if you wish to see more, add --no-trunc after history 
- Example:  
- $ sudo docker history --no-trunc 31f92e1e07b3 
 
 
*** when using [docker ps] or [docker ps -a], add --no-trunc to see the full, untruncated output (complete container IDs and commands) ***
- Example: docker ps -a --no-trunc
 
CLEARING PORTS 
 
If you run into errors regarding ports already in use, this will allow you to find and clear the ports:
 
- Example:
- $ sudo lsof -t -i:<PORT>
- this will return the PID using the port
- then, if it is safe to kill, type:
- $ sudo kill $(sudo lsof -t -i:<PORT>)
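 
To make this a one-liner, the two steps can be wrapped in a small shell function (a convenience sketch; the name freeport is made up):
 
- freeport() { sudo kill $(sudo lsof -t -i:"$1"); }  #put this in your .bashrc
- Example: freeport 27017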
 
 
 
________________________________________________________________________________________________________________________________________________
 
Package software into standardized units for development, shipment and deployment 
A container image is a lightweight, stand-alone, executable package of a piece of software that includes everything needed to run it: code, runtime, system tools, system
libraries, settings. Available for both Linux and Windows based apps, containerized software will always run the same, regardless of the environment. Containers isolate software
from its surroundings (for example, differences between development and staging environments) and help reduce conflicts between teams running different software on the
same infrastructure.
 
 
 
 
To speed up my process and general workflow with Docker, I have created an alias file, which I am going to share with you in hopes that it will help you too. Aliasing
many of Docker's long, verbose commands has become as invaluable to me as the contact book on a smartphone. Why would I spend even five minutes trying to type out a
string - never mind how much we as devs have to remember already - when I can literally type 3 to 5 letters in my console and have everything spin up for me?
 
 
# BASH ALIASES 
 
# Docker run container command. Container will run in the foreground. Keep bash shell open to receive logs. 
alias drun='docker run' 
 
# Docker run container in background (daemonized). 
alias drd='docker run -d' 
 
# Remove Docker image. 
alias dri='docker rmi -f' 
 
# List all Docker images on local machine. 
alias dim='docker images' 
 
# List all running Docker containers. 
alias dps='docker ps' 
 
# Exec into a Docker container.
# Usage: dex <container ID or name> /bin/bash (or /bin/sh)
alias dex='docker exec -it '
 
# Safely stop a Docker container 
alias ds='docker stop' 
 
# Kill Docker container. 
alias dk='docker kill' 
 
# Docker-compose command for any docker-compose.yml file. 
alias dcu='docker-compose up' 
 
# Docker-compose down command. This will clean up created network stack. 
alias dcd='docker-compose down' 
 
# List all Docker networks. 
alias dnet='docker network ls' 
 
# Inspect docker network. >>> "dni <network name>" 
alias dni='docker network inspect' 
 
# Remove Docker network. >>> "dnr <network name>" 
alias dnr='docker network rm' 
 
# Display the PID of a running container. >>> "dpid <container name or ID>"
# (arguments are appended after an alias automatically, so no "$@" is needed)
alias dpid="docker inspect --format '{{ .State.Pid }}'"
 
# Display the IP address of a running container. >>> "dip <container name or ID>" 
alias dip="docker inspect --format '{{ .NetworkSettings.IPAddress }}'" 
 
# Strip Dockerfile from pre-built Docker image. 
alias dfimage='docker run -v /var/run/docker.sock:/var/run/docker.sock --rm chenzj/dfimage' 
 
# Create portainer volume. Step 1 of 2.. 
alias p1='docker volume create portainer_data' 
 
# Run command to launch Portainer, using the volume from step 1. Step 2 of 2. Navigate to http://localhost:9000
alias p2='docker run -d -p 9000:9000 --restart always -v /var/run/docker.sock:/var/run/docker.sock -v portainer_data:/data portainer/portainer'
 
# Remove all stopped (exited) containers.
alias rmstop='docker rm $(docker ps -aq -f status=exited)'
 
# Remove all dangling images not associated with running containers.
# (defined with double quotes and escaped \$ so awk's $3 survives until the alias actually runs)
alias rmuti="docker rmi -f \$(docker images | grep '^<none>' | awk '{print \$3}')"
 
# cd and start ElasticSearch locally 
alias es-local='cd elasticsearch-5.6.8 && ./bin/elasticsearch' 
 
# Display the current number of running Mongo instances. 
alias mo?="ps -ef | grep mongod | grep -v grep | wc -l | tr -d ' '" 
 
############## 
 
# Run Shore-Mongo 
alias met-mongo='meteor npm run mongo' 
 
# Run Shore-Es 
alias met-es='meteor npm run elasticsearch' 
 
# Run Oplog-Monitor 
alias met-op='meteor npm run oplog-monitor' 
 
# Run S4 
alias met-s4='meteor npm run s4' 
 
##################################################################################################################### 
 
# Run mongo, elasticsearch, and s4 containers, from any directory then display to confirm. 
alias mongo-es4='cd ~/shore && meteor npm run mongo && meteor npm run elasticsearch && meteor npm run s4 && dps' 
 
# Run Meteor Shore from any directory 
alias run-shore='cd ~/shore && meteor npm start' 
 
# Launch all 3 images as containers then start meteor. 
alias all-shore=' cd ~/shore && meteor npm run mongo && meteor npm run elasticsearch && meteor npm run s4 && dps && meteor npm start' 
 
# Display current ip 
alias myIp='ifconfig | grep 192' 
 
##################################################################################################################### 
 
 
 
https://www.infoworld.com/article/3204171/linux/what-is-docker-linux-containers-explained.html 
 
 
 
 
Docker Cheat Sheet 
 
 
Introduction 
 
Docker makes it easy to wrap your applications and services in containers so you can run them anywhere. As you work with Docker, however, it's also easy to accumulate an
excessive number of unused images, containers, and data volumes that clutter the output and consume disk space. 
 
Docker doesn't provide direct cleanup commands, but it does give you all the tools you need to clean up your system from the command line. This cheat sheet-style guide
provides a quick reference to commands that are useful for freeing disk space and keeping your system organized by removing unused Docker images, containers, and volumes. 
 
How to Use This Guide: 
 
    This guide is in cheat sheet format with self-contained command-line snippets 
    Jump to any section that is relevant to the task you are trying to complete. 
 
The command substitution syntax, $(command), used in these commands is available in many popular shells such as bash, zsh, and Windows PowerShell.
Removing Docker Images 
Remove one or more specific images 
 
Use the docker images command with the -a flag to locate the ID of the images you want to remove. This will show you every image, including intermediate image layers. When
you've located the images you want to delete, you can pass their ID or tag to docker rmi: 
 
 
List: 
 
    docker images -a 
 
Remove: 
 
    docker rmi <imageID> 
 
Remove dangling images 
 
Docker images consist of multiple layers. Dangling images are layers that have no relationship to any tagged images. They no longer serve a purpose and consume disk space.
They can be located by adding the filter flag, -f with a value of dangling=true to the docker images command. When you're sure you want to delete them, you can add the -q flag,
then pass their ID to docker rmi: 
 
Note: If you build an image without tagging it, the image will appear on the list of dangling images because it has no association with a tagged image. You can avoid this situation
by providing a tag when you build, and you can retroactively tag an image with the docker tag command, as shown below.
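 
For example (standard docker build / docker tag usage; myapp:1.0 is a made-up name):
 
    docker build -t myapp:1.0 .
 
    docker tag <imageID> myapp:1.0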
 
List: 
 
    docker images -f dangling=true 
 
Remove: 
 
    docker rmi $(docker images -f dangling=true -q) 
 
Removing images according to a pattern 
 
You can find all the images that match a pattern using a combination of docker images and grep. Once you're satisfied, you can delete them by using awk to pass the IDs to
docker rmi. Note that these utilities are not supplied by Docker and are not necessarily available on all systems: 
 
List: 
 
    docker images -a | grep "pattern"
 
Remove: 
 
    docker images -a | grep "pattern" | awk '{print $3}' | xargs docker rmi
 
Remove all images 
 
All the Docker images on a system can be listed by adding -a to the docker images command. Once you're sure you want to delete them all, you can add the -q flag to pass the
Image ID to docker rmi: 
 
List: 
 
    docker images -a 
 
Remove: 
 
    docker rmi $(docker images -a -q) 
 
Removing Containers 
Remove one or more specific containers 
 
Use the docker ps command with the -a flag to locate the name or ID of the containers you want to remove: 
 
List: 
 
    docker ps -a 
 
Remove: 
 
    docker rm ID_or_Name ID_or_Name 
 
Remove a container upon exit 
 
If you know when you’re creating a container that you won’t want to keep it around once you’re done, you can run docker run --rm to automatically delete it when it exits. 
 
Run and Remove: 
 
    docker run --rm image_name 
 
Remove all exited containers 
 
You can locate containers using docker ps -a and filter them by their status: created, restarting, running, paused, or exited. To review the list of exited containers, use the -f flag
to filter based on status. When you've verified you want to remove those containers, use -q to pass the IDs to the docker rm command.
 
List: 
 
    docker ps -a -f status=exited
 
Remove:
 
    docker rm $(docker ps -a -f status=exited -q)
 
________________________________________________________________________________________________________________________________________________
 
When I started here at I.W.A., the Docker documentation honestly sucked; for me it just added to the confusion. It was written with the assumption that you're
already very familiar with container-based infrastructure, it simply didn't explain much of anything, and it wasn't organized in a way that helped me find what I was
looking for.
I am using this cross-training session for your benefit: I want to highlight some of the "obvious" things that are very simple to understand once explained, but
aren't easy to discover without spending time on trial and error. Hopefully this will make moving to, and understanding, Docker a lot easier for you.
 
1. Containers 
Docker uses a very different kind of virtualization called containers. A container can be thought of as a completely self-contained machine; for all intents and purposes it has its
own OS, its own file system and anything else you would expect to find in a virtualized machine. But the catch is that a container only runs one program†. For example you may
have a MySQL server running in a container and Redis running in a separate container, as sketched below.
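 
On the command line, that split looks like this (both are official images; the MYSQL_ROOT_PASSWORD variable is required by the mysql image):
 
docker run -d --name my-mysql -e MYSQL_ROOT_PASSWORD=secret mysql
docker run -d --name my-redis redis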
Even though each container works as a self-contained OS, it does not require the same resources as a dedicated virtualized OS. Many containers can share the same physical
resources (like CPU and memory) on a single host.
† A container can actually run more than one process. However, you probably wouldn't do this unless you had a special reason to. In almost all cases it's best to run one process
or service in a single container. (The shore-oplog-monitor runs multiple processes, so I simply wrote a shell script to automate spinning them up, thus making life
easier for the ever-bug-fixing devs here at I.W.A.)
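 
For the curious, such a launcher script can be just a few lines of shell. This is a hypothetical sketch, not our actual runAll.sh:
 
#!/bin/sh
# Hypothetical runAll.sh-style launcher: background dependency first, then one foreground process.
redis-server --daemonize yes   # side service runs in the background
exec forever index.js          # foreground supervisor keeps the container alive (index.js is a stand-in name)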
 
2. Images 
Images are a snapshot of the file system; however, they are always based on another image. For example, if we took an image of a container and it was 200 MB, then installed
10 MB worth of software and took another image, the new image would only be 10 MB, because it only contains the changes since the previous base image.
The image does not contain the kernel, so it's not uncommon for images to be just a few megabytes.
Images are cached, which makes rebuilding containers very, very fast. (The uppermost layer - the top layer - is the writable layer.)
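 
A handy way to see what has landed in that writable layer is docker diff (standard CLI; A = added, C = changed, D = deleted):
 
docker diff <CONTAINER_ID>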
 
3. Stateless 
Each container can have directories (zero or more) mounted to it from the host. For example if you were running an Apache web server container you would not load the source
files onto the container itself. Rather you would mount a directory of the host operating system (containing the files for the web server) to a directory of the container, like: 
/Users/jdsilva/Development/mywebsite -> /var/www/html 
This makes the containers (and images) stateless. Containers can be restarted and images can be destroyed without affecting the application. It also makes the images much
smaller and reusable. Another advantage is that several containers can share the same mounted directory. For example, if you had several web servers serving the same files. 
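 
A minimal sketch of that Apache example using the stock httpd image (note its document root is /usr/local/apache2/htdocs rather than /var/www/html):
 
docker run -d --name mywebsite -p 8080:80 -v /Users/jdsilva/Development/mywebsite:/usr/local/apache2/htdocs httpd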
 
4. The Dockerfile 
A Dockerfile is a text file (usually held in the root of your project) that contains the steps required to build an image. This is akin to the bash script you would use to install
software or set up environment variables. A Dockerfile looks like this:
 
FROM node 
 
RUN apt-get update && \ 
    apt-get install -y xpdf-utils zip unzip antiword unrtf tesseract-ocr nano redis-server && \ 
    mkdir /src && cd /src && npm install 
 
RUN npm install -g forever 
 
WORKDIR /src 
 
ADD . /src 
COPY runAll.sh /src/ 
 
RUN chmod +x /src/runAll.sh 
RUN chmod -R +x /src/utils 
RUN chmod -R +x /src/ 
 
RUN npm install 
 
EXPOSE 6379 
ENTRYPOINT /src/runAll.sh 
 
^ this is the Dockerfile for our oplog-monitor 
 
Each line is a command. The first line is always a FROM command that specifies the base image which we build upon. Each step creates a new image, but each image only
contains the changes since the last snapshot (previous command). 
If your containers are stateless then you should be able to change the Dockerfile and rebuild the containers very quickly and easily. 
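 
You can watch the cache at work by building twice (standard docker build behavior; the tag name is just an example):
 
docker build -t oplog-monitor .   # first build: every step runs
docker build -t oplog-monitor .   # rebuild: unchanged steps print "Using cache" and are skipped almost instantly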
 
5. Multiple Containers 
It's fairly unlikely that your application will only require a single container. Usually you will have several containers for other services like a database, web service, background
tasks, etc. For this we use the docker-compose command. 
docker-compose uses a very simple YAML file to build multiple containers. Each container can have its own Dockerfile that customizes the individual container, but
docker-compose will build all the containers and put them into the same virtual network. (This can be especially helpful with AWS Fargate, as you can't always access your
containers to network them.)
 
Here is an example of a docker-compose.yml (usually in the root directory of the project) to build a redis application: 
 
version: '3'
services:
  app:
    image: lagden/cep_consulta:5.0.0
    command: ["node", "index.js"]
    environment:
      - NODE_ENV=production
      - RHOST=redis
    ports:
      - 1235:3000
    networks:
      - redis-net
    depends_on:
      - redis

  redis:
    image: redis:4.0.5-alpine
    command: ["redis-server", "--appendonly", "yes"]
    hostname: redis
    networks:
      - redis-net
    volumes:
      - redis-data:/data

networks:
  redis-net:
volumes:
  redis-data:
 
Running docker-compose up will create two containers called app and redis. They will be put into the same virtual network. 
 
6. Container Networking 
Containers that are built with docker-compose are put into the same virtual network. This can be configured however you like (from the YAML file), but there are some things to
understand:
1. Containers (if permitted) use the name of the service as the hostname. For example, when the app container wants to connect to the redis container, it uses redis as the
hostname - that is exactly what the RHOST=redis environment variable above is for. Containers can only reach each other when they share a network; here both services join
redis-net. (Older Compose files did this with an explicit links property, but user-defined networks have largely replaced it.)
2. By default the host operating system cannot access ports on a container. Since we need to reach the app from the host, we map container port 3000 to host port 1235 (the
ports: - 1235:3000 entry, in HOST:CONTAINER order). This means when we put localhost:1235 into the browser of the host operating system we can see our redis application.
A quick sanity check is sketched below.
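 
After docker-compose up, you can verify the wiring with standard commands (the network name is prefixed with your Compose project name, so adjust accordingly):
 
docker network inspect <project>_redis-net   # both containers should appear in the list
curl localhost:1235                          # reaches the app through the published port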
 
Summing Up 
This was an extremely brief overview. I hope to explore individual topics in more detail in future cross-training sessions, so stay tuned! 
 
 
 
https://derickbailey.com/2017/01/30/10-myths-about-docker-that-stop-developers-cold/ 
 
 
 
