By Eric Gregory
Copyright © 2022 Eric Gregory
All Rights Reserved
First edition
Welcome!
Table of Contents
Chapter 1: What is a Container?
developers, as well as a canvas that is easy to standardize
across an organization.
What is Docker?
How to install Docker
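Installation steps vary by platform, but once Docker is installed, a quick smoke test confirms the engine is running. A minimal sketch, using Docker's official test image:

```shell
docker run hello-world
```

If the daemon is up, Docker pulls the image (on first run) and prints a short greeting explaining what just happened.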
That’s it for now. In the next chapter, we’ll start working with
Docker to create, observe, and delete containers.
Chapter 2: Creating, Observing, and Deleting
Containers
Unable to find image 'alpine:latest' locally
latest: Pulling from library/alpine
59bf1c3509f3: Already exists
Digest: sha256:21a3deaa0d32a8057914f36584b5288d2e5ecc984380bc0118285c70fa8c9300
Status: Downloaded newer image for alpine:latest
Hello World
Observing containers
Docker Desktop if we’re on Mac or Windows, but it’s useful to
be able to bring up this information quickly in the terminal.
% docker container ls -a
Since we used the -a flag, this will list all our containers,
whether they are running or not—we should see both our
echo and our ping containers, along with some useful
information such as their names and container IDs. Make a
note of the ping container’s ID. Mine, for example, is
5480aa85d1c5.
% docker container ls
ID, we’ll get information about the processes running within
the container from the perspective of the host system:
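The command that produces this view isn't shown in this excerpt; it is most likely docker top, run against the container ID noted earlier (substitute your own):

```shell
docker top 5480aa85d1c5
```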
The ping running as PID 1 within the container is the very same process as the one listed as PID 3509 on the wider system, but because the container has its own isolated kernel namespaces, processes inside it can't see the wider world outside.
Removing containers
Now let’s clean up. We can stop our running container (again,
replacing the numeric ID with your own) using:
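A sketch of the cleanup sequence, assuming the ping container's ID from earlier: stop the container, then remove it.

```shell
docker container stop 5480aa85d1c5
docker container rm 5480aa85d1c5
```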
Chapter 3: Building Container Images from
Dockerfiles
... and much more besides. As we’ll see over the course of this
book, the modularity fostered by container images can
transform the way you develop and deploy software.
Let’s see what this looks like in action. Make sure your
container engine is running, and then bring up the terminal.
Try entering the following command:
FROM alpine:latest
RUN apk update
RUN apk add curl
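Between writing the Dockerfile and listing images comes the build step. A sketch, assuming the Dockerfile above sits in the current directory and we want the image named curler:

```shell
docker build -t curler .
```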
% docker image ls
…we should see it in our image listing. The images listed will
vary depending on your environment, but you should see
curler among them, and the output should look something
like this:
REPOSITORY TAG IMAGE ID CREATED
curler latest a46b2fdd95c9 1 minute ago
nginx latest 605c77e624dd 6 weeks ago
alpine latest c059bfaa849c 2 months ago
...
<HTML><HEAD><meta http-equiv="content-type"
content="text/html;charset=utf-8">
<TITLE>301 Moved</TITLE></HEAD><BODY>
<H1>301 Moved</H1>
The document has moved
<A HREF="http://www.google.com/">here</A>.
</BODY></HTML>
Next time, we’ll dive deeper into the use and management of
images.
Chapter 4: Using an Image Registry
In our first chapter, we noted that Docker Hub isn’t the only
container image registry. There are public registries managed
by other entities, and it is also possible for organizations to
create private registries using tools like Mirantis Secure
Registry. Anyone establishing a secure software supply chain needs to trust the provenance and contents of their container images, and should therefore use a private registry of some form.
Docker Hub also provides another label: Verified Publisher. Docker has confirmed that images with this label are published and maintained by the entities that produced them. For example, users of Amazon Web Services (AWS) can
download a container image for their command-line interface
(CLI) that is confirmed to come from Amazon.
% docker login
% docker run -it python bash
% apt update
% apt install nano
% nano d6.py
#import module
from random import randint
#roll the die and print the result
print(randint(1, 6))
% python d6.py
2
% docker container ls
% docker image ls
The top of your image listing should look something like this:
Let’s take that one step further and upload our new image to
Docker Hub. We’ll start by tagging our image for upload, and
then push it online:
The command line output will show you each of the layers
being pushed:
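The commands themselves fall outside this excerpt; a sketch of the commit-tag-push sequence, with <container ID> and <Your Docker ID> as placeholders:

```shell
docker commit <container ID> d6
docker tag d6 <Your Docker ID>/d6
docker push <Your Docker ID>/d6
```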
The push refers to repository [docker.io/<Your Docker ID>/d6]
e06d6e649287: Pushed
51a0aba3d0a4: Mounted from library/python
e14403cd4d18: Mounted from library/python
8a8d6e9f7282: Mounted from library/python
...
Chapter 5: Volumes and Persistent Storage
Now let’s try creating a volume and mounting it to multiple
containers.
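A sketch of that workflow, assuming the d6 image from last chapter and mounting the volume at /d6app, as the rest of the chapter does:

```shell
docker volume create d6app
docker run -it -v d6app:/d6app d6 bash
```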
% ls
bin boot d6.py d6app dev etc home lib
lib64 media mnt opt proc root run sbin
srv sys tmp usr var
% cd d6app
% ls
<nothing>
While we’re here, we’ll create an empty text file:
% touch d6log.txt
Now we'll go back into the container’s root directory and open
the Python app we wrote last chapter. (We won’t need to
download nano this time—it’s part of the image now.)
% cd ..
% nano d6.py
Let’s add some functionality to our die-rolling app. It’s all well
and good to get randomized die rolls, but wouldn’t it be nice if
we could record those rolls, so we can keep a log of our
amazing (or terrible) luck? Update the d6.py file contents to
look like this…
#import module
from random import randint
When we run d6.py now, the program should open the text
file in the d6app volume, convert the randomized output to a
string, and record that string (and a line break) for posterity.
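Here is a minimal sketch of the updated script matching that description. The log path assumes the d6app volume is mounted at /d6app (written as a relative path from the container's root working directory, with a makedirs guard so the sketch also runs outside the container):

```python
#import module
from random import randint
import os

# ensure the log directory exists (inside the container, the d6app
# volume mount provides it; this guard lets the sketch run anywhere)
os.makedirs("d6app", exist_ok=True)

# roll the die, append the result to the log, and print it
roll = randint(1, 6)
with open("d6app/d6log.txt", "a") as f:
    f.write(str(roll) + "\n")
print(roll)
```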
Let’s test it out.
% python d6.py
5
% nano d6app/d6log.txt
Try running the app a few more times and check the file
again.
But what happens when we stop the container? Well, first
let’s commit and push this updated version of the d6 app to
Docker Hub. With the container still running, open another
terminal session and enter:
Now we’re going to start a new container with nearly the same
command that we used at the beginning of this chapter:
Read-only?
This is a good place for us to pause and note that not all
containers need to mount volumes with read-write access,
and as a general rule, we don’t want to give containers any
more privileges than they require. To mount a volume with
read-only access, we simply append :ro to the name of the
volume directory within the container. For the command
above, this would look like: d6app:/d6app:ro
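Put together, a read-only mount of the same volume might look like this (a sketch, with the image name as before):

```shell
docker run -it -v d6app:/d6app:ro d6 bash
```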
% nano d6app/d6log.txt
% python d6.py
% nano /d6app/d6log.txt
Chapter 6: Container Networking and
Opening Container Ports
What is container networking?
172.17.0.2
Containers’ IP addresses are created on a local subnet. That means that, initially, the assigned IP addresses will only “make sense” to one another: you can’t reach them—using those addresses, at least—from another machine, or even from the host machine.
Wait—I want to create my own container network!
172.17.0.2:8000
% docker container ls
Hmm. Well, what if we look up the IP address of the
container and try to access it that way?
Your browser will try to load the address, but to no avail.
can connect to external networks through virtual ports, a
similar idea is in play here.
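Publishing works via the -p flag, which maps a host port to a container port. A sketch, with my-app standing in as a hypothetical name for the application image used in this chapter:

```shell
docker run -d -p 8000:8000 my-app
```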
Success! Now we can access the containerized application on
our host machine. From here, we could serve it to the outside
world with the right configuration, though we would likely
use a system like a container orchestrator to deploy our apps
to production. We will discuss those systems in more detail in
the final chapter.
Chapter 7: Running a Containerized App
Choose your language and click Continue.
On the database configuration screen, choose SQLite and
specify the directory linked to our volume for the data
directory—in this case: wiki-data/
Now you’ll need to name the wiki and create a username and
password for the administrator account. From here, you can
go ahead and complete the installation. You should get a
congratulations screen which will automatically download a
file called LocalSettings.php to your machine. If it doesn’t
do this automatically, click the link.
This file takes many of the settings you’ve chosen—the
database type, server, and credentials, for example—and
associates them with variables used by the larger app.
root@e9537d7c33e3:/var/www/html# ls -1a
.
..
CODE_OF_CONDUCT.md
COPYING
CREDITS
FAQ
HISTORY
INSTALL
README.md
RELEASE-NOTES-1.37
SECURITY
UPGRADE
api.php
autoload.php
cache
composer.json
composer.local.json-sample
docs
extensions
images
img_auth.php
includes
index.php
jsduck.json
languages
load.php
maintenance
mw-config
opensearch_desc.php
resources
rest.php
skins
tests
thumb.php
thumb_handler.php
vendor
wiki-data
% touch LocalSettings.php
Download nano:
% apt update
% apt install nano
And paste in the contents. Once we’ve saved the file with
CTRL+O and exited to the shell with CTRL+X, we can delete
nano with apt remove nano. (This is very assiduous optimization of our image’s footprint, but it’s a good habit, and even small differences add up at scale.)
The only thing that will be different this time is that we’re
creating our new container from the solo-wiki base image
we just committed, with a LocalSettings.php file
included–and our volume already has configuration data
inside.
I recommend logging in with your administrator account and
making changes to the main page. If you stop and restart the
container…
Chapter 8: Multi-Container Apps on
User-Defined Networks
User-defined networks
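A sketch of creating a user-defined network and attaching a database container to it, consistent with the wiki-mysql container name and root password used later in this chapter (the network name wiki-net is an assumption):

```shell
docker network create wiki-net
docker run --name wiki-mysql --network wiki-net -e MYSQL_ROOT_PASSWORD=root -d mysql
```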
What about linking?
That’s a pretty hefty docker run command, so let’s break it
down. We’re…
% docker run --name nginx-test -d nginx
675eeead7df8d23fbb388826c58403223fd64cf21b9d44917dfb38091d1b6e7f
% docker inspect --format='{{range .NetworkSettings.Networks}}{{.IPAddress}}{{end}}' nginx-test
172.17.0.2
% docker inspect --format='{{range .NetworkSettings.Networks}}{{.IPAddress}}{{end}}' wiki-mysql
172.19.0.2
Move forward to the Connect to database screen just like
you did previously, and now select “MariaDB, MySQL, or
compatible.”
For Database host, you can simply enter the name of the
database container—in this case, wiki-mysql. If these
containers are restarted, they’ll still be able to interact with
one another as configured, even if those future instances have
different IP addresses.
For Database name, you can choose any name. You don’t
need to enter anything for the table prefix, and for the
Database password, you’ll enter the password we set via
environment variable (same as username, “root”) when we
created the database container.
On the next screen, click Continue.
Finalize your administrator information, and then go through
the installation process.
If we want to deploy this app repeatedly or at scale, we might
wish to go through a little less manual configuration. In our
final chapter, we’ll learn how to streamline the deployment of
multi-container applications.
Chapter 9: Docker Compose
# MediaWiki with MySQL
#
version: '3'
services:
  mediawiki:
    image: mediawiki
    restart: always
    ports:
      - 8000:80
    volumes:
      - /var/www/html/images
  database:
    image: mysql
    restart: always
    environment:
      MYSQL_ROOT_PASSWORD: root
% docker-compose -f wiki.yml up
% docker-compose -f wiki.yml up
To recap…
Chapter 10: Building a Web App as
Containerized Services
Since we’re bringing all the pieces together and building our
own app, this chapter will take a little longer than five
minutes—but likely no more than thirty. So settle in with a
coffee (or refreshment of your choice) and let’s get started.
Network, volume, and database
● The --network argument specifies that the container
is going to use our user-defined test-net network.
% mkdir test-app
● The -w argument defines a working directory inside
the container–so that’s where we’ll land when we run
bash.
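Assembled, the command these bullets describe might look like this (a sketch; the node base image and the /usr/src/app path are assumptions consistent with the Compose file at the end of the chapter):

```shell
docker run -it --network test-net -v "$(pwd)":/usr/src/app -w /usr/src/app node bash
```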
Inside the container, we can use ls, and we should see our
test-app directory. This is our actual directory, not a copy,
so anything we do here will be reflected on the host machine.
Let’s hop into our app directory…
% cd test-app
% npm init -y
That’s it for our initial setup. We can type exit to exit out of
the container shell session. If we check the project directories
through our host system, we’ll see all the new files we’ve
created.
% cd test-app
% touch index.js
// Dependencies
const mysql = require("mysql"); // assumption: the Node mysql driver; use mysql2 if that's what you installed

const db = mysql.createConnection({
  host: "test-mysql",
  user: "root",
  password: "octoberfest",
});

db.connect((err) => {
  if (err) {
    console.log("Error!", err);
  } else {
    dbStatus = "Connected to MySQL";
    console.log(`${dbStatus}`);
  }
});
Note: we’re able to use the hostname of the test-mysql
container, and it will resolve just fine via DNS. That’s a nice
model that we can scale pretty easily when deploying to a
container orchestrator. Make sure to be careful of your
passwords, though–we’re keeping things simple here for the
sake of a quick walkthrough, but in practice you’ll want to
make sure that sensitive passwords aren’t hanging out in the
open on unsecured git repositories. In real-world
deployments, you should use the Secrets functionality of your
container orchestrator.
Everything is working nicely–the containerized app
connected with the containerized database. But we want to
take our app template a little further and bring in the
front-end. So in another terminal tab, we’ll stop the test-app
container.
Let’s add a little more logic to our back-end now. We’ll simply
add the code below after what we’ve written previously in
index.js:
// Express config
app.listen(PORT, () => {
  console.log(`Server listening on port ${PORT}`);
});
Let’s save these updates, and then we’ll run the back-end on a
container again, this time with the detached flag and some
port mapping.
Since we’re mapping the container’s port 3001 to localhost,
we should be able to check out our API at localhost:3001/api/
Perfect. Everything is running, and as it says here on the
default landing page, we’ll want to edit App.js in the source
folder in the client directory. So let’s open that and make a
few changes. Delete the contents of the file and replace them
with the code below:
import React from "react";
import logo from "./logo.svg";
import "./App.css";

function App() {
  const [data, setData] = React.useState(null);

  React.useEffect(() => {
    fetch("/api")
      .then((res) => res.json())
      .then((data) => setData(data.message));
  }, []);

  return (
    <div className="App">
      <header className="App-header">
        <img src={logo} className="App-logo" alt="logo" />
        <p>{!data ? "Checking connection..." : data}</p>
      </header>
    </div>
  );
}

export default App;
then pass it on to the front page. But there’s one wrinkle–there’s no API running here. This is a dedicated front-end service. To deal with that, we’ll open the package.json file for the client and add a line establishing a proxy to test-app on port 3001.
"proxy": "http://test-app:3001"
With that, our pieces are all in place. Let’s save and run the
client container again.
of quality of life improvements we might want to add. But
there’s one major efficiency we should definitely talk about,
and that’s Docker Compose. You’re not going to want to have
to launch all of these services independently with a bunch of
unwieldy arguments every time you work on your app.
Fortunately, we can create a Docker Compose file that does all
of that for us.
services:
  test-app:
    image: test-app
    hostname: test-app
    networks:
      - test-net
    expose:
      - "3001"
    ports:
      - "3001:3001"
    volumes:
      - ./test-app:/usr/src/app
    working_dir: /usr/src/app
    command: node index.js
  test-client:
    image: test-client
    hostname: test-client
    networks:
      - test-net
    expose:
      - "3000"
    ports:
      - "3000:3000"
    volumes:
      - ./test-client:/usr/src/app
    working_dir: /usr/src/app
    command: npm start
  test-mysql:
    image: test-mysql
    hostname: test-mysql
    networks:
      - test-net
    restart: always
    volumes:
      - test-data:/var/lib/mysql
    environment:
      MYSQL_ROOT_PASSWORD: octoberfest
volumes:
  test-data:
    external: true
    name: test-data
networks:
  test-net:
    external: true
    name: test-net
% docker-compose up
Where do we go next?
● Docker Compose: a Docker tool that simplifies
deployment of multi-container apps, also well-suited
to development environments on one host