
Docker:

Virtual Machines vs. Containers:


Containers share the host OS kernel and don't need a separate guest OS to isolate environments, while virtual machines run on hypervisors. VMs only share the hardware, whereas containers (as used by Docker) also share the OS.

What is a Container:
An executable unit of software that encapsulates an application together with everything necessary to run it (code, dependencies, libraries) and can run anywhere.

Docker launched in 2013 and popularized containers.

Vocabulary: Docker image, Docker container.

Container Runtime:

Uses of Containers:
Microservices: loosely coupled and independently deployable services.
DevOps: build, ship, and run software.
Hybrid and multi-cloud: run consistently across different environments.
Application modernization and migration.

Development of a container:
A Dockerfile serves as the blueprint for the image and outlines the steps to build it.

An image is a read-only file that contains everything needed to run an application. Images serve as templates for containers.

A container is a running image; it adds a writable layer on top of the read-only image layers.

docker run -p 8080:8080 myimage:v1


This runs our container (a Node.js app in our case) and maps port 8080. There is no need to install Node or its dependencies on the host, which is the main benefit of containerization with Docker.

This containerized Node.js app can now be deployed as a microservice.

A Dockerfile is a text file containing Docker instructions: FROM, ENV, ADD/COPY, RUN, CMD

Docker CLI: build, tag, push, pull
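
A quick sketch of these commands (the image name myimage, the tag v1, and the registry registry.example.com/mynamespace are all placeholders):

docker build -t myimage:v1 .                                        # build an image from the Dockerfile in the current directory
docker tag myimage:v1 registry.example.com/mynamespace/myimage:v1   # re-tag it for a remote registry
docker push registry.example.com/mynamespace/myimage:v1             # push the tagged image to the registry
docker pull registry.example.com/mynamespace/myimage:v1             # pull it back down on another machine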

Containerization is for cloud-native apps.
Containers are portable and platform independent.
Containers can be as small as a few megabytes.
Docker is a toolset for containerization; rkt (Rocket) and CRI-O are other container toolsets.
Kubernetes is for container orchestration.
IBM Cloud Container Registry hosts container images.

Docker Steps:
1) Dockerfile
FROM ........
COPY . .
RUN ......
EXPOSE 3000
CMD ["npm", "start"]

Question: Why do we use the FROM instruction at the top of a Dockerfile?
To specify a base image to build our application on top of; all subsequent instructions build on that base.
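
As an illustrative sketch only (assuming a Node.js app with a package.json; the base image tag, working directory, and port are placeholder choices), the skeleton above might be filled in like this:

FROM node:18-alpine        # base image: everything below builds on top of it
ENV NODE_ENV=production    # example environment variable baked into the image
WORKDIR /app               # directory inside the image where the app lives
COPY . .                   # copy the application source into the image
RUN npm install            # install dependencies inside the image
EXPOSE 3000                # document the port the app listens on
CMD ["npm", "start"]       # default command when the container starts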

2) docker build .
docker-compose.yml
version: '3'
services:
  app:                  # first service is our app
    build: ./app
    ports:
      - "3000:3000"     # bind port 3000 on the local machine to port 3000 in the container
    restart: always
    links:
      - mongo
  mongo:                # no build here: we don't build a local image for the DB, we pull the official mongo image
    image: mongo
    ports:
      - "27017:27017"
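
To start both services defined above (assuming Docker Compose is installed and run from the directory containing this file), a typical invocation is:

docker-compose up -d          # build the app image if needed and start app + mongo in the background
docker-compose logs -f app    # follow the app service's logs
docker-compose down           # stop and remove the containers
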
Kubernetes:
When the number of containers becomes large, they become hard to manage and operate by hand; Kubernetes automates this (a minimal kubectl sketch follows the list below).
1. Use the kubectl CLI
2. Create a Kubernetes Pod
3. Create a Kubernetes Deployment
4. Create a Replica Set that maintains a set number of replicas
5. Witness Kubernetes load balancing in action
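
A rough sketch of these steps with kubectl (the names my-pod, my-deploy, and the image myimage:v1 are placeholders):

kubectl get nodes                                        # 1. use the kubectl CLI against the cluster
kubectl run my-pod --image=myimage:v1                    # 2. create a Pod
kubectl create deployment my-deploy --image=myimage:v1   # 3. create a Deployment
kubectl scale deployment my-deploy --replicas=3          # 4. the ReplicaSet now maintains 3 replicas
kubectl expose deployment my-deploy --port=8080 --type=LoadBalancer   # 5. load balance traffic across the replicas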

Kubernetes can maintain ReplicaSets of Pods: the container gets replicated, which provides load balancing and fault tolerance.

Kubernetes Components and API:

kube-apiserver, etcd, cluster, nodes, pods, container runtime

etcd is a key-value store that holds the cluster's configuration data.


When you deploy Kubernetes, you get a cluster.

A Kubernetes cluster consists of a set of worker machines, called nodes, that run
containerized applications. Every cluster has at least one worker node.

The worker node(s) host the Pods that are the components of the application. The control
plane manages the worker nodes and the Pods in the cluster.

The container runtime is the software that is responsible for running containers.

Kubernetes supports several container runtimes: Docker, containerd, CRI-O, and any
implementation of the Kubernetes CRI (Container Runtime Interface).

Container Lifecycle (Docker and Kubernetes):

What is Kubernetes:
In Kubernetes, usually:
A cluster spans one or more machines (nodes); many ReplicaSets and Pods can run on a single node.
A ReplicaSet keeps a set of identical Pods running to serve requests.
Scale up: add new Pods (status ...Running) to serve more requests (horizontal scaling).
Scale down: terminate some Pods (status ...Terminating).
A Docker image does not run by itself; it is run by a Kubernetes Pod (or by docker run without Kubernetes).

Manage Kubernetes applications:


A rolling update in Kubernetes is needed when we have a new version of our app: first we containerize the new version, then we deploy it and Kubernetes gradually replaces the old Pods.

A rollback is also available.
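
A sketch of a rolling update and rollback with kubectl (the deployment and container names are placeholders):

kubectl set image deployment/my-deploy my-container=myimage:v2   # rolling update to the new image
kubectl rollout status deployment/my-deploy                      # watch the rollout progress
kubectl rollout undo deployment/my-deploy                        # roll back to the previous version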


Storing configuration: It is good practice not to hardcode configuration variables in code. We keep them separate so that configuration changes do not require code changes.

Non-sensitive configuration (e.g. environment names like test, dev, and prod) goes in ConfigMaps. Sensitive values (e.g. API keys and account IDs) go in Secrets.

Secrets work like ConfigMaps, but they are intended for sensitive data and their values are stored base64-encoded.
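
A sketch of creating both from the command line (the names and values are placeholders):

kubectl create configmap app-config --from-literal=ENVIRONMENT=dev
kubectl create secret generic app-secret --from-literal=API_KEY=abc123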

In the deployment .yaml file, the image field holds the Docker image tag. We then apply the .yaml with kubectl, which creates or updates the deployment.

A Kubernetes deployment is a resource object in Kubernetes that provides declarative updates to applications. A deployment allows you to describe an application's life cycle, such as which images to use for the app, the number of pods there should be, and the way in which they should be updated.
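
A minimal deployment .yaml sketch of those fields (all names, labels, image tags, and ports are placeholders):

apiVersion: apps/v1
kind: Deployment
metadata:
  name: my-deploy
spec:
  replicas: 3                  # the number of Pods there should be
  selector:
    matchLabels:
      app: my-app
  template:
    metadata:
      labels:
        app: my-app
    spec:
      containers:
        - name: my-container
          image: myimage:v1    # which image to use for the app
          ports:
            - containerPort: 8080

Apply it with kubectl apply -f deployment.yaml.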

OpenShift:
Cloud native meaning: Cloud native is an approach to building and running applications that exploits the
advantages of the cloud computing delivery model. When companies build and operate applications
using a cloud native architecture, they bring new ideas to market faster and respond sooner to customer
demands.

Kubernetes is driven by an API (the Kubernetes API).

OpenShift extends the Kubernetes API with additional resource types and tooling.


Kubernetes ecosystems:

Introduction to Red Hat OpenShift:


Objectives:
Use oc (the OpenShift CLI).
Use the OpenShift web console.
Build and deploy an application using s2i (source-to-image).
Inspect a BuildConfig and an ImageStream.

Kubernetes and OpenShift:

OpenShift projects are Kubernetes namespaces with additional administrative functions.
oc: recall that oc ships with a copy of kubectl, so all kubectl commands can be run with oc (a short sketch follows).

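For example (the project name and the Git repository URL are placeholders):

oc new-project my-project                            # an OpenShift project is a namespace with extra admin functions
oc get pods                                          # any kubectl command also works through oc
oc new-app nodejs~https://github.com/<your-org>/<your-repo>   # s2i: build and deploy directly from source
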
OpenShift/IBM Cloud: from the catalog, choose Node.js.


Final Project:
Multi-tier application.

Objectives:
Build and deploy a simple guestbook application.

Use OpenShift image streams to roll out an update.

Deploy a multi-tier version of the guestbook application.

Create a Watson Tone Analyzer service instance on IBM Cloud.

Bind the Tone Analyzer service instance to your application.

Auto-scale the guestbook app.
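
A sketch of the auto-scaling step (assuming the deployment is named guestbook; the CPU threshold and replica bounds are placeholder values):

kubectl autoscale deployment guestbook --cpu-percent=50 --min=1 --max=5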

IBM Watson Tone Analyzer is a service on IBM Cloud that enables you to analyze emotions and tones in written content.

Guestbook is a simple, multi-tier web application that we will build and deploy with Docker and Kubernetes. The application consists of a web front end, a Redis master for storage, a replicated set of Redis slaves, and an analyzer that will analyze the tone of the comments left in the guestbook. For all of these we will create Kubernetes Deployments, Pods, and Services.
Microservices and serverless:
Vendor lock-in is when someone is essentially forced to continue using a product or service
regardless of quality, because switching away from that product is so costly.

Microservices vs. Serverless:

Serverless:
Serverless Types:
FaaS, Object Storage, Streaming and messaging, API Gateways

FaaS:
Object Storage:

Messaging and Streaming:


API Gateways:
