
IBM Cloud Private BP Workshop

Philippe THOMAS
ph_thomas@fr.ibm.com

Cloud Architect
Ecosystem Advocacy Group
IBM
Philippe THOMAS

Cloud Architect

IBM @Nice
Ecosystem Advocacy Group
European Technical Team

+33 675899197
ph_thomas@fr.ibm.com
www.linkedin.com/in/phthom

Cloud Architectures
Private & Public Clouds
API Management
Docker/Kubernetes
Cloud Automation
Business Intelligence
Monitoring/Perf.
DevOps
DB(SQL/NoSQL)
Mainframe
Messaging/Queuing
1
IBM Cloud Private
Technical Overview and Architecture

2
IBM Cloud Private - Technical Overview
• Compute Options
• Definition
• Architecture
• Functionalities

3
IBM Cloud Architecture

https://www.ibm.com/w3-techblog/
4
Where to run your application : Compute Options

Developer
Productivity,
Choice,
Control, &
Consistency

5
IBM Cloud Private - Technical Blueprint

6
IBM Cloud Private Solution Overview

7
IBM Cloud Private Topologies

• Simple
– Single machine install (master is also a worker)
– Great for testing and learning about the platform
• Standard
– Single master (single master, 3 workers, 1 proxy)
– Great for a non-production testing environment
• High Availability
– Multiple masters (3 masters, 3+ workers, 3 proxies)
– Production installation

(Diagram: master and proxy nodes in front of worker nodes, each worker node running Docker with pods and containers)
8
Architecture
IBM Cloud Private
Components

9
Architecture
IBM Cloud Private
Components

With Management Node

10
IBM Cloud Private ~ 50 Components

https://www.ibm.com/support/knowledgecenter/en/SSBS6K_2.1.0.3/getting_started/components.html
11
IBM Cloud Private – Types of Nodes
• Boot node : A boot or bootstrap node is used for running installation, configuration, node scaling, and cluster updates. Only one
boot node is required for any cluster. You can use a single node for both master and boot.
• Master node : A master node provides management services and controls the worker nodes in a cluster. Master nodes host
processes that are responsible for resource allocation, state maintenance, scheduling, and monitoring. Because a high availability
(HA) environment contains multiple master nodes, if the leading master node fails, failover logic automatically promotes a different
node to the master role. Hosts that can act as the master are called master candidates.
• Worker node : A worker node is a node that provides a containerized environment for running tasks. As demands increase, more
worker nodes can easily be added to your cluster to improve performance and efficiency. A cluster can contain any number of
worker nodes, but a minimum of one worker node is required.
• Proxy node : A proxy node is a node that transmits external requests to the services created inside your cluster. Because a high
availability (HA) environment contains multiple proxy nodes, if the leading proxy node fails, failover logic automatically promotes a
different node to the proxy role. While you can use a single node as both master and proxy, it is best to use dedicated proxy
nodes to reduce the load on the master node. A cluster must contain at least one proxy node if load balancing is required inside
the cluster.
• Management node : A management node is an optional node that only hosts management services such as monitoring, metering,
and logging. By configuring dedicated management nodes, you can prevent the master node from becoming overloaded. You
can enable the management node only during IBM Cloud Private installation.
• Vulnerability Advisor node : An optional node that is used for running the Vulnerability Advisor services. Vulnerability Advisor
services are resource intensive. If you use the Vulnerability Advisor service, specify a dedicated VA node.

12
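These roles are assigned to machines in the installer's `cluster/hosts` file. A minimal sketch, with placeholder IP addresses (not real hosts):

```shell
# Sketch of an ICP cluster/hosts file mapping machines to node roles.
# Each bracketed section corresponds to one of the node types above.
cat > hosts <<'EOF'
[master]
192.168.10.11

[worker]
192.168.10.21
192.168.10.22
192.168.10.23

[proxy]
192.168.10.31

[management]
192.168.10.41
EOF

grep -c '^\[' hosts   # prints the number of role sections (4)
```

The boot node is simply the machine you run the installer from; in a single-node install all roles point at the same address.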
Calico
• A new approach to virtual networking and network security for containers, VMs, and bare-metal services, providing a rich set of security enforcement capabilities on top of a highly scalable and efficient virtual network
• The calico/node Docker container runs on the Kubernetes master and on each Kubernetes node in the cluster
• The calico-cni plug-in integrates directly with the Kubernetes kubelet process on each node to discover which pods have been created, and adds them to Calico networking
• The calico/kube-policy-controller container runs as a pod on top of Kubernetes and implements the NetworkPolicy API
• https://docs.projectcalico.org/v3.1/reference/architecture/data-path
13
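Since the calico/kube-policy-controller implements the NetworkPolicy API, policies are written as ordinary Kubernetes manifests. A hedged example (the labels and port are hypothetical, not from this deck): only pods labelled role=frontend may reach pods labelled app=db on port 3306.

```shell
# Hypothetical NetworkPolicy manifest; Calico enforces it on the data path.
cat > db-policy.yaml <<'EOF'
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: db-allow-frontend
  namespace: default
spec:
  podSelector:
    matchLabels:
      app: db
  ingress:
  - from:
    - podSelector:
        matchLabels:
          role: frontend
    ports:
    - protocol: TCP
      port: 3306
EOF

# It would be applied with: kubectl apply -f db-policy.yaml
grep -q 'kind: NetworkPolicy' db-policy.yaml && echo OK
```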
Calico Data Path: IP Routing and iptables

14
Proxy Nodes and Ingress resources
• Ingress: Typically, services and pods have IPs only routable by the cluster network. All traffic
that ends up at an edge router is either dropped or forwarded elsewhere.
• An Ingress is a collection of rules that allow inbound connections to reach the cluster services.
• It can be configured to give services externally-reachable URLs, load balance traffic, terminate
SSL, offer name based virtual hosting etc.
• Users request ingress by POSTing the Ingress resource to the API server
• Ingress Controller: Responsible for fulfilling the Ingress, usually with a load balancer, though it
may also configure your edge router or additional frontends to help handle the traffic in an HA
manner.

15
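The Ingress resource POSTed to the API server can be sketched as a small manifest. This assumes a backing service named web-svc (a placeholder) and the extensions/v1beta1 API group that was current at the Kubernetes 1.10 level used by ICP 2.1.0.3:

```shell
# Sketch of an Ingress rule giving a service an externally reachable URL.
cat > web-ingress.yaml <<'EOF'
apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  name: web-ingress
spec:
  rules:
  - host: web.example.com
    http:
      paths:
      - path: /
        backend:
          serviceName: web-svc
          servicePort: 80
EOF

# The Ingress Controller on the proxy node fulfils this rule.
grep -q 'kind: Ingress' web-ingress.yaml && echo OK
```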
Network services components
• DNS: (kube-dns, Cluster DNS) K8s DNS schedules a DNS pod and service on the cluster, and
configures the kubelets to tell individual containers to use the DNS service’s IP to resolve DNS
names. Every service defined in the cluster (including the DNS server itself) is assigned a DNS
name. By default, a client pod’s DNS search list will include the pod’s own namespace and the
cluster’s default domain.
• VIP and UCarp: UCarp allows a couple of hosts to share common virtual IP (or floating IP)
addresses in order to provide automatic failover.

16
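The DNS naming scheme above can be illustrated: every service resolves at `<service>.<namespace>.svc.<cluster-domain>`, with cluster.local as the default domain. A small sketch (the service and namespace names are placeholders):

```shell
# Compose the fully qualified DNS name kube-dns assigns to a service.
service=web-svc
namespace=default
cluster_domain=cluster.local
fqdn="${service}.${namespace}.svc.${cluster_domain}"
echo "$fqdn"   # → web-svc.default.svc.cluster.local
```

A pod in the same namespace could also reach the service as just `web-svc`, thanks to the DNS search list mentioned above.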
Authentication components
• Authentication Manager: Provides an HTTP API
for managing users. Protocols are implemented
in a RESTful manner. Keystone is used for
authentication. Pass-through is used for external
LDAP integration.
• Keystone: The OpenStack provided identity
service currently supporting token-based authN
and user-service authorization.
• MariaDB: An open source relational database
made by the original developers of MySQL. In
this case it is used to back-end Keystone.
• OIDC: OpenID Connect is an authentication layer
on top of OAuth 2.0, an authorization framework
• Mongo DB: (instead of Cloudant) Database that
is used by metering service.

17
Images and Registries
• You create a Docker image
and push it to a registry
before referring to it in a
Kubernetes pod
• There will likely be many
registries used in your
deployment

18
User Interfaces
• Cluster Management Console: (ICP
component) Use to manage,
monitor, and troubleshoot your
applications and cluster from a
single, centralized, and
secure management console.
• K8S Web UI: Can use to deploy
containerized applications to a
Kubernetes cluster, troubleshoot
your containerized application, and
manage the cluster itself along with
its attendant resources.
• kubectl: A command-line interface
for running commands against
Kubernetes clusters.

19
Application Center components
• Application Center or Catalog provides a
centralized location from which you can
browse and install packages in your
cluster.
• Helm: A tool for managing Kubernetes
charts. Charts are packages of pre-
configured Kubernetes resources.
• Helm Repository: A Helm
chart repository is a location where
packaged charts can be stored and
shared.
• Tiller: Runs inside of the cluster, and
manages releases (installations) of your
charts.
20
Logging Components : Elastic Stack
• The easiest and most embraced logging method for
containerized applications is to write to standard out and
standard error
• Elastic Stack (also known as ELK stack)
• Filebeat: A log data shipper for local files. Filebeat monitors the
log directories or specific log files, tails the files, and forwards
them either to Elasticsearch and/or Logstash for indexing.
• Elasticsearch: An open source full-text search engine based on
Lucene. It provides HTTP web interface and schema-free JSON
documents.
• Logstash: An open source tool for collecting, parsing, and
storing logs for future use.
• Heapster: A cluster-wide aggregator of monitoring and event data.
• Kibana: An open source data visualization plugin for
Elasticsearch. Users can create bar, line and scatter plots, or
pie charts and maps on top of large volumes of data.

21
Monitoring Components: Prometheus and Grafana
• Prometheus: An open-source systems
monitoring and alerting toolkit originally built
at SoundCloud. Since its inception in 2012,
many companies and organizations have
adopted Prometheus, and the project has a
very active developer and user community. It
is now a standalone open source project and
maintained independently of any company.
• Grafana: An open-source, general purpose
dashboard and graph composer, which runs
as a web application.

22
Persistent storage components
• Traditionally containers are stateless and ephemeral: storage exists within the container,
and when the container goes away, so does the storage
• Some applications need state, and thus persistent storage, for:
• Specific aspects of configuration
• Databases (structured and unstructured)
• Application data (website definitions, etc.)
• Storage must be universally accessible across the K8s environment
• ICP Persistent Storage Support: HostPath, NFS, GlusterFS, vSphereVolume
• Access Modes:
• ReadWriteOnce – the volume can be mounted as read-write by a single node
• ReadOnlyMany – the volume can be mounted read-only by many nodes
• ReadWriteMany – the volume can be mounted as read-write by many nodes
23
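The access modes above appear in PersistentVolume and PersistentVolumeClaim definitions. A hedged sketch using the NFS support listed for ICP (the server address and export path are placeholders):

```shell
# An NFS-backed PersistentVolume plus a claim using ReadWriteMany,
# so many nodes can mount it read-write.
cat > pv-pvc.yaml <<'EOF'
apiVersion: v1
kind: PersistentVolume
metadata:
  name: nfs-pv
spec:
  capacity:
    storage: 5Gi
  accessModes:
  - ReadWriteMany
  nfs:
    server: 192.168.10.50
    path: /exports/data
---
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: nfs-claim
spec:
  accessModes:
  - ReadWriteMany
  resources:
    requests:
      storage: 5Gi
EOF

grep -c 'ReadWriteMany' pv-pvc.yaml   # prints 2 (once in the PV, once in the claim)
```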
Gluster FS
• Network Attached Storage (NAS)
• POSIX-Compliant distributed file-system
• Differentiator -> No central metadata server
• Aggregator of storage and metadata
• Flexible scaling for capacity and performance
• NOTE: Red Hat Gluster Storage (formerly Red
Hat Storage Server) is built upon the open source
GlusterFS at gluster.org

24
What's new in version 2.1.0.3
• Kubernetes upgraded to 1.10
• Storage
• MongoDB replaces IBM Cloudant as the backend Database for IBM Cloud Private.
• You can now configure a vSphere Cloud Provider after you install IBM Cloud Private.
• Container Storage Interface (CSI) is now available as Beta.
• General Data Protection Regulation (GDPR)
• Technology previews
• Cluster federation - In multiple cluster environments, you can use a federation to
deploy and manage common services across multiple Kubernetes clusters.
• Running kube-proxy in IP Virtual Server (IPVS) mode.
• Manage and secure microservices with Istio.
• Pod auto scaling by using custom metrics.
• Using containerd as a runtime for cluster nodes.
25
What's new in version 2.1.0.3
• Catalog
• You can now view release notes for most Helm charts in the Catalog. These release
notes contain information about each Helm chart, such as the version, what's new,
and any fixes or enhancements added.
• Some IBM Helm charts might not have a release note. Release notes are not available
for third party Helm charts that are added to the Catalog.
• Filtering in the Catalog has also been enhanced to cover more options.
• Metering
• You can now collect data from and report on resource consumption of IBM products
that are running outside of your IBM Cloud Private environment. This environment can
be bare metal, a VM, or another cloud platform.
• Docker images that you build can include labels that identify the product identifier,
name and version for the offering. These labels can be discovered by the metering
daemon and used to associate metered runtime metrics with the instance of the
offering that is running.
26
What's new in version 2.1.0.3
• Security
• You can manage Secret passwords with the IBM Cloud Private CLI. You can enforce
password requirements, change passwords, and restart the required pods and
containers by using the IBM Cloud Private CLI.
• You can manage service IDs, API keys, and policies for access to specific services that
are needed by an application.
• Authentication and authorization audit logs are now available in IBM Cloud Private.
• You can enable end-to-end TLS encryption for the IBM Cloud Private Elasticsearch,
Logstash, and Kibana (Elastic Stack). When enabled, it secures all communication
between every product in the stack with PKI-based mutual authentication.

27
Supported system configurations (ICP 2.1.0.3)

OS:
• x86: RHEL 7.3, 7.4, 7.5; Ubuntu 16.04 LTS; SUSE 12 SP3
• Power: RHEL 7.3, 7.4, 7.5; Ubuntu 16.04 LTS; SUSE 12 SP3
• IBM Z: RHEL 7.3, 7.4, 7.5; Ubuntu 16.04 LTS; SUSE 12 SP3; deployed as workers on z/VM, zKVM or LPAR

Browsers:
• Windows: Edge, Firefox and Chrome (latest version)
• Linux: Firefox and Chrome (latest version)
• MacOS: Safari, Firefox and Chrome (latest version)

Docker:
• x86, Power and Z (workers): 17.12.1-ce, or customer installed 1.12 to 17.12.1 CE & EE

Storage:
• Built-in storage options: GlusterFS + Heketi, vSphere vVol
• External data stores: NFS 4, GlusterFS 3.10.1, Spectrum Scale (via NFS or Hostpath) + all Kubernetes supported storage types

Networking:
• Calico (default), NSX-T 2.0 (optional)

Node specs (minimum):
• Master: 2 cores @2.4 GHz, 8 GB RAM, 40 GB disk
• Worker: 1 core @2.4 GHz, 4 GB RAM, 40 GB disk

Number of nodes (PM or VM):
• Minimum: 1 (single node install supported)
• Maximum: 1,000 Power nodes, 1,000 x86 nodes (tested limits, not hard limits)

Number of pods:
• 20,000 (tested limit, not hard limit)
28

1K nodes

https://medium.com/ibm-cloud/journey-to-1000-nodes-for-ibm-cloud-private-5294138047d5
29
Labs Prerequisites
Prepare your environment

30
Lab Prerequisites
• To get access to the IBM network : OpenVPN + user/password
• An Ubuntu VM has been prepared for you for the installation lab
• ssh root@ip-address
• password

• Test your connection with a terminal console (SSH or Putty)


• Basic knowledge of Linux commands is required

• Hardware and OS prerequisites (at a minimum)


[ ] one host (physical or virtual)
[ ] CPU = 8 cores
[ ] RAM = 10 GB (10,240 MB)
[ ] Storage = 40 to 80 GB
[ ] Ubuntu 16.04.4 (64-bit) + packages
31
Installation Labs
How to install IBM Cloud Private

32
Installation Lab
• Purpose : installation of a single-node IBM Cloud Private from scratch
• Prerequisite checks for installing ICP-ce (community edition)
• Possible installations
– on big laptops,
– on VMs in IBM locations
– on IBM Cloud Infra using VSI (Virtual Server Instance)
– on other VMs with other cloud providers
• 2 possibilities :
– Ubuntu 16.04.4 (with or without VMware)
– or Vagrant (prerequisite: VirtualBox)

https://www.ibm.com/support/knowledgecenter/en/SSBS6K_2.1.0.2/installing/installing.html
33
ICP Topologies

• Simple
– Single machine install (master is also a worker)
– Great for testing and learning about the platform
• Standard
– Single master (single master, 3 workers, 1 proxy)
– Great for a non-production testing environment
• High Availability
– Multiple masters (3 masters, 3+ workers, 3 proxies)
– Production installation

(Diagram: master and proxy nodes in front of worker nodes, each worker node running Docker with pods and containers)
34
Installation Overview
• Container-based installer:
– Download the installation container for CE or EE and execute the install
– The installer pulls down additional containers from Docker Hub for CE, local repo for EE
• Supported for RHEL and Ubuntu on X, POWER and Z (workers)
• Basic installation steps:
– 1. Configure OS
– 2. Modify installation configuration files and run the installer
• Overall installation should take < 4 hours depending on scenario
– (90% System Config, 10% Installation)

35
Installation Steps (1 hour) : ICPInstallationLab
• Task 1 : Update the system
• Task 2 : Add Docker’s official GPG key
• Task 3: Add a repo to get the Docker
• Task 4: Get Docker
• Task 5: Download IBM Cloud Private
• Task 6: SSH Keys setup
• Task 7: Customize ICP file
• Task 8: Install ICP (30 minutes)
• Task 9: Install CLIs and Persistent storage
• End of installation
https://github.com/phthom/IBMCloudPrivate
36
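Task 8 boils down to running the containerized installer from the cluster directory. A sketch of that step, assuming the CE inception image on Docker Hub and the default extraction path from the ICP 2.1.0.3 documentation (your lab paths may differ):

```shell
# Write the core install step to a script; running it requires Docker
# and the extracted ICP cluster directory on the boot node.
cat > run-install.sh <<'EOF'
#!/bin/sh
cd /opt/ibm-cloud-private-ce-2.1.0.3/cluster
docker run --rm -t -e LICENSE=accept --net=host \
  -v "$(pwd)":/installer/cluster \
  ibmcom/icp-inception:2.1.0.3 install
EOF
chmod +x run-install.sh

grep -q 'icp-inception' run-install.sh && echo OK
```

The installer mounts the cluster directory (hosts file, ssh_key, config.yaml) into the container and drives the rest of the deployment from there.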
Containers
Flexibility, portability and light weight

37
Market Dynamics and Use Cases
• Container Adoption Drivers
– Microservice Patterns
– Cloud Native Applications
– Hybrid Cloud
– CI/CD in DevOps
– Modernizing Applications

• All industries are impacted

38
Containers

• A standard way to package an application and all its dependencies so that it can be moved between environments and run without changes.
• Containers work by isolating the differences between applications inside the container so that everything outside the container can be standardized.

(Diagram: a container holding the app, its libs/bins and config, running on the host OS)
39
Why Customers are interested in Containers
• #1 : Application Portability
• Isolated containers package the application, dependencies and configurations
together. These containers can then seamlessly move across environments and
infrastructures.
• Ship More Software
• Accelerate development & deployment, CI and CD pipelines by eliminating headaches
of setting up environments and dealing with differences between environments. On
average, Docker users ship software 7X more frequently.
• Resource Efficiency
• Lightweight containers run on a single machine and share the same OS kernel while
images are layered file systems sharing common files to make efficient use of RAM and
disk and start instantly.

40
Container History

1979  chroot
2007  cgroups
2008  LXC
2011  warden
2012  Docker, Garden, lmctfy
2014  rocket
2016  runC

A chroot on Unix operating systems is an operation that changes the apparent root directory for the current running process and its children. A program that is run in such a modified environment cannot name (and therefore normally cannot access) files outside the designated directory tree. The modified environment is called a "chroot jail".
https://en.wikipedia.org/wiki/Chroot

LXC (Linux Containers) is an operating-system-level virtualization environment for running multiple isolated Linux systems (containers) on a single Linux control host. LXC combines the kernel's cgroups and support for isolated namespaces to provide an isolated environment for applications.
https://en.wikipedia.org/wiki/LXC
41
Multiple De Facto Container Standards
• Docker
• The most common standard, made Linux containers usable by
masses
• Rocket (rkt)
• An emerging container standard from CoreOS, the company that
developed etcd
• Garden
• The format Cloud Foundry builds using buildpacks
• Open Container Initiative (OCI)
• A Linux Foundation project developing a governed container
standard

42
VMs, Containers and Docker

(Diagram: a VM stack with hypervisor and guest OS per workload, versus a Docker stack where containers share the host kernel)

Docker = Linux namespaces + cgroups + overlay (union) file system + image format
43
Containers and VMs Together

Containers on Virtual Machines

Containers on Bare Metal

44
Docker Containers
• Docker uses a copy-on-write (union) filesystem
• New files (& edits) are only visible to current/above layers (used for reuse)

45
Docker Terminology
• Image
• A read-only snapshot of a container stored in a registry to be used as a
template for building containers. At rest.
• Container
• The image when it is ‘running.’ The standard unit for app service
• Registry
• Stores, distributes and manages Docker images
• Engine
• The software that executes commands for containers. Networking and
volumes are part of Engine. Can be clustered together.
• Control Plane
• Management plane for container and cluster orchestration
46
Docker Commands (CLI)

47
Most Useful Docker Commands
• docker build : build an image with the help of a Dockerfile
• docker push : push an image to a registry
• docker images : list locally stored images
• docker run : run a container, or a command in a container
• docker ps : list containers
• docker kill : kill one or more containers
• docker exec : run a command in a running container
• docker top : display the running processes of a container
• docker container : manage containers
• docker network : manage networks for containers
48
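The commands above chain together into a typical build-and-ship sequence. A sketch (image and registry names are illustrative), written to a script so the flow is visible without a Docker daemon present:

```shell
# The usual local workflow: build, run, inspect, then tag and push.
cat > workflow.sh <<'EOF'
#!/bin/sh
docker build -t myapp:1.0 .          # build from the Dockerfile in .
docker run -d -p 8080:80 myapp:1.0   # run it, mapping host port 8080
docker ps                            # confirm the container is up
docker tag myapp:1.0 myregistry.local:5000/myapp:1.0   # tag for the registry
docker push myregistry.local:5000/myapp:1.0            # push to the registry
EOF

grep -c '^docker ' workflow.sh   # prints 5 (one per docker command)
```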
Docker Supply Chain (local)

(Diagram: a Dockerfile, code and config are assembled by docker build into an image, which docker run starts as a container on the Docker Engine and host OS)
49
Docker Supply Chain (with a registry)

(Diagram: docker build creates the image from the Dockerfile, code and config; docker push uploads it to a registry; docker pull and docker run retrieve and start it on another Docker Engine and host OS)
50
Registries
• Hosting image repositories
• You can define your own registry
• A registry is managed by a registry container
• Public and Private registries
• Public registries, like Docker Hub
• https://hub.docker.com
• Log in to a registry: docker login <domain>:<port>

51
Dockerfile
• Build an image automatically
• Specifies base image and instructions:
• FROM <existing image>
• ADD <local file> <path inside image>
• RUN <cmd>
• EXPOSE <port>
• ENV <name> <value>
• CMD <cmd>

52
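Putting those instructions together, a minimal, hypothetical Dockerfile might look like this (app.sh stands in for your application; it is not from the lab material):

```shell
# One instruction per line: base image, add a file, run a command,
# set an env var, declare a port, and set the default command.
cat > Dockerfile <<'EOF'
FROM ubuntu:16.04
ADD app.sh /usr/local/bin/app.sh
RUN chmod +x /usr/local/bin/app.sh
ENV APP_PORT 8080
EXPOSE 8080
CMD ["/usr/local/bin/app.sh"]
EOF

grep -c '^' Dockerfile   # prints 6 (one line per instruction)
```

Building it with `docker build -t myapp .` would create one image layer per instruction, as described in the copy-on-write slide earlier.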
Dockerfile > Build > Run > Push > Run

(Screenshots: 1. configuring the Dockerfile; 2–3. building the image)
53
Dockerfile > Build > Run > Push > Run

(Screenshots: 4. listing images; 5. running the image locally; 6. tagging the image; 7. pushing the image)
54
IBM Software on Docker Hub

55
Docker Networking – One Host

Bridge Mode Host Mode Container Mode No Networking

docker run -d --net=bridge nginx:1.9.1
docker run -d -P --net=host ubuntu:14.04
docker run -it --net=container:anothercontainer ubuntu:14.04 ip addr
docker run -d --net=none ubuntu:latest

56
Docker Networking for Multiple Hosts
• SDN = Software Defined Network
• L2 solution:
• Docker Networking (default)
• Flannel
• Weave Net
• Open vSwitch
• OpenVPN

• Project Calico (L3 solution & BGP)

57
Docker-Compose
docker-compose.yml
• Compose is a tool for defining
and running multi-container
Docker applications.
• With Compose, you use a YAML
file to configure your application’s
services.
• Then, with a single command,
you create and start all the
services from your configuration.
• docker-compose up

58
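A docker-compose.yml matching the description above could look like this sketch (service names and images are illustrative, using the Compose v2 file format of this era):

```shell
# Two services defined in one YAML file: a web front end and a cache.
cat > docker-compose.yml <<'EOF'
version: '2'
services:
  web:
    image: nginx:1.9.1
    ports:
      - "8080:80"
    depends_on:
      - cache
  cache:
    image: redis:3.2
EOF

# With Compose installed, `docker-compose up -d` starts both services.
grep -c 'image:' docker-compose.yml   # prints 2 (one image per service)
```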
IBM and Docker Partnership

• Strategic partnership announced December, 2014


o https://www-03.ibm.com/press/us/en/pressrelease/45597.wss

• Partnership extended February, 2016


o IBM initially the only partner to resell and support Docker Datacenter

• Objective: Deliver next generation enterprise-grade, portable,


distributed applications that are composed of interoperable Docker
containers
o Enables hybrid cloud use cases for the enterprise
o IBM Cloud (Public) Container Service since 2014

• Initiatives underway, especially with IBM Cloud Private
59


Advantages of Containers
• Containers are portable
• Any platform with a container engine can run containers
• Containers are easy to manage
• Container images are easy to share, download, and delete
– Especially with Docker registries
• Container instances are easy to create and delete
• Each container instance is easy and fast to start and stop
• Containers provide “just enough” isolation
• Processes share the operating system kernel but are segregated
• Containers use hardware more efficiently
• Greater density than virtual machines
• Especially Docker containers, which can share layers
• Containers are immutable
60
IBM Cloud (Public) – Kubernetes Service

61
Books, eBooks and links
• Mastering Docker (second edition)
• Learning Docker Networking
• Container Networking
• Monitoring Docker

• https://docs.docker.com/

62
Docker Labs
Labs

63
Labs
• https://github.com/phthom/IBMCloudPrivate

DockerLab
• Working with Docker
• Building Docker images
• Running Web Application

64
IBM Cloud Private
Wrap up

65
