
OpenShift

Javier Arce - Solution Architect


jarceort@redhat.com

Isaac Galvan - Architect Manager


igalvanl@redhat.com
Basic Concepts
Basic Concepts

The basic element of OpenShift Container Platform is the Pod.

Two things must be remembered about the Pod:

1. A Pod is an abstraction of a Container.
2. A Pod has Resources associated with it.
Basic Concepts

Eventually, you might have the choice to switch to a different container technology, such as Rocket (rkt).
Basic Concepts

Either way, you don't have to worry if a new technology comes up tomorrow. Thanks to this layer of abstraction, you can switch to a different container technology without the risk of breaking your environment.
Basic Concepts

Here are some examples of Resources available for the Pod, each responsible for a different aspect of how the Pod runs.
Basic Concepts

A very simple Resource available is called Pod (the Pod itself is a Resource).

This Resource will pull a container image named "jboss/wildfly" and make it available on port 8080.

Please note other information like "restartPolicy".

apiVersion: v1
kind: Pod
metadata:
  name: appserver
spec:
  containers:
  - image: jboss/wildfly:latest
    imagePullPolicy: IfNotPresent
    name: appserver
    ports:
    - containerPort: 8080
      protocol: TCP
  dnsPolicy: ClusterFirst
  restartPolicy: Always
Basic Concepts

DeploymentConfig is another example of a Resource that is very frequently used.

It's responsible for pulling a container, and it's also responsible for dealing with changes between versions of the same container.

For example: what happens when a new version of JBoss comes out? How are we going to swap the old one for the new one?

Notice a property named "replicas", which states the number of instances of this container that must be running at all times.

apiVersion: v1
kind: DeploymentConfig
metadata:
  name: appserver
  labels:
    name: appserver
spec:
  replicas: 1
  selector:
    app: appserver
    deploymentconfig: appserver
  template:
    metadata:
      labels:
        app: appserver
        deploymentconfig: appserver
    spec:
      containers:
      - image: jboss/wildfly:latest
        name: appserver
        ports:
        - containerPort: 8080
          protocol: TCP
Basic Concepts

The Service Resource is another very important and very commonly found one.

It provides the Pod with a stable IP and a hostname, through which it can be accessed across the entire OpenShift cluster.

apiVersion: v1
kind: Service
metadata:
  labels:
    name: appserver
  name: appserver
spec:
  ports:
  - port: 8080
    protocol: TCP
    targetPort: 8080
  selector:
    app: appserver
    deploymentconfig: appserver
  sessionAffinity: None
  type: ClusterIP
Basic Concepts

The Route Resource is always important, especially if you're looking to expose your Pod to the outside world as a web application.

This Resource binds to the information provided by the Service Resource and offers an external way to get access through a valid http/https address.

In our example, this Pod can be accessed by typing:

http://myserver2.cloudapps.testdrive.com

The https protocol, along with a valid certificate, is also available.

apiVersion: v1
kind: Route
metadata:
  labels:
    name: appserver
  name: appserver
spec:
  host: myserver2.cloudapps.testdrive.com
  port:
    targetPort: 8080
  to:
    kind: Service
    name: appserver
    weight: 100
Basic Concepts

[Diagram] A request such as "oc get pods" reaches the API or the web console: "get" is the verb, "pods" is the resource.

Rule = Verb + Resource
Role = Rule1 + Rule2 + Rule3 + ...
A BINDING associates a User (or a Group of Users) with one or more Roles.
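As a quick, hedged illustration (the "view" role, the user "demo" and the project "myserver" are simply the examples used later in this lab), a Role is bound to a user with the oc client:

# oc adm policy add-role-to-user view demo -n myserver
# oc get rolebindings -n myserver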
Basic Concepts

NAME          READY     STATUS    RESTARTS   AGE
jboss-1-xyz   1/1       Running   0          23m
Basic Concepts: Logical Perspective

Once the Pod concept is understood, we need a way to group several Pods together in order to offer a complete solution.
Basic Concepts: Logical Perspective

We do that by grouping them in a concept called a Project (or Namespace).
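As a hedged one-liner (the project name is just an example; the lab runs a similar command later), a Project is created with:

# oc new-project myserver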
Basic Concepts: Logical Perspective

Inside our Project, we can add a Database


Pod (for example) like MySQL.
Basic Concepts: Logical Perspective

Then add a second Pod like a JBoss


Application Server.
Basic Concepts: Logical Perspective

And finally, a third Pod for handling static content, like an Apache HTTP Server.

There is no limit to how many Pods a Project can hold. Our stack can be as big as we want it to be.
Basic Concepts: Logical Perspective

However, our Pods are working pretty much in isolation. In order for them to communicate with each other, we need to add a Service Resource for each Pod.

OpenShift will assign a valid IP and a hostname that can be accessed throughout the entire cluster.

(It's possible to restrict the communication of each Pod to within its Project. In order to do that, we need to perform a Multi Tenant installation.)
Basic Concepts: Logical Perspective

Now, JBoss can connect to a database named "mysql" and fetch all the necessary tables.
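This works because the Service name becomes a DNS entry inside the cluster. A hedged sketch of what the JBoss datasource connection URL could look like (the database name and port are placeholders for whatever your MySQL Pod exposes):

jdbc:mysql://mysql:3306/mydatabase
jdbc:mysql://mysql.<project>.svc.cluster.local:3306/mydatabase   (fully qualified form)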
Basic Concepts: Logical Perspective

Finally, we need to expose one of the Pods to the outside world.

We do this by creating another Resource called "Route", which gives a valid URL to our Pod.
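As a hedged shortcut (assuming a Service named "appserver" already exists, as in the earlier examples), the oc client can generate such a Route for us:

# oc expose service appserver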
Basic Concepts: Logical Perspective

Our solution is ready to be accessed from a web browser by typing the URL indicated.
Basic Concepts: Mechanics

OpenShift creates another layer on top of several hosts that run some kind of container technology.

In our example, we're using Docker


Container Technology.
Basic Concepts: Mechanics

By adding this new layer, we're going to represent each host as a red box, for simplification purposes.
Basic Concepts: Mechanics

Once each host is ready to run Docker


Containers, it means we're ready to host
Pods in each of those hosts.
Basic Concepts: Mechanics

The set of those hosts is what we're going to call the Cluster.
Basic Concepts: Mechanics

One of those hosts within the Cluster has the "brains": we can ask it where the resources are, how many Pods we can create, and so on.
Basic Concepts: Mechanics

This special host is called: the Master.

We can use 3 different ways to communicate with the Master:

1. Using API calls with a valid, signed certificate.
2. Using a client application available for the major operating systems (named OpenShift Client, or oc).
3. Using a Web Console, which gives us a visual way to handle the Pods.
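As a hedged sketch (the hostname, user and project are the ones used later in this lab), the same query can be made through the oc client or directly against the API:

# oc login https://master.testdrive.com:8443 -u demo
# oc get pods -n myserver
# curl -k -H "Authorization: Bearer $(oc whoami -t)" https://master.testdrive.com:8443/api/v1/namespaces/myserver/pods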
Basic Concepts: Mechanics

Every remaining host available will hold


our Pods.

Those hosts are called:


Nodes
Basic Concepts: Mechanics

Although it is possible to have Pods running on the Master, we want to avoid that in order to give the Master the maximum performance possible.

We don't want anything interfering with the Master's ability to manage the entire cluster.
Basic Concepts: Mechanics

One very important Pod that comes with an OpenShift installation is the Registry.

The Registry is responsible for serving the internal requests for container images needed to get a Pod up and running within the Cluster.
Basic Concepts: Mechanics

Another very important Pod that comes with OpenShift is the Router.

The Router is responsible for handling external requests and forwarding them to a specific Pod.
Basic Concepts: Mechanics

We usually keep both of those important Pods (among others) located on a single host.

Because it hosts Pods of an infrastructure nature, we usually call this host the Infra Node.
Basic Concepts: Mechanics

Now that we've got a fully working


OpenShift, we're ready to create a Pod
called "MyPod".
Basic Concepts: Mechanics

When someone needs to get access to our Pod by typing a URL...
Basic Concepts: Mechanics

This request must first ask a DNS server where this particular resource can be found.
Basic Concepts: Mechanics

So the DNS is going to point to the host where the Router is running.

In our example, the Router Pod is located on the Infra Node.
Basic Concepts: Mechanics

Once the request arrives at our Router, it forwards the request to the appropriate Pod in the cluster.
Basic Concepts: General Architecture

[Architecture diagram] A Routing layer and a Service layer sit on top of the cluster. The Master provides API/authentication, the data store, the scheduler, and health/scaling. Nodes run containers (C) on RHEL, alongside a Registry and persistent storage. SCM (Git), CI/CD and existing automation toolsets integrate with the platform, which runs on Red Hat Enterprise Linux over physical, virtual, private, public or hybrid infrastructure.


Laboratory

OpenShift Installation
TestDrive: OpenShift@Ops
OpenShift Installation
We're going to start by installing the software needed, which is responsible for providing all the necessary packages for the operating system.

Let’s become root


# su -
Password: type: r3dh4t1!
Last login: Tue May 16 16:16:07 EDT 2017 on pts/0

# yum install -y docker device-mapper-libs device-mapper-event-libs bind bind-utils ansible lvm2
OpenShift Installation
Using the credentials given by email, run an Ansible Playbook to set up the OpenShift IPs; it will prompt for your Master and Infra node IPs. Become root first:

# su -
Password: r3dh4t1!

# touch /var/named/dynamic/cloudapps.testdrive.com.db
# ansible-playbook /root/setup_openshift_ips.yaml

OpenShift's Master IP: <YOUR MASTER IP>


OpenShift's InfraNode IP: <YOUR INFRA IP>

# ssh root@master.testdrive.com
The authenticity of host 'xxxxxx' can't be established.
ECDSA key fingerprint is
SHA256:BKX42ou0CXlIj2z/qknkilMW+cP+MxbLjl1bKzUt600.
Are you sure you want to continue connecting (yes/no)? yes
Warning: Permanently added 'xxxxx' (ECDSA) to the list of known hosts.
root@xxxx password: r3dh4t1!
OpenShift Installation
First, the Master host needs to be able to access each server in the cluster through SSH (including itself). For that, we're going to create an SSH key.
# cd .ssh/
# ssh-keygen -t rsa -b 4096
Then press ENTER at every prompt.
Generating public/private rsa key pair.
Enter file in which to save the key (/root/.ssh/id_rsa):
Enter passphrase (empty for no passphrase):
Enter same passphrase again:
Your identification has been saved in /root/.ssh/id_rsa.
Your public key has been saved in /root/.ssh/id_rsa.pub.
The key fingerprint is:
a5:61:e7:22:81:22:46:f8:50:e1:e5:eb:b7:e6:3d:8a root@master.testdrive.com
...
..
.
OpenShift Installation
Now copy the generated public key to the authorized_keys file, so the Master can SSH into itself:
# cat id_rsa.pub >> authorized_keys

And we’re going to copy this file to each host that we’re going to use. Be sure to have your hosts right,
because it’s different for each attendee (in my example, my hosts are named: maltron-master,
maltron-infra and maltron-node):
# for host in <YOUR PREFIX>-infra <YOUR PREFIX>-node; do scp
authorized_keys ${host}:/root/.ssh; done

Because we’re trying to do for the very first time, each host will ask for a password (in our example, it
will be r3dh4t1!)
scp authorized_keys ${host}:/root/.ssh; done
The authenticity of host ' <YOUR PREFIX>-infra (192.168.0.3)' can't be
established.
ECDSA key fingerprint is e3:18:6e:8f:3d:dc:4a:48:22:a4:91:5b:68:19:29:1a.
Are you sure you want to continue connecting (yes/no)? yes
Warning: Permanently added ' <YOUR PREFIX>-infra,192.168.0.3' (ECDSA) to
the list of known hosts.
root@<YOUR PREFIX>-infra's password:
OpenShift Installation
You may test whether everything worked by typing:
# ssh <YOUR PREFIX>-master
The authenticity of host ' <YOUR PREFIX>- master (192.168.0.2)' can't be
established.
ECDSA key fingerprint is e3:18:6e:8f:3d:dc:4a:48:22:a4:91:5b:68:19:29:1a.
Are you sure you want to continue connecting (yes/no)?

and then try again; this time it should connect without prompting:


# ssh <YOUR PREFIX>-master
Last login: Sun May 21 21:01:33 2017 from
<YOUR PREFIX>- master.c.testdrive-de-openshift.internal
OpenShift Installation
Now, try to log into infra’s and node’s host like
# ssh <YOUR PREFIX>-infra
Last login: Sun May 21 17:38:32 2017 from 177.73.70.108
[root@<YOUR PREFIX>- infra ~]# exit
logout
Connection to <YOUR PREFIX>- infra closed.
# ssh <YOUR PREFIX>-node
Last login: Sun May 21 17:38:32 2017 from 177.73.70.108
[root@<YOUR PREFIX>- node ~]# exit
logout
Connection to <YOUR PREFIX>- node closed.
#

Every attempt to access each host must now succeed without asking for any password at all.
OpenShift Installation
Now, it’s important to install some tools in All Nodes, which it’s going to play a big part during the
installation process:
# yum -y install wget git net-tools bind-utils iptables-services
bridge-utils vim bash-completion httpd-tools kexec-tools sos psacct

… and update the system, so everything will be up to date:


# yum -y update --skip-broken
OpenShift Installation
Now, we need to prepare each host to be able to run Docker; for that, we need to SSH into each one of them and set up the Docker storage.
First, let's check on the Master which disks are available to us:
# fdisk -l
Disk /dev/sda: 42.9 GB, 42949672960 bytes, 83886080 sectors
Units = sectors of 1 * 512 = 512 bytes
...
...

Disk /dev/sdb: 21.5 GB, 21474836480 bytes, 41943040 sectors


Units = sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 4096 bytes
I/O size (minimum/optimal): 4096 bytes / 4096 bytes

We’re going to realize that /dev/sdb is available for us to use as partition.


OpenShift Installation
Hence, we’re going to create a new partition on this disk to use as our Docker storage data:
# fdisk /dev/sdb

Welcome to fdisk (util-linux 2.23.2).

Changes will remain in memory only, until you decide to write them.
Be careful before using the write command.

Device does not contain a recognized partition table


Building a new DOS disklabel with disk identifier 0xd1cdf640.

The device presents a logical sector size that is smaller than


the physical sector size. Aligning to a physical sector (or optimal
I/O) size boundary is recommended, or performance may be impacted.

Command (m for help):


OpenShift Installation
We’re going to use the whole partition for docker storage;

Command (m for help): n ENTER


Partition type:
p primary (2 primary, 0 extended, 2 free)
e extended
Select (default p): ENTER
Using default response p
Partition number (1-4, default 1): ENTER
First sector (2048-125829119, default 2048): ENTER
Using default value 2048
Last sector, +sectors or +size{K,M,G} (2048-125829119, default 125829119): ENTER
Using default value 125829119
Partition 1 of type Linux and of size 60 GiB is set

Command (m for help): w


The partition table has been altered!
OpenShift Installation
We’re going to synchronize all disks
# partprobe

Create the physical volume and the volume group needed for our Docker storage:
# pvcreate /dev/sdb1
Physical volume "/dev/sdb1" successfully created.
# vgcreate "docker-vg" /dev/sdb1
Volume group "docker-vg" successfully created

and finally, configure "docker-storage-setup" and run it to get our Docker storage in place:
# echo "VG=docker-vg" >> /etc/sysconfig/docker-storage-setup
# yum -y install docker
# docker-storage-setup
Using default stripesize 64,00 KiB.
Rounding up size to full physical extent 24,00 MiB
Logical volume "docker-pool" created.
Logical volume docker-vg/docker-pool changed.
OpenShift Installation
Finally, we’re going to install docker, start it and enable it:

# systemctl start docker


# systemctl enable docker
Created symlink from
/etc/systemd/system/multi-user.target.wants/docker.service to
/usr/lib/systemd/system/docker.service.
Now, repeat the same tasks for your infra and node hosts.
Repeat from here.
OpenShift Installation
Now, we’re going to install the tool responsible for OpenShift’s installation. This should only be
installed in the Master:
# yum -y install atomic-openshift-utils

Loaded plugins: product-id, search-disabled-repos, subscription-manager
This system is not registered to Red Hat Subscription Management. You can
use subscription-manager to register.
Resolving Dependencies
--> Running transaction check
---> Package atomic-openshift-utils.noarch 0:3.5.71-1.git.0.128c2db.el7
will be installed
--> Processing Dependency: openshift-ansible-playbooks = 3.5.71 for
package: atomic-openshift-utils-3.5.71-1.git.0.128c2db.el7.noarch
--> Processing Dependency: python-click for package:

...
OpenShift Installation
With all tooling in place, we're going to create an Ansible inventory for our installation:

First, we're going to download an example of inventory by typing:


# curl https://raw.githubusercontent.com/latam-tech-office/testdrive-openshift/master/labs/ansible_inventory -o /etc/ansible/hosts
% Total % Received % Xferd Average Speed Time Time Time Current
Dload Upload Total Spent Left Speed
100 1049 100 1049 0 0 5438 0 --:--:-- --:--:-- --:--:-- 5435

Then, we're going to edit the file and replace all <YOUR PREFIX>, <OPENSHIFT MASTER IP> and <OPENSHIFT INFRA IP> marks with the values indicated on the spreadsheet:
# vi /etc/ansible/hosts

Be sure to replace all <YOUR PREFIX> marks
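If you prefer, the replacement can be scripted (a hedged example; the prefix and IPs are the sample values used elsewhere in this guide, so adjust them to your own):

# sed -i 's/<YOUR PREFIX>/maltron/g; s/<OPENSHIFT MASTER IP>/192.168.0.2/g; s/<OPENSHIFT INFRA IP>/192.168.0.3/g' /etc/ansible/hosts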


OpenShift Installation
This is what an inventory looks like when you're editing:
...
[OSEv3:children]
masters
nodes
etcd

[masters]
<YOUR PREFIX>-master openshift_public_hostname=<OPENSHIFT MASTER IP>.xip.io

[nodes]
<YOUR PREFIX>-master openshift_public_hostname=master.testdrive.com
<YOUR PREFIX>-infra openshift_node_labels="{'host': 'infra'}"
<YOUR PREFIX>-node openshift_node_labels="{'host': 'apps'}"

[etcd]
<YOUR PREFIX>-master

#### default subdomain to use for exposed routes
openshift_master_default_subdomain=<OPENSHIFT INFRA IP>.xip.io
openshift_metrics_hawkular_hostname=hawkular-metrics.<OPENSHIFT INFRA IP>.xip.io
...
OpenShift Installation
We're going to run a playbook to perform all necessary check-ups

# ansible-playbook
/usr/share/ansible/openshift-ansible/playbooks/prerequisites.yml

PLAY [Initialization Checkpoint Start]


**************************************************************************
*
...

PLAY RECAP ****************************************************************
localhost  : ok=11  changed=0   unreachable=0  failed=0
one-infra  : ok=63  changed=17  unreachable=0  failed=0
one-master : ok=74  changed=18  unreachable=0  failed=0
one-node1  : ok=63  changed=17  unreachable=0  failed=0
OpenShift Installation
Finally, in order to start the installation, we're going to run an Ansible playbook:

# ansible-playbook
/usr/share/ansible/openshift-ansible/playbooks/deploy_cluster.yml

PLAY [Initialization Checkpoint Start]


**************************************************************************
*

TASK [Set install initialization 'In Progress']


**************************************************************************
*
ok: [one-master]

...

That should take, on average, 40 minutes to complete.


OpenShift Installation
After the installation is done, you should see the following message:

INSTALLER STATUS
*********************************************************************************
Health Check : Complete (0:00:34)
etcd Install : Complete (0:01:11)
Master Install : Complete (0:02:44)
Master Additional Install : Complete (0:01:29)
Node Install : Complete (0:07:26)
Hosted Install : Complete (0:01:51)
Web Console Install : Complete (0:00:57)
Metrics Install : Complete (0:01:58)
Service Catalog Install : Complete (0:02:33)
OpenShift Installation
Use the OC (OpenShift’s Client) command and check if the nodes are Ready. It should be look like
this:
# oc get nodes
NAME STATUS AGE
<YOUR PREFIX>-infra Ready 32m
<YOUR PREFIX>-master Ready,SchedulingDisabled 32m
<YOUR PREFIX>-node Ready 32m

and we can check if all the necessary pods are up and running:
# oc get pods --all-namespaces
NAMESPACE NAME READY STATUS RESTARTS AGE
default docker-registry-1-v4g2g 1/1 Running 0 4m
default registry-console-1-vpmhq 1/1 Running 0 3m
default router-1-4g7tz 1/1 Running 0 6m
openshift-infra hawkular-cassandra-1-6vc7k 0/1 Running 0 1m
openshift-infra hawkular-metrics-f72n8 0/1 Running 0 1m
openshift-infra heapster-5ppdh 0/1 Running 0 1m
OpenShift Installation
Now, we need to be able to log into the cluster. First, we need to create a file where all the passwords will be stored. Let's create the file with a user named "demo":
# htpasswd -c /etc/origin/master/openshift-passwords demo
New password: type: r3dh4t1!
Re-type new password: type: r3dh4t1!
Adding password for user demo
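Note that the -c flag creates the file. If you later want to add more users to the same file, a hedged example would be (the username is a placeholder):

# htpasswd /etc/origin/master/openshift-passwords anotheruser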
OpenShift Installation
Now, edit the file /etc/origin/master/master-config.yaml and change this section:
...
    login: true
    mappingMethod: claim
    name: deny_all
    provider:
      apiVersion: v1
      kind: DenyAllPasswordIdentityProvider
  masterCA: ca-bundle.crt
  masterPublicURL: https://master.testdrive.com:8443
  masterURL: https://maltron-master:8443
...
OpenShift Installation
For this part, which we’re telling the Master where the authentication should happen:
...
    login: true
    mappingMethod: claim
    name: htpasswd
    provider:
      apiVersion: v1
      kind: HTPasswdPasswordIdentityProvider
      file: /etc/origin/master/openshift-passwords
  masterCA: ca-bundle.crt
  masterPublicURL: https://master.testdrive.com:8443
  masterURL: https://maltron-master:8443
...
OpenShift Installation
After that, restart the Master
# systemctl restart atomic-openshift-master-api atomic-openshift-master-controllers

and try to log in from your web browser using the following URL:


https://master.testdrive.com:8443
OpenShift Installation

Because you’re accessing for the very first time


OpenShift’s WebConsole, which works in a secure
endpoint https, you need to accept the certificate:
OpenShift Installation

Be sure to mark the address “master.testdrive.com” as trusted:
OpenShift Installation

Now, you can get access to OpenShift’s


WebConsole:
First Steps
TestDrive: OpenShift@Ops
First Steps
Let's create a new project to get started:

# oc login
and enter user demo, password r3dh4t1!. You can keep the Web Console page and a terminal window open side by side to see how the changes are made.

# oc new-project <userid>-myserver
Now using project "myserver" on server
"https://master.testdrive.com:8443".

You can add applications to this project with the 'new-app' command. For
example, try:

oc new-app
centos/ruby-22-centos7~https://github.com/openshift/ruby-ex.git

to build a new example application in Ruby.


First Steps
In our first project, we're going to create our first Pod.
# oc create -f - <<EOF
apiVersion: v1
kind: Pod
metadata:
  name: appserver
spec:
  containers:
  - image: jboss/wildfly:latest
    imagePullPolicy: IfNotPresent
    name: appserver
    ports:
    - containerPort: 8080
      protocol: TCP
  dnsPolicy: ClusterFirst
  restartPolicy: Always
EOF
pod "appserver" created
First Steps
You can type this command to see the available Pods:
# oc get pods
NAME READY STATUS RESTARTS AGE
appserver 1/1 Running 0 1m

Or you can watch how that evolves by typing:


# oc get pods --watch
NAME READY STATUS RESTARTS AGE
appserver 0/1 ContainerCreating 0 1s
NAME READY STATUS RESTARTS AGE
appserver 1/1 Running 0 45s
First Steps
You can also see how each Pod developed over time by looking at its events:
# oc get events
LASTSEEN FIRSTSEEN COUNT NAME KIND SUBOBJECT TYPE REASON
SOURCE MESSAGE
1m 1m 1 appserver Pod Normal Scheduled
{default-scheduler } Successfully assigned appserver to testdrive-dev-node2
1m 1m 1 appserver Pod spec.containers{appserver} Normal Pulling
{kubelet testdrive-dev-node2} pulling image "jboss/wildfly:latest"
1m 1m 1 appserver Pod spec.containers{appserver} Normal Pulled
{kubelet testdrive-dev-node2} Successfully pulled image "jboss/wildfly:latest"
1m 1m 1 appserver Pod spec.containers{appserver} Normal Created
{kubelet testdrive-dev-node2} Created container with docker id d60e1a6f38b9; Security:[seccomp=unconfined]
1m 1m 1 appserver Pod spec.containers{appserver} Normal Started
{kubelet testdrive-dev-node2} Started container with docker id d60e1a6f38b9
First Steps
And once the Pod is actually running, we can see the logs.
# oc logs appserver
=========================================================================
JBoss Bootstrap Environment
JBOSS_HOME: /opt/jboss/wildfly
JAVA: /usr/lib/jvm/java/bin/java

...

23:11:08,185 INFO [org.jboss.as] (Controller Boot Thread) WFLYSRV0025:


WildFly Full 10.1.0.Final (WildFly Core 2.2.0.Final) started in 11375ms -
Started 331 of 577 services (393 services are lazy, passive or on-demand)
First Steps
So, what happens if I want to know which server is running my Pod? I can use "oc describe":
# oc describe pod/appserver
Name: appserver
Namespace: myserver
Security Policy: anyuid
Node: maltron-node/192.168.0.8
Start Time: Sun, 04 Jun 2017 19:51:00 -0400
Labels: <none>
Status: Running
IP: 10.130.0.7
Controllers: <none>
Containers:
appserver:
Container ID: docker://6c8aa8234853fc47125dee29bf823734a3ff1c9b5f384a631a61be1b61f54633
Image: jboss/wildfly:latest
Image ID:
docker-pullable://docker.io/jboss/wildfly@sha256:56d1c87ae563289c6f2260915b1adec14c5288d7a14be7b375000070c3c
b0728
Port: 8080/TCP
State: Running
Started: Sun, 04 Jun 2017 19:51:44 -0400
Ready: True

...
First Steps
So, let's see our Pods available
# oc get pods
NAME READY STATUS RESTARTS AGE
appserver 1/1 Running 0 52m

...and then, try to delete our running Pod.


# oc delete pod/appserver
pod "appserver" deleted

...by checking again, you will see there are no more Pods available.
# oc get pods
No resources found.
First Steps
Now, instead of using a Pod Resource, we're going to create a DeploymentConfig instead.
# oc create -f - <<EOF
apiVersion: v1
kind: DeploymentConfig
metadata:
  name: appserver
  labels:
    name: appserver
spec:
  replicas: 1
  selector:
    app: appserver
    deploymentconfig: appserver
  template:
    metadata:
      labels:
        app: appserver
        deploymentconfig: appserver
    spec:
      containers:
      - image: jboss/wildfly:latest
        name: appserver
        ports:
        - containerPort: 8080
          protocol: TCP
EOF
deploymentconfig "appserver" created
First Steps
Let's see how this particular Resource develops into a Pod
# oc get pods -w
NAME READY STATUS RESTARTS AGE
appserver-1-deploy 0/1 ContainerCreating 0 4s
NAME READY STATUS RESTARTS AGE
appserver-1-deploy 1/1 Running 0 4s
appserver-1-821ml 0/1 Pending 0 0s
appserver-1-821ml 0/1 Pending 0 0s
appserver-1-821ml 0/1 ContainerCreating 0 0s
appserver-1-821ml 1/1 Running 0 5s
appserver-1-deploy 0/1 Completed 0 10s
appserver-1-deploy 0/1 Terminating 0 10s
appserver-1-deploy 0/1 Terminating 0 10s
First Steps
And now that we've got our first DeploymentConfig (or DC) available, we can list it
# oc get dc
NAME REVISION DESIRED CURRENT TRIGGERED BY
appserver 1 1 1 config
First Steps
Like the previous example, let's list our pods
# oc get pods
NAME READY STATUS RESTARTS AGE
appserver-1-x4njm 1/1 Running 0 16s

...and then, try to delete it


# oc delete pod/appserver-1-x4njm
pod "appserver-1-x4njm" deleted

Even though the Pod was deleted, see what happens next.
# oc get pods -w
NAME READY STATUS RESTARTS AGE
appserver-1-05p72 0/1 ContainerCreating 0 1s
appserver-1-x4njm 1/1 Terminating 0 31s
NAME READY STATUS RESTARTS AGE
appserver-1-x4njm 0/1 Terminating 0 32s
appserver-1-x4njm 0/1 Terminating 0 32s
appserver-1-x4njm 0/1 Terminating 0 32s
appserver-1-05p72 1/1 Running 0 7s
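The DeploymentConfig keeps the desired number of replicas running. As a hedged illustration, that number can be changed with oc scale:

# oc scale dc/appserver --replicas=2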
First Steps
Next, we're going to create a Service Resource
# oc create -f - <<EOF
apiVersion: v1
kind: Service
metadata:
  labels:
    name: appserver
  name: appserver
spec:
  ports:
  - port: 8080
    protocol: TCP
    targetPort: 8080
  selector:
    app: appserver
    deploymentconfig: appserver
  sessionAffinity: None
  type: ClusterIP
EOF
service "appserver" created
First Steps
As with the DC, we can list all Service Resources available:
# oc get services
NAME CLUSTER-IP EXTERNAL-IP PORT(S) AGE
appserver 172.30.45.14 <none> 8080/TCP 1m
First Steps
And finally, we're going to create a Route that allows us to access this Pod externally
# oc create -f - <<EOF
apiVersion: v1
kind: Route
metadata:
  labels:
    name: appserver
  name: appserver
spec:
  host: myserver-<iduser>.<change-ip-infra>.xip.io
  port:
    targetPort: 8080
  to:
    kind: Service
    name: appserver
    weight: 100
EOF
route "appserver" created
First Steps
Next, we make sure the Route is available for us:
# oc get routes
NAME HOST/PORT PATH SERVICES PORT TERMINATION WILDCARD
appserver myserver.cloudapps.testdrive.com appserver 8080 None
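Before opening the browser, we can also check the Route from the terminal (a hedged example; use the host shown by "oc get routes" for your own route):

# curl -I http://myserver.cloudapps.testdrive.com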
First Steps

By opening our web browser and typing the URL created in the Route, we should see this:
Extra Resources
Study environment: https://learn.openshift.com

Events / downloadable material page: https://developers.redhat.com/

Microservices book:
https://developers.redhat.com/promotions/building-reactive-microservices-in-java/

Reactive applications with microservices: bit.ly/reactivemicroservicesbook

Deploying to OpenShift: https://www.openshift.com/deploying-to-openshift/
