1. A Pod is an abstraction of a Container.
2. A Pod has resources associated with it.
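To make the first point concrete, here is a minimal sketch of a bare Pod manifest (the appserver name and jboss/wildfly image are borrowed from this lab's later examples):

apiVersion: v1
kind: Pod
metadata:
  name: appserver
spec:
  containers:
  - name: appserver
    image: jboss/wildfly:latest
    ports:
    - containerPort: 8080
      protocol: TCP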
Basic Concepts
[Diagram: both the web console and the oc client (e.g. # oc get pods, or a URL such as http://myserver2.cloudapps.testdrive.com) talk to the API, where every request is a verb applied to a resource.]
Rule = Verb + Resource
Role = Rule1 + Rule2 + Rule3 + ...
A User or a Group is associated with one or more Roles through a BINDING.
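As a sketch of how these concepts look in practice (the pod-reader Role name is hypothetical, not part of this lab), here is a Role made of a single Rule allowing the get and list verbs on the pods resource:

apiVersion: v1
kind: Role
metadata:
  name: pod-reader
rules:
- verbs: ["get", "list"]
  resources: ["pods"]

A BINDING is then created with a single command, e.g.:

# oc adm policy add-role-to-user pod-reader demo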
Basic Concepts
[Diagram: OpenShift architecture. A routing layer and a service layer sit above containers (C) running on RHEL nodes; supporting services include CI/CD, a data store, health/scaling, and integration with existing automation toolsets, all on Red Hat Enterprise Linux.]
OpenShift Installation
TestDrive: OpenShift@Ops
OpenShift Installation
We're going to start by installing the needed software, which is responsible for providing all the necessary
packages for the Operating System.
# touch /var/named/dynamic/cloudapps.testdrive.com.db
# ansible-playbook /root/setup_openshift_ips.yaml
# ssh root@master.testdrive.com
The authenticity of host 'xxxxxx' can't be established.
ECDSA key fingerprint is
SHA256:BKX42ou0CXlIj2z/qknkilMW+cP+MxbLjl1bKzUt600.
Are you sure you want to continue connecting (yes/no)? yes
Warning: Permanently added 'xxxxx' (ECDSA) to the list of known hosts.
root@xxxx password: r3dh4t1!
OpenShift Installation
First, the Master host needs to be able to access each server in the cluster through SSH (including
itself). For that, we're going to create an SSH key.
# cd .ssh/
# ssh-keygen -t rsa -b 4096
Then press ENTER at every prompt.
Generating public/private rsa key pair.
Enter file in which to save the key (/root/.ssh/id_rsa):
Enter passphrase (empty for no passphrase):
Enter same passphrase again:
Your identification has been saved in /root/.ssh/id_rsa.
Your public key has been saved in /root/.ssh/id_rsa.pub.
The key fingerprint is:
a5:61:e7:22:81:22:46:f8:50:e1:e5:eb:b7:e6:3d:8a root@master.testdrive.com
...
OpenShift Installation
Now copy the generated public key to the authorized_keys file, so the Master can SSH into itself:
# cat id_rsa.pub >> authorized_keys
And we're going to copy this file to each host that we're going to use. Be sure to get your hostnames right,
because they're different for each attendee (in my example, my hosts are named maltron-master,
maltron-infra and maltron-node):
# for host in <YOUR PREFIX>-infra <YOUR PREFIX>-node; do \
    scp authorized_keys ${host}:/root/.ssh; done
Because we're connecting for the very first time, each host will ask for a password (in our example, it
will be r3dh4t1!):
The authenticity of host '<YOUR PREFIX>-infra (192.168.0.3)' can't be established.
ECDSA key fingerprint is e3:18:6e:8f:3d:dc:4a:48:22:a4:91:5b:68:19:29:1a.
Are you sure you want to continue connecting (yes/no)? yes
Warning: Permanently added '<YOUR PREFIX>-infra,192.168.0.3' (ECDSA) to the list of known hosts.
root@<YOUR PREFIX>-infra's password:
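As an aside, ssh-copy-id achieves the same result in one step; a sketch, assuming the default key location:

# for host in <YOUR PREFIX>-infra <YOUR PREFIX>-node; do ssh-copy-id root@${host}; done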
OpenShift Installation
You may test that everything worked by typing:
# ssh <YOUR PREFIX>-master
The authenticity of host '<YOUR PREFIX>-master (192.168.0.2)' can't be established.
ECDSA key fingerprint is e3:18:6e:8f:3d:dc:4a:48:22:a4:91:5b:68:19:29:1a.
Are you sure you want to continue connecting (yes/no)?
Every single attempt to access each host must happen without asking for any password at all.
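One way to verify all hosts at once (a sketch reusing the same host list):

# for host in <YOUR PREFIX>-master <YOUR PREFIX>-infra <YOUR PREFIX>-node; do ssh ${host} hostname; done

If the key setup is correct, this prints each hostname without ever pausing for a password.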
OpenShift Installation
Now, it's important to install some tools on all nodes; they're going to play a big part during the
installation process:
# yum -y install wget git net-tools bind-utils iptables-services
bridge-utils vim bash-completion httpd-tools kexec-tools sos psacct
Next we need a partition on the extra disk for Docker storage. The fdisk session is elided on the slide;
its warning is worth keeping in mind:
Changes will remain in memory only, until you decide to write them.
Be careful before using the write command.
With the partition created, make the physical volume and volume group needed for our Docker storage:
# pvcreate /dev/sdb1
Physical volume "/dev/sdb1" successfully created.
# vgcreate "docker-vg" /dev/sdb1
Volume group "docker-vg" successfully created
and finally, configure the "docker-storage-setup" script and run it to get our Docker
storage in place:
# echo "VG=docker-vg" >> /etc/sysconfig/docker-storage-setup
# yum -y install docker
# docker-storage-setup
Using default stripesize 64,00 KiB.
Rounding up size to full physical extent 24,00 MiB
Logical volume "docker-pool" created.
Logical volume docker-vg/docker-pool changed.
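To confirm the thin pool is in place, you can list the logical volumes with standard LVM tooling:

# lvs docker-vg

The docker-pool volume shown there is what Docker will use for container storage.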
OpenShift Installation
Finally, with Docker already installed, we're going to start it and enable it:
...
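The exact commands are elided above; a minimal sketch using standard systemd commands would be:

# systemctl enable docker
# systemctl start docker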
OpenShift Installation
With all tooling in place, we're going to create an Ansible inventory for our installation.
Then, we're going to edit it and replace all <YOUR PREFIX>, <OPENSHIFT MASTER IP> and
<OPENSHIFT INFRA IP> placeholders with the values indicated on the spreadsheet:
# vi /etc/ansible/hosts
[masters]
<YOUR PREFIX>-master openshift_public_hostname=<OPENSHIFT MASTER IP>.xip.io
[nodes]
<YOUR PREFIX>-master openshift_public_hostname=master.testdrive.com
<YOUR PREFIX>-infra openshift_node_labels="{'host': 'infra'}"
<YOUR PREFIX>-node openshift_node_labels="{'host': 'apps'}"
[etcd]
<YOUR PREFIX>-master
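(A complete openshift-ansible inventory also declares the OSEv3 group and its variables, which the slide omits. A minimal sketch of that missing part; the deployment type here is an assumption:)

[OSEv3:children]
masters
nodes
etcd

[OSEv3:vars]
ansible_ssh_user=root
openshift_deployment_type=openshift-enterprise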
# ansible-playbook
/usr/share/ansible/openshift-ansible/playbooks/prerequisites.yml
PLAY RECAP ****************************************************************
localhost  : ok=11  changed=0   unreachable=0   failed=0
one-infra  : ok=63  changed=17  unreachable=0   failed=0
one-master : ok=74  changed=18  unreachable=0   failed=0
one-node1  : ok=63  changed=17  unreachable=0   failed=0
OpenShift Installation
Finally, in order to start the installation, we're going to run the main Ansible playbook:
# ansible-playbook
/usr/share/ansible/openshift-ansible/playbooks/deploy_cluster.yml
...
INSTALLER STATUS
*********************************************************************************
Health Check : Complete (0:00:34)
etcd Install : Complete (0:01:11)
Master Install : Complete (0:02:44)
Master Additional Install : Complete (0:01:29)
Node Install : Complete (0:07:26)
Hosted Install : Complete (0:01:51)
Web Console Install : Complete (0:00:57)
Metrics Install : Complete (0:01:58)
Service Catalog Install : Complete (0:02:33)
OpenShift Installation
Use the oc (OpenShift Client) command to check whether the nodes are Ready. It should look like
this:
# oc get nodes
NAME STATUS AGE
<YOUR PREFIX>-infra Ready 32m
<YOUR PREFIX>-master Ready,SchedulingDisabled 32m
<YOUR PREFIX>-node Ready 32m
and we can check if all the necessary pods are up and running:
# oc get pods --all-namespaces
NAMESPACE NAME READY STATUS RESTARTS AGE
default docker-registry-1-v4g2g 1/1 Running 0 4m
default registry-console-1-vpmhq 1/1 Running 0 3m
default router-1-4g7tz 1/1 Running 0 6m
openshift-infra hawkular-cassandra-1-6vc7k 0/1 Running 0 1m
openshift-infra hawkular-metrics-f72n8 0/1 Running 0 1m
openshift-infra heapster-5ppdh 0/1 Running 0 1m
OpenShift Installation
Now, we need to be able to log in to the cluster. First, we need to create a file where all the passwords
will be stored. Let's create the file with a user named "demo":
# htpasswd -c /etc/origin/master/openshift-passwords demo
New password: (type r3dh4t1!)
Re-type new password: (type r3dh4t1!)
Adding password for user demo
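Note that the -c flag creates (or overwrites) the file; when adding further users later, omit it:

# htpasswd /etc/origin/master/openshift-passwords <another user>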
OpenShift Installation
Now, edit the file /etc/origin/master/master-config.yaml and change this section:
...
  login: true
  mappingMethod: claim
  name: deny_all
  provider:
    apiVersion: v1
    kind: DenyAllPasswordIdentityProvider
masterCA: ca-bundle.crt
masterPublicURL: https://master.testdrive.com:8443
masterURL: https://maltron-master:8443
...
OpenShift Installation
Change it to the following, which tells the Master where the authentication should happen:
...
  login: true
  mappingMethod: claim
  name: htpasswd
  provider:
    apiVersion: v1
    kind: HTPasswdPasswordIdentityProvider
    file: /etc/origin/master/openshift-passwords
masterCA: ca-bundle.crt
masterPublicURL: https://master.testdrive.com:8443
masterURL: https://maltron-master:8443
...
OpenShift Installation
After that, restart the Master:
# systemctl restart atomic-openshift-master-api atomic-openshift-master-controllers
# oc login
and enter user demo, password r3dh4t1!. You can keep both the web console page and a
terminal window open to see how the changes are made.
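For reference, the same login can be done non-interactively (a sketch, using the master URL from the configuration above):

# oc login -u demo -p 'r3dh4t1!' https://master.testdrive.com:8443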
# oc new-project <userid>-myserver
Now using project "<userid>-myserver" on server "https://master.testdrive.com:8443".
You can add applications to this project with the 'new-app' command. For
example, try:
oc new-app
centos/ruby-22-centos7~https://github.com/openshift/ruby-ex.git
...
First Steps
So, let's see the Pods available:
# oc get pods
NAME READY STATUS RESTARTS AGE
appserver 1/1 Running 0 52m
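The deletion step itself is elided between the two listings; a bare Pod has no controller behind it, so deleting it removes it for good:

# oc delete pod appserver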
...by checking again, you will see there aren't any more Pods available:
# oc get pods
No resources found.
First Steps
Now, instead of using a bare Pod Resource, we're going to create a DeploymentConfig.
# oc create -f - <<EOF
apiVersion: v1
kind: DeploymentConfig
metadata:
  name: appserver
  labels:
    name: appserver
spec:
  replicas: 1
  selector:
    app: appserver
    deploymentconfig: appserver
  template:
    metadata:
      labels:
        app: appserver
        deploymentconfig: appserver
    spec:
      containers:
      - image: jboss/wildfly:latest
        name: appserver
        ports:
        - containerPort: 8080
          protocol: TCP
EOF
deploymentconfig "appserver" created
First Steps
Let's see how this particular Resource develops into a Pod. Note the appserver-1-deploy Pod below: OpenShift runs it to carry out the rollout, and it terminates once the application Pod is up.
# oc get pods -w
NAME READY STATUS RESTARTS AGE
appserver-1-deploy 0/1 ContainerCreating 0 4s
NAME READY STATUS RESTARTS AGE
appserver-1-deploy 1/1 Running 0 4s
appserver-1-821ml 0/1 Pending 0 0s
appserver-1-821ml 0/1 Pending 0 0s
appserver-1-821ml 0/1 ContainerCreating 0 0s
appserver-1-821ml 1/1 Running 0 5s
appserver-1-deploy 0/1 Completed 0 10s
appserver-1-deploy 0/1 Terminating 0 10s
appserver-1-deploy 0/1 Terminating 0 10s
First Steps
And now that we've got our first DeploymentConfig (or DC) available, we can list it
# oc get dc
NAME REVISION DESIRED CURRENT TRIGGERED BY
appserver 1 1 1 config
First Steps
Like the previous example, let's list our pods
# oc get pods
NAME READY STATUS RESTARTS AGE
appserver-1-x4njm 1/1 Running 0 16s
Now delete the Pod again and watch what happens next.
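The delete command itself is elided on the slide; using the Pod name from the listing above, it would be:

# oc delete pod appserver-1-x4njm

Because the DeploymentConfig keeps the desired replica count at 1, a replacement Pod appears immediately: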
# oc get pods -w
NAME READY STATUS RESTARTS AGE
appserver-1-05p72 0/1 ContainerCreating 0 1s
appserver-1-x4njm 1/1 Terminating 0 31s
NAME READY STATUS RESTARTS AGE
appserver-1-x4njm 0/1 Terminating 0 32s
appserver-1-x4njm 0/1 Terminating 0 32s
appserver-1-x4njm 0/1 Terminating 0 32s
appserver-1-05p72 1/1 Running 0 7s
First Steps
Next, we're going to create a Service Resource
# oc create -f - <<EOF
apiVersion: v1
kind: Service
metadata:
  labels:
    name: appserver
  name: appserver
spec:
  ports:
  - port: 8080
    protocol: TCP
    targetPort: 8080
  selector:
    app: appserver
    deploymentconfig: appserver
  sessionAffinity: None
  type: ClusterIP
EOF
service "appserver" created
First Steps
Like DCs, we can always list all the Service Resources available:
# oc get services
NAME CLUSTER-IP EXTERNAL-IP PORT(S) AGE
appserver 172.30.45.14 <none> 8080/TCP 1m
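Since the type is ClusterIP, the Service is only reachable from inside the cluster. A quick sanity check from the Master, using the CLUSTER-IP shown above:

# curl -I http://172.30.45.14:8080

WildFly's default welcome page should answer on port 8080.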
First Steps
And finally, we're going to create a Route that allows us to access this Pod externally
# oc create -f - <<EOF
apiVersion: v1
kind: Route
metadata:
  labels:
    name: appserver
  name: appserver
spec:
  host: myserver-<iduser>.<change-ip-infra>.xip.io
  port:
    targetPort: 8080
  to:
    kind: Service
    name: appserver
    weight: 100
EOF
route "appserver" created
First Steps
Next, we make sure the route is available for us
# oc get routes
NAME HOST/PORT PATH SERVICES PORT TERMINATION WILDCARD
appserver myserver.cloudapps.testdrive.com appserver 8080 None
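You can then test external access from any machine that resolves the route's host; a sketch, using the host shown in the listing:

# curl -I http://myserver.cloudapps.testdrive.com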
First Steps
Microservices book:
https://developers.redhat.com/promotions/building-reactive-microservices-in-java/