
1. Install and configure Docker

Install and configure Docker on your master and node systems according to
the following conditions:
Docker uses the pre-existing volume group docker-vg as a backing volume.
Docker points to the private registry at workstation.lab.example.com:5000
Docker must NOT use the public registries docker.io and
registry.access.redhat.com
Docker uses the certificate available on workstation.lab.example.com at
/etc/pki/tls/certs/example.com.crt
Add the certificate as trusted
To test whether your Docker configuration is correct you may try to
run: docker pull openshift/hello-openshift

##########################################################################
### on master, do the following
ssh-keygen -f .ssh/id_rsa -N ''
ssh-copy-id root@master
ssh-copy-id root@node

### on master and node, do the following


yum update -y
systemctl start NetworkManager ; systemctl enable NetworkManager
systemctl stop firewalld ; systemctl disable firewalld
yum install docker -y
systemctl start docker ; systemctl enable docker
#docker info ; pvs ; vgs
cat >> /etc/sysconfig/docker-storage-setup <<EOF
# in the practice lab there is no pre-existing VG, so list DEVS and let
# docker-storage-setup create docker-vg
DEVS=/dev/vdb
VG=docker-vg
SETUP_LVM_THIN_POOL=yes
EOF

systemctl stop docker


rm -rf /var/lib/docker/*
docker-storage-setup
#docker info
vim /etc/sysconfig/docker
ADD_REGISTRY='--add-registry workstation.lab.example.com:5000'
BLOCK_REGISTRY='--block-registry docker.io --block-registry registry.access.redhat.com'

systemctl restart docker


yum install -y ca-certificates
update-ca-trust enable
scp root@workstation.lab.example.com:/etc/pki/tls/certs/example.com.crt /etc/pki/ca-trust/source/anchors/
update-ca-trust extract
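
# a quick sanity check (the test the task itself suggests): with the
# registry, block list, and CA in place, a plain pull should come from the
# local registry
docker pull openshift/hello-openshift
docker info | grep -i registr   # confirm the add/block registry settings took effect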

#on master
#docker-registry-cli workstation.lab.example.com:5000 search openshift3 ssl
#docker pull openshift/hello-openshift
# build an image:tag list from the local registry and pull everything
docker-registry-cli workstation.lab.example.com:5000 search openshift ssl \
  | awk '$2~"Name" || $1~"Tag"{print $NF}' | xargs -n 2 | sed 's/ /:/g' \
  > /root/images
while read image; do docker pull "$image"; done < /root/images
scp /root/images root@node:/root/images

#on node
while read image; do docker pull "$image"; done < /root/images
##########################################################################

2. Install OpenShift Enterprise

Install OpenShift Enterprise (OpenShift Container Platform) on master using
the official installer and configure the systems as follows:
########################
System                  Role
master.lab.example.com  MASTER
node.lab.example.com    NODE
########################
The installation will be RPM based.
The applications subdomain is set to cloudapps.lab.example.com
The installer uses the root user for ssh access (Note: ssh keys have
already been set up).

##########################################################################
#on master and node
yum -y install wget git net-tools bind-utils iptables-services bridge-utils \
  atomic-openshift-docker-excluder atomic-openshift-excluder atomic-openshift-utils
atomic-openshift-excluder unexclude
cp /etc/sysconfig/docker{,.back}

#on master
atomic-openshift-installer install

Are you ready to continue[y/n]: y


User for ssh access[root]: root
(1) Openshift Container Platform
(2) Registry
Choose a variant from above: [1]: Enter
Enter hostname or IP address: master.lab.example.com
Will this host be an openshift master?[y/N]: y
Will this host be RPM or Container based? [rpm]: Enter
Do you want to add additional hosts? [y/N]: y
Enter hostname or IP address: node.lab.example.com
Will this host be an openshift master?[y/N]: N
Will this host be RPM or Container based? [rpm]: Enter
Do you want to add additional hosts? [y/N]: N
New default subdomain[]: cloudapps.lab.example.com // in the real exam, enter whatever the task specifies
>Specify your http proxy? Enter
>Specify your https proxy? Enter
Do the above facts look correct?[y/N]: y
Are you ready to continue[y/n]: y

systemctl status atomic-openshift-master.service


#on master and node
systemctl status atomic-openshift-node.service
oc get nodes // check that the master and node are both in Ready status

#on master and node


cp /etc/sysconfig/docker{.back,}
systemctl restart docker
atomic-openshift-excluder exclude
##########################################################################

3. Configure OpenShift Enterprise

Once the Master and the Node(s) have been installed, proceed to the
configuration of your OpenShift instance by performing the following
operations.
Edit the default OpenShift image streams and replace every entry of
registry.access.redhat.com with workstation.lab.example.com:5000
*The old exam also had:
*Deploy a registry using the image openshift3/ose-${component}:${version}
*Deploy a router using the image openshift3/ose-${component}:${version}
*Use default naming for both the registry and the router.

##########################################################################
## on master
#oc get pods
oc edit dc registry-console
# change the line
#   image: registry.access.redhat.com/openshift3/registry-console:3.3
# to
#   image: workstation.lab.example.com:5000/openshift3/registry-console:3.3
oc edit is -n openshift
# change every registry.access.redhat.com to workstation.lab.example.com:5000;
# inside the editor a single vi substitution does it:
:%s/registry.access.redhat.com/workstation.lab.example.com:5000/g
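
# a non-interactive alternative (a sketch; back up first with
# "oc get is -n openshift -o yaml > is-backup.yaml"):
oc get is -n openshift -o yaml \
  | sed 's/registry.access.redhat.com/workstation.lab.example.com:5000/g' \
  | oc replace -n openshift -f -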

##########################################################################

4. Configure OpenShift authentication

Configure authentication on your OpenShift instance so that:
The identity provider is set to HTPasswdPasswordIdentityProvider
The user randy exists with password boaterch
The user ryan exists with password boaterch
Both users must be able to authenticate to the OpenShift instance via the
CLI and on the web console at https://master.lab.example.com:8443
No other user shall be able to log in

##########################################################################
##on master
yum install -y httpd-tools
vim /etc/origin/master/master-config.yaml
# in the identityProviders section, set the provider to:
#   provider:
#     apiVersion: v1
#     kind: HTPasswdPasswordIdentityProvider
#     file: /etc/origin/openshift.passwd

htpasswd -c /etc/origin/openshift.passwd randy   # -c creates the file
htpasswd /etc/origin/openshift.passwd ryan
systemctl restart atomic-openshift-master
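
# verify: both users can log in via the CLI
oc login -u randy -p boaterch https://master.lab.example.com:8443 && oc whoami
oc login -u ryan -p boaterch https://master.lab.example.com:8443 && oc whoami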
##########################################################################

5. Configure persistent storage

Configure persistent NFS storage on workstation.lab.example.com in the
following way:
*Apply all the latest updates on workstation.lab.example.com
Create and share /OSE_mysql
Create and share /OSE_wordpress
Create and share /OSE_registry
All the shares must be available to anyone in the subnet
172.25.250.0/255.255.255.0
Associate the share named /OSE_registry to the registry running within
your OpenShift Enterprise instance so that it is used, instead of the
default one, for permanent storage. Use exam-registry-volume for the
volume name and exam-registry-claim for the claim name.

##########################################################################
## on workstation
yum -y update
mkdir /{OSE_mysql,OSE_wordpress,OSE_registry}
chmod 777 /{OSE_mysql,OSE_wordpress,OSE_registry}
chown nfsnobody:nfsnobody /{OSE_mysql,OSE_wordpress,OSE_registry}
yum install -y nfs-utils rpcbind
cat >> /etc/exports << EOF
/OSE_mysql 172.25.250.0/24(rw,all_squash,async)
/OSE_wordpress 172.25.250.0/24(rw,all_squash,async)
/OSE_registry 172.25.250.0/24(rw,all_squash,async)
EOF
systemctl restart nfs-server ; systemctl enable nfs-server
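
# verify from master (optional); all three shares should be listed:
showmount -e workstation.lab.example.com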

## on master
oc project default
# JSON does not allow comments, so the annotations from the original notes
# are summarized here instead:
# - the PV name must be exam-registry-volume (avoid clashing with later tasks)
# - 2Gi is not mandated by the task; size it to the storage host
# - ReadWriteMany lets every host read and write the volume
cat > OSE_registry-volume.json << EOF
{
  "apiVersion": "v1",
  "kind": "PersistentVolume",
  "metadata": {
    "name": "exam-registry-volume",
    "labels": {
      "deploymentconfig": "docker-registry"
    }
  },
  "spec": {
    "capacity": {
      "storage": "2Gi"
    },
    "accessModes": [ "ReadWriteMany" ],
    "nfs": {
      "path": "/OSE_registry",
      "server": "workstation.lab.example.com"
    }
  }
}
EOF

oc create -f OSE_registry-volume.json ; oc get pv

cat > OSE_registry-pvclaim.json << EOF
{
  "apiVersion": "v1",
  "kind": "PersistentVolumeClaim",
  "metadata": {
    "name": "exam-registry-claim",
    "labels": {
      "deploymentconfig": "docker-registry"
    }
  },
  "spec": {
    "accessModes": [ "ReadWriteMany" ],
    "resources": {
      "requests": {
        "storage": "2Gi"
      }
    }
  }
}
EOF

oc create -f OSE_registry-pvclaim.json ; oc get pvc


oc volume dc docker-registry --add --overwrite -t pvc \
  --name=registry-storage --claim-name=exam-registry-claim
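
# verify: the claim should be Bound and attached to the dc
oc get pvc exam-registry-claim
oc get dc docker-registry -o yaml | grep -A3 'volumes:'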
##########################################################################

*6. old: Create OpenShift Enterprise projects

*On your OpenShift Enterprise instance create the following projects:
shark
tokyo
farm
*Additionally, configure the projects as follows:
*For all of the projects, set the description to 'This is an EX280
project on OpenShift v3'
*Make randy the admin of project shark
*The user ryan must be able to view the project shark but not administer
or delete it.
*Make ryan the admin of projects farm and tokyo.

##########################################################################
oc login https://master.lab.example.com:8443 -u randy -p boaterch
# the user who creates a project with oc new-project becomes its admin
oc new-project shark --description="This is an EX280 project on OpenShift v3"
oadm policy add-role-to-user view ryan -n shark
#oadm policy remove-role-from-user <role> <username>
oc login https://master.lab.example.com:8443 -u ryan -p boaterch
oc new-project farm --description="This is an EX280 project on OpenShift v3"
oc new-project tokyo --description="This is an EX280 project on OpenShift v3"
##########################################################################

7. Create an application from a Git repository

Use the S2I functionality of your OpenShift instance to build an
application in the tokyo project.
Use the Git repository at http://workstation.lab.example.com/php-helloworld
for the application source
Use the Docker image labeled
workstation.lab.example.com:5000/openshift3/php-55-rhel7 (if you are using
the web GUI just add the available 2.0 Ready Image)
Once deployed the application must be reachable (and browse-able) at the
following address: http://mordor.tokyo.cloudapps.lab.example.com
Update the original repository so that the app.rb file contains the text
from http://rhgls.lab9.example.com/materials/morfor.txt instead of the word
PLACEHOLDER
Trigger a rebuild so that browsing
http://mordor.tokyo.cloudapps.lab.example.com displays the new text

##########################################################################
## on master; git repo: http://workstation.lab.example.com/php-helloworld
## image: workstation.lab.example.com:5000/openshift3/php-55-rhel7
oc project tokyo
oc -o json new-app \
  workstation.lab.example.com:5000/openshift3/php-55-rhel7~http://workstation.lab.example.com/php-helloworld \
  --name=gits2i > git-s2i.json
oc create -f git-s2i.json
# the task requires this exact hostname for the route
oc expose service gits2i --name=gits2iroute \
  --hostname=mordor.tokyo.cloudapps.lab.example.com --port=80
git clone http://workstation.lab.example.com/php-helloworld
cd php-helloworld
vim index.php   # replace PLACEHOLDER with the text from the materials URL
git add index.php
git commit -m "change"
git push

oc start-build gits2i
#or in the web console:
Builds => Builds => gits2i => Start Build
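
# after the rebuild finishes, the route should serve the new text:
curl http://mordor.tokyo.cloudapps.lab.example.com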
##########################################################################

8. Create an application using docker images and definition files (create
an application using pods)

Using the example files from the wordpress directory under
http://rhgls.lab9.example.com/materials/openshift/origin/examples create a
WordPress application in the farm project.
Set the OpenShift security context clearance by running the file
/root/wordprecc_prep.sh on master.lab9.example.com (Note: this is necessary
to allow WordPress to bind to port 80)
For permanent storage use the NFS shares /OSE_mysql and
/OSE_wordpress from workstation.lab.example.com
For the WordPress pod use the Docker image from
http://rhgls.lab9.example.com/ex280/wordpress.tar (Note: it is normal if
the wordpress pod initially restarts a couple of times due to permission
issues)
For the MySQL pod use the Docker image openshift3/mysql-55-rhel7
Once deployed the application must be reachable at the following
address: http://wordpress.farm.cloudapps.lab.example.com
Finally complete the WordPress installation by setting ryan as the admin
user with password boaterch and root@master.lab.example.com for the email
address.
Set the blog name to EX280 Blog
Create your first post with the title: Carpe diem, quam minimum credula
postero. The text in the post does not matter.

##########################################################################
# on master, import the image:
wget http://rhgls.lab9.example.com/ex280/wordpress.tar
# (the tarball was created beforehand on a connected host with:
#  docker save wordpress > wordpress.tar)
docker load -i wordpress.tar
docker tag docker.io/wordpress workstation.lab.example.com:5000/openshift3/wordpress
docker push workstation.lab.example.com:5000/openshift3/wordpress
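# optional sanity check, assuming the registry exposes the Docker v2 API
# and the example.com CA is already trusted:
curl https://workstation.lab.example.com:5000/v2/openshift3/wordpress/tags/list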

# /root/wordprecc_prep.sh does something like:
oc patch scc restricted -p '{"runAsUser":{"type":"RunAsAny"}}'

#on master
oc project farm   # the task deploys the application in the farm project
# pod/PV/PVC template reference:
# https://github.com/openshift/origin/blob/master/examples/privileged-pod-pvc/

cat > mysql-volume.yaml << EOF
apiVersion: v1
kind: PersistentVolume
metadata:
  name: mysql-volume
spec:
  capacity:
    storage: 2Gi
  accessModes:
  - ReadWriteMany
  nfs:
    path: /OSE_mysql
    server: workstation.lab.example.com
  persistentVolumeReclaimPolicy: Recycle
EOF

cat > mysql-pvclaim.yaml << EOF
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: mysql-pvclaim
spec:
  accessModes:
  - ReadWriteMany
  resources:
    requests:
      storage: 2Gi
EOF

oc create -f mysql-volume.yaml
oc create -f mysql-pvclaim.yaml
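
# the claim should be Bound before the pod is created:
oc get pv mysql-volume ; oc get pvc mysql-pvclaim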

#create pod mysql and webserver


cat > mysql-pod.yaml << EOF
apiVersion: v1
kind: Pod
metadata:
  name: mysql
  labels:
    name: mysql
spec:
  containers:
  - name: mysql
    # the task specifies openshift3/mysql-55-rhel7; ADD_REGISTRY makes it
    # resolve to the local registry
    image: openshift3/mysql-55-rhel7
    resources:
      limits:
        cpu: 0.5
    env:
    - name: MYSQL_ROOT_PASSWORD
      value: yourpassword
    - name: MYSQL_USER
      value: wp_user
    - name: MYSQL_PASSWORD
      value: wp_pass
    - name: MYSQL_DATABASE
      value: wp_db
    ports:
    - containerPort: 3306
      name: mysql
    volumeMounts:
    - name: mysql-persistent-storage
      mountPath: /var/lib/mysql/data
  volumes:
  - name: mysql-persistent-storage
    persistentVolumeClaim:
      claimName: mysql-pvclaim
EOF
oc create -f mysql-pod.yaml

cat > mysql-service.yaml << EOF
apiVersion: v1
kind: Service
metadata:
  labels:
    name: mysql
  name: mysql
spec:
  ports:
  - port: 3306
  selector:
    name: mysql
EOF

oc create -f mysql-service.yaml

cat > wordpress-volume.yaml << EOF
apiVersion: v1
kind: PersistentVolume
metadata:
  name: wp-volume
spec:
  capacity:
    storage: 2Gi
  accessModes:
  - ReadWriteMany
  nfs:
    path: /OSE_wordpress
    server: workstation.lab.example.com
  persistentVolumeReclaimPolicy: Recycle
EOF

cat > wordpress-pvclaim.yaml << EOF
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: wp-pvclaim
spec:
  accessModes:
  - ReadWriteMany
  resources:
    requests:
      storage: 2Gi
EOF

oc create -f wordpress-volume.yaml
oc create -f wordpress-pvclaim.yaml
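
# same check for the wordpress volume; both should be Bound:
oc get pv wp-volume ; oc get pvc wp-pvclaim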

cat > wordpress-pod.yaml << EOF
apiVersion: v1
kind: Pod
metadata:
  name: wordpress
  labels:
    name: wordpress
spec:
  containers:
  - name: wordpress
    # use the image pushed to the local registry earlier
    image: workstation.lab.example.com:5000/openshift3/wordpress
    env:
    - name: WORDPRESS_DB_USER
      value: wp_user
    - name: WORDPRESS_DB_PASSWORD
      value: wp_pass
    - name: WORDPRESS_DB_NAME
      value: wp_db
    - name: WORDPRESS_DB_HOST
      value: mysql
    ports:
    - containerPort: 80
      name: wordpress
    volumeMounts:
    - name: wordpress-persistent-storage
      mountPath: /var/www/html
  volumes:
  - name: wordpress-persistent-storage
    persistentVolumeClaim:
      claimName: wp-pvclaim
EOF

oc create -f wordpress-pod.yaml

cat > wordpress-service.yaml << EOF
apiVersion: v1
kind: Service
metadata:
  labels:
    name: wpfrontend
  name: wpfrontend
spec:
  type: LoadBalancer
  ports:
  - port: 80
    targetPort: wordpress
  selector:
    name: wordpress
EOF

oc create -f wordpress-service.yaml

# set up the route
# day05.txt: oc expose pod <pod name> --name <service name> --selector='name=mysqldb'
# day05.txt: oc expose service <route name> --port=8080
# day07.txt: oc expose service instructor --hostname instructor.cloudapps.lab.example.com
# oc expose service/<name> --name=<route name> --hostname=wordpress.cloudapps.lab.example.com
oc expose service wpfrontend --name=wordpressroute \
  --hostname=wordpress.farm.cloudapps.lab.example.com --port=80
# then open the site in a browser and complete the WordPress setup
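
# verify the route before finishing the setup in the browser:
oc get route wordpressroute
curl -I http://wordpress.farm.cloudapps.lab.example.com   # expect an HTTP response (possibly a redirect to the installer)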
##########################################################################

9. Configure OpenShift quotas for a project

Configure quotas and limits for project shark so that:
The ResourceQuota resource is named ex280-quota
The amount of memory consumed across all containers may not exceed 1Gi
The total amount of CPU consumed across all containers may not exceed 2
Kubernetes compute units
The maximum number of replication controllers does not exceed 3
The maximum number of pods does not exceed 12
The maximum number of services does not exceed 6
The LimitRange resource is named ex280-limits
The amount of memory consumed by a single pod is between 5Mi and 300Mi
The amount of cpu consumed by a single pod is between 10m and 500m
The amount of cpu consumed by a single container is between 10m and 500m
with a default request value of 100m

##########################################################################
## on master
# the quota can also be created in one command:
#   oc create quota ex280-quota \
#     --hard=memory=1Gi,cpu=2,replicationcontrollers=3,pods=12,services=6
# then export it as JSON and adapt a copy for the limits
oc login -u system:admin
oc project shark

cat > quota.json << EOF
{
  "apiVersion": "v1",
  "kind": "ResourceQuota",
  "metadata": {
    "name": "ex280-quota"
  },
  "spec": {
    "hard": {
      "memory": "1Gi",
      "cpu": "2",
      "replicationcontrollers": "3",
      "pods": "12",
      "services": "6"
    }
  }
}
EOF

oc create -f quota.json -n shark

# the task asks for a default *request* of 100m, hence defaultRequest
# rather than default in the Container entry
cat > limits.json << EOF
{
  "kind": "LimitRange",
  "apiVersion": "v1",
  "metadata": {
    "name": "ex280-limits",
    "creationTimestamp": null
  },
  "spec": {
    "limits": [
      {
        "type": "Pod",
        "max": {
          "cpu": "500m",
          "memory": "300Mi"
        },
        "min": {
          "cpu": "10m",
          "memory": "5Mi"
        }
      },
      {
        "type": "Container",
        "max": {
          "cpu": "500m"
        },
        "min": {
          "cpu": "10m"
        },
        "defaultRequest": {
          "cpu": "100m"
        }
      }
    ]
  }
}
EOF

oc create -f limits.json -n shark
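
# verify both resources took effect in the project:
oc describe quota ex280-quota -n shark
oc describe limits ex280-limits -n shark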


##########################################################################

10. Create an application from a template

On master.lab9.example.com, using the template file
http://rhgls.lab9.example.com/materials/ex280-template.json as a basis,
install an application in the shark project according to the following
requirements:
The application uses the Git repository at
http://git.lab9.example.com/git/ex280-app.git for its source
All the registry entries point to your local registry at
registry.lab9.example.com:5000
*Import the template so that any OpenShift user can use it.
Change the SourceStrategy ImageStream tag name from ruby:latest to ruby:2.0
Deploy an application using the template
Once deployed the application must be reachable (and browse-able) at the
following address: http://ex280-app.shark.devcloud.lab9.example.com

##########################################################################
#on master
# oc export template <name> -n openshift -o yaml > template.yaml
# or download a template from:
#   https://github.com/openshift/origin/tree/release-3.7/examples/quickstarts
#   e.g. https://raw.githubusercontent.com/openshift/rails-ex/master/openshift/templates/rails-postgresql.json
oc project openshift # only templates created in the openshift project can be used by all users
wget http://rhgls.lab9.example.com/materials/ex280-template.json
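# make the two required edits before running oc create; a minimal sketch,
# assuming the template references registry.access.redhat.com (inspect the
# file first and adjust the patterns if the entries differ):
sed -i 's/registry.access.redhat.com/registry.lab9.example.com:5000/g' ex280-template.json
sed -i 's/ruby:latest/ruby:2.0/g' ex280-template.json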
oc create -f ex280-template.json

# reminders:
# - all registry entries must point to the local registry at
#   registry.lab9.example.com:5000
# - the SourceStrategy ImageStream tag name must be ruby:2.0, not ruby:latest

# log in to the web console as randy, who administers the shark project
shark => Add to Project => ex280-template =>
APPLICATION_NAME: ex280-app
APPLICATION_HOSTNAME: ex280-app.shark.devcloud.lab9.example.com
GIT_URI: http://git.lab9.example.com/git/ex280-app.git
=> Create
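
# once the build completes, the application should answer on its route:
curl http://ex280-app.shark.devcloud.lab9.example.com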

# reminder: docker tag <image-id> <new-name>


##########################################################################
