
Step 1 - Deploy NFS Server

NFS is a protocol that allows nodes to read/write data over a network. The protocol works by having a master node that runs the NFS daemon and stores the data. This master node makes certain directories available over the network.

Clients access the master's shares via drive mounts. From the viewpoint of applications, they are writing to the local disk. Under the covers, the NFS protocol writes the data to the master.
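
For example, a Linux client could mount a share manually with a command such as mount -t nfs <server-ip>:/exports/data-0001 /mnt/data (illustrative only; in this scenario Kubernetes performs the mount on the Pod's behalf).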

Task

In this scenario, and for demonstration and learning purposes, the role of the NFS Server is
handled by a customised container. The container makes directories available via NFS and
stores the data inside the container. In production, it is recommended to configure a dedicated
NFS Server.

Start the NFS server using the command:

docker run -d --net=host \
--privileged --name nfs-server \
katacoda/contained-nfs-server:centos7 \
/exports/data-0001 /exports/data-0002

The NFS server exposes two directories, data-0001 and data-0002. In the next steps, these are used to store data.

Step 2 - Deploy Persistent Volume

For Kubernetes to understand the available NFS shares, it requires a PersistentVolume configuration. The PersistentVolume supports different protocols for storing data, such as AWS EBS volumes, GCE storage, OpenStack Cinder, GlusterFS and NFS. The configuration provides an abstraction between storage and the API, allowing for a consistent experience.

In the case of NFS, one PersistentVolume relates to one NFS directory. When a container has finished with the volume, the data can either be Retained for future use, or the volume can be Recycled, meaning all the data is deleted. The policy is defined by the persistentVolumeReclaimPolicy option.

The structure is:

apiVersion: v1
kind: PersistentVolume
metadata:
  name: <friendly-name>
spec:
  capacity:
    storage: 1Gi
  accessModes:
    - ReadWriteOnce
    - ReadWriteMany
  persistentVolumeReclaimPolicy: Recycle
  nfs:
    server: <server-name>
    path: <shared-path>

The spec defines additional metadata about the persistent volume, including how much space is available and whether it has read/write access.

Task

Create two new PersistentVolume definitions to point at the two available NFS shares.
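
For reference, nfs-0001.yaml might look like the following sketch; the server address and storage size are assumptions for illustration, so check the actual values with the cat command below.

apiVersion: v1
kind: PersistentVolume
metadata:
  name: nfs-0001
spec:
  capacity:
    storage: 2Gi   # assumed size
  accessModes:
    - ReadWriteOnce
    - ReadWriteMany
  persistentVolumeReclaimPolicy: Recycle
  nfs:
    server: 172.17.0.1   # assumed address of the NFS server from Step 1
    path: /exports/data-0001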

kubectl create -f nfs-0001.yaml

kubectl create -f nfs-0002.yaml

View the contents of the files using cat nfs-0001.yaml nfs-0002.yaml

Once created, view all PersistentVolumes in the cluster using kubectl get pv

Step 3 - Deploy Persistent Volume Claim

Once a Persistent Volume is available, applications can claim the volume for their use. The
claim is designed to stop applications accidentally writing to the same volume and causing
conflicts and data corruption.

The claim specifies the requirements for a volume. This includes read/write access and storage
space required. An example is as follows:

kind: PersistentVolumeClaim
apiVersion: v1
metadata:
  name: claim-mysql
spec:
  accessModes:
    - ReadWriteOnce
  resources:
    requests:
      storage: 3Gi

Task
Create two claims for two different applications. A MySQL Pod will use one claim; the other will be used by an HTTP server.
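
pvc-http.yaml follows the same pattern but requests less storage; a sketch, with the size assumed for illustration:

kind: PersistentVolumeClaim
apiVersion: v1
metadata:
  name: claim-http
spec:
  accessModes:
    - ReadWriteOnce
  resources:
    requests:
      storage: 1Gi   # assumed size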

kubectl create -f pvc-mysql.yaml

kubectl create -f pvc-http.yaml

View the contents of the files using cat pvc-mysql.yaml pvc-http.yaml

Once created, view all PersistentVolumeClaims in the cluster using kubectl get pvc.

The output shows which Volume each claim is mapped to.

Step 4 - Use Volume

When a deployment is defined, it can reference a previously created claim. The following snippet defines a volume mount for the directory /var/lib/mysql/data, which is mapped to the storage mysql-persistent-storage. The storage called mysql-persistent-storage is mapped to the claim called claim-mysql.

spec:
  containers:
  - name: mysql
    # image and environment settings omitted for brevity; volumeMounts
    # is defined on the container, volumes on the Pod spec
    volumeMounts:
    - name: mysql-persistent-storage
      mountPath: /var/lib/mysql/data
  volumes:
  - name: mysql-persistent-storage
    persistentVolumeClaim:
      claimName: claim-mysql

Task

Launch two new Pods with Persistent Volume Claims. Volumes are mapped to the correct directory when the Pods start, allowing applications to read/write as if it were a local directory.

kubectl create -f pod-mysql.yaml

kubectl create -f pod-www.yaml

Use the command below to view the definition of the Pods.

cat pod-mysql.yaml pod-www.yaml
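
pod-www.yaml follows the same pattern as the MySQL Pod, mounting claim-http at the web server's document root. A sketch, with the image and mount path assumed for illustration:

spec:
  containers:
  - name: www
    image: nginx:alpine   # assumed image; check pod-www.yaml for the real one
    volumeMounts:
    - name: www-persistent-storage
      mountPath: /usr/share/nginx/html   # assumed document root
  volumes:
  - name: www-persistent-storage
    persistentVolumeClaim:
      claimName: claim-http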

You can see the status of the Pods starting using kubectl get pods

If a Persistent Volume Claim is not assigned to a Persistent Volume, then the Pod will remain in a Pending state until one becomes available. In the next step, we'll read/write data to the volume.

Step 5 - Read/Write Data

Our Pods can now read/write. MySQL will store all database changes to the NFS Server, while the HTTP Server will serve static content from the NFS drive. When upgrading, restarting or moving containers to a different machine, the data will still be accessible.

To test the HTTP server, write a 'Hello World' index.html homepage. In this scenario, we know the HTTP directory will be based on data-0001, as that volume doesn't have enough space to satisfy the MySQL size requirement.

docker exec -it nfs-server bash -c "echo 'Hello World' > /exports/data-0001/index.html"

Based on the IP of the Pod, accessing it should return the expected response.

ip=$(kubectl get pod www -o yaml |grep podIP | awk '{split($0,a,":"); print a[2]}'); echo $ip

curl $ip

Update Data

When the data on the NFS share changes, the Pod will read the newly updated data.

docker exec -it nfs-server bash -c "echo 'Hello NFS World' > /exports/data-0001/index.html"

curl $ip

Step 6 - Recreate Pod

Because a remote NFS server stores the data, if the Pod or the host were to go down, the data would still be available.

Task
Deleting a Pod releases its use of the Persistent Volume Claim; the claim and the data remain. New Pods can pick up and re-use the NFS share.

kubectl delete pod www

kubectl create -f pod-www2.yaml

ip=$(kubectl get pod www2 -o yaml |grep podIP | awk '{split($0,a,":"); print a[2]}'); curl $ip

The applications now use a remote NFS for their data storage. Depending on requirements, this
same approach works with other storage engines such as GlusterFS, AWS EBS, GCE storage
or OpenStack Cinder.

Networking Introduction

Kubernetes has advanced networking capabilities that allow Pods and Services to communicate inside the cluster's network and externally.

In this scenario, you will learn about the following types of Kubernetes Services.

Cluster IP

Target Ports

NodePort

External IPs

Load Balancer

Kubernetes Services are an abstraction that defines a policy and approach for accessing a set of Pods. The set of Pods accessed via a Service is based on a Label Selector.

Step 1 - Cluster IP

Cluster IP is the default approach when creating a Kubernetes Service. The service is allocated
an internal IP that other components can use to access the pods.

Having a single IP address enables the service to be load balanced across multiple Pods.

Services are deployed via kubectl apply -f clusterip.yaml.

The definition can be viewed with cat clusterip.yaml
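
As a sketch, the Service portion of clusterip.yaml might look like the following; the selector label is an assumption:

apiVersion: v1
kind: Service
metadata:
  name: webapp1-clusterip-svc
spec:
  ports:
  - port: 80
  selector:
    app: webapp1-clusterip   # assumed label shared with the deployment's Pods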

This will deploy a web app with two replicas, along with a service, to showcase load balancing. The Pods can be viewed with kubectl get pods

It will also deploy a service. kubectl get svc

More details on the service configuration and active endpoints (Pods) can be viewed via kubectl
describe svc/webapp1-clusterip-svc

After deploying, the service can be accessed via the ClusterIP allocated.

export CLUSTER_IP=$(kubectl get services/webapp1-clusterip-svc -o go-template='{{(index .spec.clusterIP)}}')
echo CLUSTER_IP=$CLUSTER_IP
curl $CLUSTER_IP:80

Multiple requests will showcase how the service load balances across multiple Pods based on the common label selector.

curl $CLUSTER_IP:80

Step 2 - Target Port

Target ports allow us to separate the port the service is available on from the port the application is listening on. TargetPort is the port the application is configured to listen on. Port is how the application will be accessed from the outside.
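
In the service spec, this separation might look like the following sketch; the selector label is an assumption:

spec:
  ports:
  - port: 8080        # port the service is accessed on
    targetPort: 80    # port the application listens on
  selector:
    app: webapp1-clusterip-targetport   # assumed label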

As before, the service and extra pods are deployed via kubectl apply -f clusterip-target.yaml

The following commands show the service's definition and status.

cat clusterip-target.yaml

kubectl get svc

kubectl describe svc/webapp1-clusterip-targetport-svc

After the service and pods have deployed, it can be accessed via the cluster IP as before, but
this time on the defined port 8080.

export CLUSTER_IP=$(kubectl get services/webapp1-clusterip-targetport-svc -o go-template='{{(index .spec.clusterIP)}}')
echo CLUSTER_IP=$CLUSTER_IP
curl $CLUSTER_IP:8080

curl $CLUSTER_IP:8080

The application itself is still configured to listen on port 80. The Kubernetes Service manages the translation between the two ports.

Step 3 - NodePort

While TargetPort and ClusterIP make the service available inside the cluster, the NodePort exposes the service on each Node's IP via a defined static port. No matter which Node within the cluster is accessed, the service will be reachable based on the port number defined.

kubectl apply -f nodeport.yaml

When viewing the service definition, notice the additional type and NodePort properties defined: cat nodeport.yaml
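
A sketch of the relevant part of the spec; the selector label is an assumption, and the nodePort matches the curl command below:

spec:
  type: NodePort
  ports:
  - port: 80
    nodePort: 30080   # static port exposed on every Node
  selector:
    app: webapp1-nodeport   # assumed label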

kubectl get svc

kubectl describe svc/webapp1-nodeport-svc

The service can now be reached via the Node's IP address on the NodePort defined.

curl 172.17.0.40:30080

Step 5 - Load Balancer

When running in the cloud, such as EC2 or Azure, it's possible to configure and assign a Public
IP address issued via the cloud provider. This will be issued via a Load Balancer such as ELB.
This allows additional public IP addresses to be allocated to a Kubernetes cluster without
interacting directly with the cloud provider.

Although Katacoda is not a cloud provider, it's still possible to dynamically allocate IP addresses to LoadBalancer-type services. This is done by deploying the Cloud Provider using kubectl apply -f cloudprovider.yaml. When running on a service provided by a Cloud Provider this is not required.

When a service requests a Load Balancer, the provider will allocate one from the 10.10.0.0/26
range defined in the configuration.
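
On the service side, only the type needs to change; a minimal sketch, with the selector label assumed:

spec:
  type: LoadBalancer
  ports:
  - port: 80
  selector:
    app: webapp1-loadbalancer   # assumed label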

kubectl get pods -n kube-system

kubectl apply -f loadbalancer.yaml

The service is configured as a Load Balancer; view the definition with cat loadbalancer.yaml

While the IP address is being allocated, the service will show as Pending. Once allocated, it will appear in the service list.

kubectl get svc

kubectl describe svc/webapp1-loadbalancer-svc

The service can now be accessed via the IP address assigned, in this case from the
10.10.0.0/26 range.

export LoadBalancerIP=$(kubectl get services/webapp1-loadbalancer-svc -o go-template='{{(index .status.loadBalancer.ingress 0).ip}}')
echo LoadBalancerIP=$LoadBalancerIP
curl $LoadBalancerIP

curl $LoadBalancerIP

Create Ingress Routing

Kubernetes has advanced networking capabilities that allow Pods and Services to communicate inside the cluster's network. An Ingress enables inbound connections to the cluster, allowing external traffic to reach the correct Pod.

Ingress can provide externally reachable URLs, load balance traffic, terminate SSL, and offer name-based virtual hosting for a Kubernetes cluster.

In this scenario you will learn how to deploy and configure Ingress rules to manage incoming
HTTP requests.

Step 1 - Create Deployment

To start, deploy an example HTTP server that will be the target of our requests. The definition contains three deployments, webapp1, webapp2 and webapp3, with a service for each.

cat deployment.yaml
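
Each deployment/service pair follows the same pattern. A sketch for webapp1, with the image and labels assumed for illustration:

apiVersion: apps/v1
kind: Deployment
metadata:
  name: webapp1
spec:
  replicas: 1
  selector:
    matchLabels:
      app: webapp1
  template:
    metadata:
      labels:
        app: webapp1
    spec:
      containers:
      - name: webapp1
        image: katacoda/docker-http-server   # assumed image
        ports:
        - containerPort: 80
---
apiVersion: v1
kind: Service
metadata:
  name: webapp1-svc
spec:
  ports:
  - port: 80
  selector:
    app: webapp1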

Task
Deploy the definitions with kubectl apply -f deployment.yaml

The status can be viewed with kubectl get deployment

Step 2 - Deploy Ingress

The YAML file ingress.yaml defines an Nginx-based Ingress controller together with a service making it available on port 80 to external connections using ExternalIPs. If the Kubernetes cluster were running on a cloud provider, it would use a LoadBalancer service type instead.

The ServiceAccount defines an account with the set of permissions needed to access the cluster and read the defined Ingress rules. The default server secret is a self-signed certificate used by Nginx for example SSL connections and is required by the Nginx default example.

cat ingress.yaml

Task
The Ingress controllers are deployed in a familiar fashion to other Kubernetes objects with
kubectl create -f ingress.yaml

The status can be identified using kubectl get deployment -n nginx-ingress

Step 3 - Deploy Ingress Rules

Ingress rules are an object type within Kubernetes. The rules can be based on the request's host (domain), the path of the request, or a combination of both.

An example set of rules is defined within cat ingress-rules.yaml


The important parts of the rules are defined below.

The rules apply to requests for the host my.kubernetes.example. Two rules are defined based on the request path, with a single catch-all definition. Requests to the path /webapp1 are forwarded to the service webapp1-svc. Likewise, requests to /webapp2 are forwarded to webapp2-svc. If no rules apply, webapp3-svc will be used.

This demonstrates how an application's URL structure can behave independently of how the applications are deployed.

- host: my.kubernetes.example
  http:
    paths:
    - path: /webapp1
      backend:
        serviceName: webapp1-svc
        servicePort: 80
    - path: /webapp2
      backend:
        serviceName: webapp2-svc
        servicePort: 80
    - backend:
        serviceName: webapp3-svc
        servicePort: 80

Task
As with all Kubernetes objects, they can be deployed via kubectl create -f ingress-rules.yaml

Once deployed, the status of all the Ingress rules can be discovered via kubectl get ing

Step 4 - Test

With the Ingress rules applied, traffic will be routed to the defined services.

The first request will be processed by the webapp1 deployment.

curl -H "Host: my.kubernetes.example" 172.17.0.15/webapp1

The second request will be processed by the webapp2 deployment.

curl -H "Host: my.kubernetes.example" 172.17.0.15/webapp2

Finally, all other requests will be processed by the webapp3 deployment.

curl -H "Host: my.kubernetes.example" 172.17.0.15

This scenario teaches you how to use Helm, the package manager for Kubernetes, to deploy
Redis. Helm simplifies discovering and deploying services to a Kubernetes cluster.

"Helm is the best way to find, share, and use software


built for Kubernetes."
Install Helm

Helm is a single binary that manages deploying Charts to Kubernetes. A chart is a packaged unit of Kubernetes software. It can be downloaded from https://github.com/kubernetes/helm/releases

curl -LO https://storage.googleapis.com/kubernetes-helm/helm-v2.8.2-linux-amd64.tar.gz
tar -xvf helm-v2.8.2-linux-amd64.tar.gz
mv linux-amd64/helm /usr/local/bin/

Once installed, initialise Helm and update the local cache to sync the latest available packages with the environment.

helm init
helm repo update

Search For Chart

You can now start deploying software. To find available charts you can use the search
command.

For example, to deploy Redis we need to find a Redis chart.

helm search redis

We can get more information with the inspect command.

helm inspect stable/redis

Deploy Redis

Use the install command to deploy the chart to your cluster.

helm install stable/redis


Helm will now launch the required pods. You can view all packages using helm ls

If you receive an error that Helm could not find a ready tiller pod, it means that Helm is still deploying. Wait a few moments for the Tiller Docker image to finish downloading.

In the next step we'll verify the deployment status.

See Results

Helm deploys all the pods, replication controllers and services. Use kubectl to find out what was
deployed.

kubectl get all

The pod will be in a pending state while the Docker Image is downloaded and until a Persistent
Volume is available.

kubectl apply -f pv.yaml
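
As a sketch, pv.yaml might define a hostPath-backed PersistentVolume like the following; the name, size and path are assumptions, though the path lines up with the chmod command below:

apiVersion: v1
kind: PersistentVolume
metadata:
  name: redis-data   # assumed name
spec:
  capacity:
    storage: 8Gi     # assumed size
  accessModes:
    - ReadWriteOnce
  hostPath:
    path: /mnt/data-1   # assumed; matches /mnt/data* below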

Redis needs permission to write to the data directories: chmod 777 -R /mnt/data*

Once complete, the pod will move into a Running state. You'll now have a Redis cluster running on top of Kubernetes.

The Helm release can be given a friendlier name, such as:

helm install --name my-release stable/redis
