
23/04/2019 Kubernetes Networking Explained: Introduction


Kubernetes Networking Explained: Introduction

Posted by Kirill Goltsman May 30, 2018

Kubernetes is a powerful platform for managing containerized applications. It supports their deployment, scheduling, replication, updating, monitoring, and much more. Kubernetes has become a complex system due to the addition of new abstractions, resource types, cloud integrations, and add-ons. Further, Kubernetes cluster networking is perhaps one of the most complex components of the Kubernetes infrastructure, because it involves so many layers and parts (e.g., container-to-container networking, Pod networking, services, ingress, load balancers), and many users struggle to make sense of it all.

https://supergiant.io/blog/kubernetes-networking-explained-introduction/
Kubernetes Networking Explained

The goal of Kubernetes networking is to turn containers and Pods into bona fide "virtual hosts" that can communicate with each other across nodes while combining the benefits of VMs with a microservices architecture and containerization. Kubernetes networking is based on several layers, all serving this ultimate purpose:

- Container-to-container communication using localhost and the Pod's network namespace. This networking level enables the container network interfaces for tightly coupled containers that can communicate with each other on specified ports, much like conventional applications communicate via localhost.
- Pod-to-pod communication that enables communication of Pods across Nodes. If you want to learn more about Pods, see our recent article.
- Services. A Service abstraction defines a policy (microservice) for accessing Pods by other applications.
- Ingress, load balancing, and DNS.

Sounds like a lot of stuff, doesn't it? It is. That's why we decided to create a series of articles explaining Kubernetes networking from the bottom (container-to-container communication) to the top (pod networking, services, DNS, and load balancing). In the first part of the series, we discuss container-to-container and pod-to-pod networking. We demonstrate how Kubernetes networking is different from the "normal" Docker approach, what requirements it imposes on networking implementations, and how it achieves a homogeneous networking system that allows Pods to communicate across nodes. We think that by the end of this article you'll have a better understanding of
Kubernetes networking that will prepare you for the deployment of full-fledged microservices applications using Kubernetes services, DNS, and load balancing.

Fundamentals of Kubernetes Networking
The Kubernetes platform aims to simplify cluster networking by creating a flat network structure that frees users from setting up dynamic port allocation to coordinate ports, designing custom routing rules and subnets, and using Network Address Translation (NAT) to move packets across different network segments. To achieve this, Kubernetes prohibits networking implementations that involve any intentional network segmentation policy. In other words, Kubernetes aims to keep the networking architecture as simple as possible for the end user. The Kubernetes platform sets the following networking rules:

- All containers can communicate with each other without NAT.
- All nodes can communicate with all containers (and vice versa) without NAT.
- The IP address that a container sees for itself is the same address that other containers see for it (in other words, Kubernetes bars any IP masquerading).
- Pods can communicate regardless of what Node they sit on.

To understand how Kubernetes implements these rules, let's first discuss the Docker model that serves as a point of reference for Kubernetes networking.

Overview of the Docker Networking Model
As you might know, Docker supports numerous network
architectures like overlay networks and Macvlan networks, but
its default networking solution is based on host-private
networking implemented by the bridge   networking driver.
To clarify the terms, as with any other private network, Docker’s
host-private networking model is based on a private IP address
space that can be freely used by anybody without the approval
of the Internet registry but that has to be translated using NAT
or a proxy server if the network needs to connect to the

Internet. A host-private network is a private network that lives


on one host as opposed to a multi-host private network that
covers multiple hosts.

Governed by this model, Docker's bridge driver implements the following:

- First, Docker creates a virtual bridge (docker0) and allocates a subnet from one of the private address blocks for that bridge. A network bridge is a device that creates a single merged network from multiple networks or network segments; by the same token, a virtual bridge is the analog of a physical network bridge used in virtual networking. Virtual network bridges like docker0 allow connecting virtual machines (VMs) or containers into a single virtual network. This is precisely what Docker's bridge driver is designed for.
- To connect containers to the virtual network, Docker allocates a virtual Ethernet device called veth and attaches it to the bridge. Like the virtual bridge, veth is a virtual analog of the physical Ethernet technology used to connect hosts to a LAN or to the Internet. The veth device is mapped to the container's eth0 network interface, Linux's Ethernet interface that manages the Ethernet device and the connection between the host and the network. In Docker, each in-container eth0 is provided with an IP address from the bridge's address range. In this way, each container gets its own IP address from that range.

The above-described architecture is schematically represented


in the image below.
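On a Linux host with Docker installed, these pieces can be inspected directly. A minimal sketch, assuming the default bridge network and the iproute2 tools:

```shell
# Show the subnet Docker allocated to the default bridge
docker network inspect bridge --format '{{range .IPAM.Config}}{{.Subnet}}{{end}}'

# The docker0 virtual bridge appears as an ordinary network interface
ip addr show docker0

# Each running container adds a veth* interface attached to the bridge
ip link show type veth
```

With no containers running, the last command prints nothing; start a container, and a new veth pair appears.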


In this image, we see that both Container 1 and Container 2 are part of the virtual private network created by the docker0 bridge. Each of the containers has a veth interface connected to the docker0 bridge. Since both containers and their veth interfaces are on the same logical network, they can easily communicate if they manage to discover each other's IP addresses. However, since each container is allocated its own veth, there is no shared network interface between them, which hinders coordinated communication and the ability to encapsulate them, with proper isolation, in a single abstraction like a Pod. Docker addresses this problem by publishing ports, which can then be forwarded or proxied to other containers. The limitation is that containers must coordinate port usage very carefully or allocate ports dynamically.
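Docker's port-publishing workaround can be sketched in a couple of commands. This assumes a host with Docker and the stock nginx image; the container name and host port are illustrative:

```shell
# Publish container port 80 on host port 8080; other containers (and the host)
# must know and coordinate on this externally chosen port
docker run -d --name web -p 8080:80 nginx

# The application is now reachable only through the published host port
curl -s http://localhost:8080 | head -n 4
```

Every additional container needs its own non-conflicting host port, which is exactly the coordination burden Kubernetes removes.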

Kubernetes Solution
Kubernetes bypasses the above-mentioned limitation by
providing a shared network interface for containers. Using the
analogy from the Docker model, Kubernetes allows containers
to share a single veth   interface like in the image below.

K8s Networking Solution


As a result, the Kubernetes model augments the default host-private networking approach in the following way:

- It allows both containers to be addressable on veth0 (e.g., 172.17.0.2 in the image above).
- It allows containers to access each other via allocated ports on localhost. Practically speaking, this is the same as running applications on a host, with the added benefits of container isolation and the design of tightly coupled container architectures.

To implement this model, Kubernetes creates a special container for each pod that provides a network interface for the other containers. This container is started with a "pause" command and provides a virtual network interface for all containers, allowing them to communicate with each other.
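If a node's container runtime is Docker, these "pause" containers are visible right on the node. A sketch, assuming shell access to a worker node (the image name and naming scheme vary by Kubernetes version and runtime):

```shell
# Each pod on the node shows one extra container running the "pause" image;
# it holds the pod's network namespace, which the app containers join
docker ps --format '{{.Image}}\t{{.Names}}' | grep -i pause
```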

By now, you have a better understanding of how container-to-container networking works in Kubernetes. As we have seen, it is largely based on an augmented version of the bridge driver, with the added benefit of a shared network interface that provides better isolation and communication for containerized applications.

Tutorial

Now, let's illustrate a possible scenario of communication between two containers running in a single pod. One of the most common examples of multi-container communication via localhost is when one container, like Apache HTTP Server or NGINX, is configured as a reverse proxy that proxies requests to a web application running in another container.

Elaborating upon this case, we are going to discuss a situation where the NGINX container is configured to proxy requests from its default port (:80) to the Ghost publishing platform accessible on some port (e.g., port 2368).

To complete this example, we'll need the following prerequisites:

- A running Kubernetes cluster. See the Supergiant GitHub wiki for more information about deploying a Kubernetes cluster with Supergiant. As an alternative, you can install a single-node Kubernetes cluster on a local system using Minikube.
- The kubectl command-line tool installed and configured to communicate with the cluster. See how to install kubectl here.

Step #1: Define a ConfigMap

ConfigMaps are Kubernetes objects that allow decoupling the app's configuration from the Pod's spec, enabling better modularity of your settings. In the example below, we are defining a ConfigMap for the NGINX server that includes a basic reverse proxy configuration.

apiVersion: v1
kind: ConfigMap
metadata:
  name: nginx-conf
data:
  nginx.conf: |-
    user  nginx;
    worker_processes  2;
    error_log  /var/log/nginx/error.log warn;
    pid        /var/run/nginx.pid;
    events {
      worker_connections  1024;
    }
    http {
      sendfile        on;
      keepalive_timeout  65;
      include /etc/nginx/conf.d/*.conf;
      server {
        listen 80 default_server;
        location /ghost {
          proxy_pass http://127.0.0.1:2368;
        }
      }
    }

In brief, this ConfigMap tells NGINX to proxy requests arriving at the /ghost path on its default port (localhost:80) to localhost:2368, on which Ghost is listening for requests.

This ConfigMap must first be passed to Kubernetes before we can deploy a Pod. Save the ConfigMap in a file (e.g., nginx-config.yaml), and then run the following command:

kubectl create -f nginx-config.yaml



Step #2: Create a Deployment

The next thing we need to do is create a Deployment for our two-container pod (see our recent article for a review of Pod deployment options in Kubernetes).

apiVersion: apps/v1
kind: Deployment
metadata:
  name: tut-deployment
  labels:
    app: tut
spec:
  replicas: 2
  selector:
    matchLabels:
      app: tut
  template:
    metadata:
      labels:
        app: tut
    spec:
      containers:
      - name: ghost
        image: ghost:latest
        ports:
        - containerPort: 2368
      - name: nginx
        image: nginx:alpine
        ports:
        - containerPort: 80
        volumeMounts:
        - name: proxy-config
          mountPath: /etc/nginx/nginx.conf
          subPath: nginx.conf
      volumes:
      - name: proxy-config
        configMap:
          name: nginx-conf

These deployment specs:

- Define a deployment named 'tut-deployment' (metadata.name) and assign the label 'tut' to all pods of this deployment (metadata.labels.app).
- Set the desired state of the deployment to 2 replicas (spec.replicas).
- Define two containers: 'ghost', which uses the ghost Docker image, and 'nginx', which uses the nginx image from the Docker repository.
- Open container port 80 for the 'nginx' container (spec.containers.ports.containerPort).
- Create a volume 'proxy-config' of type configMap that containers will use to access the ConfigMap resource named 'nginx-conf' created in the previous step.
- Mount the proxy-config volume at the path /etc/nginx/nginx.conf to give the NGINX container access to its configuration.

To create this deployment, save the above manifest in the tut-deployment.yaml file and run the following command:

kubectl create -f tut-deployment.yaml

If everything is OK, you will be able to see the running deployment using kubectl get deployment tut-deployment:

NAME             DESIRED   CURRENT   UP-TO-DATE   AVAILABLE
tut-deployment   2         2         2            0
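Before exposing anything externally, you can verify the localhost link between the two containers directly. A sketch, assuming the deployment above is running and kubectl is configured:

```shell
# Grab the generated name of one of the pods (label app=tut)
POD=$(kubectl get pods -l app=tut -o jsonpath='{.items[0].metadata.name}')

# From inside the nginx container, Ghost answers on localhost:2368
kubectl exec "$POD" -c nginx -- wget -qO- http://127.0.0.1:2368 | head -n 3
```

Both containers share the pod's network namespace, so 127.0.0.1 refers to the same loopback interface in each of them.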

Step #3: Exposing a NodePort

Now, as our Pods are running, we should expose NGINX's port 80 to the public Internet to see if the reverse proxy works. This can be done by exposing the Deployment as a Service (in the next tutorial, we are going to cover Kubernetes services in more detail):

kubectl expose deployment tut-deployment --type=NodePort
service "tut-deployment" exposed

After our deployment is exposed, we need to find the NodePort dynamically assigned to it:

kubectl describe service tut-deployment

This command will produce an output similar to this:

Name:              tut-deployment
Namespace:         default
Labels:            app=tut
Selector:          app=tut
Type:              NodePort
IP:                10.3.208.190
Port:              <unset> 80/TCP
NodePort:          <unset> 30234/TCP
Endpoints:         10.2.6.6:80,10.2.6.7:80
Session Affinity:  None

We need the NodePort value, which is 30234 in our case. Now you can access the Ghost publishing platform through NGINX using http://YOURHOST:30234/ghost.
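The assigned NodePort can also be read programmatically, which is handy in scripts. A sketch assuming the service created above; NODE_IP is a placeholder for any cluster node's address:

```shell
# Extract the dynamically assigned NodePort from the service spec
NODE_PORT=$(kubectl get service tut-deployment -o jsonpath='{.spec.ports[0].nodePort}')

# Request the proxied path through NGINX on any cluster node
curl -i "http://NODE_IP:${NODE_PORT}/ghost"
```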

That’s it! Now you see how containers can easily communicate
via localhost   using built-in Pod’s virtual network. As such, a
container-to-container networking is a building block of the
next layer, which is a pod-to-pod networking discussed in the
next section.

From Container-to-Container to Pod-to-Pod Communication

One of the most exciting features of Kubernetes is that pods and containers within pods can communicate with each other even if they land on different nodes. This feature is not implemented in Docker by default (note: Docker supports multi-host connectivity as a custom solution available via the overlay driver). Before delving deeper into how Kubernetes implements pod-to-pod networking, let's first discuss how networking works at the pod level.

As we remember from the previous tutorial, pods are abstractions that encapsulate containers to provide them with Kubernetes services like shared storage, networking interfaces, deployment, and updates. When Kubernetes creates a pod, it allocates an IP address to it. This IP is shared by all containers in that pod and allows them to communicate with each other using localhost (as we saw in the example above). This is known as the "IP-per-Pod" model. It is an extremely convenient model where pods can be treated much like physical hosts or VMs from the standpoint of port allocation, service discovery, load balancing, migration, and more.
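The IP-per-Pod model is easy to observe on any cluster. A sketch, assuming kubectl access and the tut-deployment pods from the tutorial (names and addresses will differ on your cluster):

```shell
# The IP column shows one address per pod, regardless of container count
kubectl get pods -o wide

# Every container in a given pod reports that same address
POD=$(kubectl get pods -l app=tut -o jsonpath='{.items[0].metadata.name}')
kubectl exec "$POD" -c nginx -- hostname -i
```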

So far, so good! But what if we want our pods to be able to communicate across nodes? This becomes a little more complicated.

Referring to the example above, let's assume that we now have two nodes hosting two containers each. All these containers are connected using docker0 bridges and have shared veth0 network interfaces. However, on both nodes the Docker bridge (docker0) and the virtual Ethernet interface (veth0) are now likely to have the same IP address, because they were both created by the same default Docker function. Even if the veth IPs differ, each node remains unaware of the private network address space created on the other node, which makes it difficult to reach pods there.

How Does Kubernetes Solve this Problem?

Let's see how Kubernetes elegantly solves this problem. As we see in the image below, veth0, the custom bridge, eth0, and a gateway that connects the two nodes are now parts of a shared private network namespace centered around the gateway (10.100.0.1). This configuration implies that Kubernetes has somehow managed to create a separate network that covers both nodes. You may also notice that bridge addresses are now assigned depending on which node a bridge lives on. For example, we now have a 10.0.1.x address space shared by the custom bridge and veth0 on Node 1, and a 10.0.2.x address space shared by the same components on Node 2. At the same time, the eth0 interfaces on both nodes share the address space of the common gateway (the 10.100.0.0 address space), which allows the two nodes to communicate.
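On a real cluster, this shared namespace materializes as ordinary routing rules. A sketch of what Node 1's route table might contain under the addressing assumed above (bridge name and all addresses are illustrative, not output from a real node):

```shell
ip route
# Illustrative entries for Node 1:
#   10.0.1.0/24 dev cbr0  proto kernel  scope link    # local pods via the bridge
#   10.0.2.0/24 via 10.100.0.2 dev eth0               # pods on Node 2 via the gateway
```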

The design of this network is similar to an overlay network. (In a nutshell, an overlay network is a network built on top of another low-level network; the Internet, for example, was originally built as an overlay over the telephone network.) A pod network in Kubernetes is an example of an overlay network that takes the individual private networks within each node and transforms them into a new software-defined network (SDN) with a shared namespace, which allows pods to communicate across nodes. That's how the Kubernetes magic works!

Kubernetes ships with this model by default, but there are several networking solutions that achieve the same result. Remember that any network implementation that violates the Kubernetes networking principles (mentioned in the intro) will not work with Kubernetes. Some of the most popular networking implementations supported by Kubernetes are the following:

- Cisco Application Centric Infrastructure — an integrated overlay and underlay SDN solution with support for containers, virtual machines, and bare-metal servers.
- Cilium — open-source software for container applications with a strong security model.
- Flannel — a simple overlay network that satisfies all Kubernetes requirements while being one of the easiest to install and run.

For more available networking solutions, see the official Kubernetes documentation.

Conclusion

In this article, we covered two basic components of the Kubernetes networking architecture: container-to-container networking and pod-to-pod networking. We have seen that Kubernetes uses overlay networking to create a flat network structure where containers and pods can communicate with each other across nodes. All routing rules and IP namespaces are managed by Kubernetes by default, so there is no need to bother with creating subnets and using dynamic port allocation. In fact, there are several out-of-the-box overlay network implementations to get you started. Kubernetes networking enables an easy migration of applications from VMs to pods, which can be treated as "virtual hosts" with the functionality of VMs but with the added benefits of container isolation and a microservices architecture. In our following tutorial, we discuss the next layer of Kubernetes networking: services, which are abstractions that implement microservices and service discovery for pods, enabling highly available applications accessible from outside of a Kubernetes cluster.
