

Software-Defined Networking of Linux Containers

Cosmin Costache, Octavian Machidon, Adrian Mladin, Florin Sandu, Razvan Bocu
"Transilvania" University, Department of Electronics and Computers – Brasov, Romania

Abstract — Today's IT organizations that act as service providers are under increasing pressure to keep up with the continuous and growing demand for IT services. Through the shift from interactive, manual processes to the automated self-deployment of resources, providers can increase the efficiency of delivering on-demand services. Virtualized resources, particularly the virtual machines and containers that are the focus of the present paper, make the infrastructure transparent to the final user, are easier to configure and can seamlessly migrate to another host in real time, preserving process status. Linux containers are an emerging technology for fast and lightweight process virtualization. Because containers require fewer resources to run by sharing the operating system kernel, a higher density of containers can be achieved on the same host than with other virtualization solutions such as hardware virtualization or para-virtualization. The paper presents a solution to enable the on-demand provisioning of Linux containers using Software-Defined Networking, a flexible approach to treating even control-level resources "as a Service".

Keywords — SDN; Linux containers; virtualization; Software-Defined Networking

I. INTRODUCTION

The rise of cloud services and the increasing number of mobile personal devices such as smartphones, tablets and notebooks accessing those services are putting a lot of stress on the IT and network infrastructure. The need to provision user services on demand and to optimize resource allocation has become a priority for many service providers.

In this paper we present a flexible solution based on lightweight Linux containers that enables the on-demand provisioning of user applications or services. The applications run inside isolated containers that can be started and interconnected on demand. After starting, the virtual network interface of the container is connected to a virtual switch instance. The networking between the containers is dynamically configured by adding or updating flow definitions in the virtual switches.

The paper is structured as follows: section 2 explains the concept of software-defined networking, while section 3 covers the main virtualization solutions. Our solution for implementing the software-defined networking of Linux containers is presented in section 4, and the conclusions are presented in section 5.

II. SOFTWARE DEFINED NETWORKING

Software-Defined Networking (SDN) is a new architectural concept that aims to decouple the network control and forwarding functions [1]. This separation makes the network control layer programmable. Another key feature of SDN is the use of open protocols for the communication between the network elements and the controller [2].

A typical SDN architecture consists of three layers. The top layer is the application layer, which includes the applications delivering services. The applications interact with the SDN controller, which facilitates automated network management. At the bottom is the physical network layer, composed of plain network elements. The network elements are simplified and concentrate only on the forwarding functions. All the decisions, route calculations and policies are implemented in the controller [3]. Figure 1 shows the typical SDN architecture.

In an SDN environment the controller is the central point, providing an abstract, centralized view of the entire network. The most common protocol used in SDN networks for the communication between the controller and the network elements (switches) is the OpenFlow protocol. There are several commercial and open-source SDN controllers. For the current research, we have decided to use the OpenDaylight controller. OpenDaylight is a Java-based controller providing enterprise-grade performance.

Fig. 1. The SDN architecture

Software-defined networking is an emerging architecture model that is well suited to the dynamic nature of today's applications [4]. By exposing the network control through APIs, the network services are abstracted and the network itself becomes programmable [5].

The method we present leverages the capability to control and program the network through the SDN controller in order to interconnect the Linux containers.
III. VIRTUALIZATION SOLUTIONS

Virtualization can be described in a generic way as the separation of the service request from the underlying physical delivery of that service [6]. In computer virtualization, an additional layer called the hypervisor is typically added between the hardware and the operating system. The hypervisor layer is responsible both for the sharing of hardware resources and for the enforcement of mandatory access control rules based on the available hardware resources.

There are three types of virtualization: full virtualization, para-virtualization and operating-system-level (OS-level) virtualization. In the following sub-sections we present the concepts used by each virtualization model.

A. Full virtualization

Full virtualization is designed to provide a total abstraction of the underlying hardware and creates a complete virtual system for the guest operating system [7]. The hypervisor monitors the hardware resources and mediates between the guest operating systems and the underlying hardware. With this model, no modifications are needed in the guest OS. Each virtual machine is independent and unaware of the other virtual machines running on the same physical hardware. One advantage of this virtualization technique is the decoupling of the software (OS) from the underlying hardware. The performance of this technique is lower than on bare hardware because of the hypervisor mediation. The most common full virtualization solutions are provided by VMware, Microsoft and Oracle.
B. Para-virtualization

The para-virtualization technique requires modifications to the guest operating systems running inside the virtual machines. This method uses a hypervisor for shared access to the underlying hardware, but integrates virtualization-aware code into the operating system itself. As a result, the guest operating systems are aware that they are executing on top of a hypervisor and can interact more directly with the host system's hardware. This leads to higher performance and less virtualization overhead.

The primary advantage of para-virtualization is that it reduces the performance penalty observed in full virtualization.
C. OS-level virtualization

OS-level virtualization does not require an additional hypervisor layer. Instead, the virtualization capabilities are part of the host operating system (OS). This technique virtualizes servers on top of the host operating system itself. The overhead produced by hypervisor mediation is eliminated, enabling near-native performance.

Kernel-based Virtual Machine (KVM) is an open-source hypervisor that provides enterprise-class performance for running Windows or Linux guest virtual machines on x86 hardware. A widely used alternative is OpenVZ [8].

Linux containers (LXC) represent a different method of OS-level virtualization. The technology allows multiple isolated Linux systems (containers) to run on a single host operating system. The host kernel provides process isolation and performs resource management. This means that even though all the containers run under the same kernel, each container is a virtual environment that has its own file system, processes, memory, devices, etc. In the research presented here, we used an open-source implementation of the Linux containers technology called Docker.

Docker is an open-source platform for the management of Linux containers. Docker containers can be seen as extremely lightweight virtual machines that allow code to run in isolation from other containers. A Docker container can boot extremely fast, making it an excellent candidate for on-demand provisioning scenarios.

IV. SDN FOR LINUX CONTAINERS

Containers have been around in the IT environment for a long time, but the use of containers instead of virtual machines is a novel approach [9]. Linux containers are lighter and provide better performance than classical virtual machines. A full virtual machine can take up to several minutes to be provisioned, whereas a container can be instantiated and started in seconds. Because containers do not run on top of a hypervisor, the applications they contain offer performance close to that of bare metal. This paper presents a method to easily interconnect containers running on different virtual machines. It is also possible to isolate the interconnected containers into VLANs.

The test environment is composed of 3 virtual machines running Linux, each VM hosting multiple containers. Because we had only one physical machine available, we decided to use virtual machines to simulate a network topology with 3 nodes. All the virtual machines run on top of a Linux OS using the Kernel-based Virtual Machine (KVM). The host OS is a 64-bit Ubuntu distribution (12.04 LTS). The virtual machines run a Linux OS based on the Ubuntu 14.04 distribution and each has 1 GB of RAM allocated.
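The paper does not describe how the virtual machines themselves were created. As an illustration only, a comparable KVM guest could be provisioned with the virt-install tool from the libvirt suite; the VM name, disk size and installation image below are hypothetical:

virt-install --name vm1 --ram 1024 --vcpus 1 \
  --disk size=8 --cdrom ubuntu-14.04-server-amd64.iso \
  --os-variant ubuntu14.04 --network network=default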
Because the Docker containers are lightweight, each virtual machine will host multiple Docker containers. Additionally, on each virtual machine we have installed a virtual switch module. After creation, all the containers on a virtual machine will be attached to its local switch. The switches will be linked using GRE (Generic Routing Encapsulation) tunnels. For the virtual switch we have chosen the open-source software Open vSwitch. To simplify the test environment, we have configured static IP addresses on each virtual machine.

To enable the communication between the containers located on different virtual machines, we have created GRE tunnels between the 3 Open vSwitch instances. The tunnel configuration is depicted in Fig. 2.

Each Open vSwitch instance is connected through GRE tunnels with its peers on the other virtual machines. On each virtual switch we create a bridge corresponding to the actual network interface. We call this bridge "tun-br" (tunnel bridge). The bridge has a TEP (Tunnel EndPoint) interface, which gets an IP address assigned.

Fig. 2. The GRE tunnel configuration between the VMs

The bridge is created in the vSwitch from the command line interface using the following commands (for simplicity we present only the configuration from one virtual machine, the configuration on the other two virtual machines being done in a similar manner):

ovs-vsctl add-br tun-br
ovs-vsctl add-port tun-br tep0 -- set interface tep0 type=internal
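The TEP interface can then be configured with the standard Linux iproute2 tools. A minimal sketch, assuming a 10.0.0.0/24 addressing plan for the tunnel endpoints (the paper does not list the actual TEP addresses):

sudo ip addr add 10.0.0.101/24 dev tep0
sudo ip link set tep0 up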

To connect two virtual machines we have to create a tunnel between them. To create the GRE tunnels to the peer virtual switches, an additional bridge is created between the GRE ports on each virtual machine. This bridge is called "sdn-br". The following commands exemplify the GRE tunnel creation from VM1 (the allocated IP addresses of the VMs are 192.168.122.101 for VM1, 192.168.122.102 for VM2, etc.):

ovs-vsctl add-br sdn-br
ovs-vsctl set bridge sdn-br stp_enable=true
ovs-vsctl add-port sdn-br gre1 -- set interface gre1 type=gre options:remote_ip=192.168.122.101
ovs-vsctl add-port sdn-br gre2 -- set interface gre2 type=gre options:remote_ip=192.168.122.102

Spanning tree is enabled on the bridge because the mesh of GRE tunnels between the three switches would otherwise create a forwarding loop.
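The resulting bridge, port and tunnel configuration can be inspected with ovs-vsctl, and, once the TEP interfaces are addressed, the reachability of a peer endpoint can be checked with a plain ping across the tunnel (10.0.0.102 being the address we assumed above for the VM2 endpoint):

sudo ovs-vsctl show
ping -c 3 10.0.0.102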
After all the GRE tunnels are created, the next step is to create the Docker containers on each virtual machine. This step requires that Docker is already installed and configured on the machine. We skip the Docker installation steps, because they are not the focus of this paper.

The content and runtime configuration of a Docker container is stored in a repository as a template, also called a Docker image. A Docker image can be downloaded from a public or private repository. For our test scenario we have configured a private Docker repository and made it available on the network. The repository has been populated with several preconfigured Docker containers.

To facilitate search and retrieval from the repository, each container has an associated unique ID.

A Docker container can be retrieved from the repository using the pull command: docker pull <container_id>. After being retrieved from the repository, the application packaged in the container can be executed using the run command: docker run <container_id> <application_name>. We can always attach to a running container using the attach command: docker attach <container_id>. If the containers are started from scripts, they can be assigned to variables for easy handling. As an example:

C1=$(docker run -d -n=false -t -i ubuntu /bin/bash)
docker attach $C1

When the container is started, the Docker daemon automatically assigns the MAC and IP addresses and the container is attached to the default docker0 bridge. An example with 2 containers connected to the default bridge is shown in Fig. 3.

Fig. 3. Docker container configuration on VM1

Because we want the containers to be able to communicate using the configured GRE tunnels, we have to attach them to the previously created bridge sdn-br3. We also want to manually assign them the MAC and IP addresses. For this purpose we have developed a script for container configuration:

ovsattach <Open vSwitch Bridge> <Container_ID> <IP_Address>/<Subnet> <MAC_Address> [VLAN]

Usage example:

C1=$(docker run -d -n=false -t -i ubuntu /bin/bash)
sudo ./ovsattach.sh sdn-br3 $C1 1.0.0.3/24 00:00:00:00:00:03 20

To simplify the syntax, we have assigned the container to the variable C1. The same steps were performed on the other virtual machines to add their containers.
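The ovsattach script itself is not listed in the paper. The sketch below shows how such a script could be assembled from standard iproute2, Docker and Open vSwitch commands, in the spirit of the ovs-docker utility distributed with Open vSwitch; the variable names, the veth naming scheme and the in-container interface name eth1 are our assumptions:

#!/bin/bash
# ovsattach <bridge> <container_id> <ip/prefix> <mac> [vlan]
BRIDGE=$1; CONTAINER=$2; ADDRESS=$3; MAC=$4; VLAN=$5

# Find the PID of the container's init process and expose its
# network namespace to the ip netns tooling.
PID=$(docker inspect --format '{{.State.Pid}}' "$CONTAINER")
mkdir -p /var/run/netns
ln -sf /proc/$PID/ns/net /var/run/netns/$PID

# Create a veth pair: one end stays on the host, the other end
# goes into the container.
ip link add veth-h-$PID type veth peer name veth-c-$PID

# Attach the host end to the Open vSwitch bridge, optionally
# tagged with the requested VLAN.
if [ -n "$VLAN" ]; then
    ovs-vsctl add-port "$BRIDGE" veth-h-$PID tag=$VLAN
else
    ovs-vsctl add-port "$BRIDGE" veth-h-$PID
fi
ip link set veth-h-$PID up

# Move the container end into the container's namespace and
# configure its name, MAC address and IP address.
ip link set veth-c-$PID netns $PID
ip netns exec $PID ip link set veth-c-$PID name eth1
ip netns exec $PID ip link set eth1 address "$MAC"
ip netns exec $PID ip addr add "$ADDRESS" dev eth1
ip netns exec $PID ip link set eth1 up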
The final network configuration is depicted in Fig. 4. It is composed of 3 virtual machines, each having an Open vSwitch instance and up to 3 Docker containers.

Fig. 4. Full network configuration

After all the virtual switches were configured, we installed an SDN controller on the host machine. We have chosen the OpenDaylight SDN controller, because it is a mature product providing enterprise-grade performance.

All the virtual switches running inside the virtual machines were registered to this controller. After registration the switches became controller-aware, and all the configuration that we previously performed directly on the switches from the command line interface can now be done through the GUI provided by the OpenDaylight controller.
The registration of a virtual switch to the SDN controller is done from the command line, as follows:

ovs-vsctl set-controller sdn-br3 tcp:192.168.122.1:6633

In this example we assume that the controller is running on the host machine, which has the IP address 192.168.122.1, and that it is listening on port 6633.
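The registration can be verified from the same command line interface. It is also possible to choose what the switch should do while the controller is unreachable; for example, the standalone fail mode lets the switch keep forwarding as a plain learning switch:

sudo ovs-vsctl get-controller sdn-br3
sudo ovs-vsctl set-fail-mode sdn-br3 standalone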
After the switches are registered with the controller, all the packets received by the switches that do not match any entry in the flow table are sent to the controller, which takes the appropriate decision. The controller can decide to insert a rule in the flow table of the switch or to drop the packet.

Besides the GUI, the controller exposes a set of APIs that can be used to automatically configure the flows between containers. This allows SDN applications to dynamically provision containers and to configure the data flows in response to user requests.
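To illustrate the kind of rule involved, the same effect can be reproduced locally with ovs-ofctl, the OpenFlow command line utility of Open vSwitch; the destination address and output port below are hypothetical, and in the SDN scenario the equivalent flow would be pushed through the controller's northbound API instead of being added by hand:

# forward traffic addressed to container 1.0.0.3 out through port 2
sudo ovs-ofctl add-flow sdn-br3 "priority=500,ip,nw_dst=1.0.0.3,actions=output:2"
# inspect the resulting flow table of the virtual switch
sudo ovs-ofctl dump-flows sdn-br3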
Using scripts we can now instantiate Docker containers on any of the available virtual machines and, at the same time, use the APIs provided by the SDN controller to configure the underlying network to interconnect the containers. This creates the premises for the dynamic provisioning of user services [10].

V. CONCLUSIONS

In certain resource allocation scenarios, a good alternative to classical virtual machines is represented by containers, which use specific capabilities already available in the operating system: processing and storage reservation per process ("control groups"), sharing of binary files and libraries, etc. We have extended this concept of grouping to services, which can be configured to run in isolation inside a container instantiated on demand. Due to the lightweight nature of Linux containers, the service provider can reach a higher density of containers using the same resources, as opposed to other virtualization solutions.

Our research covered the SDN-based control of the data flow in bridges, which can be done remotely and even granted to third parties (service customers). The control of the bridges was done via an Open vSwitch database that can be accessed in the Cloud.

Recently, OpenStack has embedded Docker control, proving the real interest in and the good prospects of Linux containers, e.g. their remote instantiation in scalable SDN.

ACKNOWLEDGMENT

This paper is supported by the Sectoral Operational Programme Human Resources Development (SOP HRD), ID134378, financed from the European Social Fund and by the Romanian Government.

REFERENCES

[1] Nadeau T., Gray K., "SDN: Software Defined Networks", O'Reilly, 2013, ISBN: 978-1-449-34230-2
[2] Nygren A., Pfaff B., Lantz B., Heller B., "OpenFlow Switch Specification", version 1.3.3, ONF, Open Networking Foundation, September 27, 2013
[3] Jain R., Paul S., "Network virtualization and software defined networking for cloud computing: a survey", IEEE Communications Magazine, vol. 51, no. 11, pp. 24-31
[4] Bakshi K., "Considerations for Software Defined Networking (SDN): Approaches and use cases", IEEE Aerospace Conference, 2013, pp. 1-9
[5] ONF, Open Networking Foundation, "Software-Defined Networking: The New Norm for Networks", White paper, April 13, 2012. Retrieved June 2014
[6] Buyya R., Yeo C.S., Venugopal S., Broberg J., Brandic I., "Cloud Computing and emerging IT platforms: Vision, hype, and reality for delivering computing as the 5th utility", Future Generation Computer Systems, Volume 25, June 2009
[7] VMware Whitepaper, "Understanding Full Virtualization, Paravirtualization, and Hardware Assist", www.vmware.com/resources/techresources/1008, Retrieved May 2014
[8] Kolyshkin K., "Virtualization in Linux", OpenVZ Technical Report, September 2006, http://download.openvz.org/doc/openvz-intro.pdf, Retrieved May 2014
[9] Bardac M., Deaconescu R., Florea A.M., "Scaling Peer-to-Peer Testing with Linux Containers", The 9th RoEduNet Conference, Sibiu, Romania, June 2010
[10] Xavier M., Neves M., Rossi F., Ferreto T., Lange T., De Rose C., "Performance evaluation of container-based virtualization for high performance computing environments", in Parallel, Distributed and Network-Based Processing (PDP), 2013 21st Euromicro International Conference, 2013, pp. 233-240
