Cosmin Costache1, Octavian Machidon1, Adrian Mladin1, Florin Sandu1, Razvan Bocu1
1 "Transilvania" University, Department of Electronics and Computers, Brasov, Romania
Abstract — Today's IT organizations that act as service providers are under increasing pressure to keep up with the continuous and growing demand for IT services. Through the shift from interactive, manual processes to the automated self-deployment of resources, providers can increase the efficiency of delivering on-demand services. Virtualized resources, particularly the virtual machines and containers that are the focus of the present paper, make the infrastructure transparent to the final user, are easier to configure and can seamlessly migrate to another host in real time, preserving process status. Linux containers represent an emerging technology for fast and lightweight process virtualization. Because containers require fewer resources to run by sharing the operating system kernel, a higher density of containers can be achieved on the same host, as opposed to other virtualization solutions such as hardware or para-virtualization. The paper presents a solution to enable the on-demand provisioning of Linux containers using Software-Defined Networking, a flexible approach to treating even control-level resources "as a Service".

Keywords — SDN; Linux containers; virtualization; Software-Defined Networking

I. INTRODUCTION

The rise of cloud services and the increasing number of mobile personal devices, such as smartphones, tablets and notebooks, accessing those cloud services are putting a lot of stress on the IT and network infrastructure. The need to provision user services on demand and to optimize resource allocation has become a priority for many service providers.

In this paper we present a flexible solution based on lightweight Linux containers that enables the on-demand provisioning of user applications or services. The applications run inside isolated containers that can be started and interconnected on demand. After starting, the virtual network interface of the container is connected to a virtual switch instance. The networking between the containers is dynamically configured by adding or updating flow definitions in the virtual switches.

The paper is structured as follows: section 2 explains the concept of software defined networking, while section 3 covers the main virtualization solutions. Our solution for implementing the software defined networking of Linux containers is presented in section 4, and the conclusions are presented in section 5.

II. SOFTWARE DEFINED NETWORKING

Software Defined Networking (SDN) is a new architectural concept that aims to decouple the network control and forwarding functions [1]. This separation enables the network control layer to be programmable. Another key feature of SDN is the use of open protocols for the communication between the network elements and the controller [2].

A typical SDN architecture consists of three layers. The top layer is the application layer, which includes the applications delivering services. The applications interact with the SDN controller, which facilitates automated network management. At the bottom is the physical network layer, composed of plain network elements. The network elements are simplified and concentrate only on the forwarding functions. All the decisions, route calculations and policies are implemented in the controller [3]. Figure 1 shows the typical SDN architecture.

In an SDN environment the controller is the central point, providing an abstract, centralized view of the entire network. The most common protocol used in SDN networks for the communication between the controller and the network elements (switches) is the OpenFlow protocol. There are several commercial and open-source SDN controllers. For the current research, we have decided to use the OpenDaylight controller. OpenDaylight is a Java-based controller providing enterprise-grade performance.

Fig. 1. The SDN Architecture

Software defined networking is an emerging architecture model that is well suited for the dynamic nature of today's applications [4]. By exposing the network control through
APIs, the network services are abstracted and the network itself becomes programmable [5].

The method we are presenting leverages the capability to control and program the network through the SDN controller in order to interconnect the Linux containers.

III. VIRTUALIZATION SOLUTIONS

Virtualization can be described in a generic way as the separation of the service request from the underlying physical delivery of that service [6]. In computer virtualization, an additional layer called the hypervisor is typically added between the hardware and the operating system. The hypervisor layer is responsible both for sharing the hardware resources and for enforcing mandatory access control rules based on the available hardware resources.

There are three types of virtualization: full virtualization, para-virtualization and operating system level (OS-level) virtualization. In the following sub-sections we present the concepts used by each virtualization model.

A. Full virtualization

Full virtualization is designed to provide a total abstraction of the underlying hardware and creates a complete virtual system for the guest operating system [7]. The hypervisor monitors the hardware resources and mediates between the guest operating systems and the underlying hardware. With this model, no modifications are needed in the guest OS. Each virtual machine is independent and unaware of other virtual machines running on the same physical hardware. One advantage of this virtualization technique is the decoupling of the software (OS) from the underlying hardware. The performance of this technique is lower than that of bare hardware because of the hypervisor mediation. The most common full virtualization solutions are provided by VMware, Microsoft and Oracle.

B. Para-virtualization

The para-virtualization technique requires modifications in the guest operating systems that run inside the virtual machines. This method uses a hypervisor for shared access to the underlying hardware but integrates virtualization-aware code into the operating system itself. As a result, the guest operating systems are aware that they are executing on top of a hypervisor and can interact more directly with the host system's hardware. This leads to higher performance and less virtualization overhead.

The primary advantage of para-virtualization is the reduction of the performance penalty observed in full virtualization.

C. OS-level virtualization

OS-level virtualization does not require an additional hypervisor layer. Instead, the virtualization capabilities are part of the host operating system (OS). This technique virtualizes servers on top of the host operating system itself. The overhead produced by hypervisor mediation is eliminated, enabling near-native performance.

Kernel-based Virtual Machine (KVM) is an open source hypervisor that provides enterprise-class performance to run Windows or Linux guest virtual machines on x86 hardware. A widely used alternative is OpenVZ [8].

Linux containers (LXC) represent a different method of OS-level virtualization. They allow multiple isolated Linux systems (containers) to be run on a single host operating system. The host kernel provides process isolation and performs resource management. This means that even though all the containers are running under the same kernel, each container is a virtual environment that has its own file system, processes, memory, devices, etc. In the research presented here, we used an open source implementation of the Linux Containers technology called Docker.

Docker is an open-source platform for the management of Linux containers. Docker containers can be seen as extremely lightweight virtual machines that allow code to be run in isolation from other containers. A Docker container can boot extremely fast, making it the best candidate for on-demand provisioning scenarios.

IV. SDN FOR LINUX CONTAINERS

Containers have been present in the IT environment for a long time, but the use of containers instead of virtual machines is a novel approach [9]. Linux containers are lighter and provide better performance compared to classical virtual machines. A full virtual machine can take up to several minutes to be provisioned, whereas a container can be instantiated and started in seconds. Because containers do not run on top of a hypervisor, the applications they contain offer performance close to that of bare metal. This paper presents a method to easily interconnect containers running on different virtual machines. It is also possible to isolate the interconnected containers into VLANs.

The test environment is composed of 3 virtual machines running Linux, and each VM hosts multiple containers. Because we had only one physical machine available, we decided to use virtual machines to simulate a network topology with 3 nodes. All the virtual machines run on top of a Linux OS using the Kernel-based Virtual Machine (KVM). The host OS is a 64-bit Ubuntu distribution (12.04 LTS). The virtual machines run a Linux OS based on the Ubuntu 14.04 distribution, and each has 1 GB of RAM allocated.

Because Docker containers are lightweight, each virtual machine will host multiple Docker containers. Additionally, on each virtual machine we have installed a virtual switch module. After creation, all the containers on a virtual machine will be attached to its local switch. The switches will be linked using GRE (Generic Routing Encapsulation) tunnels. For the virtual switch we have chosen the open source software Open vSwitch. To simplify the test environment, we have configured static IP addresses on each virtual machine.

To enable the communication between the containers located on different virtual machines, we have created GRE
tunnels between the 3 Open vSwitch instances. The tunnel configuration is depicted in Fig. 2.

Each Open vSwitch instance is connected through GRE tunnels with its peers on the other virtual machines. On each virtual switch we will create a bridge corresponding to the actual network interface. We will call this bridge "tun-br" (tunnel bridge). The bridge will have a TEP (Tunnel Endpoint) interface, which will get an IP address assigned.

For our test scenario we have configured a private Docker repository and made it available on the network. The repository has been populated with several preconfigured Docker containers. To facilitate search and retrieval from the repository, each container has an associated unique ID.

A Docker container can be retrieved from the repository
using the pull command: docker pull <container_id>. After
being retrieved from the repository, the application packaged
in the container can be executed using the run command:
docker run <container_id> <application_name>. We can
always attach to a running container using the attach
command: docker attach <container_id>. If the containers are started from scripts, they can be assigned to variables for easy handling. As an example:
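The pull, run and attach commands described above can be combined in a small provisioning script. The sketch below is illustrative only: the registry address "registry.local:5000", the image name "webapp" and the start command "/bin/start.sh" are hypothetical placeholders, not values from the test environment. Wrapping the calls in a function lets the script keep the returned container ID in a variable for later handling:

```shell
#!/bin/sh
# Sketch of on-demand provisioning from a private Docker repository.
# All concrete names (registry.local:5000/webapp, /bin/start.sh) are
# hypothetical placeholders chosen for illustration.

provision_app() {
    image="$1"   # container image to retrieve
    app="$2"     # application to execute inside the container

    # Retrieve the container from the repository (docker pull <container_id>)
    docker pull "$image" || return 1

    # Run the packaged application (docker run <container_id> <application_name>);
    # -d detaches and prints the new container's ID on stdout, which the
    # caller can capture in a variable for easy handling
    container_id=$(docker run -d "$image" "$app") || return 1
    echo "$container_id"
}

# Example usage: keep the returned ID, then attach or stop on demand
# cid=$(provision_app registry.local:5000/webapp /bin/start.sh)
# docker attach "$cid"
# docker stop "$cid"
```

Assigning the ID returned by docker run to a variable, as above, is what makes the container easy to address from the rest of the script.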