Virtualization at The Network Edge - A Technology Perspective
Abstract — Container-based virtualization offers a very feasible alternative to heavyweights like KVM or Xen. Containers are lightweight and offer near-native performance. They are also easy to deploy because of continuous integration/development tools and environments. This paper offers a brief introduction to containers, defines their properties and provides use-cases in the context of those properties. Secondly, we look at the live migration of stateful applications via containers. Live migration is a promising technique at the network edge to offload computing to other nodes and expand the use of fog computing / mobile-edge computing. Our experiment shows that live migration of stateful applications can result in three different types of errors, namely resend, reprocessed and wrong-order errors.

Keywords — Fog Computing, Edge Computing, Virtualization, Containers, Docker, Live Migration.

I. INTRODUCTION

Virtual machines have a long history of being used for hardware abstraction. Especially when heterogeneity and software complexity increase, the use of virtualization becomes more and more apparent. At the turn of the century, virtualization was assessed as a solution to the heterogeneity, complexity, manageability and programmability of wireless sensor networks [1].

We are on the verge of another evolution of this virtualization at the network edge, that is, utilizing virtualization to offer services at the network edge. These services target not only sensors and actuators but consumers and highly demanding machines, i.e., vehicles and corporations, even cities for their smart-city initiatives. There are three major virtualization technologies in use nowadays: Xen, KVM and, recently, containers. Other technologies include Microsoft Hyper-V and solutions from VMware (ESXi, vSphere). KVM relies on AMD-V or Intel VT-x instructions to facilitate the running of multiple OSs on a single CPU. Based on its extensive usage in cloud computing and the available expertise, both technical and in research, it makes sense to bring it down to the edge level. Micro-datacenters and cloudlets, technologies offering virtualized services at the network edge, might utilize Xen or KVM as their virtualization methodology. An example cloudlet implementation is OpenStack++ [2], based on the dominant industry cloud platform, OpenStack.

Containers as a form of virtualization are relatively nascent but are gaining a lot of momentum, especially after the introduction of Docker and its suitability towards DevOps [3]. Standard virtual machines offer a platform that can host multiple services within itself. On the other hand, containers facilitate at-scale deployments of one or more applications. Hence, containers should be complementary to standard virtual machines and not a full replacement.

In the next sections we discuss the suitability of containers to the modern paradigm of fog and edge computing. First, we look at containers in more detail, followed by an assessment of their suitability to edge computing platforms. Prominent use-cases and implementations are discussed afterwards. Finally, we present the results of our container migration experiments.

II. WHAT ARE CONTAINERS?

The Linux kernel offers a facility called cgroups, or control groups, that can isolate or limit the resource usage of a specific process, e.g., its access to memory, CPU, and I/O. Cgroups also offer prioritization, meaning preferential access to resources for individual processes. Other features of cgroups include gauging a group's resource usage, controlling it, and the ability to suspend a group. The Linux kernel also has another feature called namespaces. Namespaces allow the kernel to isolate a process's view of the system, e.g., they can isolate a process's view of the process IDs, filesystem, hostnames and network access. Combine cgroups' ability to partition resources with the ability of namespaces to isolate a process's view of the system, and you get effective virtualization. This OS-level virtualization and separation of groups of processes is commonly known as containerization, or containers. Major projects that offer container-based virtualization include Docker, LXC, CoreOS, LMCTFY, and Apache Mesos.
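As an illustration of how such cgroup limits are expressed, the following Python helper writes cgroup-v2 style control files (memory.max and cpu.max are the v2 interface file names). The helper name and the parameterized root directory are our own sketch, not from the paper; on a real system the root would be /sys/fs/cgroup and writing requires privileges.

```python
import os

def set_cgroup_limits(cgroup_root, group, mem_bytes, cpu_quota_us, cpu_period_us=100000):
    """Create a cgroup-v2 style directory and write resource limits.

    cgroup_root would normally be /sys/fs/cgroup; here it is a parameter
    so the sketch can be exercised against any writable directory.
    """
    path = os.path.join(cgroup_root, group)
    os.makedirs(path, exist_ok=True)
    # memory.max caps the group's total memory usage (cgroup v2 convention).
    with open(os.path.join(path, "memory.max"), "w") as f:
        f.write(str(mem_bytes))
    # cpu.max holds "<quota> <period>": at most quota microseconds of CPU
    # time per period, across all processes in the group.
    with open(os.path.join(path, "cpu.max"), "w") as f:
        f.write(f"{cpu_quota_us} {cpu_period_us}")
    return path
```

A process then joins the group by having its PID written to the group's cgroup.procs file; container runtimes automate exactly this bookkeeping, combined with namespace isolation.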
Being container-based virtualization means that the kernel and standard libraries are shared across containers; they do not need to be replicated for each resource or process. This reduces the size of each container compared with a virtual machine image. The drawback is that a container cannot run a different operating system within itself or utilize a separate set of drivers from the ones offered by the kernel. However, that is not an issue when all one wants is to run a thousand copies of the same application.

Different operating systems have been designed that target the operation and management of containers. Of interest to this discussion are CoreOS¹, ResinOS² and RancherOS³. CoreOS' Container Linux is an operating system designed from the ground up to be used with containers. It can be run on public or private clouds, within a virtual environment or on a physical server within an organization's premises. Container Linux is designed to run at massive scale, i.e., clusters. ResinOS is based on the embedded Yocto Linux. ResinOS allows you to run Docker containers on embedded devices. It supports a myriad of embedded computing boards, from the Raspberry Pi with ARM processors to UP boards with Intel x86 CPUs. Lastly, there is RancherOS, which removes most of the parts of the system and replaces them with containers. This strategy results in a minimalistic OS ready to run Docker containers while being less than 30 MB in size. Figure 1 shows a mist-fog-cloud hierarchy. Each level of this hierarchy is covered by systems of diverse types and capabilities. The container operating systems that we discussed can be placed on the fog and cloud hierarchies at each level. ResinOS and RancherOS are better suited to the fog environment, while CoreOS is suited towards cloud computing. Nevertheless, CoreOS can be used at telecom base stations or public-private servers. This view gives us an understanding of how containers can exist throughout the fog-to-cloud continuum.

III. CONTAINERS AT THE EDGE

The properties of, and solutions designed for, containers that make them ideal for fog/edge devices are discussed next. Each of these would require a thorough discussion, but that would go beyond the limits of this paper.

A. Lightweight

Standalone applications will always win in relative size against virtualized ones. However, if virtualization is the option to go for, containers offer a significant advantage in size. For example, the virtualized Mi-ESS testbed [4] took 2660 MB, while the containerized version in comparison was 476 MB. This drastic difference is because the container version does not need the full CentOS image (2355 MB); only the required libraries are used (181 MB).
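The reported sizes can be cross-checked with a line of arithmetic: almost all of the saving comes from replacing the full OS image with the shared libraries, with only a small remainder attributable to other differences.

```python
# Sizes reported for the Mi-ESS testbed comparison (in MB).
vm_image, container_image = 2660, 476
full_os, shared_libs = 2355, 181

saving = vm_image - container_image           # 2184 MB saved overall
explained = full_os - shared_libs             # 2174 MB from dropping the OS image
print(saving, explained, saving - explained)  # → 2184 2174 10
```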
¹ https://coreos.com/why/
² https://resinos.io
³ https://rancher.com/rancher-os/

2018 Third International Conference on Fog and Mobile Edge Computing (FMEC)

B. Mobility / Migration

Reduced size means easier mobility of containers between systems, or general transfers, e.g., for deployment. Mobility
might not be a big problem for data centres, which have dedicated gigabit links between their nodes, but on fog/edge systems this is a big concern. The latency/transfer requirements become even stricter when we are dealing with eHealth, Smart Grids or VR/AR games.

C. Performance

Containers offer near-native performance in terms of computation and network I/O [5]. The only drawback is with significant UDP traffic when using Docker's NAT feature.

D. Heterogeneity

Handling heterogeneity of computing hardware is common to virtualization technologies. The added advantage of containers is that, besides the cloud, they enable small single-board computers like the Intel NUC to be part of the virtualization infrastructure [6].

E. Management

Container management is highly automated. Docker Swarm and Google Kubernetes allow one to set up policies to automate the deployment of containers over large clusters. The ability to run a Docker swarm on edge devices has already been demonstrated [7].

F. Discoverability

Container management solutions come with an extra feature called a 'registry', i.e., Docker Registry⁴ or CoreOS Quay⁵. A registry holds all the container images and associated information. A fog node can request a co-located registry to download an image instead of requesting it from the cloud.

G. Deployability

Containers gained prominence because of their suitability to DevOps (Development, Operations). These are environments where fast developments are coupled with faster deployments. Although currently used in enterprise environments, this is very well suited to the goals of mobile edge computing (MEC). As with enterprises, in MEC the deployments are going to be controlled by the network operators themselves.

H. Dynamicity

As a lightweight VM infrastructure with distributed registries around a region, containers allow systems to be dynamically re-configured. One can switch a text compression service on a fog node with an image compression service on the fly.

I. Security

On the cloud front, containers do not offer as much separation as VMs do. But for edge devices, the ability to verify the local system, the integrity of a container and its functionality are more important. Current container solutions offer that level of security [8].

IV. USE-CASES / SCENARIOS

The following example scenarios and use-cases illustrate how containers can help to alleviate the issues in fog/edge environments.

A. Resource Management

In the fog layer, or any edge computing environment, we have geographically distributed, large-scale resource scattering. Some of the resources can be self-managed by the organization, some rented on a cloud-computing-style model from an MEC service provider. Renner et al. [10] discuss the feasibility of using containers for sharing the distributed computing resources amongst users à la cloud computing. In the deployment of such large-scale and diverse systems, management becomes a problem early on. Traditional management approaches, where much manual effort is put in by system administrators, would be insufficient. The features of manageability, discoverability and deployability make containers an ideal platform on which to build autonomic [9] management systems for fog computing.

B. Dynamic Service Delivery

Containers can be restarted, suspended, migrated and updated on the fly without affecting other containers or the local system. Combine this ability with efficient management strategies, and companies can offer dynamic services in the fog layer. A local gateway can act as a content delivery service and, by merely deploying a newer container, can be transformed into a video transcoding service, based on the needs of the end-user. Morabito and Beijar [6] assess the viability of such a platform on the Raspberry Pi 2 and Odroid C1+ single-board computers. Brunisholz et al. [11] have built a platform that can be re-configured to suit various scenarios and experiments in the wireless networking domain. Their platform, called WalT, relies on containers and Raspberry Pi to achieve adaptability and cost-efficiency.

C. Distributed Edge Platforms

Cheng et al. [12] present a geo-distributed analytics platform at the network edge that has a dynamic topology. Containers allow them not only to offer a distributed, dynamic platform but to grant multi-tenancy as well. Recently, building dynamic and distributed platforms has become a lot easier via container orchestration technologies like Docker Swarm, Apache Mesos and Google Kubernetes. Javed [7] built a distributed, resilient edge cluster based on Kubernetes and Apache Kafka. With dynamic deployment and clustering, we can also build ad-hoc edge clusters. Moreover, once the cluster performs its intended task, it can be reconfigured to perform an entirely new operation. One scenario would be reconfiguring drone swarms to perform person-of-interest surveillance, environmental monitoring or disaster evaluation. Running containers on a drone and updating them on-the-fly via ResinOS has already been demonstrated [13].

⁴ https://docs.docker.com/registry/
⁵ https://coreos.com/quay-enterprise/
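The orchestration pattern behind such reconfigurable platforms can be sketched in a few lines: tools like Docker Swarm and Kubernetes repeatedly compare a desired state against the running state and emit corrective actions. The function and service names below are illustrative, not any real orchestrator's API.

```python
def reconcile(desired, running):
    """Return the start/stop actions needed to make `running` match `desired`.

    desired: {service: replica_count} the operator wants.
    running: {service: replica_count} currently observed on the cluster.
    An orchestrator evaluates this comparison in a loop against live state.
    """
    actions = []
    for service, want in desired.items():
        have = running.get(service, 0)
        if want > have:
            actions.append(("start", service, want - have))
        elif want < have:
            actions.append(("stop", service, have - want))
    # Anything running that is no longer desired is scaled to zero.
    for service, have in running.items():
        if service not in desired:
            actions.append(("stop", service, have))
    return actions

# e.g. turning a content-delivery node into a transcoding node on the fly:
print(reconcile({"transcoder": 2}, {"cdn-cache": 1}))
# → [('start', 'transcoder', 2), ('stop', 'cdn-cache', 1)]
```

Declaring the new desired state is all a fog operator would submit; the loop converges the node towards it, which is what makes the dynamic-service-delivery scenarios above practical.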
Algorithm 1: TestDriver is responsible for maintaining the list of online and active DockerHandlers (DH) and distributing tasks.

Data: List of unprocessed tasks
Result: All tasks have been processed by ProcessingUnits.

initializeSockets();
activeProcessingUnit = null;
WHILE Test is not completed DO
    getListOfActiveProcessingUnits();
    readNextTaskFromList();
    IF 60 seconds since last migration THEN
        StopCurrentProcessingUnit();
        y = getRandomProcessingUnit();
        sendStartToNewProcessingUnit(y);
    END
    IF 10 seconds since last task sent THEN
        sendTaskToActiveProcessingUnit();
    END
    processReceivedMessagesAndSendResponse();
END
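Algorithm 1 can be rendered as a runnable Python sketch. The socket and Docker plumbing is stubbed out: a processing unit is just a callable, and the clock is injected so the 60-second migration and 10-second dispatch timers can be exercised instantly. Everything beyond the names in Algorithm 1 is our own illustration.

```python
import random
import time

class TestDriver:
    """Skeleton of Algorithm 1: dispatch tasks to processing units (PUs)
    and force a migration to a randomly chosen PU every `migrate_every`
    seconds, sending at most one task per `send_every` seconds."""

    def __init__(self, tasks, processing_units, migrate_every=60,
                 send_every=10, clock=time.monotonic):
        self.tasks = list(tasks)           # Data: list of unprocessed tasks
        self.pus = list(processing_units)  # stand-ins for DockerHandlers
        self.active = None                 # activeProcessingUnit = null
        self.clock = clock
        self.migrate_every = migrate_every
        self.send_every = send_every
        self.last_migration = clock()
        self.last_send = float("-inf")
        self.processed = []                # Result: processed tasks, in order

    def run(self):
        while self.tasks:                  # WHILE test is not completed DO
            now = self.clock()
            if self.active is None or now - self.last_migration >= self.migrate_every:
                # StopCurrentProcessingUnit(); start a random PU instead.
                self.active = random.choice(self.pus)
                self.last_migration = now
            if now - self.last_send >= self.send_every:
                # sendTaskToActiveProcessingUnit(); here a "PU" is a plain
                # function standing in for a container behind a socket.
                task = self.tasks.pop(0)
                self.processed.append(self.active(task))
                self.last_send = now

# A deterministic clock that advances 10 "seconds" per call:
ticks = iter(range(0, 10_000, 10))
driver = TestDriver(range(5), [lambda task: task * 2], clock=lambda: next(ticks))
driver.run()
print(driver.processed)  # → [0, 2, 4, 6, 8]
```

With a single stand-in PU the run is deterministic; in the actual experiment the migration branch stops a real container and live-migrates its state to another host, which is where the errors discussed below arise.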
C. Results

We have three performance metrics for our experiment: migration time, the total time it took to migrate containers; downtime, which measures how long no processing occurred during this migration; and error count, the errors that were encountered. During the experiment, we track three types of errors. 1) Re-send errors: if the TestDriver had to send a task again, this was considered a re-send error. The cause of such errors could be an offline PU or loss of data during live migration. 2) Reprocessed errors are the ones where a task was performed more than once. An example case is that when a container is transferred mid-execution, it would sometimes process that task more than once. 3) Wrong-order errors: the tasks from the TestDriver need to be executed in a "First In, First Out" manner. Reasons for lack of in-order execution can be improper migration, memory corruption, or choosing to execute the next task in the queue while failing to finish the current one.

How each error affects a stateful application varies. For example, re-send errors in the case of a webserver are not critical; in fact, they are somewhat expected. The same does not hold true for database operations. Wrong-order errors are acceptable for an asynchronous application but not for financial systems. Similarly, re-processing a user-login attempt might be acceptable, but re-processing a user's order (from an online shopping cart) is not. The effects of each error are thus dependent upon the application's architecture.

Figures 3-6 present the test results. The standard migration method increases the overall execution time by 5x, while the enhanced method increases it by 2.1x (Figure 3). The total migration time is almost the same, as in each case almost the same amount of data is transferred (Figure 4). The downtime during the enhanced migration remains relatively constant, between 12-15 seconds (Figure 5). The reason is that with the enhanced algorithm the PUs are down only during the time it takes to switch the active PU. Downtime in the case of the standard algorithm increases relative to the migration time. The downtime is higher because both the host and receiver workers are offline during the transfer process. The enhanced algorithm also reduces the total amount of errors (Figure 6). No re-send errors occur in the case of the enhanced algorithm because at least one PU is active all the time. There are, though, more re-processed errors, potentially because one PU is executing a task and, upon migration, the same task is re-sent to the newer PU.

Figure 3 Algorithm run times in seconds.
Figure 4 Total migration time for each algorithm per iteration.
Figure 5 Downtime in seconds for each algorithm, per iteration.
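The three error types can be made concrete with a small checker that compares what the TestDriver dispatched against what the PUs reported back. This tooling is our illustrative sketch, not the paper's measurement code; repeats in the dispatch log mark re-sent tasks, repeats in the completion log mark reprocessed tasks, and FIFO violations among first completions mark wrong-order errors.

```python
def classify_errors(sent, completed):
    """Count the three experiment error types.

    sent: task ids in the order the TestDriver dispatched them (FIFO),
          with repeats marking tasks that had to be sent again.
    completed: task ids in the order results came back, with repeats
          marking tasks that were executed more than once.
    """
    resend = len(sent) - len(set(sent))                  # tasks dispatched twice
    reprocessed = len(completed) - len(set(completed))   # tasks executed twice
    # Wrong-order: the first completion of each task must respect the
    # FIFO order in which tasks were first dispatched.
    first_seen = list(dict.fromkeys(completed))
    expected = [t for t in dict.fromkeys(sent) if t in set(completed)]
    wrong_order = sum(1 for a, b in zip(first_seen, expected) if a != b)
    return {"resend": resend, "reprocessed": reprocessed, "wrong_order": wrong_order}

# Task 2 was re-sent, task 1 ran twice, and task 4 finished before task 3:
print(classify_errors([1, 2, 2, 3, 4], [1, 1, 2, 4, 3]))
# → {'resend': 1, 'reprocessed': 1, 'wrong_order': 2}
```

Note that a single swap shows up as two out-of-place completions, so the wrong-order count measures displaced positions rather than swap events; whether that severity is tolerable depends on the application, as discussed above.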