2018 International Conference on Computational Science and Computational Intelligence (CSCI)

Towards Dynamic On-Demand Fog Computing Formation Based On Containerization Technology

Hani Sami and Azzam Mourad

Department of Computer Science & Mathematics - Lebanese American University, Beirut, Lebanon

Abstract— The advantage of having fog nodes near IOT devices can be seen in bringing intelligence near the edge. Fogs provide storage, real-time processing for the tremendous amount of data generated by sensors, and hosting for users' services to offer a better quality of service with fast response time, while most importantly saving the IOT devices' energy. If a fog is not present near an IOT device, these advantages are simply lost. The current fogs in the literature are preconfigured in specific locations and run specific services all the time, which limits their availability and prevents dynamic update and context adaptation. In this paper, we address these limitations by proposing on-demand fog formation to create and manage fog devices and services on the fly. Our approach benefits from containerization technology such as Docker to build the fog environment on demand with the least initialization cost possible using volunteering devices, and from the Kubernetes utility Kubeadm, which allows monitoring fog nodes and orchestrating their services in a highly dynamic environment. A testing strategy composed of real devices is used, and promising results are presented illustrating the ability and efficiency of initializing fog devices on the fly anywhere and whenever needed.

Index Terms— IOT, Cloud Computing, On-Demand Fog, Fog Formation, Containers, Docker, Kubernetes, Kubeadm.

I. INTRODUCTION

Fog devices are currently extending cloud solutions near the edge to aid the constrained IOT devices that are tremendously increasing [1][2]. Fog computing in the context of an IOT environment can be the software running on a device that collects, processes, and sends data on behalf of the IOT devices to the cloud. Moreover, fog nodes perform some processing on the data to minimize the load on the cloud, and host services given from the cloud to respond to users faster while avoiding the networking costs to/from the cloud, which can lead to saving energy on the requesting devices [3]. In this context, there are three layers of the IOT environment with traditional fog devices running specific types of services all the time and covering specific users: the cloud, the fogs, and the IOT devices or users. Traditional fogs run on initialized and pre-configured devices with a fixed location and fixed services. This leads to fog absence in other locations where new users demand the same or new services from a nearby fog. There is a need for an architecture that can help in creating fogs on demand anywhere, and that can adapt or configure the services installed on a fog so they can be updated, removed, or changed dynamically based on the scenarios happening at the edge.

In parallel, the rise of containerization technology is opening the door for interesting solutions serving the fog computing objectives. Containers are services running on a device using its actual operating system. This makes containers lightweight and gives them an advantage over virtual machines, which use a full copy of the operating system and are much heavier on the devices [4]. It is now easy to have an abstracted operating system with all the environments needed to run multiple services in multiple containers that are copies of images pulled from image repositories such as Docker Hub. Docker is one of the main containerization technologies used nowadays [5]. The presence of so many containers running on many different devices created the need for managers or orchestrators. Docker Swarm, Kubernetes, and Apache Mesos-Marathon are the most popular open-source orchestration tools [6]. The authors in [7] compared these three orchestrators in terms of their limitations when used for IOT devices with fog nodes.

In this paper, we benefit from containerization technology and address the aforementioned problems of the existing statically formed fog computing architectures and solutions in the literature by proposing a dynamic on-demand fog computing framework based on Kubeadm and Docker with the presence of any type of volunteering devices. Our proposed on-demand creation serves as a solution to overcome the limitations of fog availability, statically defined locations, pre-configured devices, and pre-selected services. Kubeadm is a Kubernetes utility that allows us to create custom clusters on the fly, composed of fogs running the Kubelet component on any resource-constrained devices, and to monitor them through the master node. However, initializing a Kubeadm cluster on the spot and prompting other devices to join causes a significant time delay. Kubernetes/Kubeadm is chosen as the orchestration tool because it supports Docker.

To partially overview our approach and illustrate its contribution and advantages, we take two real-life scenarios that require dynamic creation of a nearby fog: (1) An IOT device is consuming significant energy while producing a lot of data and sending it all the way to the cloud. In this scenario, a Kubeadm cluster must be created beforehand next to this user, and the needed services are pushed on demand to fogs nearby. (2) A crime happened in an area where our framework has a Kubeadm environment built, so an image

978-1-7281-1360-9/18/$31.00 ©2018 IEEE 960


DOI 10.1109/CSCI46756.2018.00187
processing service will be pushed to volunteering fog(s) to process the image and identify faces at the crime scene. In this case, our framework, based on some criteria, lets the cloud decide that we need a fog serving near this device. A Kubeadm cluster should be prepared next to this user beforehand using volunteering devices only, knowing that this user is already requesting services from the cloud. Moreover, running fogs as worker nodes of the cluster on any type of volunteering devices imposes many challenges to address in our approach, such as making the right decision on when we actually need serving fogs, the location of the orchestrator of a cluster, the time it takes to initialize the orchestrator, how to decide on which fog to deploy a service, and the dynamicity of volunteers coming in and out. Through our approach, we were able to achieve promising results in terms of better quality of service to the requesters with faster response time, and deployment of services on demand anywhere, anytime. In this context, the contributions of this paper are threefold:

1) A novel on-demand fog creation framework based on requesters' needs, anytime and anywhere a volunteering device exists. Hence, our proposed approach addresses the availability problems of statically located and pre-configured fogs with pre-selected services through enabling dynamic creation, management, and service deployment. To the best of our knowledge, none of the current approaches in the literature have targeted these problems or provided similar features.
2) An efficient on-the-fly service deployment approach on newly formed nearby fogs.
3) An efficient orchestration approach for better response time in a dynamic fog environment.

This paper is organized as follows. The related work is described in Section II. An elaboration of the methodology and architecture is given in Section III. Section IV presents the experiment setup and results. Finally, the conclusion of the paper with future work is depicted in Section V.

II. RELATED WORK

In this section, we review some of the literature on using containers and orchestration tools as an environment that can work on fog and IOT devices. Elaborating the most useful and efficient usage of containers on fog devices is one of the recent research topics evolving in this area.

In the recent work done in [8], the authors focused on the ability of lightweight Docker containerization technology to support service provisioning over IOT devices. The authors compared two main frameworks: Container-Based Pair-Oriented IoT Service Provisioning, where two devices cooperate and are responsible for the interactions, and Container-Based Edge-Managed Clustering, where serving IOT nodes are monitored by a manager node. Their main contribution is to show how lightweight containerization technology can manage IOT resources by hosting services on them. Their experiments show the ability and efficiency of containerization technology in terms of CPU and energy consumption. In this paper, we are providing a scalable and flexible framework that can create and manage fog services on any type of volunteering resources present anywhere. We support users in need of a service hosted nearby. In addition, our framework is capable of forming the fog devices with the services requested by users on the fly, taking into account the initialization and deployment time.

The authors in [9][10] tried to build a framework that can dynamically push services to fogs present in an area where these devices are known to the orchestrator. The main point is to use containerization technology with its orchestration to manage the services on the fogs and to build a platform-as-a-service architecture, counting on the orchestration technologies by integrating them into fully functional environments supporting IOT. By default, these orchestration technologies come with a group of features that eliminate a single point of failure by pushing the containers to many devices at the same time, restarting any failed services, allowing links and communication between containers, and sharing their data volumes or storage. The authors of these papers are concerned with the dynamicity of updating services on pre-configured fog devices while monitoring their performance.

A model was proposed by the authors in [11] where dynamic deployment of services on helper nodes (fogs) of the main server using Docker is possible, so it is feasible to remove, add, stop, and run any service on a physically known fog at any time. The Kubernetes orchestrator was running on a server. Their approach was to first gather users' requests on that server. Second, although the fogs are not near users, the proposed model distributes requests on the fogs after pushing the needed services. This means that the fogs are only doing some processing instead of the server. Furthermore, the networking delays are not avoided; however, they obtain faster processing times by distributing the tasks from one server to many nodes. Last, the results are gathered back at the server and sent to users afterwards. The fogs along with their locations are already known and, most importantly, not near users. Furthermore, it is the user who requests pushing services to fogs; it is not decided on demand by the server. If a user wants a real-time update using data generated in a specific location where a fog does not exist, their framework will fail to do the job.

Benefiting from all of these features, we built our framework to run on volunteering devices to make the fog environment better suited for all kinds of applications, which can now run anywhere using volunteering resources and dynamic allocation of services in an on-demand, on-the-fly manner without any user interaction. To the best of our knowledge, none of the literature has proposed a solution serving the fog computing paradigm using volunteering devices only. Our contribution is summarized as on-demand fog computing formation on the fly using volunteering devices, which can be created anytime and anywhere based on users' needs while taking into account many constraints such as the profiles of the volunteers, their time constraints, and their dynamicity while coming in and out of a cluster.

Fig. 1: Overall Architecture.
Fig. 2: Node Architecture Per Layer.

III. METHODOLOGY AND ARCHITECTURE

In this section, we discuss the architecture of our proposed framework, illustrated in Figure 1. The on-demand fog Kubeadm cluster is built using the volunteering resources and the cloud only. The Kubeadm utility is used because it supports Docker and allows us to create Kubernetes clusters using any constrained devices. The architecture is composed of four layers: cloud, orchestrators, fogs, and users. Next, we discuss the role of each layer briefly.

The cloud is responsible for taking the on-demand decision of where serving fog(s) are needed for a particular service, based on users' behavior and recurrent requests coming from a particular location. A fog node should be present in the target location to host the needed service. For each group of fog devices, there is an associated orchestrator node. If these devices are not running and ready before the cloud decides to push a service, they have to be initialized, and this process causes large time delays. For this purpose, the cloud is also responsible for having the volunteering master node and potential fog volunteering devices ready before any service needs to be pushed, to avoid the delays of initializing new ones. In addition, the cloud monitors all orchestrators' services and their time availability to make sure that all clusters are up and running properly.

A master node of a Kubeadm cluster is the orchestrator which creates that cluster, and is responsible for adding and removing nodes as well as monitoring their status and pushing, updating, or removing services if they are not in use for a certain period of time. The orchestrator also monitors the time remaining for the fogs to serve. Before a fog volunteer leaves the cluster, the orchestrator migrates its containers to another suitable volunteer in its cluster.

The worker nodes are the fogs in our case, which send joining requests to the master to become part of that cluster. The master nodes of the on-demand clusters monitor their workers' (fog devices') services and time remaining in the cluster. The advantages of having a second layer of orchestration are:

• Reducing the delay in choosing and creating new fog devices to join the cluster in a highly dynamic environment of fogs coming in and out. In other words, the delay of the communication between the cloud and the fogs near edge devices is eliminated.

• Distributing the load coming to the cloud across orchestrators, especially when fogs joining the cluster are available only for short periods of time, which renders the situation very dynamic.

• Avoiding a single point of failure where the cloud is the only orchestrator for a huge number of fog devices.

A fog is a worker node of a Kubeadm cluster that has at least one service running to serve users. The service is assigned by a decision from its master node. This fog is likewise one of the volunteering devices that asked to join by sending requests to the cloud. If there is no suitable fog available to serve a user, the requests continue their flow normally to the cloud until the newly created fog's IP is published to users.

A user sends requests normally to the cloud if there is no fog present in the area, and receives a fog IP address once one is available nearby. This way, the user experiences a better quality of service (QOS) through a faster response time, less communication overhead, and a full device dedicated to serving one or a group of users. This user can be an IOT device.

The components embedded in each node of the architecture are presented in Figure 2. All nodes have to run the Kubeadm containerization required modules. A decision module for selecting volunteering nodes as orchestrators and fogs has to run on the cloud and the orchestrator, respectively. Furthermore, the profiler component should be running on the orchestrator and fog volunteers.

In the sequel, we present a description of the modules running on each layer of the architecture.
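The master-side monitoring described above (tracking each fog's remaining serving time and migrating its containers to another suitable volunteer before it leaves) could be sketched as follows. The data shapes and the 5-minute safety margin are our own assumptions for illustration.

```python
# Hypothetical in-memory view the orchestrator keeps of its worker fogs.
fogs = {
    "fog-a": {"minutes_left": 3,  "services": ["html-service"]},
    "fog-b": {"minutes_left": 90, "services": []},
}

MIGRATION_THRESHOLD_MIN = 5  # assumed safety margin before a volunteer leaves

def plan_migrations(fogs):
    """Return (service, source, target) moves for fogs about to leave the cluster."""
    moves = []
    for name, fog in fogs.items():
        if fog["minutes_left"] >= MIGRATION_THRESHOLD_MIN:
            continue
        # pick the volunteer with the most remaining time as the new host
        target = max((n for n in fogs if n != name),
                     key=lambda n: fogs[n]["minutes_left"], default=None)
        if target is not None:
            moves.extend((svc, name, target) for svc in fog["services"])
    return moves

print(plan_migrations(fogs))  # → [('html-service', 'fog-a', 'fog-b')]
```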

Each of these modules is implemented as Python scripts using the Flask web services framework and pushed to the Docker Hub repository after building their Docker files. These images are used later by any device joining a cluster.

A. Kubeadm Containerization Required Modules

Kubeadm helps in building a best-practice Kubernetes cluster in a very secure, easy, and extensible way [12]. When using Kubeadm, the type of machine does not matter; it can range from a Raspberry Pi to a server. Docker should always be running on all devices. The Kubelet component runs on each end and is responsible for keeping the communication with the master alive; it also checks the health of the running services and the status of the device. Kubectl is the command-line tool which should be installed to control a Kubeadm cluster. A network solution should be installed in the cluster and on each device to let the pods communicate with each other; an example of a CNI network is Flannel [13]. Kubeadm will not run if any of these dependencies is missing. The master and worker nodes should have the Kubeadm containerization required modules installed. When the master is initialized, it obtains a unique hashed token to be sent to devices so they can join the cluster as worker nodes. When a worker node is ready, different images are pulled and run inside its pods once it has joined.

B. Kubeadm Environment Initializer

The cloud starts initializing the first orchestrator in a location when it receives a number of requests above a certain threshold from this location. For this paper, we assume that there is always a list of available suitable volunteers to use. The following are the provided functions:

1- Taking decisions for selecting the most suited volunteering orchestrators: The decision module for selecting volunteering nodes as orchestrator, discussed in Section III-E, is triggered. It takes as input the profiles of available volunteers in a particular location. Its output is the most suitable volunteer to be assigned as the new orchestrator.

2- Joining volunteers to the created orchestrator: Whenever the volunteering orchestrator of a location is ready, the cloud prompts all remaining volunteering devices in this location to join the running Kubeadm cluster monitored by the newly created orchestrator. Therefore, the Kubeadm environment is ready to directly handle pushing of services without any initialization delays.

3- Guaranteeing a highly available cluster if possible: This module also checks the time remaining and availability of all the running orchestrators. It informs the orchestrator selection algorithm about the need for a new orchestrator before the old one terminates. If resources are available, another master node should always be created next to the initial one in order to achieve a highly available (HA) cluster; this feature is supported by Kubeadm when both master nodes join the same network [12]. In addition, if the initial orchestrator goes down, the secondary one replaces it. This way, the framework avoids the delays of creating a new master node. The initial master node always asks for a secondary one from the cloud. A limitation of Kubeadm is that whenever the master node goes down, the whole cluster goes down as well [12].

C. Orchestrator Manager

In the following, we provide a description of the functionalities offered by the orchestrator manager, which is a case of an orchestrator of orchestrators running on the cloud:

1- Collecting Users' Requests: Users' requests are the server logs that can be used for further analysis. These logs should be used to build and train the decision module that decides whether a serving fog is needed or not. This data represents historical data of user behavior in a particular location.

2- Taking the Decision of Fog Creation: A volunteer in a Kubeadm cluster does not become a fog until a service is pushed to it. Based on user behavior, the number of incoming requests, and the level of urgency of the services needed near them (a user can send an urgent request when losing a lot of energy or needing a better QOS), the cloud decides if a fog should be created to host that service. A time-sensitive situation where a fog should be created directly is handled because volunteers inside the Kubeadm cluster are ready, if any is available and suitable. The cloud might also decide that there is no need to push a service at this moment. It is a complex decision to make, and it might require a machine learning model that classifies user behavior and incoming requests into two categories: no need for a fog, or a fog should be created. However, the implementation of this unit is out of the scope of this paper. For now, we assume in our experiments that the decision of creating a fog has already been taken by the cloud.

3- Forwarding Requests to Orchestrators: When the cloud decides to push a service to a particular location, it forwards a request to the corresponding orchestrator in that location. This request contains the list of services to be pushed to fogs or available volunteers under that orchestrator.

D. Fog Manager

In this section, we discuss the functionalities of the fog manager, a case where we have an orchestrator of fogs. The orchestrator runs on volunteering devices near the fogs. The following components run only on the orchestrators:

1- Getting the List of Services and Volunteers: The fog manager accepts requests coming from the cloud containing a list of services to be pushed. The list of available potential volunteers is known to the master node since all of them are connected to it. Therefore, the master can ask for the volunteers' profiles when needed.

2- Calling the Decision Module to Select Volunteers: The orchestrator triggers the decision module to get the best distribution of the services to be pushed on the available volunteers.

4- Load Balancer: This component monitors the number of requests served by each fog and decides whether a load balancer is needed, which means more containers are needed to serve this service. In this case, either the fog hosting the service makes duplicates of the container, or the load is distributed on others at peak times only.
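Since the paper states that the modules are Python scripts using Flask, the fog manager's entry point (functionality 1 above) might look like the following sketch; the route name and payload shape are our assumptions for illustration, not the paper's actual API.

```python
from flask import Flask, jsonify, request

app = Flask(__name__)
pending = []  # services accepted from the cloud, awaiting placement on volunteers

@app.route("/services", methods=["POST"])
def receive_services():
    """Accept the list of services the cloud wants pushed to fogs in this area."""
    payload = request.get_json(force=True)
    services = payload.get("services", [])
    pending.extend(services)
    # at this point the decision module would be triggered to distribute
    # the pending services over the available volunteers
    return jsonify({"accepted": len(services)})

if __name__ == "__main__":
    app.run(host="0.0.0.0", port=5000)
```

A usage example with Flask's test client: `app.test_client().post("/services", json={"services": ["html-service"]})` returns `{"accepted": 1}`.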

This feature is provided by Kubernetes [12].

5- Backing Up Fogs: Similar to how we achieve HA clusters, this module checks the remaining time of a fog inside the cluster and tries to initiate a new one, or push the services to a new volunteer in the same cluster, before the old one goes down or stops serving.

6- Monitoring Volunteer Status and Services: This is done by checking the profile and services running on each fog. If a service is not being requested by users, it is removed. By default, Kubelet automatically restarts any service that fails and reports the failure to the orchestrator [12]. If the failures exceed a provided threshold, the orchestrator in our framework initializes another fog with the required services running before excluding the failing one from the cluster.

7- Publishing New Fogs' IPs: The orchestrator informs the cloud that new fog IP addresses are available. The cloud then informs users that the services they need can be requested through these IPs.

E. Decision Module for Selecting Volunteer Nodes as Orchestrator or Fog

This module is triggered by the orchestrator manager or fog manager, and an orchestrator or a fog is set up based on its decision. The optimal selection among nodes is based on several criteria and profiling measures, and is considered a multi-objective optimization problem (currently under development and out of the scope of this paper). For now, we assume that the chosen orchestrators or fogs are the most suitable ones to run the assigned services, which may not always be the case in real-life scenarios. This module chooses and initializes orchestrators or on-demand fogs based on their provided profiles, such as resources, computation power, the number of requests initiated by users in a specific location, and time availability. If an orchestrator should be selected and initialized to cover a particular area, this algorithm has the responsibility of sending the required modules of our framework to run on the selected orchestrator. The chosen orchestrator's resources should be high enough for the service demands, which are known for all the orchestrators and specified by our framework. If a fog is to be chosen, it is selected based on the services that should be hosted in the particular area, by comparing their demands to what is currently available in the volunteering set.

F. Volunteers Management

The cloud accepts volunteering requests from users and stores them in a database. The data contains the profile of the user and, most importantly, its location. This database is accessed by the elaborated volunteer management algorithm whenever the cloud wants to choose an orchestrator. It can also be a file on the orchestrator holding the list of all available volunteers in its cluster.

G. Profiler

This module allows the master node of a cluster to request all the necessary information about a device, including its computation power, number of CPUs, current CPU usage, memory size, memory usage, disk size, disk usage, battery level if available, device name, and, most importantly, its location and time availability. The profile is updated frequently and requested when needed by the cloud or the orchestrator.

H. Fog Client

In this section, we discuss the fog client functionalities running on the volunteering fogs.

1- Sending Requests to Join a Cluster: When a volunteering device wants to advertise its resources to join as an orchestrator or fog, the fog client sends join requests containing its advertised resources (profile) to the cloud, which keeps a record of them.

2- Keeping the Orchestrator Updated: The volunteering device replies to the cloud by sending an updated version of its profile once requested, based on time stamps.

IV. IMPLEMENTATION AND EXPERIMENT

We dedicate this section to describing the implemented scenario and experiments that prove the feasibility and effectiveness of our approach, while discussing our findings.

A. Implementation

In order to prove the efficiency of our proposed on-demand approach, we built a suitable environment composed of many nodes and a typical network topology where a user can access services running on machines present in a lab and others on a cloud server. We set up the environment on a Linux EC2 T2.small instance running on the AWS cloud. In the lab, we used an HP Core i7 laptop running Windows 7 with 8 GB of RAM and a Raspberry Pi 2 to demonstrate the usage of a mobile resource-constrained device. The user requesting the service was two hops away from the lab. The service used for testing in this environment was composed of a simple image. The container of the image receives requests from users and replies with an HTML file. The purpose of choosing this service is to check the networking delay of receiving responses from the hosting device, rather than counting on its computation power for now. The service was implemented in Python using the Flask framework, along with its Docker file, which was pushed to the Docker Hub repository for further usage.

B. Experiment

To illustrate the advantage of our proposed on-demand scheme, we prompt the AWS instance on the cloud to send an HTTP message to the Windows 7 machine containing a Kubeadm init command. The Windows 7 machine runs this command to have the orchestrator initialized, which takes 25 seconds on this laptop. The user will not notice this delay if the Kubeadm cluster is initialized beforehand, so the cluster is ready whenever the cloud decides to push any service to it. Once the orchestrator is ready, it sends the cloud a ready status with a Kubeadm join token.
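The init-and-join exchange just described can be sketched end to end as follows. The network transport is stubbed with plain function calls (in the experiment these were HTTP messages), and the token here is only a stand-in for the hashed token Kubeadm actually generates.

```python
import secrets

def orchestrator_handle_init():
    """Volunteer side: run `kubeadm init` (stubbed) and report back a join token."""
    token = secrets.token_hex(8)  # stands in for the hashed Kubeadm token
    return {"status": "ready", "token": token}

def worker_handle_join(token):
    """Worker side: run `kubeadm join` with the received token (stubbed)."""
    return {"joined": True, "token": token}

def cloud_form_cluster():
    """Cloud side: initialize the orchestrator, then prompt the worker to join."""
    reply = orchestrator_handle_init()           # took ~25 s on the Windows 7 laptop
    assert reply["status"] == "ready"
    result = worker_handle_join(reply["token"])  # took ~24 s on the Raspberry Pi 2
    return result["joined"]

print(cloud_form_cluster())  # → True
```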

Furthermore, the cloud tells the Raspberry Pi 2 to join the Kubeadm cluster created by the Windows machine by sending it the token. The time for the Raspberry Pi 2 to join the cluster is around 24 seconds. This time is also not counted in our experiments because the Kubeadm cluster is ready beforehand (we did not include the graph of device joining times because of space limitations). The Pi runs the Kubeadm join command with the corresponding token to become part of the cluster. Consequently, the Raspberry Pi 2 is now the only active worker node in the Windows machine's cluster.

We assumed that there is a need to push our HTML service near our user. This request is sent to the orchestrator, which decides to push the service to the ready fog. Figure 3 shows the difference between the response times of different numbers of requests sent to the cloud and those sent to the nearby fog, including the time of pushing the service from the concerned repository.

Fig. 3: Percentage and Response Time of Requests Sent to Fog Running On Raspberry Pi2 Against Regular Cloud.

The results are promising: major time savings are achieved with respect to accessing the service from the cloud. They also demonstrate the feasibility and advantage of our on-demand approach in real life. The response time of the requests going to the fog grows at the beginning, until the Docker images are downloaded on the master and the container is ready to serve. Cluster initialization time is not counted because the cluster is created beforehand; during this time, requests continue their flow to the cloud. Once the service is running as a container on the fog, the response time is almost negligible, with a value of 16 seconds for 1000 requests, compared to around 220 seconds for the cloud. An almost constant 93% improvement in response time is achieved. The results illustrate the significance of our approach: a fog is pushed on the fly to start serving on the Raspberry Pi 2 with the least initialization costs possible.

V. CONCLUSION

Static services running on fog devices, along with their availability near IOT devices, are the main challenges targeted in this paper. User intervention is required to update the pre-configured services running on fogs. We were able to solve the problem of fog availability by utilizing idle volunteering resources present anywhere close to users. On the other hand, we used Kubeadm and Docker to manage the required services on the fly on volunteering fog devices. A fog can be any constrained device. Containerization technology and volunteering resources form our on-demand fog formation solution to create fog devices on the fly wherever and whenever needed. This approach includes a second cooperative layer of orchestration to handle the volunteers and the dynamic environment of available resources, and the services running on them can be updated according to need. The Kubeadm cluster is created beforehand when possible to avoid the initialization time. The feasibility and improvement of using the on-demand fogs as serving nodes are proven by the experimental results compared to the cloud as a serving node. The paper shows promising results in this context. As future work, we will add an SDN controller component to the fog manager to manage all routes between volunteering fogs and update users with the proper path to reach their services. In addition, we are working on optimizing the distribution of services on the best set of suitable volunteers in a particular area next to users. Our next interest is to create the model for deciding when a serving fog is needed, by studying user behavior in locations of interest from the cloud's log files.

REFERENCES

[1] J. Gubbi, R. Buyya, S. Marusic, and M. Palaniswami, "Internet of things (IoT): A vision, architectural elements, and future directions," Future Generation Computer Systems, vol. 29, 2013.
[2] A. M. Alberti and D. Singh, "Internet of things: perspectives, challenges and opportunities," in International Workshop on Telecommunications (IWT 2013), 2013.
[3] F. Bonomi, R. Milito, J. Zhu, and S. Addepalli, "Fog computing and its role in the internet of things," in Proceedings of the First Edition of the MCC Workshop on Mobile Cloud Computing. ACM, 2012.
[4] W. Felter, A. Ferreira, R. Rajamony, and J. Rubio, "An updated performance comparison of virtual machines and linux containers," in 2015 IEEE International Symposium on Performance Analysis of Systems and Software (ISPASS). IEEE, 2015.
[5] D. Merkel, "Docker: lightweight linux containers for consistent development and deployment," Linux Journal, vol. 2014, no. 239, p. 2, 2014.
[6] R. Smith, Docker Orchestration. Packt Publishing Ltd, 2017.
[7] S. Hoque, M. S. de Brito, A. Willner, O. Keil, and T. Magedanz, "Towards container orchestration in fog computing infrastructures," in 2017 IEEE 41st Annual Computer Software and Applications Conference (COMPSAC), vol. 2. IEEE, 2017, pp. 294–299.
[8] R. Morabito, I. Farris, A. Iera, and T. Taleb, "Evaluating performance of containerized iot services for clustered devices at the network edge," IEEE Internet of Things Journal, vol. 4, no. 4, pp. 1019–1030, 2017.
[9] C. Pahl, S. Helmer, L. Miori, J. Sanin, and B. Lee, "A container-based edge cloud paas architecture based on raspberry pi clusters," in IEEE International Conference on Future Internet of Things and Cloud Workshops (FiCloudW). IEEE, 2016, pp. 117–124.
[10] C. Wöbker, A. Seitz, H. Mueller, and B. Bruegge, "Fogernetes: Deployment and management of fog computing applications," in NOMS 2018 IEEE/IFIP Network Operations and Management Symposium. IEEE, 2018.
[11] H.-J. Hong, P.-H. Tsai, and C.-H. Hsu, "Dynamic module deployment in a fog computing platform," in 2016 18th Asia-Pacific Network Operations and Management Symposium (APNOMS). IEEE, 2016, pp. 1–6.
[12] G. Sayfan, Mastering Kubernetes. Packt Publishing Ltd, 2017.
[13] H. Zeng, B. Wang, W. Deng, and W. Zhang, "Measurement and evaluation for docker container networking," in 2017 International Conference on Cyber-Enabled Distributed Computing and Knowledge Discovery (CyberC). IEEE, 2017, pp. 105–108.

