
MockFog 2.0: Automated Execution of Fog Application Experiments in the Cloud∗

Jonathan Hasenburg, Martin Grambow, and David Bermbach

Mobile Cloud Computing Research Group,
TU Berlin & Einstein Center Digital Future
{jh,mg,db}@mcc.tu-berlin.de

September 23, 2020

Abstract

Fog computing is an emerging computing paradigm that uses processing and storage capabilities located at the edge, in the cloud, and possibly in between. Testing and benchmarking fog applications, however, is hard since the runtime infrastructure will typically be in use or may not exist yet.

In this paper, we propose an approach that emulates such infrastructure in the cloud. Developers can freely design emulated fog infrastructure, configure performance characteristics, manage application components, and orchestrate their experiments. We also present our proof-of-concept implementation MockFog 2.0. We use MockFog 2.0 to evaluate a fog-based smart factory application and showcase how its features can be used to study the impact of infrastructure changes and workload variations.

1 Introduction

Fog computing is an emerging computing paradigm that uses processing and storage capabilities located at the edge¹, in the cloud, and possibly in between to deal with increasing amounts of IoT data but also to address latency and privacy requirements from use cases such as autonomous driving, 5G, and eHealth [7, 12, 27]. However, even though fog computing has many advantages, there are currently only a few fog applications and "commercial deployments are yet to take off" [51]. Arguably, the main adoption barrier is the deployment and management of physical infrastructure, particularly at the edge, which is in stark contrast to the ease of adoption in the cloud [7].

In the lifecycle of a fog application, this is not only a problem when running and operating a production system – it is also a challenge in application testing: While basic design questions can be decided using simulation, e.g., [8, 13, 16], there comes a point when a new application needs to be tested in practice. The physical fog infrastructure, however, will typically be available for a very brief time only: in between having finished the physical deployment of devices and before going live. Before that period, the infrastructure presumably does not exist and afterwards its full capacity is used in production. Without an infrastructure to run more complex integration tests or benchmarks, e.g., for fault tolerance in wide area deployments, the application developer is left with guesses, (very small) local testbeds, and simulation.

In this paper, we extend our preliminary work presented in [17]. We propose to evaluate fog applications on an emulated infrastructure testbed created in the cloud which can be manipulated based on a pre-defined orchestration schedule. In an emulated fog environment, virtual cloud machines are configured to closely mimic the real (or planned) fog infrastructure. By using basic information on network characteristics, either obtained from the production environment or based on expectations and experiences with other applications, the interconnections between the emulated fog machines can be manipulated to show similar characteristics. Likewise, performance measurements from real fog machines can be used to determine resource limits on Dockerized² application containers³. This way, fog applications can be fully deployed in the cloud while experiencing performance and failure characteristics comparable to a real fog deployment.

Using an emulated infrastructure also makes it possible to change machine and network characteristics, as well as the workload used during application testing, at runtime based on an orchestration schedule. For example, this makes it possible to evaluate the impact of sudden machine failures or unreliable network connections as part of a system test with varying load. While testing in an emulated fog will never be as "good" as in a real production fog environment, it is certainly better than simulation-based evaluation only. Moreover, it allows application engineers to test arbitrary failure scenarios and various infrastructure options at large scale, which is also not possible on small local testbeds.

∗ This work has been submitted for possible publication. Copyright may be transferred without notice, after which this version may no longer be accessible.
¹ Some scientists use fog and edge as synonyms, but we adhere to the definitions presented in [7].
² https://docker.com
³ When benchmarking Dockerized applications, developers need to account for Dockerization impacts [11, 12].
Figure 1: MockFog comprises three modules: infrastructure emulation (set up and manage cloud infrastructure based on an infrastructure definition), application management (manage the application lifecycle and data collection), and experiment orchestration (execute a pre-defined schedule of infrastructure changes).

Figure 2: Example: MockFog node manager, node agents, and containerized application components (App).

Thus, we make the following contributions:

• We describe the design of MockFog, a system that can emulate fog computing infrastructure in arbitrary cloud environments, manage applications, and orchestrate experiments integrated into a typical application engineering process.

• We present our proof-of-concept implementation MockFog 2.0, the successor to the original proof-of-concept implementation⁴.

• We demonstrate how MockFog 2.0 allows developers to automate experiments that involve changing infrastructure and workload characteristics with an example application.

⁴ https://github.com/OpenFogStack/MockFog-Meta

The remainder of this paper is structured as follows: We first describe the design of MockFog and discuss how it is used within a typical application engineering process (Section 2). Next, we evaluate our approach through a proof-of-concept implementation (Section 3) and a set of experiments with a smart factory application using the prototype (Section 4). Finally, we compare MockFog to related work (Section 5) before a discussion (Section 6) and conclusion (Section 7).

2 MockFog Design

In this section, we present the MockFog design, starting with a high-level overview of its three modules (Section 2.1). Then, we discuss how to use MockFog in a typical application engineering process (Section 2.2) before describing each of the modules (Sections 2.3 to 2.5).

2.1 MockFog Overview

MockFog comprises three modules: the infrastructure emulation module, the application management module, and the experiment orchestration module (see Figure 1). For the first module, developers model the properties of their desired (emulated) fog infrastructure, namely the number and kind of machines but also the properties of their interconnections. The infrastructure emulation module uses this configuration for the infrastructure bootstrapping and infrastructure teardown. For the second module, developers define application containers and where to deploy them. The application management module uses this configuration for the application container deployment, the collection of results, and the application shutdown. For the third module, developers define an experiment orchestration schedule that includes application instructions and infrastructure changes. The experiment orchestration module uses this configuration to initiate infrastructure changes or to signal load generators and the system under test at runtime.

The implementation of all three modules is spread over two main components: the node manager and the node agents. There is only a single node manager instance in each MockFog setup. It serves as the point of entry for application developers and is, in general, their only way of interacting with MockFog. In contrast, one node agent instance runs on each of the cloud virtual machines (VMs) that are used to emulate a fog infrastructure. Based on input from the node manager, node agents manipulate their respective VM to show the desired machine and network characteristics to the application.

Figure 2 shows an example with three VMs: two are emulated edge machines, and one is a single "emulated" cloud machine. In the example, the node manager has instructed the node agents to manipulate the network properties of their VMs in such a way that an application appears to have all its network traffic routed through the cloud VM. Moreover, the node agents ensure that the communication to the node manager is not affected by network manipulation by using a dedicated management network. Note that developers can freely choose where to run the node manager, e.g., it could run on a developer's laptop or on another cloud VM.

2.2 Using MockFog in Application Engineering

A typical application engineering process starts with requirements elicitation, followed by design, implementation, testing, and finally maintenance. In agile, continuous integration, and DevOps processes, these steps
are executed in short development cycles, often even in parallel – with MockFog, we primarily target the testing phase. Within the testing phase, a variety of tests could be run, e.g., unit tests, integration tests, system tests, or acceptance tests [53], but also benchmarks to better understand system quality levels of an application, e.g., performance, fault-tolerance, or data consistency [5]. Out of these tests, unit tests tend to evaluate small isolated features only and acceptance tests are usually run on the production infrastructure, often involving a gradual roll-out process with canary testing, A/B testing, and similar approaches, e.g., [45]. For integration and system tests as well as benchmarking, however, a dedicated test infrastructure is required. With MockFog, we provide such an infrastructure for experiments.

We imagine that developers integrate MockFog into their deployment pipeline (see Figure 3). Once a new version of the application has passed all unit tests, MockFog can be used to set up and manage experiments.

Figure 3: The three MockFog modules set up and manage experiments during the application engineering process.

2.3 Infrastructure Emulation Module

A typical fog infrastructure comprises several fog machines, i.e., edge machines, cloud machines, and possibly also machines within the network between edge and cloud [7]. If no physical infrastructure exists yet, developers can follow guidelines, best practices, or reference architectures such as proposed in [22, 34, 40, 42, 43]. On an abstract level, the infrastructure can be described as a graph comprising machines as vertices and the network between machines as edges [21]. In this graph, machines and network connections can also have properties such as the compute power of a machine or the available bandwidth of a connection. For the infrastructure emulation module, the developer specifies such an abstract graph before assigning properties to vertices and edges. We describe the machine and network properties supported by MockFog in Section 2.3.1 and Section 2.3.2.

During the infrastructure bootstrapping step (cf. Figure 3), the node manager connects to the respective cloud service provider to set up a single VM in the cloud for each fog machine in the infrastructure model. VM type selection is straightforward when the cloud service provider accepts the machine properties as input directly, e.g., on Google Compute Engine. If not, e.g., on Amazon EC2, the mapping selects the smallest VM that still fulfills the respective machine requirements. MockFog then hides surplus resources by limiting resources for the containers directly. When all machines have been set up, the node manager installs the node agent on each VM, which will later manipulate the machine and network characteristics of its VM.

Once the infrastructure bootstrapping has been completed, the developer continues with the application management module. Furthermore, MockFog provides IP addresses and access credentials for the emulated fog machines. With these, the developer can establish direct SSH connections, use customized deployment tooling, or manage machines with the APIs of the cloud service provider if needed.

Once all experiments have been completed, the developer can also use the infrastructure emulation module to destroy the provisioned experiment infrastructure. Here, the node manager unprovisions all emulated resources and deletes the access credentials created for the experiment.

2.3.1 Machine Properties

Machines are the parts of the infrastructure on which application code is executed. Fog machines can appear in various different flavors, ranging from small edge devices such as Raspberry Pis⁵, over machines within a server rack, e.g., as part of a Cloudlet [27, 44], to virtual machines provisioned through a public cloud service such as Amazon EC2.

To emulate this variety of machines in the cloud, their properties need to be described precisely. Typical properties of machines are compute power, memory, and storage. Network I/O would be another standard property; however, we chose to model this only as part of the network in between machines.

While the memory and storage properties are self-explanatory, we would like to emphasize that there are different approaches for the measurement of compute power. Amazon EC2, for instance, uses the amount of vCPUs to indicate the compute power of a given machine. This, or the number of cores, is a very rough approximation that, however, suffices for many use cases as typical fog application deployments rarely achieve 100% CPU load. As an alternative, it is also possible to use more generic performance indicators such as instructions per second (IPS) or floating-point operations per second (FLOPS). Our current prototype (Section 3) uses Docker's resource limits⁶.

⁵ https://raspberrypi.org
⁶ https://docs.docker.com/engine/reference/commandline/update/
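To make this concrete, the following sketch shows how a few machines of an infrastructure model could be described; the property names and values are illustrative assumptions and do not claim to match the actual MockFog 2.0 configuration schema.

// Hypothetical machine descriptions for an emulated fog infrastructure; property
// names and values are illustrative, not the actual MockFog 2.0 schema.
const machines = [
  { name: "gateway",        cpu: 1, memory: "0.5GB" }, // small edge device
  { name: "factory-server", cpu: 2, memory: "4GB"   }, // server-rack / Cloudlet-class machine
  { name: "cloud",          cpu: 2, memory: "4GB"   }  // "emulated" cloud machine
];

// On providers that do not accept such properties directly (e.g., Amazon EC2),
// MockFog selects the smallest fitting VM type and hides surplus resources via
// Docker resource limits.
console.log(machines.map(m => `${m.name}: ${m.cpu} vCPU, ${m.memory}`).join("\n"));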
Table 1: Properties of Emulated Network Connections

Property     Description
Rate         Available bandwidth rate
Delay        Latency of outgoing packets
Dispersion   Delay dispersion (+/-)
Loss         Percentage of packets lost in transit
Corruption   Percentage of corrupted packets
Reorder      Probability of packet reordering
Duplicate    Probability of packet duplication

Figure 4: Example: Infrastructure graph with machines (M), routers (R), and network latency per connection.

2.3.2 Network Properties

Within the infrastructure graph, machines are connected through network connections: only connected machines can communicate. In real deployments, these connections usually have diverse network characteristics [51], e.g., slow and unreliable connections at the edge and fast and reliable connections near the cloud, which strongly affect applications running on top of them. These characteristics, therefore, also need to be modeled – see Table 1 for an overview of our model properties. For example, if a connection between machines A and B has a delay of 10 ms, a dispersion of 2 ms, and a packet loss probability of 5%, a packet sent from A to B would have a mean latency of 10 ms with a standard deviation of 2 ms and a 5% probability of not arriving at all.

In most scenarios, not all machines are connected directly to each other. Instead, machines are connected via switches, routers, or other machines. See Figure 4 for an example with routers and imagine having to model the cartesian product of machines instead. In the graph, network latency is calculated as the weighted shortest path between two machines. For instance, if the connection between M2 and R1 (in short: M2-R1) has a delay of 5 ms, R1-R2 has 4 ms, and R2-M6 has 1 ms, the overall latency for M2-M6 is 10 ms. The available bandwidth rate is the minimum rate of any connection on the shortest path between two machines. The dispersion is the sum of dispersion values on the shortest path between two machines. Probability-based metrics, e.g., loss, are aggregated along the shortest path between two machines using basic probability theory, i.e., p = 1 − ∏_{i=1}^{n} (1 − p_i).
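As a minimal illustration of how these per-connection properties combine along a path, consider the sketch below; the helper is ours and not part of the MockFog 2.0 API, and the non-delay values in the example path are invented for illustration.

// Aggregate the effective network properties along a (pre-computed) shortest path.
// Each hop carries per-connection properties from Table 1: delay and dispersion in ms,
// rate in Mbit/s, loss as a probability in [0, 1].
function aggregatePath(hops) {
  return {
    // latency: the sum of hop delays (weighted shortest path)
    delay: hops.reduce((sum, h) => sum + h.delay, 0),
    // dispersion: the sum of dispersion values along the path
    dispersion: hops.reduce((sum, h) => sum + h.dispersion, 0),
    // bandwidth: the minimum rate of any connection on the path
    rate: Math.min(...hops.map(h => h.rate)),
    // probability-based metrics: p = 1 - prod(1 - p_i)
    loss: 1 - hops.reduce((prod, h) => prod * (1 - h.loss), 1)
  };
}

// Path M2 -> R1 -> R2 -> M6 from Figure 4 (delays 5, 4, and 1 ms; other values invented)
const path = [
  { delay: 5, dispersion: 1, rate: 1000, loss: 0.01 }, // M2-R1
  { delay: 4, dispersion: 1, rate: 1000, loss: 0.01 }, // R1-R2
  { delay: 1, dispersion: 1, rate: 1000, loss: 0.01 }  // R2-M6
];
console.log(aggregatePath(path)); // delay: 10 (ms), as in the text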
ables SERVER_IP and SERVER_PORT are set to
the specified values and become available to the ap-
2.4 Application Management Module
plication running inside the container. The value of
Fog applications comprise many components with com- SERVER_IP is resolved by a function that retrieves
plex interdependencies. The configuration of such an the IP address of the VM named cell-tower-2. Ad-
application also depends on the infrastructure as com- ditional such functions, e.g., for retrieving the IP ad-
ponents are not deployed on a single machine. As an dresses of all VMs on which a container with a specific
example, for communicating with a component running container name have been deployed, exist as well. Fi-
on another machine, we need its IP address and port. nally, the camera container is instructed to write a lo-
MockFog can deploy and configure application compo- cal copy of its recording to /camera/recording.mp4 via
nents on the emulated infrastructure, resolving such command line arguments. As this file path is inside

4
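Since Figure 5 is only described here and not reproduced, the following sketch shows what such a configuration could look like; the key names, the IP-resolver notation, and the port value are illustrative assumptions, not the exact MockFog 2.0 format.

// Hypothetical container configuration for the "camera" container, reconstructed
// from the description of Figure 5; key names and the IP-resolver notation are
// illustrative, not the exact MockFog 2.0 configuration format.
const cameraContainer = {
  name: "camera",
  image: "dockerhub/camera",            // pulled from the Docker Hub if not available locally
  copyFiles: {
    from: "appdata/camera",             // local directory on the node manager machine
    to: "/camera"                       // target directory on every VM that runs this container
  },
  environment: {
    // resolved at deployment time to the IP address of the VM named "cell-tower-2"
    SERVER_IP: "{{ ip_of_machine('cell-tower-2') }}",
    SERVER_PORT: 8080                   // illustrative value, not given in the text
  },
  // instructs the application to write a local copy of its recording
  commandLineArguments: ["--record-to", "/camera/recording.mp4"]
};

console.log(JSON.stringify(cameraContainer, null, 2));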
In the deployment configuration, developers specify for each container a deployment mapping of application components to VMs. Furthermore, they can additionally limit the CPU and memory resources available to a container, e.g., for balancing the resource needs of multiple containers running on the same VM. During the application container deployment step (cf. Figure 3), the node manager installs dependencies on the VMs, copies files, and starts the configured containers.

Once the experiment has been completed, the developer can also use the application management module for terminating the application and collecting results.

2.5 Experiment Orchestration Module

There are various ways of testing and benchmarking an application. As discussed in Section 2.2, MockFog primarily targets integration and system tests as well as benchmarking because these require a dedicated test infrastructure. For such experiments, MockFog can artificially inject failures to emulate network partitioning, machine crashes, and other events. This is particularly useful as failures are common in real deployments but will not necessarily happen while an application is being tested. Hence, artificial failures are the go-to approach for studying fault-tolerance and resilience of an application [6].

For the experiment orchestration (cf. Figure 3), developers define an orchestration schedule in the form of a state machine. We describe the actions executed within a state in Section 2.5.1; we describe how developers can build complex orchestration schedules with states and their transitions in Section 2.5.2.

2.5.1 State Actions

The orchestration schedule comprises a set of states and a set of transitioning conditions. At each point in time, there is exactly one active state for which MockFog executes four actions in the following order (cf. Figure 6):

Figure 6: In each state, MockFog can execute four actions: update infrastructure, issue application commands, broadcast state change, and monitor transitioning conditions.

Update Infrastructure With MockFog, all properties of emulated fog machines and network connections (Table 1) can be manipulated. For this, the node manager parses the orchestration schedule and sends instructions to the node agents which then update machine and network properties accordingly. For example, it is possible to reduce the amount of available memory (e.g., due to noisy neighbors), render a set of network links temporarily unavailable, increase network latency or packet loss, or render a machine completely unreachable in which case all (application) communication to and from the respective VM is blocked. MockFog can also reset all infrastructure manipulations back to what was originally defined by the developer. This action is optional.

Issue Application Commands Based on the orchestration schedule, the node manager can send customizable instructions to application components. For example, this can be used to instruct a workload generator to change its workload profile. This action is optional.

Broadcast State Change Sometimes, it is necessary to notify application components or a benchmarking system that a new state has been reached. While the Issue Application Commands action may distribute complex scripts if necessary, this action is a lightweight notification mechanism. In this sense, the first two actions are of a preparatory nature while this action signals to all components that the next experiment phase has been reached. This action is optional.

Monitor Transitioning Conditions Once the node manager reaches this action, an experiment timer is started. The node manager then continuously monitors whether a set of transitioning conditions – as defined in the orchestration schedule – has been met. In MockFog, transitioning conditions can either be time-based or event-based: A time-based condition is fulfilled when the experiment timer reaches the specified time threshold. An event-based condition is fulfilled when the node manager has received the required amount of a specific event (messages). This is useful if a developer wishes to react to events distributed by application components, e.g., when any application component sends a failure event once, MockFog should transition to the ABORT EXPERIMENTS state. Application components can either send events to the node manager directly or the node manager can receive events from a monitoring system such as Prometheus⁷.

⁷ https://prometheus.io

2.5.2 Building Complex Orchestration Schedules

For each state, developers can define multiple transitioning conditions; this allows MockFog to proceed to different states depending on what is happening during the experiment. For instance, an orchestration schedule could have a time-based condition that leads to an ABORT EXPERIMENTS state and additional event-based conditions that lead to a NEXT LOAD PHASE state. A transitioning condition may comprise several sub-conditions connected by boolean operators. This allows developers to define arbitrarily complex state diagrams, see for example Figure 7.
Figure 7: The experiment orchestration schedule can be visualized as a state diagram; the example comprises the states INIT, MEMORY -20%, MEMORY RESET, HIGH LATENCY, and FINAL with time-based (T) and event-based (E) transitioning conditions.

In the example, the orchestration schedule comprises five states: When started, the node manager transitions to INIT, i.e., it distributes the infrastructure configuration update and application commands. Afterwards, it broadcasts state change messages (e.g., this might initiate the workload generation needed for benchmarking) and begins monitoring the transitioning conditions of INIT. As the only transitioning condition is a time-based condition set to 20 minutes, the node manager transitions to MEMORY -20% after 20 minutes. During MEMORY -20%, the node manager instructs all node agents to reduce the amount of memory available to application components by 20%. Then it again broadcasts state change messages (e.g., this might restart workload generation) and starts to monitor the transitioning conditions of MEMORY -20%. For this state, there are two transitioning conditions. If any application component emits a memory error event, the node manager immediately transitions to MEMORY RESET and instructs the node agents to reset memory limits. Otherwise, the node manager transitions to HIGH LATENCY after 20 minutes. It also transitions to HIGH LATENCY from MEMORY RESET when it receives the event application started and at least one minute has elapsed. At the start of HIGH LATENCY, the node manager instructs all node agents to increase the latency between emulated machines. Then, it again broadcasts state change messages and waits for 20 minutes before finally transitioning to FINAL.
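Expressed as configuration, this schedule could look roughly as follows; the structure and property names are illustrative and do not claim to match the actual MockFog 2.0 schedule format.

// Hypothetical encoding of the Figure 7 schedule; structure and property names
// are illustrative, not the exact MockFog 2.0 orchestration schedule format.
const schedule = {
  initialState: "INIT",
  states: {
    "INIT": {
      transitions: [{ after: "20min", to: "MEMORY -20%" }]
    },
    "MEMORY -20%": {
      updateInfrastructure: { memory: "-20%" }, // reduce available memory by 20%
      transitions: [
        { onEvent: "memory error", to: "MEMORY RESET" }, // event-based condition
        { after: "20min", to: "HIGH LATENCY" }           // time-based condition
      ]
    },
    "MEMORY RESET": {
      updateInfrastructure: { memory: "reset" },
      // both sub-conditions must hold (boolean AND)
      transitions: [{ after: "1min", onEvent: "application started", to: "HIGH LATENCY" }]
    },
    "HIGH LATENCY": {
      updateInfrastructure: { latency: "increase" },
      transitions: [{ after: "20min", to: "FINAL" }]
    },
    "FINAL": { transitions: [] }
  }
};

console.log(Object.keys(schedule.states).length); // five states, as in Figure 7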

3 Proof-of-Concept Implementation

In this section, we describe our proof-of-concept implementation MockFog 2.0. Due to the significant changes to the MockFog approach, MockFog 2.0 is a complete rewrite and does not build on the first MockFog prototype. MockFog 2.0 has been developed with the goal of independence from specific IaaS cloud providers and can therefore be extended to the provider of choice. Our current open source proof-of-concept prototype⁸ integrates with Amazon EC2. Our implementation contains two NodeJS packages: the node manager (Section 3.1) and the node agent (Section 3.2).

⁸ https://github.com/MoeweX/MockFog2

3.1 Node Manager

The node manager NodeJS package can either be integrated in custom tooling or be controlled via the command line. We provide a command line tool as part of the package that allows users to control the functionality of the three modules. For the infrastructure emulation module, the node manager relies on the Infrastructure as Code (IaC) paradigm. Following this paradigm, an infrastructure definition tool serves to "define, implement, and update IT infrastructure architecture" [30]. The main advantage of this is that users can define infrastructure in a declarative way, with the IaC tooling handling resource provisioning and deployment idempotently. In our implementation, the node manager relies on Ansible⁹ playbooks.

⁹ https://ansible.com

The node manager command line tool offers a number of commands for each module. As part of the infrastructure emulation module, the developer can:

• Bootstrap machines: set up virtual machines on AWS EC2 and configure a virtual private cloud and the necessary subnets.

• Install node agents: (re)install the node agent on each VM.

• Modify network characteristics: instruct node agents to modify network characteristics.

• Destroy and clean up: unprovision all resources and remove everything created through the bootstrap machines command.

When modifying the network characteristics for a MockFog-deployed application, the node manager accounts for the latency between provisioned VMs. For example, when communication should on average incur a 10 ms latency, and the existing average latency between two VMs is already 0.7 ms, the node manager instructs the respective node agents to delay messages by 9.3 ms (a minimal sketch of this calculation follows at the end of this section).

As part of the application management module, the developer can:

• Prepare files: upload the local application directories to the VMs and pull Docker images.

• Start containers: start Docker containers on each VM and apply container resource limits.

• Stop containers: stop Docker containers on each VM.

• Collect results: download the application directories from the VMs to a local directory on the node manager machine.

With the experiment orchestration module, the developer can initialize experiment orchestration. When the orchestration schedule includes infrastructure changes, the node manager instructs affected node agents to override their current configuration in accordance with the updated model. This is done via a dedicated "management network", which always has vanilla network characteristics and which is hidden from application components.
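The latency compensation mentioned above boils down to subtracting the measured baseline from the target latency; a trivial sketch of that calculation (ours, not MockFog 2.0 code):

// Artificial delay a node agent has to inject so that the emulated link shows the
// desired latency despite the latency that already exists between the two cloud VMs.
function delayToInject(targetMs, measuredMs) {
  // never inject a negative delay if the VMs are already slower than the target
  return Math.max(0, targetMs - measuredMs);
}

console.log(delayToInject(10, 0.7)); // 9.3 ms, as in the example above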
3.2 Node Agent

While the node agent is also implemented in NodeJS, it uses the Python library tcconfig¹⁰ to manage network connections. tcconfig is a command wrapper for the Linux traffic control utility tc¹¹. Thus, our current node agent prototype only works for Linux-based VMs. The node manager ensures that all dependencies are installed alongside the node agent.

The node manager also starts the node agent on each VM. This, however, can also be done manually via the command line. The only configuration necessary is the port on which the node agent exposes its REST endpoint. This REST endpoint is used by the node manager but can also be used by developers directly. To simplify its usage, we created a fully documented Swagger¹² interface.

Using the REST endpoint, one can retrieve status information and real-time ping measurements to a list of other machines. The node manager uses the ping measurement results to calculate the artificial delay which should be injected to reach the desired latency between VMs. Furthermore, the endpoint can be used to set resource limits for individual containers as needed by the application management module for the start containers command and by the experiment orchestration module. Finally, the endpoint can be used to supply (and read the current) network manipulation configuration. On each update call, the node agent receives an adjacency list containing all other VMs. For each of them, the list includes the specification of the effective metrics, i.e., how the connection should be realized from the viewpoint of the node manager's infrastructure model. If a particular machine should not be reachable, the adjacency list contains a packet loss probability of 100% for the corresponding VM. This allows us to easily emulate network partitions.

¹⁰ https://github.com/thombashi/tcconfig
¹¹ https://man7.org/linux/man-pages/man8/tc.8.html
¹² https://swagger.io/
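The update payload itself is described but not shown; the following sketch illustrates what such an adjacency list could look like (field names and values are illustrative, not the actual node agent REST schema).

// Hypothetical network manipulation payload for the node agent of one VM: one entry
// per other VM with the effective metrics to realize; field names are illustrative,
// not the actual MockFog 2.0 node agent REST schema.
const adjacencyList = [
  // regular link: 10 ms delay, 2 ms dispersion, 5% loss
  { target: "cloud-vm", delay: "10ms", dispersion: "2ms", loss: 0.05, rate: "100Mbit" },
  // emulated network partition: a packet loss probability of 100% blocks all traffic
  { target: "edge-2", loss: 1.0 }
];

console.log(JSON.stringify(adjacencyList, null, 2));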
4 Experiments

After having shown with our proof-of-concept implementation MockFog 2.0 that MockFog can indeed be implemented, we now use an example application to showcase its key features. In this second evaluation part, we run experiments with a fog-based smart factory application (Section 4.1) for which we emulate a runtime infrastructure with MockFog 2.0. In the experiments, we use an orchestration schedule (Section 4.2) that includes multiple infrastructure and workload changes and study the effects on the application (Section 4.3). Note that our goal is to provide an overview of the features of MockFog 2.0 and not to design a realistic benchmark or system test of our example application. As a scenario, we used the smart factory application introduced in [35]¹³.

¹³ Our fork of this application's source code is available at https://github.com/MoeweX/smart-factory-fog-example/tree/mockfog2.

4.1 Overview of the Smart Factory Example

In the smart factory, a production machine produces goods that are packaged by another machine. Based on input from a camera and a temperature sensor, the production rate and packaging rate are adjusted in real time. The packaging rate is used to create a logistics prognosis, i.e., for scheduling the collection and delivery of goods. Furthermore, a dashboard provides a historic packaging rate overview.

Figure 8: The smart factory application comprises 11 components and 10 communication paths between individual components (C01 – C10).

Each of the components of the smart factory application communicates with at least one other component (see Figure 8). Camera sends its recordings to check for defects which notifies production control about products that should be discarded. Based on input from production control and temperature sensor, adapt packaging transmits the target packaging rate to packaging control. Adapt packaging calculates this rate based on the current production rate, the backlog of produced but not packaged items, and the temperature input: packaging must be halted if the current temperature exceeds a threshold. Packaging control sends the current rate and backlog to predict pickup and aggregate. Predict pickup predicts when the next batch of goods is ready for pickup and sends this information to logistics prognosis. Aggregate aggregates multiple rate and backlog values to preserve bandwidth and transmits the results to generate dashboard. Generate dashboard stores the data in a database, creates an executive summary, and sends it to central office dashboard.

The smart factory application comprises components that react to events from the physical world (light gray boxes in Figure 8) and components that only react to messages received from other application components (dark gray boxes). For example, the temperature sensor measures the operation temperature of the physical machine that packages goods. Adapt packaging, on the other hand, receives messages from other application components and has no direct interaction with the physical world.
Figure 9: The smart factory infrastructure comprises multiple machines with different CPU and memory resources. Communication between directly connected machines incurs a round-trip latency between 2 ms and 24 ms.

When testing real-time systems, two important concepts are reproducibility and controllability [1, p. 263]. During experiments, camera and temperature sensor hence generate an input sequence that can be controlled by MockFog 2.0 to achieve reproducibility. Furthermore, components that do something in the physical world based on received messages, e.g., packaging control, only log their actions when doing experiments rather than sending instructions to physical machines.

Figure 9 shows the machines and network links of the smart factory infrastructure. A gateway connects the camera, production machine, packaging machine, and temperature sensor. Each of these has a 2 ms round-trip latency to the gateway, 1 CPU core, and 0.1 GB to 1.0 GB of available memory. The gateway is connected to the factory server, which is connected to the central office server and the cloud. The cloud and central office server are also connected directly. All connections between machines have a bandwidth of 1 GBit and do not incur any artificial packet loss, corruption, reordering, or duplicates. Table 2 shows the mapping of application components to machines. To derive such a mapping and to compare it to other approaches, developers can use approaches such as [16, 23, 49].

Table 2: Mapping of Application Components to Machines

Application Component      Machine
Camera                     Camera
Temperature Sensor         Temperature Sensor
Check for Defects          Gateway
Adapt Packaging            Gateway
Production Control         Production Machine
Packaging Control          Packaging Machine
Predict Pickup             Factory Server
Logistics Prognosis        Factory Server
Aggregate                  Factory Server
Generate Dashboard         Cloud
Central Office Dashboard   Central Office Server
4.2 Orchestration Schedule

Figure 10: The orchestration schedule has nine states. During successful executions, the transitioning conditions mostly use a combination of time-based (5 minutes) and event-based conditions (receipt of 295 dashboard generated events (dge)).

For the experiments, we use an orchestration schedule with nine states (Figure 10). At the beginning of each state, MockFog 2.0 instructs camera and temperature sensor to restart their workload data sequence. Thus, the application workload is comparable during each state. The schedule starts with INIT; after a minute, MockFog 2.0 transitions to state A. The purpose of state A is to establish a baseline by running the application in an environment that closely mimics the real production environment. At runtime, generate dashboard creates a new dashboard once per second and sends a notification to the node manager. We use this event as a failure indicator in all states: if it has not been received at least 295 times within a time frame of 5 minutes, the experiment failed. If, however, it has been received 295 times and five minutes have passed, MockFog 2.0 transitions to the next state.

For state B, MockFog 2.0 changes the infrastructure: the factory server only has access to one CPU core instead of two. Then, in state C, loss and corruption on the network link between gateway and factory server are set to 20%. Note that the factory server has not regained access to its second CPU core. In state D, all infrastructure changes are reset; the environment now again closely mimics the real production environment and the application can stabilize. In state E, the round-trip latency for messages sent from the factory server to the cloud is increased from 24 ms to 100 ms. In state F, the latency is reset to 24 ms but the temperature sensor is instructed to change its workload: the average temperature sensor measurements are now 30% higher, which causes the packaging machine to pause more frequently. This, in turn, should decrease the average packaging rate and increase the average packaging backlog. After state F, MockFog 2.0 transitions to FINAL and the experiment orchestration ends. If at any point a failure occurs, MockFog 2.0 will transition to EXPERIMENT FAILED.
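Written as a transitioning condition with boolean sub-conditions (cf. Section 2.5.2), the success criterion used in these states could be sketched as follows; the structure is illustrative, not the exact MockFog 2.0 format.

// Hypothetical transitioning conditions for the states of Figure 10: proceed only if
// five minutes have passed AND 295 "dashboard generated" (dge) events were received;
// a timeout shortly afterwards leads to EXPERIMENT FAILED. Structure is illustrative.
const stateTransitions = [
  {
    to: "NEXT_STATE",
    allOf: [
      { after: "5min" },                           // time-based sub-condition
      { event: "dashboard generated", count: 295 } // event-based sub-condition
    ]
  },
  {
    to: "EXPERIMENT_FAILED",
    allOf: [{ after: "5min 5sec" }] // fallback if the dge events did not arrive in time
  }
];

console.log(stateTransitions.map(t => t.to)); // [ 'NEXT_STATE', 'EXPERIMENT_FAILED' ]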
Figure 11: Latency deviation across experiment runs (Run 1 to Run 5, shown as the difference to the median in % per communication path C01 to C10 and per state) is small for most communication paths even though experiments were run in the cloud. On paths C04 to C07, resource utilization is high, leading to the expected variance across experiment runs.

4.3 Results

In the following, we first validate that running the orchestration schedule leads to reproducible results (Section 4.3.1). Then, we analyze how the changes made in each state of the orchestration schedule affect the application (Section 4.3.2) before summarizing our results (Section 4.3.3).

4.3.1 Experiment Reproducibility

To analyze reproducibility, we repeat the experiment five times. For each experiment run, we bootstrap a new infrastructure, install the application containers, and start the experiment orchestration – this is done automatically by MockFog 2.0. After the experiment run, we calculate the average latency for each communication path (C01 to C10 in Figure 8). Ideally, the latency results from all five runs should be identical for each communication path; in the following, we refer to the five measurement values for a given communication path as its latency set. In practice, however, it is not possible to achieve such a level of reproducibility because the application is influenced by outside factors [1, p. 263]. For example, running an application on cloud VMs and in Docker containers already leads to significant performance variation [11, 12]. To measure this variation, we use the median run of each latency set as a baseline and calculate how much individual runs deviate from this baseline (see Figure 11). From the figure, we can see that the deviation is small for almost all communication paths. The exception is communication paths C04 to C07 in states B, C, and E, which show significant variance across runs. In these states, the node manager applies various resource limits on the factory machine. Reducing the available compute and network resources seems to negatively impact the stability of affected communication paths. This, however, is not a limit of reproducibility; rather, identifying such cases is exactly what MockFog was designed for. Thus, we can conclude that experiment orchestration leads to reproducible results under normal operating conditions. This holds true even if a new set of virtual machines is allocated for each run.
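The deviation plotted in Figure 11 can be computed per communication path and state as the percentage difference of each run from the median of its latency set; a small sketch of that calculation (the values shown are illustrative, not measured data):

// Percentage deviation of each run's average latency from the median of its latency
// set (the five measurements of one communication path in one state).
function deviationFromMedian(latencySet) {
  const sorted = [...latencySet].sort((a, b) => a - b);
  const median = sorted[Math.floor(sorted.length / 2)]; // five runs -> the middle element
  return latencySet.map(value => ((value - median) / median) * 100);
}

// Illustrative latency set (ms) for one path in one state, not measured data:
console.log(deviationFromMedian([20.1, 19.8, 20.0, 23.5, 19.9]));
// -> per-run deviations in percent relative to the median run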
4.3.2 Application Impact of State Changes

Of the five experiment runs, the second run is the most representative for the orchestration schedule: the latency of its communication paths is usually close to the median latency of the set (Figure 11). Thus, we select this run as the basis for analyzing how the changes made in each state affect application metrics.

Figure 12: Latency between packaging control and logistics prognosis is affected by both CPU and network restrictions.

Figure 12 shows the latency between packaging control and logistics prognosis. This latency includes the communication path latency of C06 and C07, as well as the time predict pickup needs to create the prognosis. In states A, D, E, and F, there are either no infrastructure changes or the ones made are on alternative communication paths; thus, latency is almost identical. In state B, the factory server loses a CPU core; as a result, predict pickup needs more time to create a prognosis, which increases the latency. In state C, the communication path C06 additionally suffers from a 20% probability of packet loss and a 20% probability of packet corruption. As these packets have to be resent¹⁴, this significantly increases overall latency.

¹⁴ Resent packets can also be impacted by loss or corruption.

Figure 13: Latency on C09 between aggregate and generate dashboard is affected by the delay between factory server and cloud.

Figure 13 shows the latency on C09, i.e., the time between aggregate sending and generate dashboard receiving a message. In states A, D, and F, there are either no infrastructure changes or the ones made are on alternative communication paths; thus, latency is almost identical. Note that the minimum latency is 12 ms; this makes sense as the round-trip latency between factory server and cloud is 24 ms. In states B and C, the factory server loses a CPU core; MockFog 2.0 implements this limitation by setting Docker resource limits. As a result, there is now 1 CPU core that is not used by the application containers and hence available to the operating system. As the resource limitation seems not to impact aggregate, the additional operating system resources slightly decrease latency. While the effect here is only marginal, one has to keep such side effects in mind when doing experiments with Docker containers. In state E, the round-trip latency between factory server and cloud is increased to 100 ms. Still, the minimum latency only increases to 18 ms as packets are routed via the central office server (round-trip latency of 16 ms + 20 ms = 36 ms, i.e., 18 ms one-way, which is below the 50 ms one-way latency of the manipulated direct link).

Figure 14: Distribution of the packaging rate per state: when the temperature increases in state F, packaging control needs to pause more often, resulting in more frequent packaging rates of 0 (machine is paused, 1st quartile) and 15 (machine is running at full speed to catch up on the backlog, 3rd quartile).

The packaging control reports its current packaging rate once a second. Figure 14 shows the distribution of reported values, i.e., how often each packaging rate was reported per state. In states A, B, C, D, and E, the workload generated by camera and temperature sensor is constant, so the rates are similar. In state F, however, the temperature sensor distributes measurements that are 30% higher on average. As a result, the packaging machine must halt production more frequently, i.e., the packaging rate equals zero. This also increases the backlog; hence, the packaging machine will more frequently run at full speed to catch up on the backlog, i.e., the packaging rate equals 15.

4.3.3 Summary

In conclusion, our experiments show that MockFog 2.0 can be used to automatically set up an emulated fog infrastructure, install application components, and orchestrate reproducible experiments. As desired, changes to infrastructure and workload generation are clearly visible in the analysis results. The main benefit of the MockFog approach is that this autonomous process can be integrated into a typical application engineering process. This allows developers to automatically evaluate how a fog application copes with a variety of infrastructure changes, failures, and workload variations after each commit without access to a physical fog infrastructure, with little manual effort, and in a repeatable way [5].

5 Related Work

Testing and benchmarking distributed applications in fog computing environments can be very expensive as the provisioning and management of the needed hardware is costly. Thus, in recent years, a number of approaches have been proposed which aim to enable experiments on distributed applications or services without the need for access to fog devices, especially edge devices.

There are a number of approaches that, similarly to MockFog, aim to provide an easy-to-use solution for experiment orchestration on emulated testbeds. WoTbench [18, 19] can emulate a large number of Web of Things devices on a single multicore server. As such, it is designed for experiments involving many power constrained devices and cannot be used for experiments with many resource intensive application components such as distributed application backends. D-Cloud [3, 14] is a software testing framework that uses virtual machines in the cloud for failure testing of distributed systems. However, D-Cloud is not suited for the evaluation of fog applications as users cannot control network properties such as the latency between two machines.
Héctor [4] is a framework for automated testing of IoT applications on a testbed that comprises physical machines and a single virtual machine host. Having only a single host for virtual machines significantly limits scalability. Furthermore, the authors only mention the possibility of experiment orchestration based on an "experiment definition" but do not provide more details. Balasubramanian et al. [2] and Eisele et al. [10] also present testing approaches that build upon physical hardware for each node rather than more flexible virtual machines. EMU-IoT [39] is a platform for the creation of large scale IoT networks. The platform can also orchestrate customizable experiments and has been used to monitor IoT traffic for the prediction of machine resource utilization [38]. EMU-IoT focuses on modeling and analyzing IoT networks; it cannot manipulate application components or the underlying runtime infrastructure.

Gupta et al. presented iFogSim [13], a toolkit to evaluate placement strategies for independent application services on machines distributed across the fog. In contrast to our solution, iFogSim uses simulation to predict system behavior and, thus, to identify good placement decisions. While this is useful in early development stages, simulation-based approaches cannot be used for testing of real application components, which we support with MockFog. [8, 24, 41] also describe systems which can simulate complex IoT scenarios with thousands of IoT devices. Additionally, network delays and failure rates can be defined to model a realistic, geo-distributed system. More simulation approaches include FogExplorer [15, 16], which aims to find good fog application designs, or Cisco's PacketTracer¹⁵, which simulates complex networks – all these simulation approaches cannot be used for experimenting with real application components.

[9, 28, 32, 33] build on the network emulators MiniNet [31] and MaxiNet [52]. While they target a similar use case as MockFog, their focus is not on application testing and benchmarking but rather on network design (e.g., network function virtualization). Based on the papers, the prototypes also appear to be designed for single machine deployment – which limits scalability – while MockFog was specifically designed for distributed deployment. Finally, neither of these approaches appears to support experiment orchestration or the injection of failures. Missing support for experiment orchestration is also a key difference between MockFog and MAMMOTH [26], a large scale IoT emulator.

OMF [37], MAGI [20], and NEPI [36] can orchestrate experiments using existing physical testbeds. On a high level, these solutions aim to provide a functionality which is similar to the third MockFog module, i.e., the experiment orchestration module.

For failure testing, Netflix has released Chaos Monkey [50] as open source¹⁶. Chaos Monkey randomly terminates virtual machines and containers running in the cloud. The intuition behind this approach is that failures will occur much more frequently so that engineers are encouraged to aim for resilience. Chaos Monkey does not provide the runtime infrastructure as we do, but it would very well complement our approach. For instance, Chaos Monkey could be integrated into MockFog's experiment orchestration module. Another solution that complements MockFog is DeFog [29]. DeFog comprises six Dockerized benchmarks that can be deployed on edge or cloud resources. From the MockFog point of view, these benchmark containers are workload generating application components. Thus, they could be managed and deployed by MockFog's application management and experiment orchestration modules. Gandalf [25] is a monitoring solution for long-term cloud deployments. It is used in production as part of Azure, Microsoft's cloud service offering. It is therefore not part of the application engineering process (cf. Figure 3) and could be used after running experiments with MockFog. Finally, MockFog can be used to evaluate and experiment with fog computing frameworks such as FogFrame [48] or URMILA [46].

¹⁵ https://www.netacad.com/courses/packet-tracer
¹⁶ https://github.com/Netflix/chaosmonkey

6 Discussion

While MockFog allows application developers to overcome the challenge that a fog computing testing infrastructure either does not exist yet or is already used in production, it has some limitations. For example, it does not work when specific local hardware is required, e.g., when the use of a particular crypto chip is deeply embedded in the application source code. MockFog also tends to work better for the emulation of larger edge machines such as a Raspberry Pi but has problems when smaller devices are involved as they cannot be emulated accurately.

Similarly, if the communication of a fog application is not based on Ethernet or WiFi, e.g., because sensors communicate via a LoRaWAN [47] such as TheThingsNetwork¹⁷, MockFog's approach of emulating connections between devices does not work out of the box as these sensors expect to have access to a LoRa sender. With additional effort, however, application developers could adapt their sensor software to use Ethernet or WiFi when no LoRa sender is available.

Also, emulating real physical connections is difficult as their characteristics are often influenced by external factors such as other users, electrical interference, or natural disasters. While it would be possible to add a machine learning component to MockFog that updates connection properties based on past data collected on a reference physical infrastructure, it is hard to justify this effort for most use cases.

Finally, MockFog starts one VM for every single fog machine. This approach does not work well when the infrastructure model comprises thousands of IoT devices. In this case, one should run groups of devices with similar network characteristics on only a few larger VMs.

¹⁷ https://www.thethingsnetwork.org
7 Conclusion

In this paper, we proposed MockFog, a system for the emulation of fog computing infrastructure in arbitrary cloud environments. MockFog aims to simplify experimenting with fog applications by providing developers with the means to design emulated fog infrastructure, configure performance characteristics, manage application components, and orchestrate their experiments. We evaluated our approach through a proof-of-concept implementation and experiments with a fog-based smart factory application. We demonstrated how MockFog's features can be used to study the impact of infrastructure changes and workload variations.

Acknowledgment

We would like to thank Elias Grünewald and Sascha Huk, who have contributed to the proof-of-concept prototype of the preliminary MockFog paper [17].

Bibliography

[1] Paul Ammann and Jeff Offutt. Introduction to Software Testing. Cambridge University Press, 2008. 322 pp.

[2] Daniel Balasubramanian et al. "A Rapid Testing Framework for a Mobile Cloud". In: 2014 25th IEEE International Symposium on Rapid System Prototyping. IEEE, 2014, pp. 128–134. doi: 10.1109/RSP.2014.6966903.

[3] Takayuki Banzai et al. "D-Cloud: Design of a Software Testing Environment for Reliable Distributed Systems Using Cloud Computing Technology". In: 2010 10th IEEE/ACM International Conference on Cluster, Cloud and Grid Computing. IEEE, 2010, pp. 631–636. doi: 10.1109/CCGRID.2010.72.

[4] Ilja Behnke, Lauritz Thamsen, and Odej Kao. "Héctor: A Framework for Testing IoT Applications Across Heterogeneous Edge and Cloud Testbeds". In: Proceedings of the 12th IEEE/ACM International Conference on Utility and Cloud Computing Companion. ACM, 2019, pp. 15–20. doi: 10.1145/3368235.3368832.

[5] David Bermbach, Erik Wittern, and Stefan Tai. Cloud Service Benchmarking: Measuring Quality of Cloud Services from a Client Perspective. Springer, 2017.

[6] David Bermbach, Liang Zhao, and Sherif Sakr. "Towards Comprehensive Measurement of Consistency Guarantees for Cloud-Hosted Data Storage Services". In: Performance Characterization and Benchmarking. Ed. by Raghunath Nambiar and Meikel Poess. Red. by David Hutchison et al. Vol. 8391. Springer, 2014, pp. 32–47. isbn: 978-3-319-04935-9 978-3-319-04936-6.

[7] David Bermbach et al. "A Research Perspective on Fog Computing". In: 2nd Workshop on IoT Systems Provisioning & Management for Context-Aware Smart Cities. Springer, 2018, pp. 198–210. doi: 10.1007/978-3-319-91764-1_16.

[8] Giacomo Brambilla et al. "A Simulation Platform for Large-Scale Internet of Things Scenarios in Urban Environments". In: Proceedings of the First International Conference on IoT in Urban Space. ICST, 2014. doi: 10.4108/icst.urb-iot.2014.257268.

[9] Antonio Coutinho et al. "Fogbed: A Rapid-Prototyping Emulation Environment for Fog Computing". In: Proceedings of the IEEE International Conference on Communications (ICC). IEEE, 2018. doi: 10.1109/ICC.2018.8423003.

[10] Scott Eisele et al. "Towards an architecture for evaluating and analyzing decentralized Fog applications". In: 2017 IEEE Fog World Congress. IEEE, 2017, pp. 1–6. doi: 10.1109/FWC.2017.8368531.

[11] Martin Grambow et al. "Dockerization Impacts in Database Performance Benchmarking". In: Technical Report MCC.2018.1. Berlin, Deutschland: TU Berlin & ECDF, Mobile Cloud Computing Research Group, 2018.

[12] Martin Grambow et al. "Is it safe to dockerize my database benchmark?" In: Proceedings of the 34th ACM/SIGAPP Symposium on Applied Computing. ACM, 2019, pp. 341–344. doi: 10.1145/3297280.3297545.

[13] Harshit Gupta et al. "iFogSim: A Toolkit for Modeling and Simulation of Resource Management Techniques in the Internet of Things, Edge and Fog Computing Environments". In: Software: Practice and Experience 47.9 (2017), pp. 1275–1296. doi: 10.1002/spe.2509.

[14] Toshihiro Hanawa et al. "Large-Scale Software Testing Environment Using Cloud Computing Technology for Dependable Parallel and Distributed Systems". In: 2010 Third International Conference on Software Testing, Verification, and Validation Workshops. IEEE, 2010, pp. 428–433. doi: 10.1109/ICSTW.2010.59.

[15] Jonathan Hasenburg, Sebastian Werner, and David Bermbach. "FogExplorer". In: Proceedings of the 19th International Middleware Conference (Posters). Middleware '18. Rennes, France: ACM, 2018, pp. 1–2. isbn: 978-1-4503-6109-5. doi: 10.1145/3284014.3284015.
ACM, 2018, pp. 1–2. isbn: 978-1-4503-6109-5. [26] Vilen Looga et al. “MAMMOTH: A massive-
doi: 10.1145/3284014.3284015. scale emulation platform for Internet of Things”.
[16] Jonathan Hasenburg, Sebastian Werner, and In: 2012 IEEE 2nd International Conference
David Bermbach. “Supporting the Evaluation of on Cloud Computing and Intelligence Systems.
Fog-based IoT Applications During the Design IEEE, 2012, pp. 1235–1239. doi: 10.1109/CCIS.
Phase”. In: Proceedings of the 5th Workshop on 2012.6664581.
Middlware and Applications for the Internet of [27] Redowan Mahmud, Ramamohanarao Kotagiri,
Things. M4IoT’18. Rennes, France: ACM, 2018, and Rajkumar Buyya. “Fog Computing: A Tax-
pp. 1–6. isbn: 978-1-4503-6118-7. doi: 10.1145/ onomy, Survey and Future Directions”. In: In-
3286719.3286720. ternet of Everything: Algorithms, Methodologies,
[17] Jonathan Hasenburg et al. “MockFog: Emulat- Technologies and Perspectives. Springer, 2018,
ing Fog Computing Infrastructure in the Cloud”. pp. 103–130. arXiv: 1611.05539.
In: 2019 IEEE International Conference on Fog [28] Ruben Mayer et al. “EmuFog: Extensible and
Computing. IEEE, 2019, pp. 144–152. doi: 10. Scalable Emulation of Large-Scale Fog Comput-
1109/ICFC.2019.00026. ing Infrastructures”. In: 2017 IEEE Fog World
[18] Raoufehsadat Hashemian et al. “Contention Congress. IEEE, 2017. doi: https://doi.org/
Aware Web of Things Emulation Testbed”. In: 10.1109/FWC.2017.8368525.
Proceedings of the ACM/SPEC International [29] Jonathan McChesney et al. “DeFog: fog com-
Conference on Performance Engineering. ACM, puting benchmarks”. In: Proceedings of the 4th
2020, pp. 246–256. doi: 10 . 1145 / 3358960 . ACM/IEEE Symposium on Edge Computing.
3379140. ACM, 2019, pp. 47–58. doi: 10.1145/3318216.
[19] Raoufeh Hashemian et al. “WoTbench: A Bench- 3363299.
marking Framework for the Web of Things”. In: [30] Kief Morris. Infrastructure as Code: Managing
Proceedings of the 9th International Conference Servers in the Cloud. 1st ed. O’Reilly, 2016.
on the Internet of Things. ACM, 2019. doi: 10. [31] Rogerio Leao Santos de Oliveira et al. “Us-
1145/3365871.3365897. ing Mininet for Emulation and Prototyping
[20] Alefiya Hussain et al. “Toward Orchestration Software-defined Networks”. In: 2014 IEEE
of Complex Networking Experiments”. In: 13th Colombian Conference on Communications and
USENIX Workshop on Cyber Security Experi- Computing. IEEE, 2014. doi: 10 . 1109 /
mentation and Test. USENIX, 2020. ColComCon.2014.6860404.
[21] A Kajackas and R Rainys. “Internet Infrastruc- [32] Manuel Peuster, Johannes Kampmeyer, and Hol-
ture Topology Assessment”. In: Electronics and ger Karl. “Containernet 2.0: A Rapid Proto-
Electrical Engineering 7.103 (2010), p. 4. typing Platform for Hybrid Service Function
[22] Vasileios Karagiannis and Stefan Schulte. “Com- Chains”. In: 2018 4th IEEE Conference on Net-
parison of Alternative Architectures in Fog Com- work Softwarization and Workshops. Montreal,
puting”. In: 2020 IEEE 4th International Con- QC: IEEE, 2018, pp. 335–337. doi: 10 . 1109 /
ference on Fog and Edge Computing (ICFEC). NETSOFT.2018.8459905.
IEEE, 2020, pp. 19–28. doi: 10 . 1109 / [33] Manuel Peuster, Holger Karl, and Steven van
ICFEC50348.2020.00010. Rossem. “MeDICINE: Rapid Prototyping of
[23] Shweta Khare et al. “Linearize, Predict and Production-ready Network Services in multi-PoP
Place: Minimizing the Makespan for Edge-based Environments”. In: 2016 IEEE Conference on
Stream Processing of Directed Acyclic Graphs”. Network Function Virtualization and Software
In: Proceedings of the 4th ACM/IEEE Sympo- Defined Networks. IEEE, 2016, pp. 148–153. doi:
sium on Edge Computing. ACM, 2019, pp. 1–14. 10.1109/NFV-SDN.2016.7919490.
doi: 10.1145/3318216.3363315. [34] Tobias Pfandzelter and David Bermbach. “IoT
[24] Isaac Lera, Carlos Guerrero, and Carlos Juiz. Data Processing in the Fog: Functions, Streams,
“YAFS: A Simulator for IoT Scenarios in or Batch Processing?” In: 2019 IEEE Interna-
Fog Computing”. In: IEEE Access 7 (2019), tional Conference on Fog Computing (ICFC).
pp. 91745–91758. doi: 10.1109/ACCESS.2019. IEEE, 2019, pp. 201–206. doi: 10.1109/ICFC.
2927895. 2019.00033.
[25] Ze Li et al. “Gandalf: An Intelligent, End-To-End [35] Tobias Pfandzelter, Jonathan Hasenburg, and
Analytics Service for Safe Deployment in Cloud- David Bermbach. From Zero to Fog: Efficient En-
Scale Infrastructure”. In: 17th USENIX Sympo- gineering of Fog-Based IoT Applications. 2020.
sium on Networked Systems Design and Imple- arXiv: 2008.07891 [cs.DC].
mentation. USENIX, 2020.

13
[36] Alina Quereilhac et al. “NEPI: An Integra- [46] Shashank Shekhar et al. “URMILA: Dynami-
tion Framework for Network Experimentation”. cally trading-off fog and edge resources for per-
In: 19th International Conference on Software, formance and mobility-aware IoT services”. In:
Telecommunications and Computer Networks. Journal of Systems Architecture 107 (2020). doi:
IEEE, 2011. 10.1016/j.sysarc.2020.101710.
[37] Thierry Rakotoarivelo et al. “OMF: a con- [47] Jonathan de Carvalho Silva et al. “LoRaWAN - A
trol and management framework for networking Low Power WAN Protocol for Internet of Things:
testbeds”. In: ACM SIGOPS Operating Systems a Review and Opportunities”. In: 2017 2nd Inter-
Review 43.4 (2010), pp. 54–59. doi: 10 . 1145 / national Multidisciplinary Conference on Com-
1713254.1713267. puter and Energy Science. IEEE, 2017.
[38] Brian Ramprasad, Joydeep Mukherjee, and [48] Olena Skarlat et al. “A Framework for Optimiza-
Marin Litoiu. “A Smart Testing Framework for tion, Service Placement, and Runtime Operation
IoT Applications”. In: 2018 IEEE/ACM Interna- in the Fog”. In: 2018 IEEE/ACM 11th Interna-
tional Conference on Utility and Cloud Comput- tional Conference on Utility and Cloud Comput-
ing Companion. IEEE, 2018, pp. 252–257. doi: ing (UCC). IEEE, 2018, pp. 164–173. doi: 10.
10.1109/UCC-Companion.2018.00064. 1109/UCC.2018.00025.
[39] Brian Ramprasad et al. “EMU-IoT - A Virtual [49] Olena Skarlat et al. “Optimized IoT service place-
Internet of Things Lab”. In: 2019 IEEE Inter- ment in the fog”. In: Service Oriented Computing
national Conference on Autonomic Computing. and Applications 11.4 (2017), pp. 427–443. doi:
IEEE, 2019, pp. 73–83. doi: 10 . 1109 / ICAC . 10.1007/s11761-017-0219-8.
2019.00019. [50] Ariel Tseitlin. “The Antifragile Organization”.
[40] Thomas Rausch et al. “Synthesizing Plausible In- In: Communications of the ACM 56.8 (2013),
frastructure Configurations for Evaluating Edge pp. 40–44. doi: 10.1145/2492007.2492022.
Computing Systems”. In: 3rd USENIX Workshop [51] Prateeksha Varshney and Yogesh Simmhan. “De-
on Hot Topics in Edge Computing. USENIX, mystifying Fog Computing: Characterizing Ar-
2020. chitectures, Applications and Abstractions”. In:
[41] Maria Salama, Yehia Elkhatib, and Gordon 2017 IEEE 1st International Conference on Fog
Blair. “IoTNetSim: A Modelling and Simulation and Edge Computing. IEEE, 2017, pp. 115–124.
Platform for End-to-End IoT Services and Net- doi: 10.1109/ICFEC.2017.20.
working”. In: Proceedings of the 12th IEEE/ACM [52] Philip Wette, Martin Draxler, and Arne
International Conference on Utility and Cloud Schwabe. “MaxiNet: Distributed emulation of
Computing. 2019, pp. 251–261. doi: 10 . 1145 / software-defined networks”. In: 2014 IFIP Net-
3344341.3368820. working Conference. IEEE, 2014. doi: 10.1109/
[42] Lidiane Santos et al. “An Architectural Style for IFIPNetworking.2014.6857078.
Internet of Things Systems”. In: Proceedings of [53] Mario Winter et al. Der Integrationstest. Hanser,
the 35th Annual ACM Symposium on Applied 2013.
Computing. ACM, 2020, pp. 1488–1497. doi: 10.
1145/3341105.3374030.
[43] Lidiane Santos et al. “Identifying Requirements
for Architectural Modeling in Internet of Things
Applications”. In: 2019 IEEE International Con-
ference on Software Architecture Companion.
IEEE, 2019, pp. 19–26. doi: 10 . 1109 / ICSA -
C.2019.00011.
[44] Mahadev Satyanarayanan et al. “The Case for
VM-based Cloudlets in Mobile Computing”. In:
IEEE Pervasive Computing (2009), p. 9. doi: 10.
1109/MPRV.2009.64.
[45] Gerald Schermann et al. “Bifrost: Supporting
Continuous Deployment with Automated Enact-
ment of Multi-Phase Live Testing Strategies”.
In: Proceedings of the 17th International Mid-
dleware Conference. ACM, 2016. doi: 10.1145/
2988336.2988348.
