
Future Generation Computer Systems 90 (2019) 503–520

Contents lists available at ScienceDirect

Future Generation Computer Systems


journal homepage: www.elsevier.com/locate/fgcs

A context-sensitive offloading system using machine-learning classification algorithms for mobile cloud environment
Warley Junior, Eduardo Oliveira, Albertinin Santos, Kelvin Dias
Informatics Center, Federal University of Pernambuco, Road Jorn. Anibal Fernandes, Cidade Universitaria, Recife, PE 50740-560, Brazil

highlights

• We develop CSOS, which integrates middleware, classifiers, and profiling system.


• Decision engine is highly accurate for the dynamic nature of mobile environments.
• CSOS transforms raw context elements to high-level context information at runtime.
• The benchmark applications are easily configurable by the programmer.
• Through a real-world deployment, the proposed system improves dynamic offloading.

article info

Article history:
Received 17 November 2017
Received in revised form 2 July 2018
Accepted 16 August 2018
Available online xxxx

Keywords:
Mobile cloud
Context-sensitive
Machine-learning
Classification algorithms

abstract

Computational offloading in Mobile Cloud Computing (MCC) has attracted attention due to its benefits in energy saving and improved mobile application performance. Nevertheless, this technique underperforms if the offloading decision ignores contextual information. While recent studies have highlighted the use of contextual information to improve the computational offloading decision, challenges remain regarding the dynamic nature of the MCC environment. Most solutions design a single reasoner for the offloading decision without knowing how accurate and precise this technique is, so that, when applied in real-world environments, it can contribute to inaccurate decisions and consequently to low performance of the overall system. Thus, this paper proposes a Context-Sensitive Offloading System (CSOS) that takes advantage of the main machine-learning reasoning techniques and a robust profiling system to provide offloading decisions with high levels of accuracy. We first evaluate the main classification algorithms on our database, and the results show that the JRIP and J48 classifiers achieve 95% accuracy. Secondly, we develop and evaluate our system under controlled and real scenarios, where context information changes from one experiment to another. Under these conditions, CSOS makes correct decisions while ensuring performance gains and energy efficiency.
© 2018 Elsevier B.V. All rights reserved.

1. Introduction

The synergy of three heterogeneous cornerstone technologies – namely mobile computing, cloud computing, and networking – has allowed great advances to be made in the way mobile devices access and process their services. Mobile cloud computing (MCC) is a result of such progress, since it provides cloud computing services on a mobile ecosystem. MCC enables computation and storage migration from resource-poor mobile devices to cloud servers, enhancing service availability, speed, and reliability. Moreover, it may reduce the device's energy consumption without degrading application performance. These benefits are achieved through offloading operations, i.e., code and data are transferred from mobile devices to remote servers with rich resources [1].

The MCC paradigm can be categorized into three different models: the public cloud, the cloudlet server, and the ad-hoc cloudlet. A public cloud is formed from computational resources that are located in centralized data centers and maintained by cloud service providers. A cloudlet server is a low-cost cluster of multicore computers located in the same Wireless Local Area Network (WLAN) as the mobile device; it can provide cloud services on a small scale and is commonly found in domestic, corporate, and public environments. Finally, an ad-hoc cloudlet is a group of mobile devices with more powerful processing and autonomous energy that share their resources with local neighbors (e.g., resource-poor mobile devices) [2].

According to [3], computational offloading is the opportunistic process that relies on external infrastructure to execute a computational task outsourced by a low-power device. Moving a computational task from one device to another is not a trivial endeavor. Network bandwidth, received signal strength, input data size, and

∗ Corresponding authors.
E-mail addresses: wmvj@cin.ufpe.br (W. Junior), ehammo@cin.ufpe.br (E. Oliveira), ams11@cin.ufpe.br (A. Santos), kld@cin.ufpe.br (K. Dias).

https://doi.org/10.1016/j.future.2018.08.026
0167-739X/© 2018 Elsevier B.V. All rights reserved.

surrogate capabilities, amongst others, play a critical role in deciding whether or not to offload a task. Since these parameters can change abruptly, the opportunistic moments to offload to a remote cloud system are sporadic. Therefore, the effectiveness of an offloading operation is determined by its ability to infer where the execution of code/data (local or remote) will result in less computational effort for the mobile device [4].

This problem introduces the necessity of computational offloading systems that can adapt themselves based on information regarding the resources being provided, decide where and when to perform offloading, and infer the contextual information of mobile devices. In other words, it refers to a system's awareness of its surrounding environment: how it is able to monitor, collect, select, process, and share an entity's context information, and how this information is involved in decision-making and the execution of computational offloading. Context information is any kind of information that characterizes an entity in a specific domain; an entity in this regard can be a person, mobile device, application, remote cloud, or network element [5].

Traditionally, existing offloading systems have proposed new frameworks, algorithms, models, and middlewares that depend on periodic monitoring of several parameters to assist offloading decisions. The problem is that most solutions design a single reasoning technique for the offloading decision, and it is not known how accurate and precise this technique is compared to the others available in the scientific literature. Besides that, cloud resource monitoring plays an important role in decision-making, since the cloud must have runtime support for the offloaded application in order to gain the advantage of computation offloading. In contrast, our system implements and evaluates multiple classification algorithms to seek the offloading decision-making solution that achieves the highest accuracy without significant system overheads.

To tackle the issues mentioned above, this paper presents a Context-Sensitive Offloading System (CSOS), with the objective of exploring how beneficial a system can be when it works based on its current context. To achieve this, CSOS handles the challenges of raw context, heterogeneity, and inaccurate decisions, summarized as follows:

• We develop CSOS, which integrates middleware, machine-learning classification algorithms, and a robust profiling system. Moreover, CSOS extrapolates features and experimental tests from benchmark applications that are easily configurable by the programmer to allow interaction with the proposed middleware.
• We design a decision engine that is highly accurate for the adaptive and dynamic nature of mobile environments by using the main classification algorithms (k-nearest neighbors, rules, decision tree, and Naive Bayes) to decide whether the computation should be offloaded or not.
• Our system transforms raw context elements into high-level context information at runtime. The goal is to eliminate two characteristics of raw context: imperfection (i.e., unknown, ambiguous, imprecise, and/or erroneous elements) and uncertainty.
• Through a real-world deployment, we demonstrate that by adopting CSOS it is possible to improve dynamic offloading so that it occurs only in favorable situations.

The rest of this paper is organized as follows: Section 2 discusses related work. Section 3 presents our proposed CSOS along with its system implementation; in addition, we provide an overview of the steps a developer must follow to configure an application to use CSOS. Section 4 presents the details and results of the experiments performed to evaluate the classification algorithms and CSOS. Finally, Section 6 concludes the paper and presents future directions that may be considered.

2. Related work

Remote cloud offloading is sensitive to the multiple parameters of the system (context of the device, application, and network), which means that it is challenging to pinpoint an opportunistic moment to offload [3]. Consequently, in order to address these challenges together with remote clouds, cloudlets, and ad-hoc cloudlets, most recent works have proposed new frameworks, algorithms, models, and middlewares that use profilers to enable periodic monitoring of several metrics that are later used to infer where the execution of code will require less computational effort (remote or local).

Table 1 synthesizes the contributions of the most prominent works related to this research. The references are categorized by three aspects: (i) context sources, (ii) decision support, and (iii) main contribution.

Context sources refers to the physical and logical entities that provide relevant context information. The Application delivers data related to its components, methods, instructions, and input/output data, while Device is an information source about the local hardware, i.e., CPU and memory utilization, battery level, GPS, accelerometer, and other sensors. Wireless Network delivers data related to the state and performance of the main components of a wireless network infrastructure, such as RTT, throughput, signal strength, and connection status. Lastly, Cloud/Cloudlet delivers data related to access policies, availability, performance, and service cost; information that can be monitored and captured includes vCPU usage, virtual disk access time, and the number of answered requests. According to Table 1, our proposal is the only one that implements all the context sources (for details related to the profilers, please see Section 3.1).

Current offloading solutions, such as ThinkAir [6], MobiByte [7], AnyRun [8], CADA [9], MALMOS [10], Kwon et al. [11], OMMC [12], mCloud [13], and Majeed et al. [14], develop dynamic profilers, optimization solvers, energy models, and communication managers, which are all implemented on smartphones to cooperatively make the optimized offloading decision. On the other hand, MAUI [15], EMCO [16], and Rego et al. [17] are among the few studies that execute the complex operations of the decision component outside the mobile device. Like most of the aforementioned works, CSOS executes all tasks related to the decision on the mobile device, because the computing-intensive operations related to the training and testing of the classification algorithms are performed in the deployment phase of the system as an offline process. Therefore, the cost of processing at runtime is very low, since the mobile device only has to parse the trained algorithm to decide when to offload code or data. Besides that, outsourcing decision-making to a remote cloud may come at the cost of signaling overheads, delays, and security issues.

Decision support refers to the technique that is used to assist the offloading decision, as well as to infer when offloading will improve performance. ThinkAir, MobiByte, CADA, and OMMC make offloading decisions considering the energy involved in computation and communication through reliable energy estimation models. On the other hand, MAUI solves a 0–1 integer linear programming problem on the remote server to decide where each method must be executed and periodically updates the mobile device partition information. A primary requirement of linear programming is that the objective function and every constraint must be linear. However, in real-world situations, just like in MCC environments, several heterogeneity and mobility problems are non-linear in nature.

Other solutions, including EMCO, MALMOS, and Majeed et al., handle offloading decisions based on context-reasoning decision techniques, such as Fuzzy Logic, Instance-Based Learning (IBL), Perceptron, Naive Bayes, and Support Vector Machine (SVM). EMCO proposes the use of a fuzzy logic system to aggregate the profiling

metrics and considers historical data for building an inference system that can be used by the mobile device to classify where the threads must be executed. The problem is that EMCO's results show only a few input parameters segregated by the fuzzy logic engine, and consequently it ignores more complex, dynamic scenarios. On the other hand, MALMOS implements a runtime scheduler based on an online training mechanism by adopting a machine-learning approach. It supports a policy that dynamically adapts scheduling decisions at runtime based upon the observation of previous offloading decisions and their correctness. The authors measured the scheduling accuracy of MALMOS by offloading each application to four different remote servers, while varying the network bandwidth and input size. The system proposed by Majeed et al. uses SVM to accurately schedule the component remotely or locally. The SVM classifier adapts its decision according to external context (network bandwidth) and internal environmental data (e.g., memory usage, execution time, and CPU utilization). Nevertheless, the solutions mentioned execute all the complex operations of training and testing at regular intervals inside the mobile device, which can contribute to the overheads imposed on the system. Furthermore, these solutions completely fail to address the important aspect of the amount of energy used by the online training mechanisms.

Similar to CSOS, Rego et al. use a decision tree-based approach for handling offloading decisions through adaptive monitoring and historical data, while in the AnyRun Computing (ARC) system, the 'stup' component uses an inference engine based on a Naive Bayes decision model to assess the probability that offloading is advantageous compared to local execution. However, the first solution depends on decision tree creation and the concepts of entropy and information gain to identify the most relevant metrics for the offloading decision, which generates a high cost in terms of communication and computing. In addition, these two approaches do not evaluate their reasoners from the perspective of energy consumption and application performance, unlike CSOS, which evaluates the impact of runtime offloading decision-making in different contextual situations.

Kwon et al. [11], mCloud [13], CoSMOS [18], and Wu et al. [19] use other prediction techniques for the offloading decision. For instance, mCloud is a code offloading framework that provides code offloading decisions at runtime on the selection of the wireless medium and appropriate cloud resources. The authors apply the Technique for Order of Preference by Similarity to Ideal Solution (TOPSIS) [20] for wireless medium selection by considering multiple criteria (e.g., wireless medium availability, network congestion, and energy cost of the channel) and by using a cost estimation model that calculates the execution cost for each offloading request. Kwon et al. propose a feature-based prediction technique, called fMantis, to overcome the input-sensitivity problem of mobile application performance; it generates a performance predictor for a mobile application that predicts whether or not a certain method will be executed according to performance metrics including execution time, energy consumption, memory usage, and state size. Finally, the solutions proposed in CoSMOS and Wu et al. focus on execution time and energy consumption estimation as offloading-decision criteria. CoSMOS provides an offloading decision model that employs a self-adaptive system architecture to decide efficiently when and which application components should be offloaded, while Wu et al. propose an energy-efficient offloading-decision algorithm to decide whether an application should run locally, or remotely on a cloud, directly or via a cloudlet. These studies were not designed to support the dynamic nature of the MCC environment, since they neither handle a wider range of environment conditions (e.g., different states of the network, device, application, cloud, and cloudlet) nor improve offloading-decision adaptiveness.

Main contribution defines the real benefit of using the associated solution. CSOS focuses on context-sensitive decision-making and ensures high accuracy of the decisions provided by the classifiers. Our work reached 95% accuracy with decision tree, rules-based, and k-NN algorithms (for more details, see Section 4.2).

CSOS is the first offloading system developed and designed to handle raw context and transform it into an appropriate context representation without human intervention. To the best of our knowledge, until now there has been no work in the literature that uses and evaluates multiple classifiers from experimental databases. CSOS is equipped with a decision engine that works with the main classification algorithms. Its historical database was built from experimental data, i.e., we first collected training examples (task executions performed locally and remotely while varying the context), then labeled them according to the processing time results (offloading or not). In sum, CSOS adopts a history-based prediction approach where we utilize past profiled information as a basis for performance inference for future tasks. Unlike previous works, CSOS extrapolates features and experimental tests from benchmark applications that are easily configurable by the programmer to allow interaction with the proposed middleware. It also improves the applications' performance, as the offloading operations occur only in favorable contexts.

3. CSOS system

In this section, we first describe the proposed context-sensitive offloading system, its components, and their interactions (Section 3.1). Next, we discuss the system's development process and the implementation details (Section 3.2).

3.1. Architecture

CSOS follows the standard client–server model. The CSOS Client components are located on the mobile device, while the CSOS Server components are located on the cloud or cloudlet. Fig. 1 presents the overall architecture and the relationships between the main components.

Within the middleware, the architecture consists of three main components: Decision Engine, Profilers, and Proxy Handler. These components interact with each other to execute context-sensitive offloading. When a user initiates an application to process a task (e.g., an image), the most recent results of the profilers are acquired from the database to assist the decision engine in making the correct inference. It can decide to execute the task locally or remotely based on the classification algorithm used (e.g., J48, JRIP). We define the components and the data/control operations represented by circled numbers as follows:

• Application: this represents the three possible benchmarking applications (image editor, face detection, and an online game) that undertake resource-intensive computing. In the application interface, the users can choose the classification algorithm that they wish to use with CSOS.
• Proxy Handler: its role is to intercept methods identified as offloading candidates with the @remotable markup ①. If the annotation is marked as 'static', the Proxy Handler sends the request for the method directly to the cloudlet or cloud, ignoring the decision engine. Otherwise, if it is marked as 'dynamic', it is responsible for calculating the current application data size and sends the corresponding value along with the classification algorithm name to the decision engine ②. After that, it receives the result of the decision engine and runs either locally or remotely ⑥.
• Decision Engine: this component has three functions: (i) it is responsible for loading a reasoning decision model (e.g., J48, JRIP, IBK) that is based on a training set to classify new instances ③; (ii) it analyzes each attribute of the most recent

Table 1
Qualitative comparison of the context-aware solutions. The App, Device, Wireless network, and Cloud/Cloudlet columns mark the context sources each solution uses.

| Solution | App | Device | Wireless network | Cloud/Cloudlet | Decision support | Main contribution |
|---|---|---|---|---|---|---|
| MAUI | ✓ | ✓ | ✓ | ✗ | Integer linear programming | Energy-aware |
| ThinkAir | ✓ | ✓ | ✓ | ✗ | Energy model | Dynamic resource allocation |
| MobiByte | ✓ | ✓ | ✓ | ✗ | Energy model | Context-aware |
| ARC | ✗ | ✓ | ✓ | ✗ | Naïve Bayes | Opportunistic computing |
| Kwon et al. | ✓ | ✓ | ✗ | ✗ | Sparse Polynomial Regression | Precise execution offloading |
| OMMC | ✓ | ✓ | ✓ | ✗ | TOPSIS and Energy model | Context-aware |
| mCloud | ✓ | ✓ | ✓ | ✗ | TOPSIS and Cost model | Context-aware |
| EMCO | ✓ | ✗ | ✓ | ✓ | Fuzzy logic | Evidence-aware |
| Rego et al. | ✓ | ✓ | ✓ | ✗ | Decision tree | Adaptive monitoring |
| CoSMOS | ✗ | ✓ | ✗ | ✗ | Cost functions | Context-aware |
| Wu et al. | ✗ | ✓ | ✓ | ✗ | Lyapunov optimization | Energy-efficient |
| MALMOS | ✓ | ✗ | ✓ | ✗ | IBL, Perceptron, and Naïve Bayes | Runtime scheduler |
| CADA | ✗ | ✓ | ✓ | ✗ | Energy model | Context-aware |
| Majeed et al. | ✓ | ✓ | ✓ | ✗ | SVM | Adaptive scheduler |
| CSOS | ✓ | ✓ | ✓ | ✓ | K-NN, Rules, Naïve Bayes, and Decision Tree | High accuracy and context-sensitive |
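As the comparison above indicates, CSOS draws on all four context sources, and it feeds its classifiers high-level categories rather than raw readings (the paper mentions congested/moderate/free network bandwidth and stressed/normal load/relaxed cloudlet vCPU). The following is a minimal sketch of such a raw-to-high-level transformation; the class name, method names, and numeric thresholds are our own illustrative assumptions, not values from the paper.

```java
import java.util.LinkedHashMap;
import java.util.Map;

// Illustrative raw-to-high-level context transformation. The category
// names (congested/moderate/free, stressed/normal load/relaxed) come
// from the paper; the numeric thresholds and all identifiers here are
// assumptions made for the sake of the example.
public class ContextLevels {

    // Map a measured throughput (Mbit/s) to a network-bandwidth category.
    static String networkLevel(double throughputMbps) {
        if (throughputMbps < 1.0)  return "congested";
        if (throughputMbps < 10.0) return "moderate";
        return "free";
    }

    // Map a cloudlet vCPU utilization (0-100 %) to a load category.
    static String vcpuLevel(double vcpuPercent) {
        if (vcpuPercent > 80.0) return "stressed";
        if (vcpuPercent > 40.0) return "normal load";
        return "relaxed";
    }

    // Bundle raw profiler readings into one high-level instance, as the
    // periodic profiling task would do before storing it in the database.
    static Map<String, String> toHighLevel(double throughputMbps, double vcpuPercent) {
        Map<String, String> instance = new LinkedHashMap<>();
        instance.put("network", networkLevel(throughputMbps));
        instance.put("cloudletVcpu", vcpuLevel(vcpuPercent));
        return instance;
    }
}
```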

Fig. 1. CSOS Architecture.

instance in the database to collect information, such as its name, type, and possible values; and (iii) it uses the Weka library¹ to classify (local or remote) the latest instance stored in the database ⑤ and sends the result to the Proxy Handler. This Weka library is labeled as the 'Classifier' component in the architecture.
• Network Profiler: this aims to capture wireless network information at runtime. The monitoring of network quality is critical in MCC environments, since a poor network can cause packet loss and delay in the communication between device and cloud. We calculate the throughput by sending packets to the CSOS server, which in turn estimates the device's upload rate. The server then sends packets to the smartphone to calculate the device's download rate.
• Cloud/cloudlet profiler: this is responsible for monitoring and collecting cloud/cloudlet performance data in order to ensure that the cloud has larger processing capacity than the mobile device at a particular instant. This profiler calculates the vCPU load every second. The server's vCPU load is then encapsulated in the network profile to be sent back to the device.
• App and device profiler: this service monitors and collects application/hardware context data asynchronously at runtime. From the smartphone we gather the hardware values, more specifically the device's RAM, number of cores, and the maximum clock of each core. Beyond that, the smartphone's CPU load is also calculated from the application. We also capture the application name and calculate the data size to be processed. The data size is calculated by the Proxy Handler ④ in the moments before the decision.
• Context Database: the profiling system runs every 35 s to gather raw context (such as the application name, data size, smartphone CPU usage, cloud/cloudlet vCPU usage, upload/download rate, and smartphone hardware) and transforms it into high-level contextual information at runtime (for more details, see Section 3.2). This information is saved in a database that always returns the most recent instance when asked by the decision engine.

¹ Weka is a collection of ML algorithms. It permits the exportation of classification models for use in personal Java code. http://www.cs.waikato.ac.nz/ml/weka/downloading.html

3.2. Development process and component details

This section provides an overview of the steps a developer must follow to configure an application to use CSOS, as well as more detailed information regarding the operation of its components.

3.2.1. Development process
The development process is illustrated in Fig. 2. Each step is described below.

In the first step (i), the developer must define the factors that are important to analyze as relevant information for the offloading decision. Based on empirical evaluations, we identified six factors: network throughput, smartphone hardware, application nature, data size, smartphone CPU, and cloudlet vCPU. Each one of these metrics can change independently of the others; some change periodically over time, while others

remain static. For example, a smartphone CPU usage can change the Proxy Handler will perform the offloading operation without
every second, while the smartphone hardware retains the same considering the Decision Engine.
configuration. Moreover, the combination of each metric value
Algorithm 1: Procedure for offloading decision with classifiers.
represents a profiler. For example, network bandwidth can assume
congested, moderate, or free values, while the cloudlet vCPU can 1: procedure isRemoteAdvantage(DataSize, Classifier)
assume stressed, normal load, or relaxed values. Therefore, these 2: response ← false
various combinations can lead the decision engine to make differ- 3: if classifierModel ̸ = Classifier ∨ classifierModel = Null then
ent decisions based on the contextual information. 4: loadClassifier(Classifier)
The second step (ii) is the filling of a database with context 5: end if
information. In this phase, experiments are undertake that change 6: for all attribute A ∈ Database do
the contextual information and analyze the total processing time 7: Attributes[] atts ← getAttribute(A)
of each task between two configurations: static local, where whole 8: Values[] values ← getValue(A)
task is processed on smartphone; and static cloudlet, in which only 9: end for
the computing-intensive task is processed remotely in the cloudlet. 10: Instance ← createInstance(atts, atts.qtde)
At the end of this phase, the developer must compare the total 11: for i ← 0, atts.qtde do
runtime of each task (local and cloudlet) for each context, then 12: if atts.getName[i] = ’DataSize’ then
label with a ‘‘Yes’’ value those ones whose total runtime is shorter, 13: Instance.setValue[i] ← DataSize
and with a ‘‘No’’ value those whose total runtime is longer. 14: else
The next step (iii) covers the classifiers evaluation process. We 15: Instance.setValue[i] ← v alues[i]
developed a Java program2 to automate the training and testing of 16: end if
classification algorithms. This program measures the performance 17: end for
of each classifier by means of its accuracy and others metrics (for 18: result ← ClassifyInstance(Classifier , Instance)
more details see Section 4.1) in the test data by using 30 repetitions 19: if result ≥ 0.7 then
of a 10-fold cross-validation and varying the seed value in the 20: response ← true
range from 1 to 30. The results must be analyzed and compared 21: return response
by suitable statistical techniques such as a confusion matrix and 22: else
performance metrics (e.g., specificity, sensitivity, precision, and 23: return response
accuracy). 24: end if
Finally, in the last step (iv) the developer must generate the 25: end procedure
trained classifiers’ models corresponding to those that obtained
In Algorithm 1, we present the pseudocode of the decision
the highest accuracy. In addition, the files corresponding to the
engine developed to handle the context data and classify the most
generated models must be saved in the CSOS project to allow
recent instance in the database. The isRemoteAdvantage procedure
the decision engine to load them at runtime. The next activity
(line 1) receives as arguments the data size of the task to be
is to configure the remotable markup with a dynamic value in
processed (locally or remotely) and classifier name, respectively.
the methods identified as an offloading candidate. After that, the
Naturally, the data size metric must be captured by the application
developer must specify which classifier to use on the same markup
profiler. However, the value of this metric can only be known at
(for more details see Section 3.2.2). All classifiers to be used by
the instant of time that the application’s user selects the desired
developer must be declared in an enumeration class. Thus, each
image resolution for processing. Since the profiling system runs
classifier is translated into the respective generated model.
every 35 s, it would be impracticable to accurately capture the
After following the four steps of the development process, the
value of this metric. Therefore, the proxy handler captures this
mobile application is ready to use CSOS to enable the context- value at runtime and passes it to the decision engine. Between lines
sensitive offloading of their methods/data. 3 and 5, we check whether the object of the classification model
corresponds to the specified classifier. If false, it instantiates a new
3.2.2. Implementation details object of the requested classification algorithm. Next, we check
Next, we describe the offloading decision execution flow as each attribute of the most recent record in the database to collect
well as technical details of how CSOS operates the decision engine, information, such as name, type, and possible values (between
remotable markup, and transformation from the raw context to the lines 6 and 9).
high-level context. To enable classification from the Weka library, the instance
Fig. 3 presents a flowchart describing the offloading execution object needs to be created (line 10) along with the set of attributes
on mobile application. During the application execution, the Proxy Handler intercepts a method call and verifies whether that method has a Remotable markup. If so, it reads the annotation property and gets the offloading type. If dynamic offloading is defined, the Proxy Handler checks whether a classification algorithm was specified by the programmer. If so, the Decision Engine decides whether the current context is favorable for remote execution (e.g., the network condition is adequate, a server is available, or the device hardware is weak), i.e., computation offloading is beneficial for improving the performance of the mobile application and the energy consumption of the mobile device, or unfavorable, i.e., the mobile device wastes less energy and time executing the computational task locally rather than remotely. When the static offloading is defined,

and their quantity. After that, the instance receives the value corresponding to the input size if the attribute name is equal to 'DataSize'; otherwise it receives the other values (lines 11-17). ClassifyInstance (line 18) classifies an instance with probabilistic values. Thus, when the instance is rated above 70%, the procedure returns a "true" value, indicating that offloading is favorable; otherwise it returns a "false" value, indicating that it is unfavorable. The rate of 70% refers to the probability of the "Yes" class, which corresponds to remote processing. We define this threshold to reduce the impact of a wrong decision by the algorithm on the application execution, since in that case the application will be executed locally.

The following code examples illustrate the development of a face detection application that has two methods: detectFaces() and getFaces(). Firstly, we created an enumeration to list the possible classifiers to be used by the decision engine, and a variable to save the classifier that is going to be used to decide whether this method is going to be offloaded or not. Second, we modified the @remotable

2 An automation program for training and testing using a 10-fold cross-validation is available to the community at the following website: https://github.com/ehammo/algorithmCompare
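The 70% threshold rule described above can be sketched in plain Java. In CSOS the probability would come from the Weka classifier's class distribution for the current context instance; here it is passed in directly, and the class and method names are illustrative, not actual CSOS identifiers.

```java
// Hypothetical sketch of the 70% offloading threshold described in the text.
// In CSOS the probability of the "Yes" (offload) class would come from the
// Weka classifier's distribution for the current context instance.
public class OffloadThreshold {
    static final double THRESHOLD = 0.70; // probability required for the "Yes" class

    // Returns true (offload) only when the classifier is confident enough;
    // otherwise the method runs locally, limiting the cost of a wrong decision.
    public static boolean shouldOffload(double probabilityYes) {
        return probabilityYes > THRESHOLD;
    }

    public static void main(String[] args) {
        System.out.println(shouldOffload(0.85)); // confident -> true (offload)
        System.out.println(shouldOffload(0.55)); // uncertain -> false (run locally)
    }
}
```

Note that an instance rated exactly at 70% is still executed locally, matching the conservative intent of the threshold.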
508 W. Junior et al. / Future Generation Computer Systems 90 (2019) 503–520
Fig. 2. CSOS development process.

Fig. 3. CSOS's offloading decision execution flow.
markup to receive the J48 classifier (line 2 in Listing 1). When the detectFaces method execution (line 3) is intercepted by the proxy handler, the decision engine will decide whether the method must be executed locally or outside of the mobile device, based on the J48 classifier.

When CSOS is running, the user of the benchmarking application can choose which classifier to use. Therefore, our solution supports one interface for each classifier, allowing the proxy handler to interpret the classifier specified by the user through markups (or Java annotations). To decide accurately whether to offload or not, we need profilers. We therefore created a task that runs every 35 s, since a network profiler takes a long time to measure the upload and download rates precisely. This task gathers low-level context information (raw context), converts it to high-level context, and then saves it in a context database. The following is an example of this conversion.

We can see in Listing 2 that the getCPULabel method receives the raw value of CPU usage as a percentage (line 1). If this value is between 45 and 75, the variable "ret" receives the value "Normal_Load", indicating that the CPU load is normal (lines 5 and 6).

Table 2 shows the mapping of each low-level context to a high-level one. We performed several empirical tests to define the ranges of values (low-level context) for attributes 1, 3, and 5, while the ranges of attributes 2 and 4 resulted from research carried out by the authors. For instance, to define thresholds with respect to the RAM memory and clock speed of the Phone(Hdw) attribute, we used a library that analyzes an Android device's specifications (RAM, CPU cores, and clock speed). This allowed the authors to adapt its behavior to the capabilities of the smartphone's hardware.

Regarding the Data size attribute, more specifically for the BenchImage and BenchFace applications, we calculated the size in kilobytes (KB) of each picture and converted it to a megapixel (MP) unit. As can be seen in Table 2, we mapped 708 KB to 2MP, 1186 KB to 4MP, and 4413 KB to 8MP. The calculation for CollisionBalls is a bit different: since each ball has 44 KB, our strategy is to derive this value from the amount of balls. So, we mapped the first value of 649 KB to 250 balls, 1938 KB to 750 balls, and so on.

4. Performance analysis of classifiers

In this section, we present the performance analysis of the classification algorithms. The following subsections present the configuration environment and the evaluation of our four classification algorithms.

4.1. Evaluation setup

For this evaluation, four classification algorithms (C4.5, Rules, K-NN, and Naive Bayes) were compared to each other, considering the main evaluation metrics. The (Java) implementation of C4.5 in Weka [21] is referred to as J48, while the K-NN and Rules-Based algorithms are respectively referred to as IBK and JRIP. We have used these classifiers because they require fewer resources (e.g., processing, storage) than deep learning, and are fairly accurate [22]. These
1 public interface DynamicDetectFacesJ48 extends DetectFaces {
2     @Remotable(value = Remotable.Offload.DYNAMIC, status = true, classifier = Remotable.Classifier.J48)
3     PropertiesFace detectFaces(String cascadeClassifier, byte[] originalImage);
4     [...]
5 }

Listing 1. Android markup code for specifying the classifier.
1  public ResultTypes.ResultTypesCpu getCPULabel(float total) {
2      ResultTypes.ResultTypesCpu ret;
3      if (total >= 0 && total < 45) { // lower bound added so the -1 "unknown" sentinel is not labeled Relax
4          ret = ResultTypes.ResultTypesCpu.Relax;
5      } else if (total >= 45 && total < 75) {
6          ret = ResultTypes.ResultTypesCpu.Normal_Load;
7      } else if (total == -1) {
8          ret = ResultTypes.ResultTypesCpu.Unknown;
9      } else {
10         ret = ResultTypes.ResultTypesCpu.Stressed;
11     }
12     return ret;
13 }

Listing 2. Transformation of low-level context to high-level context.
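The periodic profiling task described above (running every 35 s in CSOS) can be sketched with the standard `ScheduledExecutorService`. This is an illustrative sketch, not the actual CSOS code: the class name, the in-memory context store, and the `sampleOnce` placeholder for the real network/CPU profilers are all assumptions, and the period is a parameter so the behavior can be demonstrated quickly.

```java
import java.util.List;
import java.util.concurrent.CopyOnWriteArrayList;
import java.util.concurrent.Executors;
import java.util.concurrent.ScheduledExecutorService;
import java.util.concurrent.TimeUnit;

// Illustrative sketch of a periodic profiling task: at a fixed interval it
// gathers a raw context sample and appends it to a context store. CSOS would
// then convert each raw sample to high-level labels as in Listing 2.
public class ContextProfilerTask {
    private final List<String> contextDb = new CopyOnWriteArrayList<>();
    private final ScheduledExecutorService scheduler =
            Executors.newSingleThreadScheduledExecutor();

    // Starts periodic sampling (CSOS would pass 35_000 ms here).
    public void start(long periodMillis) {
        scheduler.scheduleAtFixedRate(this::sampleOnce, 0, periodMillis, TimeUnit.MILLISECONDS);
    }

    // Stand-in for the real network/CPU profilers: records one raw sample.
    public void sampleOnce() {
        contextDb.add("cpu+net sample @" + System.nanoTime());
    }

    public int samples() {
        return contextDb.size();
    }

    public void stop() {
        scheduler.shutdownNow();
    }

    public static void main(String[] args) throws InterruptedException {
        ContextProfilerTask task = new ContextProfilerTask();
        task.start(50);      // short period for demonstration only
        Thread.sleep(200);   // let a few samples accumulate
        task.stop();
        System.out.println(task.samples() > 0);
    }
}
```

A single-threaded scheduler suffices here because the samples are cheap; a real network profiler measuring upload/download rates would justify the longer 35 s period the text describes.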
Table 2
Attributes and contextual values.

No  Attribute name    Low-level context                   High-level context
1   Thr: Throughput   [up/down > 20]                      Free
                      [2 < up/down <= 20]                 Moderate
                      [up/down <= 2]                      Congested
2   App/Data size     BenchImage [708-1186-4413]          2MP-4MP-8MP
                      BenchFace [2075-3758-4717]          3MP-6MP-8MP
                      CollisionBalls [649-1938-2583]      250-750-1500 Balls
3   Phone(CPU)        [CPU >= 75]                         Stressed
                      [40 < CPU <= 74]                    Normal Load
                      [CPU <= 40]                         Relaxed
4   Phone(Hdw)        [2.3 < RAM <= 3 and FREQ <= 1.8]    Advanced-intermediate
                      [1.5 < RAM <= 2.3]                  Intermediate
                      [RAM <= 1 and FREQ < 1.3]           Weak
5   Cloud(vCPU)       [CPU >= 75]                         Stressed
                      [40 < CPU < 74]                     Normal Load
                      [CPU <= 40]                         Relaxed
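The Table 2 throughput mapping can be sketched in the same style as the getCPULabel method of Listing 2. The enum and method names are illustrative rather than actual CSOS identifiers, and the throughput unit (Mbps is assumed here) is not stated in the table.

```java
// Sketch of the Table 2 throughput-to-label mapping (unit assumed Mbps).
public class ThroughputLabeler {
    enum ThrLabel { FREE, MODERATE, CONGESTED }

    // Maps a raw up/down throughput measurement to a high-level label.
    public static ThrLabel getThrLabel(double mbps) {
        if (mbps > 20) {
            return ThrLabel.FREE;        // [up/down > 20]
        } else if (mbps > 2) {
            return ThrLabel.MODERATE;    // [2 < up/down <= 20]
        } else {
            return ThrLabel.CONGESTED;   // [up/down <= 2]
        }
    }

    public static void main(String[] args) {
        System.out.println(getThrLabel(54.0)); // FREE
        System.out.println(getThrLabel(10.0)); // MODERATE
        System.out.println(getThrLabel(1.5));  // CONGESTED
    }
}
```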
features are relevant to the offloading decision in order to discover hidden knowledge in our own context database. In this regard, we benefited from the Weka library to develop a Java program and thus evaluate the algorithms.

A 10-fold cross-validation was used for testing and evaluation of the results. For this purpose, the database is divided into two groups, training and testing, where 90% is used for training and 10% for testing. We ran each algorithm 30 times, varying the seed value in the range from 1 to 30, in order to obtain a sample of 120 results, which were averaged for the dataset. Thus the results, which will be further analyzed with statistical techniques, correspond to the average accuracies on test data.

To investigate the performance of the four classification algorithms more accurately, we used the confusion matrix shown in Table 3. With this matrix, the amount of each indicator is calculated and the results are then compared. The confusion matrix is a useful tool for analyzing how well a classifier can recognize multiple different classes. The ideal situation is when the most relevant data lie on the main diagonal of the matrix and the remaining matrix values are zero or near zero [23].

Table 3
Confusion matrix in our study.

Actual      Predicted
            Offloading (Positive)    No-Offloading (Negative)
Positive    TP                       FN
Negative    FP                       TN

Various evaluation metrics - such as the true negative rate (specificity), true positive rate (recall or sensitivity), false positive rate (FPR), false negative rate (FNR), precision, and accuracy - are calculated according to formulas (1)-(7). Each one is described below.

Accuracy = (TP + TN) / (TP + TN + FP + FN)   (1)
F1 = 2TP / (2TP + FP + FN)   (2)
Sensitivity = TPR = TP / (TP + FN)   (3)
Specificity = TNR = TN / (TN + FP)   (4)
Precision = TP / (TP + FP)   (5)
FPR = FP / (FP + TN) = 1 - TNR   (6)
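For concreteness, the evaluation formulas (1)-(7) can be sketched as plain Java over the confusion-matrix counts of Table 3. The class name is illustrative and the counts in `main` are made up for demonstration, not taken from the paper's dataset.

```java
// Plain-Java sketch of evaluation formulas (1)-(7); the counts tp, tn, fp,
// and fn come from the confusion matrix (Table 3).
public class ClassifierMetrics {
    public static double accuracy(int tp, int tn, int fp, int fn) {  // (1)
        return (double) (tp + tn) / (tp + tn + fp + fn);
    }
    public static double f1(int tp, int fp, int fn) {                // (2)
        return 2.0 * tp / (2.0 * tp + fp + fn);
    }
    public static double sensitivity(int tp, int fn) {               // (3) TPR
        return (double) tp / (tp + fn);
    }
    public static double specificity(int tn, int fp) {               // (4) TNR
        return (double) tn / (tn + fp);
    }
    public static double precision(int tp, int fp) {                 // (5)
        return (double) tp / (tp + fp);
    }
    public static double fpr(int fp, int tn) {                       // (6) = 1 - TNR
        return (double) fp / (fp + tn);
    }
    public static double fnr(int fn, int tp) {                       // (7) = 1 - TPR
        return (double) fn / (fn + tp);
    }

    public static void main(String[] args) {
        int tp = 97, fn = 3, tn = 93, fp = 7; // illustrative counts only
        System.out.printf("accuracy=%.2f f1=%.3f%n",
                accuracy(tp, tn, fp, fn), f1(tp, fp, fn));
    }
}
```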
FNR = FN / (FN + TP) = 1 - TPR   (7)

where:

TP = the number of positive examples correctly classified.
TN = the number of negative examples correctly classified.
FP = the number of negative examples misclassified as positive.
FN = the number of positive examples misclassified as negative.

The accuracy of a classifier for a given test dataset is the percentage of test records that are correctly classified by the classifier. The sensitivity (or recall) measures the fraction of positive examples correctly predicted by the classifier, while the specificity is the proportion of negative records that are correctly identified. The precision determines the fraction of records that actually turn out to be positive among those the classifier declares positive. Sensitivity and precision are summarized in another metric known as the F1 measure (see formula (2)). The classification error (6) is referred to as the FPR, the proportion of negative records that are not correctly identified; the classification error (7) is referred to as the FNR, the proportion of positive records that are not correctly identified [24].

4.2. Classifiers performance

Before evaluating the aforementioned classifiers, we chose the best K for the K-NN algorithm (IBK), i.e. the K value with the best accuracy. We started with K = 1 and increased it up to the total number of records in our database (300 records). To do that, we tested all possible combinations of K and seed values, and then computed the average accuracy for each K with each seed, aiming to choose the combination that obtains the best accuracy. The result is shown in Fig. 4. The best K is 5, with a seed of 12 and an accuracy rate of 96.35%.

Fig. 4. Average accuracy for different values of K.

By using the confusion matrix and the above formulas, the values of the aforementioned metrics can be computed for the four algorithms. The averaged results are shown in Table 4.

Table 4
Average of each measured metric for the algorithms (%).

Algorithm     Specificity  Sensitivity  Precision  FPR    FNR   F1     Accuracy
Naive Bayes   89.27        93.89        91.33      10.72  6.10  92.59  91.79
IBK           93.45        96.40        94.66      6.54   3.59  95.52  95.06
J48           93.43        97.35        94.69      6.56   2.64  96.00  95.57
JRIP          92.77        98.08        94.23      7.22   1.91  96.11  95.67

Fig. 5b shows the comparative analysis of the four classifiers in terms of specificity, sensitivity, precision, FPR, FNR, and F1. From the graph, we can observe that the specificity is highest and similar for IBK and J48 (93.45% and 93.43%, respectively) and lowest for Naive Bayes with 89.27%. The sensitivity is maximum for JRIP at 98.08% and minimum for Naive Bayes at 93.89%. In summary, JRIP was the algorithm that most correctly predicted positive records (offloading favorable), outperforming J48 by 0.74%. The precision of the J48 algorithm was 94.69%, slightly higher than IBK by 0.03% and than JRIP by 0.48%.

The FPR and FNR are the two types of errors. The first type of error is more important than the second for mobile cloud solutions, since it determines the drawbacks of code offloading. According to Table 4, IBK and J48 have lower FPR (6.54% and 6.56%) compared to the JRIP and Naive Bayes algorithms, while the FNR is lowest for JRIP, with a difference of 27.65% compared to J48 and 46.79% compared to IBK.

Sensitivity and precision are two metrics widely employed in applications where the successful detection of one class is considered more significant than that of the other classes. Unfortunately, they conflict with each other due to the trade-off between them: if we want to retrieve more relevant records (i.e., increase the sensitivity rate), more non-relevant records will be retrieved as well (i.e., the precision rate will decrease). Therefore, F1 is proposed as a means to achieve harmony between sensitivity and precision [21]. Fig. 5b clearly shows that the JRIP and J48 techniques have a higher F1 than IBK and Naive Bayes, with rates of 96.11% and 96.00%, respectively.

Fig. 5a shows the comparative analysis in terms of accuracy. In MCC, high accuracy is very important due to the adaptive and dynamic nature of mobile systems: an inaccurate decision leads to high energy consumption and degraded performance. Our results show that the rules generated by the JRIP algorithm have a slightly higher accuracy than J48 and IBK on the contextual dataset. The accuracy of the JRIP algorithm is 95.67%, comparable to the J48 algorithm, which achieves 95.57%. Naive Bayes had the worst accuracy, with 91.79% of records correctly classified.

The good classifiers from the above results are therefore JRIP, J48, and IBK. It can be seen that in all cases the Naive Bayes algorithm has the worst performance over our context database.

According to our evaluation results, the J48, JRIP, and IBK classifiers had acceptable and similar performance. To determine whether the differences between these algorithms in terms of accuracy are significant, we use the Friedman test with a confidence interval of 95% [25,26]. The Friedman test is a non-parametric equivalent of the repeated-measures ANOVA [27]. It ranks the algorithms for each cross-validation fold separately, with the top algorithm receiving the rank of 1, the second best receiving the rank of 2, and so on. Thus, the worst performing algorithm receives a rank equal to the number of algorithms (in our case, 4).

When the null hypothesis (all classifiers are equivalent) of the Friedman test is rejected, we perform the Nemenyi post-hoc test to determine which classifiers are significantly different [28]. If the difference between the mean rankings of two classifiers is bigger than the critical difference (CD) of 0.8563, then the performance of these algorithms differs significantly.

Table 5
Mean ranking of each classifier.

             Prediction algorithm
             J48    JRIP   IBK    NAIVE   Nemenyi critical distance
Position     1      2      3      4       -
Value        1.68   1.70   2.62   4.00    0.8563

Fig. 6 shows the results of the Multiple Comparison with Best (MCB) statistical test. The "Ha:Different" label means that at least one algorithm is different. Consequently, the null hypothesis was
Fig. 5. Comparison between the performance of the classification algorithms using different indicators.
rejected, indicating that there is a significant difference between the accuracy of the classifiers. Thus, the Nemenyi post-hoc test was performed to outline the algorithm rankings.

Fig. 6. Friedman and Nemenyi tests for nonparametric comparisons between algorithms.

Table 5 displays the mean ranking of the classification algorithms along with the critical difference to clearly show any algorithms that are significantly different. It is possible to note that J48 has the best overall ranking position with 1.68. However, the difference between J48 and JRIP is less than the CD value (a difference of 0.02), indicating that these classifiers are not statistically different. On the other hand, both J48 and JRIP are significantly different from IBK, with differences of 0.94 and 0.92, respectively. In a nutshell, the Friedman and Nemenyi statistical tests show that the J48 and JRIP classifiers outperform the IBK and Naive Bayes classifiers on our context database. Since these two classification algorithms obtained similar performance results, we implemented both in CSOS and used them in our real-world experiments (for more details, see Section 5).
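The CD of 0.8563 is consistent with the standard Nemenyi critical-difference formula, CD = q_alpha * sqrt(k(k+1)/(6N)), under the assumption that q_0.05 is approximately 2.569 for k = 4 classifiers and N = 30 runs; the paper does not spell these values out, so this is a reconstruction. A quick sketch:

```java
// Sketch of the standard Nemenyi critical-difference formula,
// CD = q_alpha * sqrt(k*(k+1) / (6*N)). With the assumed q_0.05 = 2.569
// for k = 4 classifiers and N = 30 runs, it reproduces CD ~ 0.8563.
public class NemenyiCD {
    public static double criticalDifference(double qAlpha, int k, int n) {
        return qAlpha * Math.sqrt(k * (k + 1) / (6.0 * n));
    }

    public static void main(String[] args) {
        double cd = criticalDifference(2.569, 4, 30);
        System.out.printf("CD = %.4f%n", cd); // ~0.8563
    }
}
```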
Table 6
Rules generated by the J48 algorithm.
Num Rules
1 IF Thr = Free AND App = BenchFace THEN class Yes
2 IF Thr = Free AND App = BenchImage THEN class Yes
3 IF Thr = Free AND App = CollisionBalls AND Phone(Hdw) = Adv. Interm.
THEN class No
4 IF Thr = Free AND App = CollisionBalls AND Phone(Hdw) = Weak
AND DataSize <= 1186 THEN class No
5 IF Thr = Free AND App = CollisionBalls AND Phone(Hdw) = Weak
AND DataSize >1186 THEN class Yes
6 IF Thr = Free AND App = CollisionBalls AND Phone(Hdw) = Intermediate
THEN class No
7 IF Thr = Mdt AND App = BenchFace AND Cloudlet(vCPU) = Relaxed
THEN class Yes
8 IF Thr = Mdt AND App = BenchFace AND Cloudlet(vCPU) = Stressed
AND Phone(Hdw) = Adv. Interm. AND Phone(CPU) = Relaxed THEN class No
9 IF Thr = Mdt AND App = BenchFace AND Cloudlet(vCPU) = Stressed
AND Phone(Hdw) = Adv. Interm. AND Phone(CPU) = Normal Load THEN class Yes
10 IF Thr = Mdt AND App = BenchFace AND Cloudlet(vCPU) = Stressed
AND Phone(Hdw) = Adv. Interm. AND Phone(CPU) = Stressed THEN class Yes
11 IF Thr = Mdt AND App = BenchFace AND Cloudlet(vCPU) = Stressed
AND Phone(Hdw) = Weak THEN class Yes
12 IF Thr = Mdt AND App = BenchFace AND Cloudlet(vCPU) = Stressed
AND Phone(Hdw) = Intermediate THEN class Yes
13 IF Thr = Mdt AND App = BenchImage THEN class Yes
14 IF Thr = Mdt AND App = CollisionBalls AND Phone(Hdw) = Adv. Interm.
THEN class No
15 IF Thr = Mdt AND App = CollisionBalls AND Phone(Hdw) = Weak
AND DataSize <= 1186 THEN class No
16 IF Thr = Mdt AND App = CollisionBalls AND Phone(Hdw) = Weak
AND DataSize >1186 AND DataSize <= 2583 THEN class Yes
17 IF Thr = Mdt AND App = CollisionBalls AND Phone(Hdw) = Weak
AND DataSize >1186 AND DataSize >2583 THEN class No
18 IF Thr = Mdt AND App = CollisionBalls AND Phone(Hdw) = Intermediate
THEN class No
19 IF Thr = Cong THEN class No
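To make the simplicity of the Table 6 rules concrete, a few of them translate directly into device-side code. This is only a partial, illustrative sketch: it encodes rules 1, 2, and 19, with the root split on throughput (the most informative attribute, cf. Fig. 7), and conservatively defaults to local execution for all omitted branches, which is not part of the original tree.

```java
// Partial translation of the J48 tree of Table 6 (rules 1, 2, and 19 only).
public class J48RuleSketch {
    public static boolean offload(String thr, String app) {
        if (thr.equals("Cong")) {
            return false; // rule 19: congested network -> class No
        }
        if (thr.equals("Free") && (app.equals("BenchFace") || app.equals("BenchImage"))) {
            return true;  // rules 1 and 2 -> class Yes
        }
        // Remaining branches (Phone(Hdw), DataSize, Cloudlet(vCPU)) omitted;
        // this illustrative fallback is NOT part of the original tree.
        return false;
    }

    public static void main(String[] args) {
        System.out.println(offload("Cong", "BenchImage")); // false
        System.out.println(offload("Free", "BenchFace"));  // true
    }
}
```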
One of the key points of the rules generated by the J48 and JRIP algorithms is that they are very simple. Therefore, these rules can be used very easily by resource-constrained mobile devices, a critical issue in the MCC environment. The J48 algorithm uses a statistical property derived from information theory, called the information gain, that measures how well a given attribute separates the training instances according to their target classification. According to Fig. 7, the predictor importances for the throughput, application, smartphone hardware, smartphone CPU, and cloud vCPU are 0.4348073, 0.1538575, 0.0256408, 0.0060328, and 0.0000928, respectively. These data reveal how much the wireless network quality impacts offloading operations.

Fig. 7. Importance of the factors in the prediction of offloading by using the J48 algorithm.

The rules generated by this algorithm are shown in Table 6. The depth of the tree produced by this algorithm was 19, which is very different from the JRIP algorithm. It is also clear that nine rules have been generated for class No (class No is devoted to 'no offloading') and ten rules for class Yes (class Yes is devoted to 'do offloading'). The rules generated by the J48 algorithm show that this algorithm is suitable for offloading decisions, due to the balance in the class values (Yes/No).

According to Table 7, it can be seen that eight rules have been produced by the JRIP algorithm. It is also clear that seven rules have been generated for class No and only one rule for class Yes. With regard to the results and some rules, it can be deduced that JRIP can make incorrect decisions due to the omission of some rules for class Yes. Generally, by comparing our results and the tables, it can be said that J48 and JRIP are more successful in identifying the right time to undertake offloading, but it should be noted that the rules produced by J48 provide more detail about both classes.

The rules in Tables 6 and 7 are simple and understandable for researchers and developers, meaning that these findings can be useful as an appropriate solution to identify an opportunistic offloading event in a real environment. In other words, by using the results of this study, more effective rules for beneficial computational offloading can be achieved.
Table 7
Rules generated by the JRIP algorithm.
Num Rules
1 IF Thr = Cong THEN class No
2 IF App = CollisionBalls AND Phone(Hdw) = Adv. Interm. THEN class No
3 IF App = CollisionBalls AND DataSize <= 649 THEN class No
4 IF App = CollisionBalls AND DataSize >= 3871 AND Thr = Mdt
AND Phone(CPU) = Stressed THEN class No
5 IF Thr = Mdt AND Cloudlet(vCPU) = Stressed AND DataSize >= 4717
AND Phone(Hdw) = Adv. Interm. THEN class No
6 IF App = CollisionBalls AND Cloudlet(vCPU) = Stressed AND DataSize >= 3871
AND Phone(CPU) = Relaxed THEN class No
7 IF Thr = Mdt AND App = BenchFace AND Phone(CPU) = Relaxed
AND Phone(Hdw) = Adv. Interm. AND Cloudlet(vCPU) = Stressed
AND DataSize <= 3758 THEN class No
8 ELSE class Yes
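JRIP's output in Table 7 is an ordered rule list rather than a tree: the "No" rules are tried in order, the first match fires, and everything else falls through to the default rule (class Yes). The sketch below encodes only rules 1-3 plus the default; the method and parameter names are illustrative, not CSOS identifiers.

```java
// Sketch of JRIP-style ordered rule evaluation over Table 7 (rules 1-3 only).
public class JripRuleSketch {
    public static boolean offload(String thr, String app, String phoneHdw, int dataSizeKb) {
        if (thr.equals("Cong")) {
            return false;                                              // rule 1
        }
        if (app.equals("CollisionBalls") && phoneHdw.equals("Adv. Interm.")) {
            return false;                                              // rule 2
        }
        if (app.equals("CollisionBalls") && dataSizeKb <= 649) {
            return false;                                              // rule 3
        }
        // rules 4-7 omitted for brevity
        return true;                                                   // rule 8: ELSE class Yes
    }

    public static void main(String[] args) {
        System.out.println(offload("Cong", "BenchImage", "Weak", 1186));        // false
        System.out.println(offload("Free", "BenchFace", "Adv. Interm.", 2075)); // true
    }
}
```

This single catch-all default for class Yes is exactly why, as noted above, JRIP can make incorrect "offload" decisions for contexts not covered by its No rules.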
5. Evaluation and validation

In this section, we conducted several experiments to evaluate the two classifiers with the best performance: JRIP and J48. Sections 5.1 and 5.2 describe the experimental environment and performance metrics, respectively, while Sections 5.3-5.5 discuss the results.

5.1. Experimental setup

Table 8 shows all the parameters used in the experiment. We benchmarked three different computation-intensive apps, named BenchFace, BenchImage, and CollisionBalls.

Table 8
Parameters used in the context-sensitive experiment.

Apps            Parameters            Value
BenchFace       Detection algorithm   Alt_tree
                Size (MP)             3, 6, and 8
                Number of faces       77
                Tasks number          30
BenchImage      Filter                Cartoonizer
                Size (MP)             2, 4, and 8
                Image                 SkyLine
                Tasks number          30
CollisionBalls  Serialization         Java built-in
                Size (Kb)             1938
                Number of balls       750
                Tasks number          10

The BenchFace app was developed by the authors for face detection; it uses Haar features based on cascading classifiers, a method proposed by the researchers in [29]. The face detection algorithm uses an ML approach: it trains cascade functions with a set of positive images (images containing faces) and negative images (images that do not contain faces). The application consists of a single image with 78 faces at different angles. The user can change the same image to different resolutions and cascading classifier algorithms.

According to [30], BenchImage is an image processing application that allows users to apply filters to pictures with different resolutions. The application provides the filters Sharpen, Cartoonizer, and Red Tone, which have different computation requirements and therefore different execution times. In addition, BenchImage provides an option to execute a benchmark procedure, in which the Cartoonizer filter is executed for each picture resolution (8MP, 4MP, and 2MP).

Rego et al. [30] define CollisionBalls as an application that simulates several balls bouncing around the screen of the mobile device. The application detects when the balls touch the edge of the screen or when the balls touch each other, and then calculates the new direction of the balls. The amount of balls can be defined by the user, as well as the type of serialization used: built-in Java and C# serialization or manual serialization. It is a real-time application.

For BenchFace and BenchImage we varied the data size every 10 tasks, but we did not vary CollisionBalls' data size, since the amount of balls could not be changed dynamically. The other big difference is in the amount of decisions. If the real-time application asked the decision engine about the context after every new frame, it would be very detrimental to the application, since this delay would lower the frame-per-second rate. So, the decision was made once every 10 s, and its result applies to all marked methods until the next decision.

Regarding smartphone power consumption metering, this work focuses on software-based power measurement. Trepn Profiler is an Android application that can display the real-time power consumption of a smartphone or tablet. According to [31], it is the only application that reports accurate real-time power consumption.

We used the equipment described in Table 9: three smartphones from different manufacturers and with different hardware. In our experiment, the Moto X Style, Galaxy S4, and LG K3 are mapped to advanced-intermediate, intermediate, and weak smartphones, respectively. The communication between the mobile device and the cloudlet was performed over a wireless network (devices directly connected to the WiFi access point attached to the cloudlet), with one TP-LINK AC1200 (802.11ac) access point.

In contexts where the bandwidth must be moderate or congested, we generate UDP background traffic with the Iperf [32] tool for 2000 s and report the result every 3 s. Besides that, we configure the CpuRun [33] and CpuBurn [34] tools to utilize all available cores of the smartphone's CPU and the cloudlet's vCPU, respectively, in order to produce the stressed and normal-load context information for our lab testbed. To test properly, we first picked which contexts would be used in the tests. As shown in Table 10, we picked five favorable contexts, where the cloudlet is a better option than local execution, and five unfavorable contexts, where the opposite is true. Last, we chose five unknown contexts, i.e., contexts that represent new instances for our database, which did not participate in the training and testing process during the performance evaluation of the classification algorithms.

The objective of the experiments related to favorable and unfavorable contexts is to evaluate the performance of the proposed CSOS and the classifiers implemented upon it, given prior knowledge of how our system must decide. On the other hand, the objective of the experiments related to unknown contexts is to know whether our system can decide correctly in real
Table 9
Technical specification of the mobile devices in the context-sensitive experiment.
Units, equipment, and platform CPU (GHz) RAM (GB)
1, Moto X Style, Android Hexa-core 1.6 3
1, Samsung Galaxy S4, Android Quad-core 1.9 2
1, LG K3, Android Quad-core 1.1 1
1, Laptop cloudlet, Openstack Kilo (Ubuntu 14.04 LTS) Quad-core 2.7 8
Table 10
Context dataset for the experiments.
Decision Id Thr App Cloudlet(vCPU) Phone(CPU) Phone(Hdw)
C1 Free BenchImage Relaxed Relaxed Adv. Interm.
C2 Free BenchFace Relaxed Relaxed Adv. Interm.
Yes C3 Mdt BenchFace Relaxed Stressed Adv. Interm.
C4 Mdt BenchFace Stressed Relaxed Weak
C5 Free CollisionBalls Relaxed Normal Load Weak
C6 Free CollisionBalls Relaxed Relaxed Adv. Interm.
C7 Mdt CollisionBalls Relaxed Relaxed Adv. Interm.
No C8 Cong BenchImage Relaxed Stressed Intermediate
C9 Cong BenchImage Relaxed Normal Load Weak
C10 Mdt BenchFace Stressed Relaxed Adv. Interm.
C11 Mdt BenchFace Stressed Normal Load Adv. Interm.
C12 Mdt BenchFace Normal Load Relaxed Adv. Interm.
Unknown C13 Mdt BenchFace Stressed Relaxed Intermediate
C14 Free CollisionBalls Relaxed Normal Load Intermediate
C15 Mdt CollisionBalls Relaxed Normal Load Intermediate
time, and what the consequences are in terms of performance and energy.

All experiments utilized four strategies:

• Dynamic J48 (labeled as J48): provides the entire CSOS system, including the decision engine, the Weka library, discovery/deployment services, and profilers. The application relies on CSOS's decision engine to decide where to process offloading candidate methods (locally or remotely), based on the J48 classifier.
• Dynamic JRIP (labeled as JRIP): provides the entire CSOS system, i.e. it includes the decision engine, the Weka library, discovery/deployment services, and profilers. The application relies on CSOS's decision engine to decide where to process offloading candidate methods (locally or remotely), based on the JRIP classifier.
• Static Cloudlet (labeled as Cloudlet): provides a partial CSOS system, i.e. it includes the discovery/deployment services and profilers. All applications are executed remotely on the cloudlet server to acquire a performance baseline.
• Static Local (labeled as Local): does not use the CSOS system, and all processing is done on the smartphone, i.e. all applications are executed locally to acquire a performance baseline.

Finally, to enhance the reader's comprehension, only 9 contexts are going to be plotted instead of all 15. For each situation we obtained 3 graphs, showing runtime, energy, and context. The runtime graph represents the amount of time spent on each task, while the energy graph represents how much energy the smartphone consumed over the whole 30 tasks. The context graph shows the frequency of values of the current context's attributes. This configuration holds for the BenchFace and BenchImage applications; for CollisionBalls there is a small difference, since it is a real-time application: the runtime graph is exchanged for a Frames per Second (FPS) graph, which represents the FPS of the application at the moment of the decision.

5.2. Calculation methodology of the metrics

Execution time and energy consumption are important metrics to be considered in any mobile cloud scenario. These metrics have been carefully designed, as they have a direct impact on measuring the behavior of the mobile hardware. While the energy required for processing and for communication with the cloudlet has a direct relation with the energy consumption, the execution time is related to the wireless link quality and to the remote and local processing capability. The following subsections describe the calculation methodology for the above-mentioned metrics.

5.2.1. Execution time
The execution time or runtime metric is computed to estimate the local and remote execution times, as each of them is influenced by different factors. While the local execution time takes into account the time spent to execute the method m on the mobile device (in seconds), the remote execution time takes into account the time to transfer the method data from the mobile device to the cloudlet and from the cloudlet back to the mobile device, the queue waiting time on the cloudlet, and the time spent to execute the method m on the cloudlet. The metric is computed as follows:

Texec_i = Ki * Tlocal_i + (1 - Ki) * (Tupload_i + Tqueue_i + Tremote_i + Tdownload_i)   (8)

with Ki ∈ {0, 1}, where Ki = 1 for local execution and Ki = 0 for remote execution.

Here, Texec_i is the execution time of the ith execution, up to 30 repetitions. If a method is offloaded to a cloudlet, Tupload_i denotes the time to upload the method data to the cloudlet; Tqueue_i denotes the queueing time on the cloudlet (i.e., the delay before beginning the computation), which is variable and depends on the cloudlet workload; Tremote_i denotes the time to execute the method on the cloudlet; and Tdownload_i is the time to download the method data from the cloudlet. If a method is executed locally, Tlocal_i denotes the time to execute the method on the mobile device. Since a method may be executed either locally or remotely under the 'dynamic J48' and 'dynamic JRIP' strategies, the variable Ki was added: Ki receives the value "0" when the method needs to be executed remotely and the value "1" when the method needs to be executed locally.

5.2.2. Energy consumption
The energy consumption metric represents the energy consumed by the mobile device when the method is executed remotely or
locally. This metric depends on the aforementioned execution time used for calculating the speed of local or remote computing. The power estimation takes into account the frequency and load of every CPU core, the Graphics Processing Unit (GPU), and the brightness of the screen. Since the model that Trepn Profiler uses to estimate power has not been published, its mathematical representation is out of the scope of this paper.

Pelectric_i = Voltage_i * Current_i   (9)
Econsum_i = Pelectric_i * Texec_i   (10)

Here, Econsum_i represents the energy consumption estimation of the mobile device, which is the product of the electric power (Pelectric_i) and the execution time (Texec_i) required to process the method, on the ith execution up to 30 repetitions.

The mean energy is computed as the summation of the Econsum_i values divided by the total number of repetitions. This metric is highest when the 'static local' strategy is adopted.

Emean = (Σ_{i=1}^{n} Econsum_i) / n   (11)

Here, Emean represents the mean of the energy consumption, and n denotes the total number of repetitions. It is important to note that these metrics are not used as offloading-decision criteria, but for CSOS's performance evaluation; the criteria used for offloading decision-making are listed in Table 2. Relevant studies [19,35-37] that have formulated mathematical models related to response time and energy consumption as offloading decision criteria explore the energy-performance tradeoff by using several combined metrics, such as the Energy-Response time Weighted Sum (ERWS), Energy-Response time Product (ERP), and Energy-Response time Weighted Product (ERWP).

J48 strategies, respectively. The J48 consumed more energy than cloudlet and JRIP because it has a greater number of rules than JRIP, and thus demands a longer classification time.

The main differences between the C4 and C1 contexts are the cloudlet's vCPU, the smartphone's hardware, the network bandwidth, and the benchmarking application (see Table 10). The results of Fig. 8d, corresponding to the J48, JRIP, and cloudlet strategies, show that the runtime suffers large variations as the image size increases, due to the values assigned to the C4 context (see Fig. 8f), but remains below 6 s with a 3MP image, 12 s with 6MP, and 17 s with 8MP, while the local strategy peaks at 18.40 s with 8MP images. Fig. 8e presents the distribution of the energy consumption for the same context. It is notable that the local strategy drains 56.27%, 61.28%, and 64.89% more energy than the cloudlet, JRIP, and J48 strategies, respectively, while the latter three had similar energy drains.

Fig. 8g shows the strategies' performance by the FPS metric, since the C5 context makes use of CollisionBalls, a real-time application. The value of this metric depends on the time spent in calculating the new position of the balls. According to [38], a graphic computing application should run above 30 FPS, i.e. take a maximum of 33.34 milliseconds (ms) to produce each frame, so that the animation runs seamlessly from the application user's perspective. The results indicate that for the 750-ball scenario the local execution presents a lower FPS than computational offloading using any strategy, which means that, given the amount of computation needed, a resource-poor smartphone requires the use of offloading. Besides that, for the JRIP, J48, and cloudlet strategies, offloading improves CollisionBalls' performance by 1.65, 1.74, and 1.67 times, respectively. This explains the fact that CSOS - even using decision-making and running profilers before offloading to the cloudlet - does not impair the FPS quality, because the lowest values of FPS with CSOS are similar to those in the local strategy
5.3. Results — favorable context (see 3 and 9 tasks). Regarding the results of Fig. 8h we can see
that local processing provides the largest battery discharge when
Fig. 8(a)–(i) shows the results of the three set of favorable compared to other strategies. This is due to the low processing
contexts chosen: C1, C4, and C5. capacity of the LG K3 (categorized as weak hardware) to perform a
Each line on the graph shows a different metric for the same lot of computing locally.
context, e.g. for the context labeled C1, we show the results corre-
sponding to the runtime, consumed energy, and frequency of the 5.4. Results — unfavorable context
context data (smartphone’s CPU, cloudlet’s vCPU, and throughput).
With regard to the context graph, it is possible to identify the Fig. 9(a)–(i) shows the results of the three sets of unfavorable
CPU usage by smartphone and cloudlet (both on the left y-axis). contexts chosen: C7, C9, and C10.
The network profiling between smartphone and cloudlet through Fig. 9a presents the FPS for the C7 context with the four strate-
the throughput (right y-axis) is also depicted. Specifically for the gies: J48, JRIP, cloudlet, and local. The results for cloudlet were the
BenchImage and BenchFace applications, the runtime and energy worst in this context. The wireless network in a moderate state
metrics were measured during the execution of a total of 30 tasks decreases the FPS quality of the application (see Fig. 9c). The mean
(see x-axis). The smartphone executed the first 10 tasks to the same FPS in the cloudlet was 68.48% and 77.67% lower than the local
image size (e.g. 2MP), and the next 10 tasks for a higher image size strategy and dynamic strategies, respectively. The J48 and JRIP
(e.g. 4MP), and so on. strategies maintained good FPS quality by deciding that the best
Fig. 8a presents the runtime in seconds for C1 context with the choice was to process locally. As we can see in Fig. 9b, the cloudlet
four strategies: Local, J48, JRIP, and Cloudlet. The results using the strategy saved approximately 70.85% of a mobile device’s energy
last three show that the runtime remained mostly below 13 s, even when compared to JRIP. In this context, even with the cloudlet
with the variation of the size of the images, indicating that the spending less of a device’s energy, it had a loss of FPS quality with
device did not perform substantial processing, since the dynamic an average of 5.31 frames per second.
strategies, J48 and JRip, had a similar result to the cloudlet strategy. Regarding Fig. 9d, the results show a mean runtime equal to
Therefore, the result using the local strategy presented runtime 45 s with 2MP, 87 s with 4MP, and 176 s with 8MP, indicating that
equal to 15 s with 2MP, 30 s with 4MP, and 62 s with 8MP, due the device performed substantial processing, once the dynamic
local processing capacity being less than the cloudlet’s. strategies, J48 and JRIP, had a similar result to the local strategy.
Fig. 8b illustrates the distribution of the energy consumption On the other hand, the cloudlet strategy could not be completed,
in miliwatts (mW) of the BenchImage application on Moto X Style because the network was so congested that the connection be-
corresponding to the C1 context. The results indicate that the tween smartphone and cloudlet was lost, causing the application to
local strategy drains more energy than all other strategies, which crash. Thus, the result using the cloudlet strategy was worse. Fig. 9e
means that processing efforts of the Moto X Style are higher due has similar results to the three strategies that executed locally and
to the overhead caused by the intensive processing of the filters presented a high number of outliers (lower and upper), justified by
in the images. The energy consumption corresponding to the JRIP the high energy drain. Cloudlet did not have any energy results to
strategy is 47.80% and 71.46% more efficient than the cloudlet and show since it could not reach the 30 tasks.
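Eqs. (9)–(11) above can be sketched in a few lines of Python. The voltage, current, and execution-time readings below are hypothetical placeholders, not measurements from the paper; in CSOS these values come from the Trepn-based energy profiler.

```python
def electric_power(voltage, current):
    """Eq. (9): electric power (W) as the product of voltage (V) and current (A)."""
    return voltage * current

def energy_consumed(power, texec):
    """Eq. (10): energy of one execution as power (W) times execution time (s)."""
    return power * texec

def mean_energy(energies):
    """Eq. (11): mean energy over the n repetitions (n = 30 in the experiments)."""
    return sum(energies) / len(energies)

# Hypothetical profiler readings: (voltage in V, current in A, execution time in s).
readings = [(3.8, 0.40, 12.0), (3.8, 0.42, 11.5), (3.8, 0.38, 12.4)]
energies = [energy_consumed(electric_power(v, c), t) for v, c, t in readings]
print(round(mean_energy(energies), 2))  # mean energy (in joules) for these samples
```

As in the paper, the per-repetition energies can then be averaged over the 30 runs of each strategy before the strategies are compared.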
Fig. 8. Results with favorable offloading: C1, C4, and C5.
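The 30-FPS smoothness criterion from [38], used above to judge CollisionBalls, translates directly into a per-frame time budget. The following minimal check restates that arithmetic (the paper rounds the budget to 33.34 ms):

```python
# Smoothness criterion used in the text: at least 30 FPS, i.e. each frame must
# be produced within 1000/30 milliseconds.
TARGET_FPS = 30
FRAME_BUDGET_MS = 1000 / TARGET_FPS  # ~33.33 ms per frame

def is_smooth(frame_time_ms):
    """True when a frame fits inside the 30-FPS budget."""
    return frame_time_ms <= FRAME_BUDGET_MS

def fps(frame_time_ms):
    """Frames per second implied by a per-frame rendering time."""
    return 1000 / frame_time_ms

print(is_smooth(25.0), is_smooth(40.0))  # True False
print(round(fps(40.0), 1))               # 25.0, i.e. below the 30-FPS target
```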
According to Fig. 9i, the cloudlet's vCPU in a stressed state has a direct influence on the runtime when we use the cloudlet strategy (see Fig. 9g), since that strategy always runs the application remotely, even when the wireless network quality is not good. Consequently, the runtime with the cloudlet is approximately 1.37, 1.22, and 1.02 times greater than with the other strategies for 2MP, 4MP, and 8MP images, respectively. It is worth highlighting the last four runtime tasks for the cloudlet, whose values fall below those of the other strategies. This situation occurs because the download and upload rate of the background traffic reaches a value near the maximum threshold (e.g., 18 Mbps) for a moderate bandwidth. This is why the runtime reflects network fluctuations when an application's methods are executed remotely.

In the C10 context, the mean energy reveals low power consumption by the JRIP strategy, despite concentrating a high number of outliers (see Fig. 9h). It consumes 1344 milliwatts while the cloudlet strategy consumes 1438 milliwatts, a difference of 6.53%, which means that both the processing and the connectivity efforts of the Moto X Style are higher due to the overhead caused by the background traffic (in the C10 context the bandwidth must be moderate) and the offloading operations. In contrast, the energy drain with the J48 strategy is 1.36 times higher than with the cloudlet. In general, the JRIP and J48 strategies classify the C10 context with the No value, i.e., it is better to run locally. As a result, CSOS ensures a lower runtime while delivering an energy drain similar to the cloudlet strategy.

5.5. Results — unknown context

Fig. 10(a)–(i) shows the results for the three sets of unknown contexts chosen: C12, C13, and C14. Fig. 10a presents the runtime in seconds for the C12 context with the four strategies: local, J48, JRIP, and cloudlet. We can see that by using computational offloading with the JRIP strategy, it is possible to reduce the method execution time by up to 1.57 times when compared to J48. Additionally, we can see that the energy consumption using the JRIP strategy is 30% lower than using J48 and 29.43% lower than the local strategy (see Fig. 10b). The results show that the best choice is to offload, i.e., JRIP classified the C12 context correctly, while J48 classified it incorrectly.
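The No decision described above for the C10 context can be pictured as a rule set over the discretized context attributes. The sketch below is illustrative only: the attribute names, the rules, and all thresholds except the 18 Mbps moderate-bandwidth bound mentioned in the text are hypothetical stand-ins, not the rules actually induced by JRIP or J48 for CSOS.

```python
# Illustrative, hand-written rules in the spirit of the JRIP/J48 strategies.
MODERATE_BW_MAX_MBPS = 18.0  # upper bound of the "moderate" bandwidth class (from the text)

def bandwidth_class(throughput_mbps, congested_max=5.0):  # 5 Mbps bound is assumed
    """Discretize raw throughput into a high-level bandwidth attribute."""
    if throughput_mbps <= congested_max:
        return "congested"
    return "moderate" if throughput_mbps <= MODERATE_BW_MAX_MBPS else "free"

def offload_decision(context):
    """Return 'Yes' (offload) or 'No' (run locally) for a discretized context."""
    if bandwidth_class(context["throughput_mbps"]) != "free":
        return "No"   # weak or moderate network: keep computation local
    if context["cloudlet_vcpu"] == "stressed":
        return "No"   # remote CPU saturated: offloading will not pay off
    return "Yes"

# A C10-like context: throughput near the 18 Mbps bound, so the decision is "No".
print(offload_decision({"throughput_mbps": 17.0, "cloudlet_vcpu": "relaxed"}))
```

In CSOS the equivalent rules are learned from the context database rather than written by hand, which is what allows the decision engine to adapt to contexts it has not seen before.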
Fig. 9. Results with unfavorable offloading: C7, C9, and C10.
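The relative comparisons quoted throughout Sections 5.3–5.5 ("X% lower", "N times greater") can be reproduced with the usual ratios. The paper does not state the exact formula, so the definitions below are an assumption that happens to match the reported C10 figure:

```python
def pct_lower(baseline, value):
    """How much 'value' is below 'baseline', as a percentage of the baseline."""
    return (baseline - value) / baseline * 100.0

def times_greater(value, reference):
    """Ratio used for the 'N times greater' comparisons."""
    return value / reference

# C10 mean energy: JRIP (1344 mW) vs. cloudlet (1438 mW).
print(round(pct_lower(1438, 1344), 2))  # ~6.54, the reported 6.53% up to rounding
```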
Fig. 10d shows that the two algorithms classified the C13 context correctly, since the offloading operations reduced the runtime by 29.24% for both the J48 and JRIP strategies, even when the cloudlet's vCPU was stressed, i.e., 100% vCPU usage with a frequency of 71 times in Fig. 10f. Such a performance gain of the cloudlet server compared to an intermediate smartphone (corresponding to the Galaxy S4) may be explained by the increase of computational overhead on the smartphone in terms of the runtime of the BenchFace application. Fig. 10e illustrates the distribution of the energy consumption for the C13 context. The results show that the energy consumption with the cloudlet, JRIP, and J48 strategies is 40.34% lower than with the local strategy, since they require less computational effort, which leads the smartphone to save energy.

Fig. 10g presents the FPS during CollisionBalls' execution for the C14 context. The results show that for the 750-ball scenario the local execution related to the J48 strategy presents a higher FPS than offloading to a cloudlet server using the JRIP strategy. For instance, the local execution runs with approximately 69.14% more FPS than the cloudlet server. Such a result indicates that, depending on the context (more specifically the smartphone's hardware), computational offloading cannot improve CollisionBalls' performance. Regarding the energy, Fig. 10h shows that the JRIP strategy saves 38.96% more energy than the J48 strategy by using offloading operations with the bandwidth free and the cloudlet's vCPU relaxed (see Table 10). Even when the application is executed locally using the J48 strategy, the smartphone consumes 2.12 times less energy than with the local strategy. This situation is justified because the workload generated to stress the CPU reaches a value near the minimum threshold (i.e., 45%) for the smartphone's CPU attribute mapped as the normal load (see Fig. 10i).

In summary, the set of experiments with unknown contexts provided insights into the benefits of using the offloading technique only when it is beneficial, through the J48 and JRIP strategies of CSOS. The J48 classifier in particular is a promising solution to handle different application categories, since the rules it produces are more detailed and the set of classes is balanced.

These results show that the proposed CSOS outperforms the baseline in all cases. In some contexts (e.g., the C10 and C14 contexts),
Fig. 10. Results with unknown offloading: C12, C13, and C14.
CSOS achieves performance and energy efficiency at the same time, with a fair tradeoff between the required objectives. Nonetheless, there are contexts (e.g., the C7 and C9 contexts) where CSOS achieves a single objective and may sacrifice energy efficiency or performance.

6. Conclusion and future direction

This paper presented CSOS, a system able to perform context-sensitive computational offloading that uses machine-learning classifiers to ensure high accuracy in offloading decisions. The proposed solution relies on our context database for the training and testing of four previously selected classification algorithms: J48, JRIP, IBK, and Naive Bayes. Two of them – J48 and JRIP – were chosen for implementation in CSOS, since they had the best performance over our database.

In order to evaluate the proposed solution, we developed a decision engine that depends on the J48 and JRIP classifiers for decision-making and a profiling system that transforms raw context elements into high-level context information at runtime. We then conducted experiments using three benchmarking applications that perform intensive computing. The first experiment compared the runtime, FPS, and energy consumption with CSOS enabled and disabled for favorable contexts, where remote execution is a better option than local execution. The second experiment is similar to the first, with the difference that it uses unfavorable contexts, where the opposite is true. The last experiment is similar to the previous two, with the difference that it uses unknown contexts, i.e., the authors do not know where the execution of code/data will require less computational effort (remotely or locally).

The results show that CSOS is highly effective compared to related works and baselines. CSOS achieved an accuracy of 95%, which in turn helped in taking the right offloading decisions, thus reducing energy consumption and improving runtime.

Currently, CSOS focuses on the binary classification of where offloading candidate codes are better processed, locally or remotely (the Yes and No classes). Our decision engine does not consider different cloud models when remote execution is better than
local execution. Thus, further work consists of increasing the spectrum of offloading opportunities by considering hybrid systems with cloudlet, D2D, and remote cloud [3]. Additionally, we intend to investigate a Multiple Classifier Systems (MCS) [39] approach to evolve CSOS and solve many real-world problems, such as ensuring the predictability of application performance, the management of multi-tasking environments, and the adaptation of workloads to multiple cloud models [40].

Acknowledgment

This work has been supported by the Coordination for the Improvement of Higher Education Personnel (CAPES) of the Brazilian Ministry of Education (MEC) under grant number 1443144.

References

[1] N. Fernando, S.W. Loke, W. Rahayu, Mobile cloud computing: A survey, Future Gener. Comput. Syst. 29 (1) (2013) 84–106, http://dx.doi.org/10.1016/j.future.2012.05.023.
[2] W. Junior, A. França, K. Dias, J.N. de Souza, Supporting mobility-aware computational offloading in mobile cloud environment, J. Netw. Comput. Appl. 94 (2017) 93–108, http://dx.doi.org/10.1016/j.jnca.2017.07.008.
[3] H. Flores, R. Sharma, D. Ferreira, V. Kostakos, J. Manner, S. Tarkoma, P. Hui, Y. Li, Social-aware hybrid mobile offloading, Pervasive Mob. Comput. 36 (2017) 25–43, http://dx.doi.org/10.1016/j.pmcj.2016.09.014.
[4] H. Flores, P. Hui, S. Tarkoma, Y. Li, S. Srirama, R. Buyya, Mobile code offloading: from concept to practice and beyond, IEEE Commun. Mag. 53 (3) (2015) 80–88, http://dx.doi.org/10.1109/MCOM.2015.7060486.
[5] C. Bettini, O. Brdiczka, K. Henricksen, J. Indulska, D. Nicklas, A. Ranganathan, D. Riboni, A survey of context modelling and reasoning techniques, Pervasive Mob. Comput. 6 (2) (2010) 161–180, http://dx.doi.org/10.1016/j.pmcj.2009.06.002.
[6] S. Kosta, A. Aucinas, P. Hui, R. Mortier, X. Zhang, ThinkAir: Dynamic resource allocation and parallel execution in the cloud for mobile code offloading, in: 2012 Proceedings IEEE INFOCOM, IEEE, 2012, pp. 945–953.
[7] A.U.R. Khan, M. Othman, A.N. Khan, S.A. Abid, S.A. Madani, MobiByte: An application development model for mobile cloud computing, J. Grid Comput. 13 (4) (2015) 605–628, http://dx.doi.org/10.1007/s10723-015-9335-x.
[8] A. Ferrari, S. Giordano, D. Puccinelli, Reducing your local footprint with anyrun computing, Comput. Commun. 81 (2016) 1–11, http://dx.doi.org/10.1016/j.comcom.2016.01.006.
[9] T.-Y. Lin, T.-A. Lin, C.-H. Hsu, C.-T. King, Context-aware decision engine for mobile cloud offloading, in: 2013 IEEE Wireless Communications and Networking Conference Workshops (WCNCW), IEEE, 2013, pp. 111–116, http://dx.doi.org/10.1109/WCNCW.2013.6533324.
[10] H. Eom, R. Figueiredo, H. Cai, Y. Zhang, G. Huang, MALMOS: Machine learning-based mobile offloading scheduler with online training, in: 2015 3rd IEEE International Conference on Mobile Cloud Computing, Services, and Engineering, IEEE, 2015, pp. 51–60, http://dx.doi.org/10.1109/MobileCloud.2015.19.
[11] Y. Kwon, H. Yi, D. Kwon, S. Yang, Y. Cho, Y. Paek, Precise execution offloading for applications with dynamic behavior in mobile cloud computing, Pervasive Mob. Comput. 27 (2016) 58–74, http://dx.doi.org/10.1016/j.pmcj.2015.10.001.
[12] S. Ghasemi-Falavarjani, M. Nematbakhsh, B. Shahgholi Ghahfarokhi, Context-aware multi-objective resource allocation in mobile cloud, Comput. Electr. Eng. 44 (2015) 218–240, http://dx.doi.org/10.1016/j.compeleceng.2015.02.006.
[13] B. Zhou, A. Vahid Dastjerdi, R. Calheiros, S. Srirama, R. Buyya, mCloud: A context-aware offloading framework for heterogeneous mobile cloud, IEEE Trans. Serv. Comput. PP (99) (2015) 1, http://dx.doi.org/10.1109/TSC.2015.2511002.
[14] A.A. Majeed, A.U.R. Khan, R. UlAmin, J. Muhammad, S. Ayub, Code offloading using support vector machine, in: 2016 Sixth International Conference on Innovative Computing Technology (INTECH), IEEE, 2016, pp. 98–103, http://dx.doi.org/10.1109/INTECH.2016.7845057.
[15] E. Cuervo, A. Balasubramanian, D.-k. Cho, A. Wolman, S. Saroiu, R. Chandra, P. Bahl, MAUI: Making smartphones last longer with code offload, in: 8th International Conference on Mobile Systems, Applications, and Services, 2010, pp. 49–62.
[16] H. Flores, S. Srirama, Adaptive code offloading for mobile cloud applications: Exploiting fuzzy sets and evidence-based learning, in: MCS '13, 2013, pp. 9–16, http://dx.doi.org/10.1145/2482981.2482984.
[17] P.A.L. Rego, E. Cheong, E.F. Coutinho, F.A.M. Trinta, M.Z. Hasan, J.N. de Souza, Decision tree-based approaches for handling offloading decisions and performing adaptive monitoring in MCC systems, in: 2017 5th IEEE International Conference on Mobile Cloud Computing, Services, and Engineering (MobileCloud), IEEE, 2017, pp. 74–81, http://dx.doi.org/10.1109/MobileCloud.2017.19.
[18] F.A. Nakahara, D.M. Beder, A context-aware and self-adaptive offloading decision support model for mobile cloud computing system, J. Ambient Intell. Humaniz. Comput. (2018) 1–12.
[19] H. Wu, Y. Sun, K. Wolter, Energy-efficient decision making for mobile cloud offloading, IEEE Trans. Cloud Comput. (2018).
[20] C.-L. Hwang, Y.-J. Lai, T.-Y. Liu, A new approach for multiple objective decision making, Comput. Oper. Res. 20 (8) (1993) 889–899.
[21] I.H. Witten, E. Frank, M.A. Hall, C.J. Pal, Data Mining: Practical Machine Learning Tools and Techniques, Morgan Kaufmann, 2016.
[22] C. Perera, A. Zaslavsky, P. Christen, D. Georgakopoulos, Context aware computing for the internet of things: A survey, IEEE Commun. Surv. Tutor. 16 (1) (2014) 414–454, http://dx.doi.org/10.1109/SURV.2013.042313.00197.
[23] S. Alizadeh, M. Ghazanfari, B. Teimorpour, Data Mining and Knowledge Discovery, Publication of Iran University of Science and Technology, 2011.
[24] C.-H. Weng, T.C.-K. Huang, R.-P. Han, Disease prediction with different types of neural network classifiers, Telemat. Inform. 33 (2) (2016) 277–292.
[25] M. Friedman, The use of ranks to avoid the assumption of normality implicit in the analysis of variance, J. Amer. Statist. Assoc. 32 (200) (1937) 675–701, http://dx.doi.org/10.1080/01621459.1937.10503522.
[26] M. Friedman, A comparison of alternative tests of significance for the problem of m rankings, Ann. Math. Stat. 11 (1) (1940) 86–92, http://www.jstor.org/stable/2235971.
[27] J. Demšar, Statistical comparisons of classifiers over multiple data sets, J. Mach. Learn. Res. 7 (2006) 1–30.
[28] P. Nemenyi, Distribution-Free Multiple Comparisons, 1963, https://books.google.com.br/books?id=nhDMtgAACAAJ.
[29] P. Viola, M. Jones, Rapid object detection using a boosted cascade of simple features, in: Proceedings of the 2001 IEEE Computer Society Conference on Computer Vision and Pattern Recognition (CVPR 2001), vol. 1, IEEE Comput. Soc., 2001, pp. I-511–I-518, http://dx.doi.org/10.1109/CVPR.2001.990517.
[30] P.A.L. Rego, P.B. Costa, E.F. Coutinho, L.S. Rocha, F.A.M. Trinta, J.N. de Souza, Performing computation offloading on multiple platforms, Comput. Commun. (2015) 1–13, http://dx.doi.org/10.1016/j.comcom.2016.07.017.
[31] M.A. Hoque, M. Siekkinen, K.N. Khan, Y. Xiao, S. Tarkoma, Modeling, profiling, and debugging the energy consumption of mobile devices, ACM Comput. Surv. 48 (3) (2015) 1–40, http://dx.doi.org/10.1145/2840723.
[32] iPerf, The ultimate speed test tool for TCP, UDP and SCTP, 2017, https://iperf.fr/.
[33] CpuRun, Tool to consume CPU resource by constant usage rate, 2017, https://play.google.com/store/apps/details?id=jp.gr.java_conf.toytech.cpurun&hl=pt_BR.
[34] CpuBurn, The ultimate stability testing tool for overclockers, 2017, https://patrickmn.com/projects/cpuburn/.
[35] H. Wu, Stochastic analysis of delayed mobile offloading in heterogeneous networks, IEEE Trans. Mob. Comput. 17 (2) (2018) 461–474.
[36] H. Wu, Q. Wang, K. Wolter, Tradeoff between performance improvement and energy saving in mobile cloud offloading systems, in: 2013 IEEE International Conference on Communications Workshops (ICC), IEEE, 2013, pp. 728–732.
[37] Z. Jiang, S. Mao, Energy delay tradeoff in cloud offloading for multi-core mobile devices, IEEE Access 3 (2015) 2306–2316.
[38] J.F. Hughes, J.D. Foley, Computer Graphics: Principles and Practice, Pearson Education, 2014.
[39] R.M. Cruz, R. Sabourin, G.D. Cavalcanti, Dynamic classifier selection: Recent advances and perspectives, Inf. Fusion 41 (2018) 195–216, http://dx.doi.org/10.1016/j.inffus.2017.09.010.
[40] A. Bhattacharya, P. De, A survey of adaptation techniques in computation offloading, J. Netw. Comput. Appl. 78 (2017) 97–115, http://dx.doi.org/10.1016/j.jnca.2016.10.023.

Warley M. Valente Junior received a Ph.D. degree in Computer Science from the Federal University of Pernambuco, Recife-Brazil, in 2018. Since 2018, he has been an adjunct professor at the Federal University of Southern and Southeastern Para. His current research interests are Software-Defined Networking, Mobile Edge Computing, Fog Computing, Mobile Cloud Computing and Machine-Learning.
Eduardo H.A. Maia Mattos Oliveira has been an undergraduate student in Computer Science at the Federal University of Pernambuco (UFPE), Recife-Brazil, since 2014. He is currently working on mobile applications as a Software Engineer at C.E.S.A.R, the Recife Center for Advanced Studies and Systems. His research interests include Fog Computing, Mobile Cloud Computing, Software Development and Machine-Learning.

Albertinin Mourato Santos is an undergraduate student in Computer Science at the Federal University of Pernambuco (UFPE), Recife-Brazil. Currently, he is working as a Software Engineer Intern at Liferay Inc., creating internal tools for 'We Deploy', a Platform as a Service (PaaS) belonging to Liferay Inc. His research interests are Mobile Cloud Computing, Machine-Learning and Mobile Development.

Kelvin Lopes Dias received a Ph.D. degree in Computer Science from the Federal University of Pernambuco, Recife-Brazil, in 2004. Since 2010, he has been an associate professor at the Informatics Center of the UFPE. His current research interests are Software-Defined Networking, Network Function Virtualization, Cognitive Radio, Mobile Edge Computing and Internet of Things.