
NETWORKS & SECURITY

Adaptive Application Offloading Decision and Transmission Scheduling for Mobile Cloud Computing
Junyi Wang 1,4, Jie Peng 3, Yanheng Wei 2, Didi Liu 3, Jielin Fu* 1
1 Guangxi Key Lab of Wireless Wideband Communication & Signal Processing, Guilin 541004, China
2 Shanghai Astronomical Observatory, Chinese Academy of Sciences, Shanghai 200000, China
3 School of Telecommunication Engineering, Xian University, Xian 710049, China
4 Sci. and Tech. on Info. Transmission and Dissemination in Communication Networks Lab., Shijiazhuang 050000, China
* The corresponding author, email: fujielin@gmail.com

Abstract: Offloading applications to the cloud can augment mobile devices' computation capabilities for the emerging resource-hungry mobile applications; however, it can also cost a mobile device much time and energy to offload an application remotely to the cloud. In this paper, we develop a new adaptive application offloading decision-transmission scheduling scheme which can solve the above problem efficiently. Specifically, we first propose an adaptive application offloading model which allows multiple target clouds to coexist. Second, based on Lyapunov optimization theory, a low-complexity adaptive offloading decision-transmission scheduling scheme is proposed, and its performance analysis is given. Finally, simulation results show that, compared with executing all applications locally, the mobile device can save 68.557% average execution time and 67.095% average energy consumption under the simulated situations.

Keywords: mobile cloud computing; application offloading decision; transmission scheduling scheme; Lyapunov optimization

Received: Nov. 17, 2015
Revised: May 16, 2016
Editor: Yong Cui

I. INTRODUCTION

As smart mobile devices are gaining enormous popularity, more and more new mobile applications, such as face recognition, large-scale 3D games and augmented reality, surge up and attract great attention [1, 2]. All these kinds of mobile applications are typically user-interactive and resource-hungry, thus requiring quick response and incurring high energy consumption. In recent years, advanced hardware technologies have been developed, but general mobile devices are still resource-constrained, having limited computation resources and limited battery lives due to their physical size constraint. The tension between resource-hungry applications and resource-constrained mobile devices hence poses a significant challenge for future mobile platform development [3].

With the rapid deployment of broadband wireless networks, the newly emerging mobile cloud computing [4] is envisioned as a promising solution to the above tension. Mobile cloud computing is introduced as an integration of cloud computing into the mobile environment, in which a mobile device can access clouds for resources by using wireless access technology together with Internet technology [5]. In general mobile cloud computing, there exist three parts: the end, the tube and the cloud [6]. As illustrated in Figure 1, vendors (such as Amazon Elastic Compute Cloud (EC2), Amazon Virtual Private Cloud

169 China Communications March 2017


(VPC), and PacHosting) provide computing resources (such as servers, storage, software and so on) for registered users. Thus, a mobile device can remotely offload applications which cannot run, or would generate much cost, on the mobile device to an appropriate cloud [7]. But unlike cloud computing in wired networks, the mobile-specific challenges of mobile cloud computing arise due to the unique characteristics of mobile networks, which have severe resource constraints (such as limited bandwidth and communication latency) and frequent variations [8]. When a large amount of data is transferred in a wireless network, the network delay may increase significantly and become intolerable. So an adaptive application offloading decision and wireless transmission scheduling have to be made for the mobile device to decide whether to offload an application task or not and which cloud to choose.

Fig. 1 A basic framework of the mobile cloud computing (the end: smart mobile device; the tube: 3G/4G/5G/Wi-Fi wireless access; the cloud: cloud servers with virtual machines (VMs), reached through the Internet)

Now there have been many useful research efforts dedicated to offloading data- or computation-intensive programs from a resource-poor mobile device to optimize application execution cost (energy and response time). For energy saving, [9] proposed a dynamic offloading algorithm based on Lyapunov optimization theory to identify offloading components, without considering the impact of the changeable wireless channel on the offloading algorithm. [10] investigated a theoretical framework and derived a threshold policy for energy-optimal mobile cloud computing under a stochastic wireless channel. The insufficiency of their works was the lack of performance evaluation on real applications with intensive computation. For response time saving, [11] implemented and evaluated mechanisms for offloading computationally intensive Java programs to remote servers by using Ibis middleware. Their methods could not be extended to other execution environments since their works were tied to a specific software package. To the best of our knowledge, only few works focused on saving energy and response time at the same time. Among them, [12] proposed an offloading framework which considered response time and energy consumption simultaneously and presented a numerical results analysis. However, the theoretical analysis of their offloading model was not given.

In this paper, building on all these researches, we develop an offloading model of good applicability which optimizes these two key performance indexes at the same time. Based on Lyapunov optimization, the offloading problem is solved and a performance analysis is made. Specifically, aiming at the stochastic input mobile application workloads which mainly depend on users' needs, we develop a customizable cost model which allows the mobile user to adjust the weights of application response time and energy consumption based on battery lifetime, wireless transmission channel, mobile CPU speed, cloud resources and so on. Application execution targets including multiple different clouds can provide different available cloud resources for the mobile device on any slot t, which improves the execution efficiency of real-time applications. In mobile cloud computing, a mobile device can access different clouds through different wireless access networks, such as 3/4/5G, Wi-Fi and so on. In conclusion, based on the customizable cost model, we propose an adaptive offloading policy (including offloading decision and transmission scheduling) to determine whether to



offload an application remotely and which cloud to choose for decreasing the application execution cost. The main results and contributions of this paper are summarized as follows:
• General offloading model formulation: We formulate an offloading model which considers stochastic applications (i.e., the input computation data size of an application is stochastic, and so is the computation size) under a general network environment, taking into account both response time and energy consumption of the application. Our offloading model has good universality and scalability.
• Adaptive offloading decision and transmission scheduling scheme: Based on Lyapunov optimization theory, we obtain an optimal offloading policy with low complexity. The performance analysis is also given.
• Application offloading reference: We devise several simulation experiments based on real network settings. The simulation results show that, compared with executing all applications on the local mobile device, the mobile device can save 68.557% average response time and 67.095% average energy consumption. Moreover, the statistical results of application offloading probability can provide a reference for real application offloading on a mobile device, which can reduce the overhead of the offloading policy itself.
The rest of this paper is organized as follows. Section II describes related works. Section III gives the problem statement and model formulation. Section IV details the implementation of dynamic offloading. Section V presents the performance analysis of the application offloading policy. Section VI shows the simulation results and numerical analysis. And Section VII concludes this paper.

II. RELATED WORKS

Many research works have investigated offloading data- or computation-intensive programs from resource-poor mobile devices. These researches aimed at different important factors (such as bandwidth, routing and so on) and different objectives (such as energy consumption, response time and so on). According to their research goals, these works can be classified into three categories.
1) Works on energy saving: Extending battery lifetime is one of the most crucial design objectives of mobile devices because of their limited battery capacity. [13] proposed a taxonomy study for energy-aware high performance computing; they built an energy model to approximate the energy consumption of offloading. [14] studied the feasibility of mobile computation offloading. They gave a feasibility evaluation method to evaluate the costs of both off-clones and back-clones in terms of bandwidth and energy consumption. [15] made an encoding scheme for mobile cloud computing networks, which could achieve lower energy consumption by lowering the power consumption of the CPU and the wireless network interface card. [16][17] aimed at studying novel routing methods for transmitting data effectively to optimize nodes' energy. All the above researches aimed at maximizing battery lifetime for mobile devices.
2) Works on response time saving: Responsiveness of an application is important, especially for real-time and user-interactive applications. [18] developed an offloading middleware, which provided a runtime offloading service to improve the responsiveness of the mobile device. [19] studied an exhaustive search algorithm to examine all possible application partitions in order to find an optimal offloading partition. All such partition methods perform well for small-size applications. [20] proposed an effective virtual machine placement algorithm to reduce cloud service response time under a wireless mesh environment. [21] studied the multi-user computation partitioning problem, which considered partitioning of multiple users' computations together with the scheduling of offloaded computations on cloud resources. [22]-[24] aimed at studying transmission mechanisms or protocols to solve application transfer problems for real-time applications. Under mobile cloud computing, fast and effective data transmission is important. All of these can achieve minimum average application completion time for the mobile device.
3) Works on energy and time saving: Energy consumption and response time are two key performance indexes of mobile applications. However, only few works have addressed these two targets simultaneously. [25] implemented a framework named ThinkAir, which could let developers simply migrate their application workloads to the cloud. [26] proposed a game theoretic approach to achieve efficient computation offloading for optimizing execution costs.
All these works studied energy consumption and response time for mobile cloud computing in different ways, and they provide many important references for our work. Based on all these works, we provide our own adaptive application offloading model from other aspects.

III. MODEL FORMULATION

In this paper, we investigate an adaptive application offloading policy under mobile cloud computing, where the mobile device can offload compute-intensive applications to one of multiple available clouds through various available wireless networks. The key characteristics of the offloading model are described as follows, and the offloading model is also formulated as follows.

3.1 Network description

The network scenario is illustrated in Figure 2, which contains one mobile device and multiple target clouds. To simplify the offloading model, we slot the system into fixed-size time slots. We assume that the 3/4/5G network is available at all locations, while the availability of the Wi-Fi network depends on the location. The data transmission rate of each wireless network changes over different slots. When there is a request for application execution on the mobile device on slot t, in order to improve user experience and reduce energy consumption simultaneously, the controller on the mobile device needs to determine whether or not to offload the application to a cloud based on the current network, the application itself, the available cloud resources and so on. Let {1, 2, ..., N} be the set of clouds. If there is more than one available cloud on slot t, the controller needs to further choose the best one among all available clouds. An application with a small input computation data size (i.e., the program codes and data) and a high computation size (i.e., the total number of CPU cycles) suits the offloading model better. Application component partitioning is not considered.

3.2 Offloading model

For the convenience of presentation, some parameters are listed in Table I.

3.2.1 Application generation and offloading decision

Application requests for the mobile device are stochastic, mainly depending on the demand of the mobile user. The vector [D(t), S(t)] is used to describe application characteristics on slot t. Different applications have different characteristics. For any slot t, D(t) and S(t) are both non-negative integers and satisfy 0 <= D(t) <= D_max and 0 <= S(t) <= S_max, respectively, where D_max and S_max are both finite.

Fig. 2 Network operation topology for application offloading
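As a concrete illustration of the slotted model above, the short Python sketch below draws i.i.d. application characteristics every slot (the input data size D(t), in bits) and evolves one cloud's input queue with a serve-then-arrive update, matching the assumption made later in Section 3.2.2 that newly arriving data cannot be transmitted in the slot it arrives. This is a toy sketch, not the authors' simulator: the coin-flip offloading decision, the uniform distributions and all numeric bounds are illustrative assumptions.

```python
import random

def queue_update(q, arrived_bits, served_bits):
    """One-slot input-queue dynamic: pending data is served first,
    then newly arrived data joins the backlog (it cannot be sent
    in the slot it arrives)."""
    return max(q - served_bits, 0) + arrived_bits

random.seed(0)
q = 0
backlog_sum = 0
slots = 10_000
for t in range(slots):
    d = random.randint(0, 200_000)         # D(t): bounded input data size in bits (assumed)
    a = 1 if random.random() < 0.5 else 0  # a(t): placeholder offloading decision
    b = random.randint(50_000, 150_000)    # b(t): bounded transmittable bits per slot (assumed)
    q = queue_update(q, a * d, b)
    backlog_sum += q

# Strong stability (Definition 1 later in the paper) requires the
# time-averaged backlog to stay finite; here the mean service rate
# (about 100 kb/slot) exceeds the mean arrival rate (about 50 kb/slot),
# so the running average settles at a small finite value.
print(backlog_sum / slots)
```

Replacing the coin-flip decision with a rule that weighs the queue backlog against the estimated execution cost is exactly what the Lyapunov-based policy of Section IV does.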



Table I Notation table

Factor | Unit | Definition
D(t) | bit | The input computation data size of application (i.e., program and data)
S(t) | MHz | The computation size (i.e., the number of CPU processing cycles)
cpu | MHz | Mobile device's CPU speed (i.e., CPU running cycles per second)
P_idle | Watt | Idle power of mobile device
P_active | Watt | Running power of mobile device
P_trans | Watt | Transmitting power of mobile device
u_cld,n(t) | MHz | Computation capability (i.e., CPU running cycles per second) provided by cloud n on slot t
u_cld,n(max) | MHz | The max computation capability of cloud n

We assume that the vectors [D(t), S(t)] are independent and identically distributed (i.i.d.) over slots and may have a joint probability distribution that is arbitrary. This distribution is not necessarily known. As for an application request, let a_n(t) in {0, 1} be the execution indicator variable on slot t, which determines whether or not to offload the application to cloud n. If a_n(t) = 1, the application is offloaded remotely to cloud n. On the other hand, if a_n(t) = 0 for all n, the application is executed on the local mobile device. Let a(t) = [a_1(t), a_2(t), ..., a_N(t)] represent the decision vector of the mobile device on slot t. According to the assumption that an application which needs to be offloaded can only be offloaded to one cloud during one application request, a(t) must satisfy sum_{n=1..N} a_n(t) <= 1.

3.2.2 Wireless transmission link

If the controller of the mobile device decides to offload the application to cloud n on slot t, it puts all input computation data of the application into the input queue Q_n(t) at the network layer. Then the mobile device transmits all the computation input data to the cloud through the wireless access network and the Internet, as shown in Figure 2. Let W(t) be the vector of traffic load and state of the currently available wireless access networks on slot t. W(t) is assumed to be i.i.d. over slots and may have a probability distribution that is arbitrary. Define b(t) = [b_1(t), ..., b_N(t)] as the transmission vector, with b_n(t) the amount of data that can be transmitted by the wireless link to cloud n on slot t, which is determined by W(t). b_n(t) is assumed to restrict all transmissions to non-negative and bounded rates, so 0 <= b_n(t) <= b_max holds for all n and all t. Additional structure of b(t) can be imposed to model the physical transmission capabilities of the wireless access network.

Thus, the dynamic of the input queue Q_n(t) is defined as:

Q_n(t+1) = max[Q_n(t) - b_n(t), 0] + a_n(t)D(t)  (1)

which assumes that newly arriving data cannot be transmitted on slot t.

3.3 Problem formulation

On slot t, the mobile device first needs to estimate the application execution cost (including response time and execution energy). According to the application offloading decision, the execution time is given by:

T(t) = sum_{n=1..N} a_n(t)[t_trans,n(t) + t_cld,n(t)] + (1 - sum_{n=1..N} a_n(t)) t_loc(t)  (2)

where t_trans,n(t) = D(t)/b_n(t) is the estimated transmission time for offloading the application to cloud n on slot t, t_cld,n(t) = S(t)/u_cld,n(t) is the estimated execution time for cloud n running the application, and t_loc(t) = S(t)/cpu is the estimated execution time for the local mobile device executing the application. The first part on the right-hand side of (2) is the offloading response time (including transmission time and running time), and the second part is the response time of executing locally. Specifically, according to the assumption that one application which needs to be offloaded can only be offloaded to one cloud, based on the value of vector a(t) the response time is given by either the first part of (2) if offloading, or the second part of (2) if executing locally. Here we neglect the propagation delay of the input codes and data. And we also do not consider the time spent transmitting the result data from the cloud back to the mobile device, because the size of data returning from the cloud is always



insignificant compared with the amount of raw input data and code of the application.

Based on (2), the energy consumption E(t) is given as follows:

E(t) = sum_{n=1..N} a_n(t)[e_trans,n(t) + e_cld,n(t)] + (1 - sum_{n=1..N} a_n(t)) e_loc(t)  (3)

where e_trans,n(t) = P_trans * t_trans,n(t) is the estimated transmission energy for offloading the application to cloud n on slot t, e_cld,n(t) = P_idle * t_cld,n(t) is the estimated execution energy when running on cloud n, and e_loc(t) = P_active * t_loc(t) is the estimated execution energy when executing on the local mobile device.

Due to the different units of response time and energy consumption, a naive cost function that sums up response time and energy consumption is not appropriate for our problem. In this paper, we normalize T(t) and E(t) as follows:

(4)

(5)

T'(t) and E'(t) are the normalized representations of response time and energy consumption, respectively.

Then, based on (4) and (5), we give a composite cost function:

cost(t) = y T'(t) + (1 - y) E'(t)  (6)

where y is a user-specified variable in the range of [0, 1], which is used for differentiating the importance of response time and energy consumption of the application. If y = 0, it implies that the energy consumption is the only criterion in determining the offloading decision and the offloading target cloud. On the other hand, if y = 1, the response time is the only criterion. cost(t) is upper bounded by a finite constant cost_max.

Define c' as the time-averaged expected execution cost:

c' = lim sup_{T->inf} (1/T) sum_{t=0..T-1} E[cost(t)]

Based on the problem description and modeling above, the objective is to solve:

minimize c', subject to all queues Q_n(t) being strongly stable  (7)

where we use the following stability definition [27]:
Definition 1. Queue Q(t) is strongly stable if

lim sup_{T->inf} (1/T) sum_{t=0..T-1} E[Q(t)] < inf

Intuitively, this means that a queue is strongly stable if its average backlog is finite.

IV. DYNAMIC APPLICATION OFFLOADING POLICY

The above problem (7) can be cast as a stochastic network optimization. Early work [9] used Lyapunov optimization theory to show that queueing naturally fits the related application offloading problem. Rather than trying to solve problem (7) directly (which requires a priori knowledge of the application arrivals, queue backlogs and so on), we use Lyapunov optimization theory to solve (7) with only the knowledge of the current network state and queue backlogs [28]. This reduces the complexity of solving the offloading problem [9].

4.1 Lyapunov optimization

Let Theta(t) = [Q_1(t), ..., Q_N(t)] represent a concatenated vector of all queues. The quadratic Lyapunov function is:

L(Theta(t)) = (1/2) sum_{n=1..N} Q_n(t)^2  (8)

Define Delta(Theta(t)) = E[L(Theta(t+1)) - L(Theta(t)) | Theta(t)] as the Lyapunov drift, then use (1) to calculate the upper bound of the Lyapunov drift function:

Delta(Theta(t)) <= B + sum_{n=1..N} Q_n(t) E[a_n(t)D(t) - b_n(t) | Theta(t)]  (9)

where in the final inequality we have used the fact that for any Q >= 0, b >= 0 and a >= 0, we



have:

(max[Q - b, 0] + a)^2 <= Q^2 + a^2 + b^2 + 2Q(a - b)

And the constant B in (9) is calculated as follows:

B = (1/2) sum_{n=1..N} E[a_n(t)^2 D(t)^2 + b_n(t)^2]

which is finite because D(t) and b_n(t) are bounded.

Based on Lyapunov theory, in order to minimize c' in (7), the drift-plus-penalty function Delta(Theta(t)) + V E[cost(t) | Theta(t)] is considered. From (9), the drift-plus-penalty expression satisfies:

Delta(Theta(t)) + V E[cost(t) | Theta(t)] <= B + V E[cost(t) | Theta(t)] + sum_{n=1..N} Q_n(t) E[a_n(t)D(t) - b_n(t) | Theta(t)]  (10)

In (10), V > 0 is a constant that determines a tradeoff between queue size and execution cost.
Then, based on the expression of cost(t) and (10), we have:

(11)

Every slot t, our adaptive offloading policy is designed to observe D(t), S(t), the queue backlogs Q_n(t) and the transmission rates b_n(t), and then make the application offloading decision and transmission scheduling that minimize the right-hand side of expression (11). That is, the application offloading decision and transmission scheduling are made to solve the following problem:

(12)

where we omit the constants on slot t which have nothing to do with the decision variables a_n(t), such as B and V.

4.2 Offloading decision

Due to the fact that all target clouds are independent, we can further simplify (12) by exploiting its separable structure as follows. Every slot t, if any application occurs, the application offloading policy observes D(t), S(t), Q_n(t) and b_n(t) for each cloud n. Based on all these observed values, the application offloading policy makes the offloading decision for every cloud:

(13)

For convenience of description, we define a shorthand for the per-cloud term of (13). Based on the observed values, (13) can be solved. We can choose a_n(t):

(14)

Now we obtain all suitable target clouds, i.e., a(t). But sum_{n} a_n(t) <= 1 does not always hold here. So the mobile device needs to choose the best one among all fit target clouds.

4.3 Transmission scheduling

After making the offloading decision, the vector a(t) is obtained. In other words, we obtain all possible target clouds. If sum_{n} a_n(t) > 1, it means that the mobile device has more than one appropriate target cloud to offload to on slot t. Based on Q_n(t) and b_n(t), we make a further discussion to choose the best one:



(15)

Then we choose the best cloud, which satisfies (15). Otherwise, if sum_{n} a_n(t) = 0, it means the application is executed locally on slot t. If sum_{n} a_n(t) = 1, it means that the mobile device can offload the application to the only suitable cloud n, which satisfies a_n(t) = 1, on slot t.
In conclusion, based on Lyapunov theory, there is no need to obtain future or historical network state values, which may bring delay and energy consumption. And the solving complexity just depends on the structure of the decision space (the set of all possible decision variables). The offloading decision set in our paper contains a finite number of possible control actions, so the offloading policy can simply evaluate the problem over each decision option and choose the best one. The Lyapunov technique has low complexity for this kind of problem.

V. PERFORMANCE ANALYSIS

Based on Lyapunov optimization theory, the following theorem details the performance analysis of the adaptive application offloading policy.
Theorem 1 (Performance of Offloading Policy). Assume queues are initially empty, so that Q_n(0) = 0 for all n. For any control parameter V > 0, the solution derived by our dynamic offloading policy stabilizes the system and satisfies the following inequalities:
1) The time average expected cost satisfies:

c' = lim sup_{T->inf} (1/T) sum_{t=0..T-1} E[cost(t)] <= c* + B/V  (16)

where c* is the optimal solution of problem (7) over all control policies that satisfy all required constraints, and the constant B is derived from (9).
2) Supposing there are e > 0 and c(e) for which the Slater condition holds [28], stated below in (22)-(23), then:

lim sup_{T->inf} (1/T) sum_{t=0..T-1} sum_{n=1..N} E[Q_n(t)] <= (B + V(c(e) - c*)) / e  (17)

Proof. The proof of Theorem 1 follows [28]. Based on the w-only policy in [28], for any d > 0 there exists a stationary, randomized offloading policy that chooses feasible controlled variables to yield the following steady-state values:

(18)

(19)

The value of d is based on the chosen randomized policy. (19) ensures that the system is stable, so feasible solutions exist.
Substituting this offloading policy into (10) and taking expectations gives:

(20)

where we used the fact that a_n(t) and Q_n(t) are independent under this offloading policy. Then using (18)-(19) in the right-hand side of (20) gives:

This inequality is valid for any d > 0. Therefore:

Summing from t = 0 to T - 1:

Using L(Theta(0)) = 0 and dividing by VT gives:

(21)

which proves (16) by taking a limit supremum as T -> inf.
We additionally assume all constraints of the network can be achieved based on slackness [28], which implies the feasibility of the offloading model. In other words, there are values e > 0 and c(e), and an offloading policy choosing decision variables that satisfy:

(22)


(23)

(23) shows that there exists a policy which makes the average transmission rate higher than the average arrival rate for every queue in the network. It ensures network stability.
Using (22)-(23) in the right-hand side of (20) gives:

Thus:

Summing from t = 0 to T - 1:

Using L(Theta(0)) = 0 and E[L(Theta(T))] >= 0, dividing by eT and rearranging terms gives:

(24)

However, because our algorithm satisfies all of the desired constraints of the optimization problem (7), its limiting time average expectation for cost(t) cannot be better than c*:

(25)

Taking a limit supremum of (24) as T -> inf and using (25) yields:

(26)

which proves (17).
In conclusion, Theorem 1 characterizes the performance of our dynamic offloading. For any parameter V > 0, the time average expected penalty (i.e., execution cost) satisfies (16) and hence is either less than the target value c*, or differs from c* by no more than a fudge factor B/V, which can be made arbitrarily small as V is increased. (17) indicates that all queues are strongly stable. However, the time average queue backlog bound increases linearly in the parameter V. Thus, this presents a cost-backlog tradeoff of [O(1/V), O(V)]. On the other hand, it also can be seen that our offloading policy can maintain the stability of the queues while driving the time average cost toward its exact optimum value. All of these confirm the effectiveness of our application offloading policy. In the next section, we will further verify our offloading policy by simulation.

VI. SIMULATION RESULTS

In this section, we conducted several experiments on OMNeT++ to evaluate our offloading policy. The simulation settings are as follows. There are one mobile device and two remote clouds in the wireless network scenario, which is similar to Figure 2. The mobile device is equipped with one 1-GHz processor and a Wi-Fi IEEE 802.11 b/g interface (taking the HTC Nexus One as an example). Its idle power, running power, and transmitting power are 0.886 W, 1.539 W, and 2.262 W, respectively. The cloud is simulated by a resource pool [12]. Cloud 0 contains 1000 concurrent servers in which every server has a 3.9-GHz Intel processor; cloud 1 contains 10 concurrent servers in which every server has a 1.8-GHz Intel processor. As for the wireless access network, 3G and Wi-Fi networks are available in the whole network. To distinguish wireless 3G and Wi-Fi, the mobile device offloads applications to cloud 0 only through the 3G network, and to cloud 1 through the Wi-Fi network. The orders of magnitude of the transmission rates for 3G and Wi-Fi are 10 Kb/s and 10^3 Kb/s, respectively [9]. And the real-time application generated by the mobile device is considered as a random process. Its input computation data size and computation size are 10^2 KB and 10 GHz in terms of order of magnitude, respectively. The control weights are V = 0.98 and y = 0.55. However, we adopt these settings just for exposition purposes. The analysis in the previous section does not depend on these settings.

6.1 Performance evaluation of adaptive offloading policy

To gain insight on our offloading policy,


we compare our offloading policy with a scheme in which all applications are executed on the local mobile device (i.e., no offloading and no scheduling for any application). For the convenience of description, the L scheme is used to represent this scheme.
We first show the mobile device's execution cost comparison between the offloading policy and the L scheme. Figure 3(a) and Figure 3(b) respectively show the comparison of time-averaged energy consumption and response time between the offloading policy and the L scheme. It demonstrates that the offloading policy helps to save about 68.557% average response time and 67.095% average energy, which proves that the offloading policy is effective.

Fig. 3 Comparison of time-average execution cost between our offloading and the L scheme: (a) execution energy; (b) response time

On the other hand, we also use Figure 4(a) and Figure 4(b) to respectively compare the time-accumulated energy consumption and response time between the offloading policy and the L scheme. It clearly shows that our offloading policy can extend the battery lifetime of the mobile device and improve the mobile user experience at the same time. Conclusively, our application offloading is indeed helpful for the mobile device, which has limited resources, to run sophisticated and real-time applications.

Fig. 4 Comparison of time-accumulated execution cost between our offloading and the L scheme: (a) execution energy; (b) response time

We next show the impact of the offloading policy on queue backlogs under the above wireless network scenario. On every slot t, the input computation data of an application which needs to be uploaded to cloud 0 is cached in queue 0. Similarly, the input computation data of an application which needs to be uploaded to cloud 1 is cached in queue 1. Figure 5(a) and Figure 5(b) reveal the dynamics of the queue backlogs in queue 0 and queue 1, respectively. Both queues' lengths are finite on every slot, which indicates that the queues are stable (i.e., network stability). Meanwhile, simulation statistics show that the total number of applications offloaded to cloud 0 is 26 and to cloud 1 is 814, which means most applications are sent through the Wi-Fi network. It indicates that the input computation data migration cost between the mobile device and the clouds caused by wireless communication dominates the whole execution cost. All of these further imply the effectiveness of our offloading decision algorithm and transmission scheduling scheme.

6.2 Offloading reference for various kinds of application

The above simulation numerical results demonstrate that our offloading policy is effective for the kind of application whose average offloading probability is 85.54% (it comes from



the above simulation). To investigate the impact of general applications on the adaptive offloading policy, we next implement simulations for different applications. The simulation results are summarized in Table II. For the convenience of exposition, some abbreviations are used in Table II: OMDS represents the Order of Magnitude of Data Size; OMCS represents the Order of Magnitude of Computation Size; OP represents the Offloading Probability of application.

Fig. 5 Queue backlogs on every slot t under our offloading policy: (a) backlogs of queue 0; (b) backlogs of queue 1

Table II Offloading probability (OP, %) for all kinds of applications under our network environment

OMCS \ OMDS | 10^1 KB | 10^2 KB | 10^3 KB | 10^4 KB
10^0 MHz | 0.86 | 0 | 0 | 0
10^1 MHz | 6.75 | 0.86 | 0 | 0
10^2 MHz | 54.03 | 6.75 | 0.86 | 0
10^3 MHz | 85.54 | 54.03 | 6.75 | 0.86
10^4 MHz | 97.62 | 85.54 | 54.03 | 6.75
10^5 MHz | 99.06 | 97.62 | 85.54 | 54.03
10^6 MHz | 99.38 | 99.06 | 97.62 | 85.54
10^7 MHz | 99.38 | 99.38 | 99.06 | 97.62
10^8 MHz | 99.38 | 99.38 | 99.38 | 99.06

From Table II, we can draw three conclusions. (1) Fixing the input computation data size of an application, the offloading probability increases with the computation size. This is because as the computation size increases, the mobile device would rather utilize cloud computing via computation offloading to mitigate the heavy cost of local computing. (2) Fixing the computation size of an application, the offloading probability decreases with the increasing size of the input computation data, due to the fact that a larger data size requires higher overhead for computation offloading via wireless communication. (3) When the ratio of the computation size to the input computation data size is no less than 10^4 (the offloading probability is 54.03%), the proposed offloading policy works well, because a low offloading probability indicates a poor offloading policy according to the previous model analysis.
In fact, all these statistical results can help the offloading software implementation for the mobile device and can reduce the running overhead of offloading itself at the same time: when programmers design an offloading program, they can put statistics like those of Table II, collected under different network environments, into the program. In this way, when a mobile device needs to deal with a real-time application, the table statistics can provide a reference factor for application offloading, reducing the overhead of the application offloading itself.

VII. CONCLUSIONS

Saving energy and quick response in mobile computing are becoming increasingly important due to the challenge of running computationally intensive applications on resource-constrained mobile devices. In this paper, we have designed and implemented an adaptive application offloading policy. Unlike previous works, which consider only one target cloud, our offloading policy is suitable for multiple offloading target clouds. And the numerical results show that the mobile device can save about 68.557% average response time and 67.095% average energy consumption.
Our future work includes three aspects. First, we will implement an offloading policy

179 China Communications March 2017


among multiple mobile devices and target clouds in mobile cloud computing. Second, we plan to investigate application partitioning schemes for some special applications. Finally, we will study the overhead caused by the offloading policy itself.

ACKNOWLEDGEMENTS

The authors would like to thank the reviewers for their detailed reviews and constructive comments, which have helped to improve the quality of this paper. This work was supported by the National Natural Science Foundation of China (Grant No. 61261017, No. 61571143 and No. 61561014); the Guangxi Natural Science Foundation (2013GXNSFAA019334 and 2014GXNSFAA118387); the Key Laboratory of Cognitive Radio and Information Processing, Ministry of Education (No. CRKL150112); the Guangxi Key Lab of Wireless Wideband Communication & Signal Processing (GXKL0614202, GXKL0614101 and GXKL061501); the Sci. and Tech. on Info. Transmission and Dissemination in Communication Networks Lab (No. ITD-U14008/KX142600015); and the Graduate Student Research Innovation Project of Guilin University of Electronic Technology (YJCXS201523).

References
[1] T. Soyata, R. Muraleedharan, C. Funai, et al., Cloud-vision: Real-time face recognition using a mobile-cloudlet-cloud acceleration architecture, Proceedings of the Symposium on Computers and Communications, pp. 59-66, 2012.
[2] J. Cohen, Embedded speech recognition applications in mobile phones: Status, trends, and challenges, Proceedings of the IEEE International Conference on Acoustics, Speech and Signal Processing, pp. 5352-5355, 2008.
[3] E. Cuervo, A. Balasubramanian, D. Cho, et al., MAUI: making smartphones last longer with code offload, Proceedings of the 8th International Conference on Mobile Systems, Applications and Services, pp. 49-62, 2010.
[4] C. Gallen, Mobile cloud computing subscribers to total nearly one billion by 2014, http://www.abiresearche.com/press/1484-Mobile+Cloud+Computing+Subscribers+to+Total+Nearly+One+Billion+by+2014.
[5] L. Lei, Z. D. Zhong, K. Zheng, et al., Challenges on wireless heterogeneous networks for mobile cloud computing, IEEE Wireless Communications, vol. 20, no. 3, pp. 34-44, 2013.
[6] J. Zhao, How to embrace mobile cloud computing for operators, http://www.cnii.com.cn/incloud/2015-02/12/content_1533469.htm.
[7] Y. Lu, S. P. Li, H. F. Shen, et al., Virtualized screen: A third element for cloud-mobile convergence, IEEE MultiMedia, vol. 18, no. 2, pp. 4-11, 2011.
[8] H. T. Dinh, C. Lee, D. Niyato, et al., A survey of mobile cloud computing: Architecture, applications, and approaches, Wireless Communications and Mobile Computing, vol. 13, no. 18, pp. 1587-1611, 2013.
[9] D. Huang, P. Wang, D. Niyato, A dynamic offloading algorithm for mobile computing, IEEE Transactions on Wireless Communications, vol. 11, no. 6, pp. 1991-1995, 2012.
[10] W. W. Zhang, Y. G. Wen, K. Guan, et al., Energy-optimal mobile cloud computing under stochastic wireless channel, IEEE Transactions on Wireless Communications, vol. 12, no. 9, pp. 4569-4581, 2013.
[11] R. Kemp, N. Palmer, T. Kielmann, et al., Eyedentify: Multimedia cyber foraging from a smartphone, Proceedings of the 11th IEEE International Symposium on Multimedia, pp. 14-16, 2009.
[12] Y. D. Lin, T. Edward, Y. C. Lai, et al., Time-and-energy aware computation offloading in handheld devices to coprocessors and clouds, IEEE Systems Journal, vol. 9, no. 2, pp. 393-405, 2015.
[13] C. Cai, L. Wang, S. U. Khan, et al., Energy-aware high performance computing: A taxonomy study, Proceedings of the 17th IEEE International Conference on Parallel and Distributed Systems, pp. 7-9, 2011.
[14] M. V. Barbera, S. Kosta, A. Mei, et al., To offload or not to offload? The bandwidth and energy costs of mobile cloud computing, Proceedings of IEEE INFOCOM, pp. 14-19, 2013.
[15] E. Boyun, L. Hyunwoo, L. Choonhwa, Power-aware remote display protocol for mobile cloud, Proceedings of the International Conference on Information Science and Applications (ICISA), pp. 1-3, 2014.
[16] D. G. Zhang, K. Zheng, T. Zhang, A novel multicast routing method with minimum transmission for WSN of cloud computing service, Soft Computing, vol. 19, no. 7, pp. 1817-1827, 2015.
[17] D. Zhang, G. Li, K. Zheng, An energy-balanced routing method based on forward-aware factor for wireless sensor networks, IEEE Transactions on Industrial Informatics, vol. 10, no. 1, pp. 766-773, 2014.
[18] S. Ou, K. Yang, J. Zhang, An effective offloading middleware for pervasive services on mobile devices, Pervasive and Mobile Computing, vol. 3, no. 4, pp. 362-385, 2007.
[19] I. Giurgiu, O. Riva, I. Krivulev, et al., Calling the cloud: Enabling mobile phones as interfaces to cloud applications, Proceedings of the 10th ACM/IFIP/USENIX International Conference on Middleware, pp. 1-20, 2009.
[20] D. Chang, G. Xu, L. Hu, et al., A network-aware virtual machine placement algorithm in mobile cloud computing environment, Proceedings of IEEE Wireless Communications and Networking Conference Workshops (WCNCW), pp. 117-122, 2013.
[21] L. Yang, J. Cao, H. Cheng, et al., Multi-user computation partitioning for latency sensitive mobile cloud applications, IEEE Transactions on Computers, vol. 64, no. 8, pp. 2253-2266, 2015.
[22] D. Zhang, X. Song, X. Wang, New agent-based proactive migration method and system for big data environment (BDE), Engineering Computations, vol. 32, no. 8, pp. 2443-2466, 2015.
[23] D. Zhang, K. Zheng, D. Zhao, Novel quick start (QS) method for optimization of TCP, Wireless Networks, vol. 22, no. 1, pp. 211-222, 2016.
[24] D. Zhang, A new approach and system for attentive mobile learning based on seamless migration, Applied Intelligence, vol. 36, no. 1, pp. 75-89, 2012.
[25] S. Kosta, A. Aucinas, P. Hui, et al., ThinkAir: Dynamic resource allocation and parallel execution in the cloud for mobile code offloading, Proceedings of IEEE INFOCOM, pp. 945-953, 2012.
[26] X. Chen, Decentralized computation offloading game for mobile cloud computing, IEEE Transactions on Parallel and Distributed Systems, vol. 26, no. 4, pp. 974-983, 2014.
[27] L. Georgiadis, M. Neely, L. Tassiulas, Resource Allocation and Cross-layer Control in Wireless Networks, Foundations and Trends in Networking, 2006.
[28] M. Neely, Stochastic Network Optimization with Application to Communication and Queueing Systems, Morgan & Claypool, 2010.

Biographies

Junyi Wang, received his M.S. degree in fundamental mathematics from Xiangtan University, and the Ph.D. degree in signal and information processing from Beijing University of Posts and Telecommunications, Beijing, China, in 2003 and 2008, respectively. He is currently an associate professor with the Academy of Information and Communication, Guilin University of Electronic Technology. His current research interests include stochastic network optimization and network coding. Email: wangjy@guet.edu.cn.

Jie Peng, is currently pursuing the Ph.D. degree with the School of Telecommunication Engineering, Xidian University, Xi'an. Her research interests include mobile and wireless networks, resource allocation in mobile cloud computing, and stochastic network optimization. Email: pjie86668@163.com.

Yanheng Wei, received his M.S. degree in electronics and communication engineering from Guilin University of Electronic Technology in 2014. His research interests include stochastic network optimization and network coding.

Didi Liu, is an associate professor with the College of Electronic Engineering, Guangxi Normal University. She is currently pursuing the Ph.D. degree with the School of Telecommunication Engineering, Xidian University, Xi'an. Her research interests include stochastic network optimization and signal processing.

Jielin Fu, received the B.S. degree in communication engineering from Beijing Jiaotong University, Beijing, China, in 1996, and the M.S. degree in communication engineering from Guilin University of Electronic Technology, Guilin, China, in 2001. He is currently an associate professor with the Academy of Information and Communication, Guilin University of Electronic Technology. His research interests focus on wireless and broadband communication networks. Email: fujielin@gmail.com.
