Article history:
Received 22 March 2017
Received in revised form 25 August 2017
Accepted 22 October 2017
Available online 2 November 2017

Keywords:
Computation offloading
Energy saving
Task filtering
Completion time constraint
Mobile cloud computing

Abstract

Executing sophisticated applications on resource-constrained mobile devices quickly exhausts the device battery. Mobile cloud computing (MCC) is therefore regarded as an energy-effective approach: by offloading tasks from the mobile device to the resource-rich cloud, it can not only save energy for mobile devices but also prolong battery operation time. However, it remains a challenging issue to coordinate task offloading among mobile devices while obtaining offloading results quickly. In this paper, we propose an agent-based MCC framework that enables the device to receive offloading results faster by making the offloading decision on the agent. Moreover, to obtain an offloading strategy, we formulate the problem of maximizing energy savings among multiple users under completion time and bandwidth constraints. To solve the optimization problem, we propose a Dynamic Programming After Filtering (DPAF) algorithm. In the algorithm, the original offloading problem is first transformed into the classic 0–1 Knapsack problem by the filtering process on the agent; a dynamic programming algorithm then finds an optimal offloading strategy. Simulation results show that the framework obtains responses from the agent more quickly than other schemes and that the DPAF algorithm outperforms other solutions in energy saving.

© 2017 Elsevier B.V. All rights reserved.
https://doi.org/10.1016/j.future.2017.10.034
Z. Kuang et al. / Future Generation Computer Systems 81 (2018) 166–176 167
battery. Some works [4–6] focused on virtual machine migration and showed that such migration can save energy. The works in [7–9] were dedicated to fine-grained granularity. These works indicate that mobile devices can benefit from offloading.

Although mobile devices can take advantage of cloud computing to alleviate computation workload and prolong battery lifetime, it remains challenging to coordinate offloading tasks among mobile devices and achieve quick response for mobile users at the same time. In most existing frameworks [4,7,10,11], mobile devices send computation offloading requests to the cloud, and the offloading decision made by the cloud is then sent back to the mobile devices. However, mobile users waste a long time waiting for the offloading decision from the cloud without considering whether they can benefit from offloading. Invalid requests in particular not only suffer offloading failures from the cloud but also prolong the execution time of the tasks. Besides, most existing works consider either the completion time constraint [7,12] or the bandwidth constraint [13,14] to achieve energy savings. However, it is necessary to take both constraints into consideration to reduce energy consumption for all users. Therefore, we consider two issues in our work: (1) How to design a framework that avoids waiting a long time to retrieve the reply to a computation offloading request? (2) How to make an efficient computation offloading strategy among mobile devices under the constraints of completion time and limited bandwidth?

To address the above issues, in this paper we propose an agent-based quick-response computation offloading framework to ensure that the mobile user can get the response to the offloading request more quickly. In addition, we propose the Dynamic Programming After Filtering (DPAF) algorithm to solve the multi-user computation offloading problem.

The framework consists of a distant cloud, agents and multiple mobile devices. In the proposed framework, we adopt request filtering on both the device and the agent so as to effectively achieve quick response. Our proposed framework has the following advantages. First, mobile devices only send beneficial offloading requests to the agent, which not only avoids the energy consumption of invalid offloading requests but also reduces the computation workload on the agent and alleviates the communication burden of the network. Second, compared with sending computation offloading requests to the remote cloud, mobile devices retrieve the offloading decision more quickly and shorten the waiting time for the offloading result.

More specifically, our proposed framework aims to provide an optimal offloading strategy that achieves maximum multi-user energy saving. To that end, we first formulate the computation offloading problem as an optimization problem. Then, we convert the original optimization problem into the classic 0–1 Knapsack problem by employing a filtering process based on the task completion time constraint. The main contributions of this paper can be summarized below.

• First, we propose an agent-based computation offloading framework aiming to shorten the delay of the computation offloading request for mobile users, alleviate communication overhead in the network and avoid the energy consumption of transmitting invalid requests.
• Second, we formulate the offloading optimization problem as a maximum energy saving problem under constraints of task completion time and network bandwidth.
• Third, to solve the optimization problem, we design a DPAF algorithm that provides a policy of computation offloading selection among mobile devices.
• Finally, we demonstrate the energy saving performance of the proposed algorithm by numerical results and evaluate the quick-response feature of the agent-based framework.

The rest of this paper is organized as follows: Section 2 introduces related work on the offloading problem. Section 3 gives the system framework and its operation procedure. Section 4 presents the problem statement and a detailed algorithm. Section 5 shows experimental results and performance evaluation. Section 6 concludes this work and discusses future prospects.

2. Related work

Related works are introduced in this section, separated into two aspects: computation offloading schemes and offloading policies. There have been many works adopting the computation offloading model [7,10,11]. Cuervo et al. provided a method-level dynamic offloading framework, MAUI, to maximize energy saving in [7]. Only if a method is remoteable does MAUI collect the parameters required for offloading and send the request to the central cloud; the solver on the cloud then decides whether to execute the task remotely. In [15], Yang et al. studied the multi-user partitioning and offloading problem, where PaaS middleware of the cloud makes the partitioning decisions. These works adopted a traditional offloading framework in which the cloud makes decisions according to the information sent by mobile devices. However, such a framework has two drawbacks. One is that the response to an offloading request travels from the cloud to the device with long latency; the other is that mobile devices upload requests containing offloading parameters to the cloud without considering whether the tasks can benefit from offloading, which causes extra energy consumption.

To shorten the latency between the distant cloud and the mobile device, the agent-based framework was proposed. Liu et al. in [16] proposed a BWRS scheme to improve the QoS of real-time streams and the overall performance of the MCC network in a mobile agent-based architecture. In [14], Nir et al. investigated a scheduler model on the broker node which solves task assignment to minimize total energy consumption. The advantages of this framework are introduced in [17,18]. First, it can reduce information interaction and data delivery delay. Second, the agent can periodically apply for a collection of resources to satisfy multiple users instead of every user accessing cloud resources individually.

Compared with the traditional framework, the agent-based framework makes offloading decisions on nearby agents instead of uploading the requests to the central cloud for decisions. First, the agent receives the requests from mobile devices, then acquires the required information from the cloud, and finally makes the offloading decision. Obviously, the place for making decisions simply moves from the distant cloud to a nearby agent, which is why the agent-based framework reduces transmission delay. However, the number of uploaded requests is unchanged in both frameworks. Thus, the agent with limited computation capacity has to deal with a great number of requests in the agent-based framework, which may cause heavy computation workload. In our framework, however, only beneficial offloading requests are sent to the agent after checking the energy constraint, which can not only reduce the number of invalid requests but also save the energy of uploading requests. Various energy-based offloading policies have also been studied [12,13,19–22]. In [19], Kumar and Lu proposed a simple energy model to quickly estimate the energy saved by remote execution for a single user. However, the completion time constraint is not considered, which may lead to an incorrect offloading decision. In [20], Liu et al. proposed an offloading decision method to minimize the energy consumption of the mobile device with an acceptable time delay and communication quality. Besides, in [21], the authors studied how to schedule offloading tasks to minimize the energy consumption of mobile devices for one application under a total completion time constraint. Moreover, in [13,22], the authors aimed to minimize both energy
invalid requests sent to the agent, avoid the extra energy consumption on the device and reduce the number of requests processed by the agent.

In addition, the matching mechanism also works on the agent's profiler. After the agent receives the beneficial offloading tasks from the devices, the agent's profiler runs the filtering process to check whether the offloaded tasks can satisfy the completion time constraint. Therefore, the task filtering process is designed, which can be regarded as further matching. If the tasks cannot meet the completion time constraint, they will be rejected by the agent and executed on the devices. Otherwise, the scheduler will allocate the task execution destination. The detailed explanation of the matching procedure on the agent will be given in Section 5.
f_m^l represents the number of CPU cycles executed per second, which reflects the computation capability of the local device. R_m = (f^c, r_m), where f^c represents the cloud computation capability and r_m denotes the uplink bandwidth for computation offloading between the mobile device and the agent. One bandwidth unit specifies the minimum bandwidth required to support computation offloading. Then, the total bandwidth R as well as the uplink bandwidth r_m can be expressed as integer multiples of the bandwidth unit. Therefore, we assume that only integer multiples of the basic bandwidth unit are allocated for wireless resources [29]. a_m is a decision variable for U_m, which indicates whether device m decides to execute the task locally (a_m = 0) or offload it to the cloud (a_m = 1). For the convenience of presentation, the notations used in this paper are listed in Table 1.

Table 1
Notations used in this paper.

Notation         Description
U_m              The device m
T_m^c / T_m^l    Cloud execution / local execution time of U_m
E_m^c / E_m^l    Remote / local energy consumption of U_m
T_m^max          The maximum time tolerance of U_m
P_m^t / P_m^e    Transmitting power / executing power of U_m
r_m / R          Available bandwidth of U_m / total available bandwidth
B_m / C_m        Bits / cycles of the offloading task of U_m
f_m^l / f^c      CPU frequency of U_m / remote cloud
a_m              The binary decision variable of U_m

When U_m decides to run the task locally, the execution energy consumption E_m^l and the execution time T_m^l are defined as follows, respectively:

T_m^l = C_m / f_m^l                                  (1)
E_m^l = (C_m / f_m^l) P_m^e.                         (2)

If the task is permitted to execute on the cloud, then the energy consumption E_m^c and the computation offloading delay T_m^c, including the transmission time and the execution time on the cloud, are given by, respectively,

T_m^c = B_m / r_m + C_m / f^c                        (3)
E_m^c = (B_m / r_m) P_m^t.                           (4)

In this work, similar to existing work [13,19,32], we neglect the time and energy consumption of the cloud returning the computation outcome back to mobile devices. This is because the computation input data not only includes program codes and input parameters but also contains the related data stored in the heap or stack of the mobile device. Therefore, compared with the size of the computation input data, the computation outcome is generally much smaller for many applications (e.g., face recognition).

4.2. Beneficial offloading

In this subsection, we will give the definition of beneficial offloading. On the one hand, from the user's perspective, beneficial offloading prolongs the battery life. On the other hand, from the service provider's point of view, when determining the wireless access schedule for computation offloading, we need to ensure that the selected device benefits from cloud execution.

Therefore, in our framework, only beneficial offloading can request the agent for remote execution. Accordingly, we design a filtering process on the mobile device to check whether the task can achieve beneficial offloading, which is an effective way to reduce request delay and avoid energy consumption on requesting. The key of the filtering process is to find a parameter that judges whether the offloadable task satisfies the energy constraint. Combining (2) and (4) with (5), we can obtain the bandwidth threshold r_th^c in (6), which is a suitable parameter for the filtering.

E_m^c ⩽ E_m^l                                        (5)
r_m ⩾ r_th^c = (f_m^l P_m^t B_m) / (C_m P_m^e).      (6)

It is not difficult to observe from (6) that the bandwidth threshold r_th^c can be calculated from the parameters given by the profiler of the mobile device. Moreover, r_th^c is not fixed and can be updated periodically according to the runtime of the mobile device and the complexity of the task. If the estimated bandwidth r_m is less than the bandwidth threshold r_th^c, the device cannot benefit from offloading and no request will be sent to the agent. Otherwise, the beneficial offloading request will be sent to the agent, and the agent will then judge whether the task meets the other constraints. The filtering operation for beneficial offloading is executed on the mobile device as part of the complete workflow of the filtering procedure, as illustrated in Fig. 3.

4.3. Optimization problem formulation

In this subsection, we will formulate the offloading problem among multiple users. There are two kinds of processing approaches for offloading: online processing and batch processing. One of the advantages of online processing is that it can get the computation result very fast. Commonly, offloading tasks are processed online in the way of FCFS (First Come First Serve), without requiring information about all offloading tasks. The drawback of this approach is that it cannot get the optimal solution. In contrast, batch processing can get the optimal solution among tasks, but its shortcoming is that it needs the offloading information of all tasks before processing. It is clear that delay-sensitive applications are more suitable for online processing. In our scenario, we consider that multiple computation-intensive tasks are uploaded simultaneously to achieve maximum energy saving. Therefore, we need to gather the offloading information of all tasks in advance and employ batch processing to achieve our goal.

In order to achieve the goal of maximizing energy saving for all of the devices (MESA) in the proposed computation offloading framework, we formulate the MESA problem over the decision variables (a_m) as

OPT-1:  max_{a_1,...,a_n}  Σ_{m=1}^{n} a_m (E_m^l − E_m^c)     (7)

where completion time constraint (8) specifies that the task completion time should not exceed the maximum completion time, i.e., the task completion deadline T_m^max. Bandwidth constraint (9) ensures that the overall bandwidth required by all mobile devices for offloading tasks cannot exceed the total available bandwidth R. Offloading decision constraint (10) means that the offloading decision variable a_m of each mobile device needs to be 0 or 1.
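The device-side beneficial-offloading check built on threshold (6) can be sketched in Python (the language used for the paper's experiments); the function name and the parameter values in the example are made up for illustration:

```python
def beneficial_offloading(B_m, C_m, f_ml, P_t, P_e, r_m):
    """Device-side filter from Eqs. (5)-(6): offloading saves energy
    only if the uplink bandwidth r_m reaches the threshold r_th^c."""
    # r_th^c = f_m^l * P_m^t * B_m / (C_m * P_m^e), from E_m^c <= E_m^l
    r_th_c = (f_ml * P_t * B_m) / (C_m * P_e)
    return r_m >= r_th_c

# 1 Mbit upload, 500 Mega cycles, 1 GHz local CPU, 300 mW transmit
# power, 700 mW execute power (illustrative values): the threshold is
# about 0.86 Mbps, so a 1 Mbps uplink passes the filter.
beneficial_offloading(1e6, 5e8, 1e9, 0.3, 0.7, 1e6)
```

A true result means the request is sent to the agent; otherwise the task runs locally and no request is transmitted at all, which is what saves the requesting energy.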
transmission time for U_m. r_th^s is the threshold, which denotes the minimum bandwidth. Therefore, we can adopt r_th^s to check whether the time constraint of the offloading task is satisfied. If r_m ⩾ r_th^s, the task can be one of the suitable offloading candidates to run on the remote cloud. Otherwise, if r_m < r_th^s, the request from U_m will be rejected by the agent and the task is not allowed to execute on the cloud. The detailed filtering process on the agent is shown in Fig. 3.

It is worth noting that the completion time constraint is checked on the agent. This is because, if the completion time constraint were checked on the mobile device instead, the cloud frequency parameter f^c would have to be sent continuously to all mobile devices via the agent, and this frequent transmission of f^c between the agent and the mobile device would place a great burden on the mobile device. The task filtering process in Fig. 3 aims to shorten the response time of the offloading task from the remote cloud and, furthermore, greatly reduce the number of invalid offloading tasks. Although the number of original offloading requests is N, after filtering on the agent the number of offloading requests is reduced to λ.

Therefore, we can transform the MESA optimization problem OPT-1 as

OPT-2:  max_{a_1,...,a_λ}  Σ_{m=1}^{λ} a_m (E_m^l − E_m^c),  a_m ∈ {0, 1}.     (13)

5. DPAF algorithm

The problem (14) is equivalent to the following 0–1 Knapsack problem:

max Σ_{i=1}^{n} v_i x_i,  s.t.  Σ_{i=1}^{n} w_i x_i ⩽ W  and  x_i ∈ {0, 1}.     (15)

Therefore, the MESA problem is equivalently transformed into the classical 0–1 Knapsack problem, with E_m^s, r_m, R corresponding to v_i, w_i, W. ■

5.2. Task offloading selection

In this subsection, we adopt the dynamic programming method to solve the 0–1 Knapsack problem. The basic idea of the dynamic programming is to regard the λ offloading requests as a series of decisions on whether the tasks are executed on the distant cloud. Thus, we employ the recursive equation of dynamic programming to calculate the optimal energy saving under the bandwidth constraint. We assume that dp[m][l] indicates the optimal energy saving for m devices under l bandwidth units.

dp[m][l] = { 0,                                                 l = 0
           { dp[m − 1][l],                                      r_m > l     (16)
           { max(dp[m − 1][l − r_m] + es_m, dp[m − 1][l]),      r_m ⩽ l.
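The agent-side completion-time check that reduces N requests to λ can also be sketched in Python. The paper's exact expression for r_th^s is not reproduced in this excerpt, so the form below is an assumption derived from Eq. (3) and the deadline T_m^max (B_m / r_m + C_m / f^c ⩽ T_m^max); the function and parameter names are hypothetical:

```python
def time_feasible(B_m, C_m, f_c, T_max, r_m):
    """Agent-side filter: checks the completion-time constraint via a
    minimum-bandwidth threshold r_th^s. Assumed form, derived from
    Eq. (3): B_m / r_m + C_m / f_c <= T_max."""
    budget = T_max - C_m / f_c   # time left for the uplink transmission
    if budget <= 0:
        return False             # deadline missed even with instant transfer
    r_th_s = B_m / budget
    return r_m >= r_th_s
```

Requests failing this check are rejected by the agent and the corresponding tasks run locally, which matches the filtering behavior described above.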
Eq. (16) shows how the value of dp[m][l] can be recursively computed. In the following, we discuss the calculation of dp[m][l] from three aspects.

• If no available bandwidth can be utilized to offload tasks, i.e., l = 0, then there is no energy saving, i.e., dp[m][l] = 0.
• If the available bandwidth is less than the bandwidth utilized by the offloading task from U_m, i.e., r_m > l, then the optimal energy saving is equal to dp[m − 1][l].
• If the available bandwidth allows the remote execution of the task from U_m, i.e., r_m ⩽ l, then the agent needs to decide whether to permit U_m to offload.

If a_m = 0, the offloading task is rejected by the agent and dp[m][l] is derived from dp[m − 1][l]. If a_m = 1, the agent accepts this task and dp[m][l] is derived by subtracting the utilized bandwidth units r_m and adding the energy saving es_m of task offloading, based on dp[m − 1][l]. Therefore, whether to choose the offloading task of U_m for remote execution should be decided so as to achieve the maximum energy saving, which is given by dp[m][l] = max(dp[m − 1][l − r_m] + es_m, dp[m − 1][l]).

Therefore, the DPAF algorithm can be summarized in Algorithm 1.

Algorithm 1 DPAF algorithm based on dynamic programming
Input: Total available bandwidth units, R; the number of offloading requests after filtering, λ; bandwidth utilized and energy saved for U_m: r_m and es_m, ∀m ∈ {1, ..., λ};
Output: the set of selected offloading requests, O; maximum energy saving, E_sum;
1: Initialize dp[m][0] = 0, dp[0][l] = 0, ∀m ∈ {1, ..., λ}, ∀l ∈ {1, ..., R};
2: for m = 1 to λ do
3:   for l = 1 to R do
4:     if (r_m > l) then
5:       dp[m][l] = dp[m − 1][l]
6:     else
7:       dp[m][l] = max(dp[m − 1][l − r_m] + es_m, dp[m − 1][l])
8:     end if
9:   end for
10: end for
11: E_sum = dp[λ][R]
12: l = R
13: for m = λ downto 1 do
14:   if (dp[m][l] == dp[m − 1][l]) then
15:     a_m = 0
16:   else
17:     a_m = 1
18:     l = l − r_m
19:     insert U_m into set O
20:   end if
21: end for

It is not difficult to observe that the DPAF algorithm consists of two stages. The first is from line 1 to 12, where we compute the maximum energy saving based on dynamic programming. The second is from line 13 to 21, where we recover an offloading strategy achieving this maximum energy saving; the backtracking scans the requests in reverse order, from λ down to 1, because dp[m][l] is compared against dp[m − 1][l]. Clearly, the time complexity of the algorithm is O(λR), since the time complexity of getting the maximum energy saving is O(λR) and the time complexity of getting the offloading strategy under the maximum energy saving is O(λ).

6. Performance evaluation

In this section, we first introduce the experiment settings and then give experimental results to demonstrate the advantages of our proposed DPAF algorithm and computation offloading scheme. In the experiments, we first evaluate our proposed DPAF algorithm by comparison with other approaches. Second, we compare our offloading framework with other schemes. Finally, we show that our framework has lower latency and less energy consumption.

6.1. Experiment setup

We run all of the algorithms on an Ubuntu 14.04 LTS system using Python 2.7. All of the parameters follow a uniform random distribution.

The CPU clock frequency of each device is set randomly from 1 GHz to 1.5 GHz. The computation capability of the cloud server is 3.4 GHz. We assume that the WiFi radio power is between 257 and 325 mW and the processing power of the device is between 644 and 700 mW.

We let the available bandwidth between mobile devices and the agent range from 100 kbps to 800 kbps. The total bandwidth varies from 10 to 20 Mbps. The offloading task includes the uploading bits and the required cycles. Since different tasks have different execution features, C_m is set from 200 to 2000 Mega cycles and B_m from 10 kB to 1 MB. The number of mobile devices varies from 50 to 300 for each virtual machine instance. The maximum tolerable delay T_m^max is taken as a variable coefficient from 1.0 to 2.0 times the local processing time T_m^l.

We adopt the Load-input Data Ratio (LDR) to characterize the complexity of task offloading. Similar to [33], we let LDR = C_m / B_m. A high LDR implies that the task is computation-intensive and suitable for execution on the central cloud. On the contrary, a low LDR means that the task is communication-intensive and can benefit from local execution.

6.2. Performance of DPAF Algorithm

6.2.1. Performance on request filtering
We first evaluate the performance of the DPAF algorithm in terms of request filtering. The filtering process aims to reduce the offloading requests that cannot meet the constraints and shorten the decision-making latency. Fig. 4 depicts the changes of offloading requests before/after request filtering for different numbers of users [100, 200, 300], corresponding to cases A, B, C, respectively. It is not difficult to observe that for all cases the offloaded requests are greatly reduced after filtering. The valid offloading requests increase with the total offloading requests. Moreover, the filtering process works better on the tasks with lower LDR, since they are communication-intensive and likely to break the completion time constraint. Therefore, in each case, the number of offloading tasks with LDR = 1.5 is larger than that with LDR = 1.0. In other words, the higher the LDR of an offloading task, the more likely the task will be offloaded.

6.2.2. Performance on energy saving
Before studying the relationship between the energy saving and the offloading tasks as well as the bandwidth, we first focus on the largest instance size that the algorithm can solve through dynamic programming. According to the complexity O(λR) of Algorithm 1, the processing time obviously increases with the number of offloading tasks and the size of the total bandwidth, which is illustrated in Fig. 5. Although our algorithm adopts batch processing, it is not realistic for mobile devices to wait a long time for the offloading response from the cloud. Therefore, the maximum acceptable time should be set in advance.
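The two stages of Algorithm 1 (the dynamic program filling dp[m][l] and the backward pass recovering the selected requests) can be sketched as a short Python function; the function and variable names are illustrative, not the paper's implementation:

```python
def dpaf(requests, R):
    """Dynamic-programming stage of DPAF (a sketch of Algorithm 1).
    requests: list of (r_m, es_m) pairs, i.e. bandwidth units used and
    energy saved for each filtered request; R: total bandwidth units.
    Returns (maximum energy saving, set of selected request indices)."""
    n = len(requests)
    # dp[m][l]: best energy saving using the first m requests and l units
    dp = [[0.0] * (R + 1) for _ in range(n + 1)]
    for m in range(1, n + 1):
        r_m, es_m = requests[m - 1]
        for l in range(1, R + 1):
            if r_m > l:
                dp[m][l] = dp[m - 1][l]
            else:
                dp[m][l] = max(dp[m - 1][l - r_m] + es_m, dp[m - 1][l])
    # Backtrack from request n down to 1 to recover the offloading set
    selected, l = set(), R
    for m in range(n, 0, -1):
        if dp[m][l] != dp[m - 1][l]:
            selected.add(m - 1)
            l -= requests[m - 1][0]
    return dp[n][R], selected

# Three requests as (bandwidth units, energy saved); 4 units available.
best, chosen = dpaf([(2, 3.0), (3, 4.0), (2, 2.5)], 4)
print(best, sorted(chosen))  # 5.5 [0, 2]
```

In the example, requests 0 and 2 together fit the 4 bandwidth units and save 5.5 units of energy, beating the single larger request, which matches the O(λR) table-filling behavior described above.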
Fig. 4. The offloaded requests are reduced after offloading task filtering.
Fig. 6. The comparisons of different bandwidths in multiple tasks.
Fig. 8. The energy consumption of different offloading schemes.
Fig. 10. The number of offloading tasks for different LDRs.
6.3.1. Comparison of different offloading schemes
In this simulation, we assume that the bandwidth is sufficient, so that all of the selected tasks can be offloaded to the cloud. Fig. 8 depicts the energy consumption of the three offloading schemes over different numbers of offloading tasks. It is not difficult to observe that the energy consumption of our offloading scheme is approximately 50% of the energy consumption of all-local execution. Under the bandwidth constraint, it is unrealistic for all tasks to be offloaded to the cloud simultaneously. All-local execution consumes the most energy, and not all tasks can benefit from remote execution, because of low bandwidth or low LDR. However, our offloading solution can schedule tasks on the agent, which makes the mobile devices consume the least energy.

With the increase of available bandwidth, the same LDR also yields different performance. For example, when LDR = 5 and R = 5000, 10 000, 20 000 kbps, the saved energy is 1525 mJ, 2941 mJ and 5502 mJ, respectively. Therefore, the saved energy increases with the available bandwidth for a given LDR.

Furthermore, we can observe from Fig. 10 that the number of offloading tasks also increases when the LDR and the available bandwidth increase. The reason is that a larger LDR under a given bandwidth can accept more tasks. For example, the number of offloading tasks increases from 0 to 7 as LDR increases when R = 10 000 kbps. And it is clear that more tasks can be executed if larger bandwidth is provided when LDR is fixed. Besides, the task with the highest LDR is chosen first to offload to the remote cloud.
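The comparison between all-local execution, all-offloading and the scheduled scheme above follows directly from the energy models (2) and (4); the helper below is an illustrative sketch with made-up field names and parameter values, not the paper's simulation code:

```python
def scheme_energy(tasks, decisions):
    """Total device-side energy for a decision vector, using the local
    model of Eq. (2) and the remote model of Eq. (4). tasks: list of
    dicts with hypothetical keys B, C, f_l, P_t, P_e, r; decisions:
    True = offload, False = run locally."""
    total = 0.0
    for task, offload in zip(tasks, decisions):
        if offload:
            total += task["B"] / task["r"] * task["P_t"]    # Eq. (4)
        else:
            total += task["C"] / task["f_l"] * task["P_e"]  # Eq. (2)
    return total

# One illustrative task: offloading costs 0.3 J versus 0.7 J locally.
task = {"B": 1e6, "C": 1e9, "f_l": 1e9, "P_t": 0.3, "P_e": 0.7, "r": 1e6}
scheme_energy([task], [True])   # all-offload
scheme_energy([task], [False])  # all-local
```

Summing this quantity over each scheme's decision vector reproduces, in miniature, the kind of per-scheme totals plotted in Fig. 8.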
[7] E. Cuervo, A. Balasubramanian, D.-k. Cho, A. Wolman, S. Saroiu, R. Chandra, P. Bahl, MAUI: Making smartphones last longer with code offload, in: Proceedings of the 8th International Conference on Mobile Systems, Applications, and Services, 2010, pp. 49–62.
[8] Y. Li, W. Gao, Code offload with least context migration in the mobile cloud, in: 2015 IEEE Conference on Computer Communications, INFOCOM 2015, Kowloon, Hong Kong, April 26–May 1, 2015, pp. 1876–1884.
[9] S. Yang, D. Kwon, H. Yi, Y. Cho, Y. Kwon, Y. Paek, Techniques to minimize state transfer costs for dynamic execution offloading in mobile cloud computing, IEEE Trans. Mob. Comput. 13 (11) (2014) 2648–2660.
[10] S. Deng, L. Huang, J. Taheri, A.Y. Zomaya, Computation offloading for service workflow in mobile cloud computing, IEEE Trans. Parallel Distrib. Syst. 26 (12) (2015) 3317–3329.
[11] Y. Liu, M.J. Lee, An effective dynamic programming offloading algorithm in mobile cloud computing system, in: 2014 IEEE Wireless Communications and Networking Conference, WCNC, 2014, pp. 1868–1873.
[12] E. Meskar, T.D. Todd, D. Zhao, G. Karakostas, Energy aware offloading for competing users on a shared communication channel, IEEE Trans. Mob. Comput. 16 (1) (2017) 87–96.
[13] X. Chen, Decentralized computation offloading game for mobile cloud computing, IEEE Trans. Parallel Distrib. Syst. 26 (4) (2015) 974–983.
[14] M. Nir, A. Matrawy, M. St-Hilaire, An energy optimizing scheduler for mobile cloud computing environments, in: 2014 IEEE Conference on Computer Communications Workshops, INFOCOM WKSHPS, 2014, pp. 404–409.
[15] L. Yang, J. Cao, H. Cheng, Y. Ji, Multi-user computation partitioning for latency sensitive mobile cloud applications, IEEE Trans. Comput. 64 (8) (2015) 2253–2266.
[16] X. Liu, Y. Li, H.H. Chen, Wireless resource scheduling based on backoff for multiuser multiservice mobile cloud computing, IEEE Trans. Veh. Technol. 65 (11) (2016) 9247–9259.
[17] K.M. Sim, Agent-based cloud computing, IEEE Trans. Serv. Comput. 5 (4) (2012) 564–577.
[18] S. Choi, M. Baik, H. Kim, E. Byun, H. Choo, A reliable communication protocol for multiregion mobile agent environments, IEEE Trans. Parallel Distrib. Syst. 21 (1) (2010) 72–85.
[19] K. Kumar, Y.H. Lu, Cloud computing for mobile users: Can offloading computation save energy? Computer 43 (4) (2010) 51–56.
[20] K. Liu, J. Peng, H. Li, X. Zhang, W. Liu, Multi-device task offloading with time-constraints for energy efficiency in mobile cloud computing, Future Gener. Comput. Syst. 64 (2016) 1–14.
[21] T. Liu, F. Chen, Y. Ma, Y. Xie, An energy-efficient task scheduling for mobile devices based on cloud assistant, Future Gener. Comput. Syst. 61 (2016) 1–12.
[22] X. Chen, L. Jiao, W. Li, X. Fu, Efficient multi-user computation offloading for mobile-edge cloud computing, IEEE/ACM Trans. Netw. 24 (5) (2016) 2795–2808.
[23] C. Wang, C. Liang, F.R. Yu, Q. Chen, L. Tang, Computation offloading and resource allocation in wireless cellular networks with mobile edge computing, IEEE Trans. Wireless Commun. 16 (8) (2017) 4924–4938.
[24] I. Behera, C.R. Tripathy, Performance modelling and analysis of mobile grid computing systems, Int. J. Grid Util. Comput. 5 (1) (2014) 11–20.
[25] T.M. Lynar, R.D. Herbert, S. Chivers, W.J. Chivers, Resource allocation to conserve energy in distributed computing, Int. J. Grid Util. Comput. 2 (1) (2011) 1–10.
[26] T.M. Lynar, R.D. Herbert, W.J. Chivers, Simon, Reducing energy consumption in distributed computing through economic resource allocation, Int. J. Grid Util. Comput. 4 (4) (2013) 231–241.
[27] G. Feng, R. Buyya, Maximum revenue-oriented resource allocation in cloud, Int. J. Grid Util. Comput. 7 (1) (2016) 12–21.
[28] S.C. Shah, M.-S. Park, F.H. Chandio, Resource allocation scheme to reduce communication cost in mobile ad hoc computational grids, IJSSC 1 (4) (2011) 270–280.
[29] Y. Liu, M.J. Lee, Y. Zheng, Adaptive multi-resource allocation for cloudlet-based mobile cloud computing system, IEEE Trans. Mob. Comput. 15 (10) (2016) 2398–2410.
[30] L. Yang, J. Cao, S. Tang, T. Li, A.T.S. Chan, A framework for partitioning and execution of data stream applications in mobile cloud computing, in: 2012 IEEE 5th International Conference on Cloud Computing, CLOUD, 2012, pp. 794–802.
[31] K. Elgazzar, P. Martin, H.S. Hassanein, Cloud-assisted computation offloading to support mobile services, IEEE Trans. Cloud Comput. 4 (3) (2016) 279–292.
[32] D. Huang, P. Wang, D. Niyato, A dynamic offloading algorithm for mobile computing, IEEE Trans. Wireless Commun. 11 (6) (2012) 1991–1995.
[33] S. Guo, B. Xiao, Y. Yang, Y. Yang, Energy-efficient dynamic offloading and resource scheduling in mobile cloud computing, in: IEEE INFOCOM 2016 - The 35th Annual IEEE International Conference on Computer Communications, 2016, pp. 1–9.
[34] M.-R. Ra, A. Sheth, L. Mummert, P. Pillai, D. Wetherall, R. Govindan, Odessa: Enabling interactive perception applications on mobile devices, in: Proceedings of the 9th International Conference on Mobile Systems, Applications, and Services, MobiSys'11, 2011, pp. 43–56.
[35] Y.D. Lin, E.T.H. Chu, Y.C. Lai, T.J. Huang, Time-and-energy-aware computation offloading in handheld devices to coprocessors and clouds, IEEE Syst. J. 9 (2) (2015) 393–405.

Zhikai Kuang received the B.S. degree in telecommunications engineering from Southwest University, Chongqing, China, in 2015. She is currently working toward the Ph.D. degree in signal and information processing at Southwest University. Her research interests include stream scheduling in data center networks and software defined networking.

Songtao Guo received the B.S., M.S., and Ph.D. degrees in computer software and theory from Chongqing University, Chongqing, China, in 1999, 2003, and 2008, respectively. He was a professor from 2011 to 2012 at Chongqing University. He is currently a full professor at Southwest University, China. He was a senior research associate at the City University of Hong Kong from 2010 to 2011, and a visiting scholar at Stony Brook University, New York, from May 2011 to May 2012. His research interests include wireless sensor networks, wireless ad hoc networks and parallel and distributed computing. He has published more than 30 scientific papers in leading refereed journals and conferences. He has received many research grants as a principal investigator from the National Science Foundation of China and Chongqing and the Postdoctoral Science Foundation of China.

Jiadi Liu received the B.S. degree in computer science and technology from Jiangnan University in 2009 and the M.S. degree in computer application from Jiangnan University, Wuxi, Jiangsu, China, in 2013. He is currently pursuing the Ph.D. degree in the College of Electrical Information and Engineering, Southwest University, Chongqing, China. His current research interests include game theory, mobile cloud computing, machine learning, convex optimization theory and its application.

Yuanyuan Yang received the B.Eng. and M.S. degrees in computer science and engineering from Tsinghua University, Beijing, China, and the M.S.E. and Ph.D. degrees in computer science from Johns Hopkins University, Baltimore, Maryland. She is a professor of computer engineering and computer science at Stony Brook University, New York. Her research interests include wireless networks, data center networks, optical networks and high-speed networks. She has published over 300 papers in major journals and refereed conference proceedings and holds seven US patents in these areas. She has served as an Associate Editor-in-Chief and an Associate Editor for IEEE Transactions on Computers and an Associate Editor for IEEE Transactions on Parallel and Distributed Systems. She has also served as a general chair, program chair, or vice chair for several major conferences and a program committee member for numerous conferences. She is an IEEE Fellow.