
Future Generation Computer Systems 81 (2018) 166–176


A quick-response framework for multi-user computation offloading in mobile cloud computing

Zhikai Kuang a, Songtao Guo a,*, Jiadi Liu a, Yuanyuan Yang a,b,*

a Chongqing Key Laboratory of Nonlinear Circuits and Intelligent Information Processing, College of Electronic and Information Engineering, Southwest University, Chongqing, 400715, China
b Department of Electrical and Computer Engineering, Stony Brook University, Stony Brook, NY 11794, USA

* Corresponding authors. E-mail addresses: songtao_guo@163.com (S. Guo), yuanyuan.yang@stonybrook.edu (Y. Yang).

highlights

• We propose an agent-based mobile cloud computing framework.


• We formulate the offloading optimization problem into the maximum energy saving problem.
• We propose a Dynamic Programming After Filtering (DPAF) algorithm.

article info a b s t r a c t
Article history: The execution of much sophisticated applications on the resource-constrained mobile device will lead to
Received 22 March 2017 the fast exhaustion of the battery of mobile device. Therefore, mobile cloud computing (MCC) is regarded
Received in revised form 25 August 2017 as an energy-effective approach by offloading tasks from mobile device to the resource-enough cloud,
Accepted 22 October 2017
which cannot only save energy for mobile devices but also prolong the operation time of battery. However,
Available online 2 November 2017
it still remains a challenging issue to coordinate task offloading among mobile devices and get offloading
Keywords: results quickly at the same time. In this paper, we propose an agent-based MCC framework to enable
Computation offloading the device to receive offloading results faster by making offloading decision on the agent. Moreover,
Energy saving to get an offloading strategy, we formulate the problem of maximizing energy savings among multiple
Task filtering users under the completion time and bandwidth constraints. To solve the optimization problem, we
Completion time constraint propose a Dynamic Programming After Filtering (DPAF) algorithm. In the algorithm, firstly, the original
Mobile cloud computing offloading problem is transformed to the classic 0–1 Knapsack problem by the filtering process on the
agent. Furthermore, we adopt dynamic programming algorithm to find an optimal offloading strategy.
Simulation results show that the framework can more quickly get response from agent than other
schemes and the DPAF algorithm outperforms other solutions in energy saving.
© 2017 Elsevier B.V. All rights reserved.

1. Introduction

Nowadays, smartphones are gaining enormous popularity because versatile mobile applications satisfy users' needs. Most computation-intensive applications, such as face recognition and augmented reality, are complex and widely installed on smart devices, which causes high energy consumption [1]. However, hardware resources (e.g., CPU computation capacity, storage) and energy supply are constrained on mobile devices. In particular, energy supply is still the primary bottleneck for mobile devices [2]. Thus, how to run such complex applications on energy-limited mobile devices remains a challenge.

Mobile cloud computing (MCC) is envisioned as a promising method to address this challenge. The computing capabilities of smart mobile devices can be augmented by migrating computation tasks via wireless access to the resource-rich cloud, which is referred to as computation offloading. To achieve computation offloading, a computation-intensive application needs to be partitioned into many tasks, which fall into two categories. One is non-offloadable tasks, which must be executed on mobile devices because of dependencies among tasks or on hardware. The other is offloadable tasks, which can be executed remotely by offloading [3]. Clearly, selectively executing offloadable tasks on clouds allows mobile devices to run sophisticated applications. The advantage of this approach is that it alleviates the high computation workload on mobile devices. Moreover, the computation offloading technique saves energy for mobile devices and prolongs the operation time of the battery.

Some works [4–6] focused on virtual machine migration and show that such migration can save energy. The works in [7–9] were dedicated to fine-grained offloading granularity. These works indicate that mobile devices can benefit from offloading.

Although mobile devices can take advantage of cloud computing to alleviate their computation workload and prolong battery lifetime, it remains challenging to coordinate offloading tasks among mobile devices and achieve a quick response for mobile users at the same time. In most existing frameworks [4,7,10,11], mobile devices send computation offloading requests to the cloud, and the offloading decision made by the cloud is then sent back to the mobile devices. As a result, mobile users waste a long time waiting for the offloading decision from the cloud, without considering whether they can benefit from offloading. Invalid requests, in particular, not only suffer offloading failures from the cloud but also prolong the execution time of the tasks. Besides, most existing works consider either the completion time constraint [7,12] or the bandwidth constraint [13,14] to achieve energy savings, whereas it is necessary to take both constraints into consideration to reduce the energy consumption of all users. Therefore, we consider two issues in our work: (1) How can a framework be designed so that a device does not wait a long time for the reply to a computation offloading request? (2) How can an efficient computation offloading strategy be made among mobile devices under the constraints of completion time and limited bandwidth?

To address the above issues, in this paper we propose an agent-based quick-response computation offloading framework to ensure that the mobile user can get the response to the offloading request more quickly. In addition, we propose the Dynamic Programming After Filtering (DPAF) algorithm to solve the multi-user computation offloading problem.

The framework consists of a distant cloud, agents and multiple mobile devices. In the proposed framework, we adopt request filtering on both the device and the agent so as to effectively achieve quick response. Our proposed framework has the following advantages. First, mobile devices only send beneficial offloading requests to the agent, which not only avoids the energy consumption of invalid offloading requests but also reduces the computation workload on the agent and alleviates the communication burden on the network. Second, compared with sending computation offloading requests to the remote cloud, mobile devices retrieve the offloading decision more quickly and shorten the waiting time for the offloading result.

More specifically, our proposed framework aims to provide an optimal offloading strategy that achieves multi-user maximum energy saving. To that end, we first formulate the computation offloading problem as an optimization problem. Then, we convert the original optimization problem into the classic 0–1 Knapsack problem by employing a filtering process based on the task completion time constraint. The main contributions of this paper can be summarized as follows.

• First, we propose an agent-based computation offloading framework aiming to shorten the delay of computation offloading requests for mobile users, alleviate communication overhead in the network, and avoid the energy consumption of transmitting invalid requests.
• Second, we formulate the offloading optimization problem as a maximum energy saving problem under constraints of task completion time and network bandwidth.
• Third, to solve the optimization problem, we design a DPAF algorithm that provides a policy of computation offloading selection among mobile devices.
• Finally, we demonstrate the energy saving performance of the proposed algorithm by numerical results and evaluate the quick-response feature of the agent-based framework.

The rest of this paper is organized as follows: Section 2 introduces related work on the offloading problem. Section 3 presents the system framework and its operation procedure. Section 4 formulates the problem. Section 5 presents the DPAF algorithm in detail. Section 6 reports the experimental results and performance evaluation. Section 7 concludes this work and outlines future directions.

2. Related work

Related works are introduced in this section from two aspects: computation offloading schemes and offloading policies. There have been many works adopting the computation offloading model [7,10,11]. Cuervo et al. provided a method-level dynamic offloading framework, MAUI, to maximize energy saving in [7]. Only if a method is marked remoteable does MAUI collect the parameters required for offloading and send a request to the central cloud; a solver on the cloud then decides whether to execute the task remotely. In [15], Yang et al. studied the multi-user partitioning and offloading problem, where PaaS middleware on the cloud makes the partitioning decisions. These works adopt a traditional offloading framework in which the cloud makes decisions according to the information sent by mobile devices. However, such a framework has two drawbacks. One is that the response to an offloading request from the cloud to the device incurs a long latency; the other is that mobile devices upload requests containing offloading parameters to the cloud without considering whether the tasks can benefit from offloading, which causes extra energy consumption.

To shorten the latency between the distant cloud and the mobile device, agent-based frameworks have been proposed. Liu et al. in [16] proposed a BWRS scheme to improve the QoS of real-time streams and the overall performance of the MCC network in a mobile agent-based architecture. In [14], Nir et al. investigated a scheduler model on the broker node which solves task assignment to minimize total energy consumption. The advantages of this kind of framework are introduced in [17,18]. First, it can reduce information interaction and data delivery delay. Second, the agent can periodically apply for a collection of resources to satisfy multiple users instead of every user accessing cloud resources individually.

Compared with the traditional framework, the agent-based framework makes offloading decisions on nearby agents instead of uploading requests to the central cloud for decisions. The agent receives the requests from mobile devices, then acquires the required information from the cloud, and finally makes the offloading decision. In effect, the place where decisions are made just moves from the distant cloud to the nearby agent, which is why the agent-based framework reduces transmission delay. However, the number of uploaded requests remains unchanged in both frameworks. Thus, the agent, with its limited computation capacity, has to deal with a great number of requests in the agent-based framework, which may cause a heavy computation workload. In our framework, however, only beneficial offloading requests are sent to the agent by checking the energy constraint, which not only reduces the number of invalid requests but also saves the energy of uploading requests. Various energy-based offloading policies have also been studied [12,13,19–22]. In [19], Kumar and Lu proposed a simple energy model to quickly estimate the energy saved by remote execution for a single user. However, the completion time constraint is not considered, which may lead to an incorrect offloading decision. In [20], Liu et al. proposed an offloading decision method to minimize the energy consumption of the mobile device with an acceptable time delay and communication quality. Besides, in [21], the authors studied how to schedule offloading tasks to minimize the energy consumption of mobile devices for one application under a total completion time constraint. Moreover, in [13,22], the authors aimed to minimize both energy consumption and processing time in multi-user offloading, but did not consider the completion time constraint at all.

Meskar et al. in [12] considered the maximum time tolerance in the multi-user offloading problem to maximize energy saving.

However, resource allocation should also be taken into consideration, since it influences the offloading decisions. In [23], offloading decisions and resource allocation were jointly considered to formulate an optimization problem, which is solved by an algorithm based on the alternating direction method of multipliers. Some works also considered how to allocate resources to tasks to conserve energy [24–28]. Behera et al. considered load balance with fault tolerance in [24]. Besides, in [27], the authors studied how services from customers are allocated to clouds dynamically to achieve maximum revenue, taking QoS factors such as pricing, arrival rates, service rates and available resources into account. The work in [28] aimed to minimize communication cost by a centralized resource allocation scheme that distributes the tasks. However, the above works did not take the limited bandwidth constraint into consideration when formulating their energy-based policies. From the perspective of resource allocation, we mainly focus on how to allocate bandwidth to achieve maximum energy saving. Although the works [29,30] took the bandwidth into account, they aimed to achieve optimal bandwidth allocation and maximum throughput, respectively. Therefore, compared to these existing works, we consider the completion time constraint as well as the bandwidth constraint to maximize energy saving.

3. Agent-based computation offloading framework

In this section, we present our agent-based computation offloading framework from two aspects: its constitution and its operations. Since the computation offloading framework needs discovery, matching and allocation mechanisms that cooperate with each other, the components that realize these mechanisms are introduced and the cooperation among them is illustrated.

3.1. Constitution of framework

Our proposed computation offloading framework is composed of a resource-rich remote cloud (distant cloud or center cloud) and a crowd of mobile devices in the coverage of the access point near the agent, as shown in Fig. 1. The distant cloud is connected to the broker (agent) node through a high-speed wireline network. Each device has at least one computation-intensive task to be executed and is willing to offload it to the cloud to reduce energy consumption. Therefore, the offloading requests sent by mobile devices can be handled by the agent through the access point (AP). Moreover, although mobile devices can access the distant cloud through a base station, they can also connect to the distant cloud through WiFi provided by the APs. Compared with the WiFi connection, base station access has long latency, changeable bandwidth and high energy consumption. We assume that the WiFi connection does not change greatly during the transmission period between the distant cloud and the mobile devices. Therefore, we only focus on the WiFi connection in our work.

Fig. 1. Our proposed computation offloading framework.

In our framework, we assume that there are sufficient computation and storage resources in the remote cloud, which implies that the remote cloud can directly execute the offloaded tasks without any delay. However, the transmission condition is imperfect, so the cloud may only process a limited number of tasks. In other words, not all tasks can be executed on the cloud at the same time, and the cloud should selectively allow tasks to be offloaded.

Therefore, the offloading selection problem needs to be considered in the framework. Moreover, offloading requires discovery, matching and allocation mechanisms [7,10]. As shown in Fig. 2, we therefore design middlewares on both the agent and the device to implement the three mechanisms. The middleware on the device contains a profiler, a solver and an executor, and the middleware on the agent includes a profiler, a scheduler and an executor. The profiler on the device is responsible for detecting information on the network bandwidth, the device and the task, which is used in (6) in Section 4. The solver on the device and the profiler on the agent realize the matching mechanism that filters out useless requests, which is illustrated in Sections 4 and 5. The scheduler makes the offloading decisions among multiple devices, which corresponds to the optimization problem formulated in Section 4 and the solution proposed in Section 5. Finally, the executor follows the offloading decisions made by the scheduler to allocate task execution and achieve offloading.

Different from the traditional framework, we deploy middlewares on devices and agents instead of on the remote cloud to achieve offloading. There are two reasons for deploying a middleware on agents. One is that the latency from mobile devices to the agent is much shorter than that to the center cloud. The other is that the mobile device can quickly receive offloading failure information from the agent instead of waiting a long time to get the result from the cloud. Finally, mobile users access the resource-rich remote cloud once the offloading requirement is satisfied. Overall, the agent is responsible for determining which tasks should be executed remotely.

3.2. Operations in framework

In the proposed framework, all of the components cooperatively achieve mobile cloud computing. The detailed operation of the framework and the workflows between the agent and the devices are shown in Figs. 1 and 2. First, we present the whole offloading procedure to identify the remote execution workflow. Second, we introduce the role of each component of the middleware in achieving the discovery, matching and allocation mechanisms.

3.2.1. Workflow of remote execution

In this part, we give an overview of how tasks are handled in remote execution. First, the device gathers the information needed to judge whether the uploaded task can save energy, and the device sends an offloading request to the agent if the task satisfies the energy constraint. Second, the agent negotiates with the remote cloud to reserve resources, selects the offloading tasks under the time and bandwidth constraints to maximize energy saving, and returns the response to the device. These steps are described as ① and ② in Fig. 1. Additionally, from ③ to ⑥, the agent receives the offloading task and establishes a connection between the remote cloud and the device through the Internet for data transmission. Finally, the processing result is returned to the devices.

Fig. 2. The implementation of the offloading mechanisms on the middlewares.

In this procedure, the agent is a crucial component that runs the offloading task selection algorithm according to the parameters gathered from both the remote cloud and the mobile devices. All these interactions between the agent and the mobile devices as well as the cloud rely on the middleware. Next, we introduce the discovery, matching and allocation mechanisms based on the middleware.

3.2.2. Discovery mechanism

In order to achieve offloading, information about the device, the task and the network bandwidth is required. Thus, in the middleware, the profiler is responsible for discovering or detecting the factors required for offloading. The device's profiler monitors the condition of the device, such as CPU frequency and power consumption. Besides, it also detects the characteristics of the offloadable tasks, such as the number of CPU cycles required for their execution and the size of the task. Moreover, the network bandwidth is also measured by the profiler. The profiler on the device gathers the above parameters and delivers them to the solver for further processing. Therefore, the parameters used in Section 4 can be obtained by the device's profiler.

3.2.3. Matching mechanism

The matching mechanism focuses on verifying whether the tasks are suitable for remote execution. The solver on the device and the scheduler on the agent cooperatively achieve this mechanism. First, we introduce the matching mechanism on the device's solver.

The solver runs the device's filtering process based on the received information and judges whether the tasks can satisfy the offloading constraints. Here, we introduce the concept of beneficial offloading, which refers to the case where the energy consumption of remote execution is no more than that of local execution. The device's solver judges which tasks constitute beneficial offloading. If a task can benefit from offloading, then the mobile device delivers the gathered parameters in the form of an XML file [31] as a request to the agent. Otherwise, the profiler keeps collecting the parameters without sending a request. According to the definition of beneficial offloading, a bandwidth threshold can be derived, which can be considered as the criterion of beneficial offloading; the detailed explanation is given in Section 4. Based on this threshold, the preliminary decision on whether the offloading requirement is satisfied can be made on the device. Clearly, the matching mechanism in the proposed framework can reduce the invalid requests sent to the agent, avoid the extra energy consumption on the device and reduce the number of requests processed by the agent.

In addition, the matching mechanism also works on the agent's profiler. After the agent receives the beneficial offloading tasks from the devices, the agent's profiler runs a filtering process to check whether the offloaded tasks can satisfy the completion time constraint. Therefore, a task filtering process is designed, which can be regarded as further matching. If a task cannot meet the completion time constraint, it is rejected by the agent and executed on the device. Otherwise, the scheduler allocates the task execution destination. The detailed explanation of the matching procedure on the agent is given in Section 5.

3.2.4. Allocation mechanism

The requests sent to the scheduler satisfy the energy and time constraints. The major objective of the scheduler is to allocate the limited bandwidth resources to the devices, aiming to find an optimal offloading strategy. In other words, the scheduler allocates resources by running Algorithm 1 to maximize the energy saving of all devices, which is given in Section 5.

After the scheduler finishes running the algorithm, if a task is not selected for remote execution, the executor returns an offloading failure to the mobile device. Otherwise, the executor on the agent carries out a series of interactions with the remote cloud as well as the mobile devices to finish the task computation offloading, which is illustrated from ③ to ⑥ in Fig. 1. The allocation procedures on the middleware are as follows. To start with, the executor offloads the tasks to the agent, and the agent, on behalf of the mobile devices, delivers all of the tasks to the cloud for execution. After retrieving the outcomes of the tasks from the cloud, the agent finally hands the results over to the mobile devices.

Based on the middlewares on the device and the agent, the discovery, matching and allocation mechanisms can therefore work cooperatively to realize offloading.

4. Problem formulation

4.1. Computation task model

In our framework, we assume that the tasks to be executed are computation-intensive, i.e., the size of the task is small but the computation is heavy.

Let U_m denote device m, where m ∈ {1, 2, ..., n} and n is the total number of mobile devices. The offloading parameters of U_m can be regarded as a tuple (J_m, D_m, R_m, a_m): J_m describes the offloading task of U_m, D_m reflects the device condition, R_m denotes the computation resources utilized on the remote cloud, and a_m is the offloading decision. These parameters are described in detail as follows.

J_m = (C_m, B_m, T_m^max), where C_m represents the number of CPU cycles required to execute task J_m, and B_m denotes the number of bits that U_m needs to upload to the remote cloud. It is worth mentioning that C_m and B_m are two independent parameters: for example, a small piece of uploaded code could consume a large number of CPU cycles, and vice versa. The uploading time depends on the uploaded bits B_m, whereas C_m depends on the computation level of the task, and both the local execution time T_m^l and the local energy consumption E_m^l depend on C_m. T_m^max represents the task completion deadline of U_m.

D_m = (P_m^t, P_m^e, f_m^l), where P_m^t denotes the wireless transmission power of U_m if the task is executed remotely, P_m^e is the execution power of U_m if the task is executed locally, and f_m^l indicates the number of CPU cycles executed per second, which reflects the computation capability of the local device.

R_m = (f^c, r_m), where f^c represents the cloud computation capability and r_m denotes the uplink bandwidth for computation offloading between the mobile device and the agent. One bandwidth unit specifies the minimum bandwidth required to support computation offloading. The total bandwidth R as well as the uplink bandwidth r_m can then be expressed as integer multiples of the bandwidth unit. Therefore, we assume that only integer multiples of the basic bandwidth unit are allocated as wireless resources [29].

a_m is the decision variable of U_m, which indicates whether device m decides to execute the task locally (a_m = 0) or offload it to the cloud (a_m = 1). For convenience of presentation, the notations used in this paper are listed in Table 1.

Table 1
Notations used in this paper.

Notation          Description
U_m               The device m
T_m^c / T_m^l     Cloud execution / local execution time of U_m
E_m^c / E_m^l     Remote / local energy consumption of U_m
T_m^max           The maximum time tolerance of U_m
P_m^t / P_m^e     Transmitting power / executing power of U_m
r_m / R           Available bandwidth of U_m / total available bandwidth
B_m / C_m         Bits / cycles of the offloading task of U_m
f_m^l / f^c       CPU frequency of U_m / remote cloud
a_m               The binary offloading decision variable of U_m

When U_m decides to run the task locally, the execution time T_m^l and the execution energy consumption E_m^l are defined, respectively, as

T_m^l = C_m / f_m^l,    (1)

E_m^l = (C_m / f_m^l) · P_m^e.    (2)

If the task is permitted to execute on the cloud, then the computation offloading delay T_m^c, including the transmission time and the execution time on the cloud, and the energy consumption E_m^c are given, respectively, by

T_m^c = B_m / r_m + C_m / f^c,    (3)

E_m^c = (B_m / r_m) · P_m^t.    (4)

In this work, similar to existing work [13,19,32], we neglect the time and energy consumption of the cloud returning the computation outcome to the mobile devices. This is because the computation input data not only includes program codes and input parameters but also contains the related data stored in the heap or stack of the mobile device. Therefore, compared with the size of the computation input data, the computation outcome is generally much smaller for many applications (e.g., face recognition).

4.2. Beneficial offloading

In this subsection, we give the definition of beneficial offloading.

Definition 1 (Beneficial Offloading). Given an offloadable task, if cloud execution does not incur higher energy consumption than local execution, the device can benefit from such computation offloading behavior, which is called beneficial offloading.

According to this definition, beneficial offloading can be described by (5) and considered as the energy constraint for task offloading. Beneficial offloading plays an important role in our framework. On the one hand, from the user's perspective, beneficial offloading prolongs the battery life. On the other hand, from the service provider's point of view, when determining the wireless access schedule for computation offloading, we need to ensure that the selected devices benefit from cloud execution.

Therefore, in our framework, only beneficial offloading requests are sent to the agent for remote execution. Accordingly, we design a filtering process on the mobile device to check whether a task can achieve beneficial offloading, which is an effective way to reduce the request delay and avoid energy consumption on requesting. The key of the filtering process is to find a parameter that judges whether an offloadable task satisfies the energy constraint. Combining (2) and (4) with (5), we obtain the bandwidth threshold r_th^c in (6), which is a suitable parameter for the filtering:

E_m^c ≤ E_m^l,    (5)

r_m ≥ r_th^c = (f_m^l · P_m^t · B_m) / (C_m · P_m^e).    (6)

It is not difficult to observe from (6) that the bandwidth threshold r_th^c can be calculated from the parameters given by the profiler of the mobile device. Moreover, r_th^c is not fixed; it can be updated periodically according to the runtime of the mobile device and the complexity of the task. If the estimated bandwidth r_m is less than the bandwidth threshold r_th^c, the device cannot benefit from offloading and no request is sent to the agent. Otherwise, a beneficial offloading request is sent to the agent, and the agent then judges whether the task meets the other constraints. The filtering operation for beneficial offloading is executed on the mobile device and is part of the complete workflow of the filtering procedure, as illustrated in Fig. 3.

4.3. Optimization problem formulation

In this subsection, we formulate the offloading problem among multiple users. There are two kinds of processing approaches for offloading: online processing and batch processing. One of the advantages of online processing is that it returns the computation result very fast. Commonly, offloading tasks are processed online in the FCFS (First Come First Serve) manner, without requiring information about all offloading tasks. The drawback of this approach is therefore that it cannot obtain the optimal solution. In contrast, batch processing can obtain the optimal solution among tasks, but its shortcoming is that it needs the offloading information of all tasks before processing. Clearly, delay-sensitive applications are more suitable for online processing. In our scenario, however, we consider that multiple computation-intensive tasks are uploaded simultaneously to achieve maximum energy saving. Therefore, we gather the offloading information of all tasks in advance and employ batch processing to achieve our goal.

In order to maximize the energy saving for all of the devices (MESA) in the proposed computation offloading framework, we formulate the MESA problem over the decision variables (a_m) as

OPT-1:  max_{a_1,...,a_n}  Σ_{m=1}^{n} a_m (E_m^l − E_m^c)    (7)

subject to

(1 − a_m) T_m^l + a_m T_m^c ≤ T_m^max,  ∀m ∈ {1, ..., n},    (8)

Σ_{m=1}^{n} r_m a_m ≤ R,    (9)

a_m ∈ {0, 1},  ∀m ∈ {1, ..., n},    (10)

where the completion time constraint (8) specifies that the task completion time should not exceed the maximum completion time, i.e., the task completion deadline T_m^max; the bandwidth constraint (9) ensures that the overall bandwidth required by all mobile devices for offloading tasks cannot exceed the total available bandwidth R; and the offloading decision constraint (10) requires the offloading decision variable a_m of each mobile device to be 0 or 1.

5. DPAF algorithm

In this section, we propose a Dynamic Programming After Filtering (DPAF) algorithm to solve the optimization problem OPT-1. First, we design a task filtering process based on the completion time constraint (8) and then transform problem OPT-1 into a 0–1 Knapsack problem after checking the completion time constraint. Finally, we employ dynamic programming to solve the Knapsack problem.

5.1. Offloading task filtering

Since not all offloading requests satisfy the constraints (8)–(10), we need to design a filtering process on the agent that reduces the number of offloading tasks and thereby improves the efficiency of offloading selection. Therefore, we check whether an offloading task satisfies the completion time constraint (8). Combining (1) and (3), the completion deadline constraint (8) can be rewritten as

a_m (B_m / r_m + C_m / f^c − C_m / f_m^l) ≤ T_m^max − C_m / f_m^l.    (11)

Now we discuss the deduced completion deadline constraint (11) according to the offloading decision a_m. If the task is executed on the device, i.e., a_m = 0, then (11) is satisfied. This is reasonable, since in this case the maximum time tolerance T_m^max allows local execution of the task, i.e., T_m^max ≥ C_m / f_m^l = T_m^l. If the task is offloaded to the cloud, i.e., a_m = 1, we obtain

r_m ≥ r_th^s = B_m / (T_m^max − C_m / f^c).    (12)

In (12), the term T_m^max − C_m / f^c can be considered as the maximum transmission time for U_m, and r_th^s is the threshold denoting the minimum required bandwidth. Therefore, we can adopt r_th^s to check whether the time constraint of an offloading task is satisfied. If r_m ≥ r_th^s, the task is a suitable offloading candidate to run on the remote cloud. Otherwise, if r_m < r_th^s, the request from U_m is rejected by the agent and the task is not allowed to execute on the cloud. The detailed filtering process on the agent is shown in Fig. 3.

Fig. 3. Completion time constraint based task filtering process.

It is worth noting that the completion time constraint is checked on the agent. Checking it on the mobile devices would require the cloud frequency parameter f^c to be delivered continuously to all mobile devices via the agent, and such frequent transmission of f^c between the agent and the mobile devices would place a great burden on the mobile devices. The task filtering process in Fig. 3 aims to shorten the response time of the offloading task from the remote cloud and, furthermore, greatly reduces the number of invalid offloading tasks. Although the number of original offloading requests is N, after filtering on the agent the number of offloading requests is reduced to λ.

Therefore, we can transform the MESA optimization problem OPT-1 into

OPT-2:  max_{a_1,...,a_λ}  Σ_{m=1}^{λ} a_m (E_m^l − E_m^c),  a_m ∈ {0, 1},    (13)

subject to

Σ_{m=1}^{λ} r_m a_m ≤ R.

Next, we prove that the transformed problem OPT-2 is NP-hard.

Theorem 1. The problem OPT-2 is NP-hard.

Proof. Let E_m^s be the energy saved by offloading the task of U_m, so E_m^s = E_m^l − E_m^c. Let O denote the set of valid offloading requests and r_m the bandwidth required by U_m. Then the MESA problem can be written as

max  Σ_{m=1}^{λ} E_m^s a_m,  s.t.  Σ_{m=1}^{λ} r_m a_m ≤ R  and  a_m ∈ {0, 1}.    (14)

Problem (14) is equivalent to the following 0–1 Knapsack problem:

max  Σ_{i=1}^{n} v_i x_i,  s.t.  Σ_{i=1}^{n} w_i x_i ≤ W  and  x_i ∈ {0, 1}.    (15)

Therefore, the MESA problem is equivalently transformed into the classical 0–1 Knapsack problem, with E_m^s, r_m, R corresponding to v_i, w_i, W. ■

5.2. Task offloading selection

In this subsection, we adopt the dynamic programming method to solve the 0–1 Knapsack problem. The basic idea is to regard the λ offloading requests as a series of decisions on whether each task is executed on the distant cloud. Thus, we employ the recursive equation of dynamic programming to calculate the optimal energy saving under the bandwidth constraint. Let dp[m][l] denote the optimal energy saving for the first m devices under l bandwidth units. Then

dp[m][l] =
  0,                                                  if l = 0,
  dp[m − 1][l],                                       if r_m > l,
  max(dp[m − 1][l − r_m] + E_m^s, dp[m − 1][l]),      if r_m ≤ l.    (16)
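Eq. (16) is the standard 0–1 Knapsack recursion; its cases are discussed below and formalized in Algorithm 1. As an illustration, the following Python sketch implements the same recursion together with a backtracking step that recovers the selected request set O. The function name, the input format and the toy numbers are assumptions for the example rather than the authors' code, and requests violating the time threshold of (12) are assumed to have been filtered out already, as DPAF does.

```python
# Illustrative sketch of recursion (16) with backtracking; hypothetical names.
def dpaf_select(requests, R):
    """requests: list of (r_m, e_m) pairs that already passed the filters of
    (6) and (12); r_m is in bandwidth units, e_m = E_l - E_c is the saving.
    R: total available bandwidth units."""
    n = len(requests)
    # dp[m][l]: maximum energy saving using the first m requests and l units.
    dp = [[0.0] * (R + 1) for _ in range(n + 1)]
    for m in range(1, n + 1):
        r_m, e_m = requests[m - 1]
        for l in range(R + 1):
            if r_m > l:                         # request m does not fit
                dp[m][l] = dp[m - 1][l]
            else:                               # reject vs. accept request m
                dp[m][l] = max(dp[m - 1][l], dp[m - 1][l - r_m] + e_m)
    # Backtrack from m = n down to 1 to recover the offloading decisions a_m.
    selected, l = [], R
    for m in range(n, 0, -1):
        if dp[m][l] != dp[m - 1][l]:            # request m was accepted
            selected.append(m - 1)
            l -= requests[m - 1][0]
    return dp[n][R], sorted(selected)

# Toy instance: (bandwidth units, saved energy in mJ) for four filtered requests.
best, chosen = dpaf_select([(3, 120.0), (2, 90.0), (4, 200.0), (1, 30.0)], R=6)
print(best, chosen)    # 290.0 [1, 2] for these assumed numbers
```

The table has (λ + 1)(R + 1) entries, each filled in constant time, which matches the O(λR) complexity discussed after Algorithm 1 below.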

Eq. (16) shows how the value of dp[m][l] can be recursively computed. In the following, we discuss the calculation of dp[m][l] from three aspects.

• If no bandwidth is available to offload tasks, i.e., l = 0, then no energy saving is possible, i.e., dp[m][l] = 0.
• If the available bandwidth is less than the bandwidth required by the offloading task of U_m, i.e., r_m > l, then the optimal energy saving is equal to dp[m − 1][l].
• If the available bandwidth allows the remote execution of the task of U_m, i.e., r_m ≤ l, then the agent needs to decide whether to permit U_m to offload.

In the last case, if a_m = 0, the offloading task is rejected by the agent and dp[m][l] is derived from dp[m − 1][l]. If a_m = 1, the agent accepts the task and dp[m][l] is derived from dp[m − 1][l − r_m] by subtracting the utilized bandwidth units r_m and adding the energy saving E_m^s of the task offloading. Therefore, whether to choose the offloading task of U_m for remote execution should be decided so as to achieve the maximum energy saving, which gives dp[m][l] = max(dp[m − 1][l − r_m] + E_m^s, dp[m − 1][l]).

Therefore, the DPAF algorithm can be summarized in Algorithm 1.

Algorithm 1 DPAF algorithm based on dynamic programming
Input: total available bandwidth units R; the number of offloading requests after filtering λ; bandwidth utilized and energy saved for U_m: r_m and E_m^s, ∀m ∈ {1, ..., λ};
Output: the set of selected offloading requests O; maximum energy saving E_sum;
1: Initialize dp[m][0] = 0, dp[0][l] = 0, ∀m ∈ {1, ..., λ}, ∀l ∈ {1, ..., R};
2: for m = 1 to λ do
3:   for l = 1 to R do
4:     if (r_m > l) then
5:       dp[m][l] = dp[m − 1][l]
6:     else
7:       dp[m][l] = max(dp[m − 1][l − r_m] + E_m^s, dp[m − 1][l])
8:     end if
9:   end for
10: end for
11: E_sum = dp[λ][R]
12: l = R
13: for m = λ downto 1 do
14:   if (dp[m][l] == dp[m − 1][l]) then
15:     a_m = 0;
16:   else
17:     a_m = 1;
18:     l = l − r_m
19:     insert a_m into set O
20:   end if
21: end for

It is not difficult to observe that the DPAF algorithm consists of two stages. The first, lines 1 to 12, computes the maximum energy saving by dynamic programming. The second, lines 13 to 21, recovers a solution that achieves this maximum energy saving. Clearly, the time complexity of the algorithm is O(λR), since computing the maximum energy saving takes O(λR) time and recovering the offloading strategy under the maximum energy saving takes O(λ) time.

6. Performance evaluation

In this section, we first introduce the experiment settings and then give experimental results to demonstrate the advantages of our proposed DPAF algorithm and computation offloading scheme. In the experiments, we first evaluate our proposed DPAF algorithm by comparison with other approaches. Second, we compare our offloading framework with other schemes. Finally, we show that our framework has low latency and low energy consumption.

6.1. Experiment setup

We run all of the algorithms on an Ubuntu 14.04 LTS system using Python 2.7. All of the parameters follow a uniform random distribution.

The CPU clock frequency of each device is set randomly between 1 GHz and 1.5 GHz. The computation capability of the cloud server is 3.4 GHz. We assume that the WiFi radio power is between 257 and 325 mW and the processing power of the device is between 644 and 700 mW.

We let the available bandwidth between mobile devices and the agent range from 100 kbps to 800 kbps. The total bandwidth varies from 10 to 20 Mbps. An offloading task is described by its uploaded bits and required cycles. Since different tasks have different execution features, C_m is set from 200 to 2000 Mega cycles and B_m from 10 kB to 1 MB. The number of mobile devices varies from 50 to 300 for each virtual machine instance. The maximum tolerant delay T_m^max is taken as a variable coefficient from 1.0 to 2.0 times the local processing time T_m^l.

We adopt the Load-input Data Ratio (LDR) to characterize the complexity of an offloading task. Similar to [33], we let LDR = C_m / B_m. A high LDR implies that the task is computation-intensive and suitable for execution on the center cloud. On the contrary, a low LDR means that the task is communication-intensive and can benefit from local execution.

6.2. Performance of DPAF algorithm

6.2.1. Performance on request filtering

We first evaluate the performance of the DPAF algorithm in terms of request filtering. The filtering process aims to reduce the offloading requests that cannot meet the constraints and to shorten the decision-making latency. Fig. 4 depicts the changes of offloading requests before/after request filtering for different numbers of users [100, 200, 300], corresponding to cases A, B and C, respectively. It is not difficult to observe that in all cases the offloaded requests are greatly reduced after filtering. The valid offloading requests increase with the total offloading requests. Moreover, the filtering process works better on the tasks with lower LDR, since they are communication-intensive and more likely to violate the completion time constraint. Therefore, in each case, the number of offloading tasks with LDR = 1.5 is larger than that with LDR = 1.0. In other words, the higher the LDR of an offloading task is, the more likely the task will be offloaded.

6.2.2. Performance on energy saving

Before studying the relationship between the energy saving and the offloading tasks as well as the bandwidth, we first focus on the largest instance size that the algorithm can solve through dynamic programming. According to the O(λR) complexity of Algorithm 1, the processing time obviously increases with the number of offloading tasks and the size of the total bandwidth, which is illustrated in Fig. 5. Although our algorithm adopts batch processing, it is not realistic for mobile devices to wait a long time to receive the offloading response from the cloud. Therefore, the maximum acceptable time should be set in advance. According to [5], the response time is not acceptable if it is longer than 1 s. Consequently, we adopt 1 s as the maximum processing time.
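For reference, one round of random instances within the parameter ranges of Section 6.1 could be generated as in the sketch below. The paper states only that the parameters are drawn uniformly at random, so the helper, its units and the instance size are assumptions for illustration.

```python
# Assumed instance generator for the ranges in Section 6.1; illustrative only.
import random

def random_device():
    f_l   = random.uniform(1.0e9, 1.5e9)     # local CPU: 1-1.5 GHz
    P_t   = random.uniform(0.257, 0.325)     # WiFi radio power: 257-325 mW
    P_e   = random.uniform(0.644, 0.700)     # processing power: 644-700 mW
    C     = random.uniform(200e6, 2000e6)    # required cycles: 200-2000 Mega cycles
    B     = random.uniform(10e3, 1e6) * 8    # upload size: 10 kB-1 MB, in bits
    r     = random.uniform(100e3, 800e3)     # per-device bandwidth: 100-800 kbps
    T_l   = C / f_l                          # local execution time, Eq. (1)
    T_max = random.uniform(1.0, 2.0) * T_l   # tolerant delay: 1.0-2.0 x T_l
    return {"f_l": f_l, "P_t": P_t, "P_e": P_e, "C": C, "B": B,
            "r": r, "T_max": T_max, "LDR": C / B}

devices = [random_device() for _ in range(200)]  # e.g., one run with 200 devices
```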

Fig. 4. The offloaded requests are reduced after offloading task filtering.

Fig. 5. The processing time under different conditions.

From Fig. 5, we can observe that the response time is larger than 1 s when the bandwidth is 20 Mbps and the number of tasks is 250 or 300. This is why we do not consider 20 Mbps in Fig. 6. However, depending on the acceptable time and the bandwidth condition, the instance size that the algorithm can solve may vary. For example, the largest instance of offloading tasks that DPAF can solve is about 300 under the condition of an acceptable time of 1 s and a maximum bandwidth of 15 Mbps, whereas the size of the instances that can be solved by dynamic programming alone is about 130, because the filtering process of DPAF also plays an important role in reducing invalid requests. Thus, compared with only adopting dynamic programming, DPAF works at a lower time complexity.

Furthermore, we focus on how much energy can be saved with different numbers of offloading tasks and variable bandwidth. In the simulation, each time we randomly generate more than 50 tasks based on the last round of offloading tasks. The number of offloading tasks n varies from 50 to 300, and the total bandwidth R is set from 5000 to 15 000 kbps. As illustrated in Fig. 6, when the number of offloading tasks is fixed, much more total energy can be saved as the available bandwidth increases. This is because when more bandwidth is provided, more tasks can be executed remotely and more energy can be saved for the devices. In addition, when the total bandwidth R is fixed, there are more suitable devices that can consume less energy. However, it is possible that there is no obvious increase in energy saving; for example, the saved energy is basically stable from 200 to 300 tasks, because the offloaded tasks fully occupy the bandwidth and there is no available bandwidth left for other tasks.

Fig. 6. The comparisons of different bandwidths in multiple tasks.

6.2.3. Comparison among DPAF, DPWF and GOAF algorithms

In this part, we compare the proposed DPAF algorithm with other algorithms: the dynamic programming algorithm without the filtering process (DPWF), which is also a solution to problem OPT-1, and the greedy offloading algorithm after filtering (GOAF) in [34].

Fig. 7 depicts the evolution of DPAF, DPWF and GOAF in terms of energy saving. We randomly generate 200 mobile devices' configurations for different bandwidths, i.e., [5, 10, 15, 20] Mbps. It is clear that DPWF is the worst and has hardly any effect on energy saving. The reason is that the users selected by the dynamic program satisfy the bandwidth constraint to achieve maximum energy saving but may not meet the completion time constraint. Therefore, the filtering process is essential to energy saving.

In addition, we can also observe from Fig. 7 that DPAF outperforms GOAF. For example, when the total bandwidth is 5 Mbps, DPAF saves 315 mJ more than GOAF. As the total bandwidth increases, DPAF saves 569 mJ, 1469 mJ and 1392 mJ more energy, respectively. DPAF achieves more energy saving than GOAF because the greedy GOAF always considers the maximum energy saving at the current stage, whereas DPAF always selects the tasks with higher LDR to be offloaded under the bandwidth constraint.

6.3. Performance of offloading scheme

In this subsection, in order to show the flexibility of our offloading scheme, we first compare our computation offloading scheme with two baseline schemes: all-local execution and all-remote execution. All-local execution means that all of the tasks are processed on the local device. All-remote execution means that all of the tasks are offloaded to the cloud. Furthermore, we evaluate the impact of different LDRs on the energy saving of our offloading scheme.

Fig. 7. Comparisons of DPAF, DPWF and GOAF.

6.3.1. Comparison of different offloading schemes

In this simulation, we assume that the bandwidth is sufficient, so that all of the selected tasks can be offloaded to the cloud. Fig. 8 depicts the energy consumption of the three offloading schemes over different numbers of offloading tasks. It is not difficult to observe that the energy consumption of our offloading scheme is approximately 50% of the energy consumption of all-local execution. Under the bandwidth constraint it is unrealistic for all tasks to be offloaded to the cloud simultaneously. All-local execution consumes the most energy, and not all tasks can benefit from remote execution because of low bandwidth or low LDR. However, our offloading solution can schedule tasks on the agent, which makes the mobile devices consume the least energy.

Fig. 8. The energy consumption of different offloading schemes.

6.3.2. Impact of different LDRs on energy saving

We further focus on what kind of tasks are more likely to be executed remotely and evaluate the impact of different LDRs on the energy saving of our offloading scheme. We set the LDR of offloading tasks to 0.1, 0.5, 1, 5 and 10. We randomly generate 100 offloading tasks and then change the LDRs of the tasks.

As illustrated in Fig. 9, under the same device parameters, the higher the LDR is, the more energy is saved. For instance, when LDR = 0.1 or 0.5, no task is offloaded to the cloud, since all of the tasks are communication-intensive and remote execution leads to more energy consumption. However, when LDR = 5 or 10, our offloading scheme achieves significant energy saving. With the increase of available bandwidth, the same LDR also has different performance. For example, when LDR = 5 and R = 5000, 10 000, 20 000 kbps, the saved energy is 1525 mJ, 2941 mJ and 5502 mJ, respectively. Therefore, the saved energy increases with the available bandwidth for a given LDR.

Fig. 9. The changes of energy saving for different LDRs.

Furthermore, we can observe from Fig. 10 that the number of offloading tasks also increases when the LDR and the available bandwidth increase. The reason is that a larger LDR under a given bandwidth can accept more tasks. For example, the number of offloading tasks increases from 0 to 7 as the LDR increases when R = 10 000 kbps. It is also clear that more tasks can be executed if larger bandwidth is provided when the LDR is fixed. Besides, the task with the highest LDR is chosen first to be offloaded to the remote cloud.

Fig. 10. The number of offloading tasks for different LDRs.

6.4. Comparison of request delay and energy consumption

In this subsection, we evaluate the request response delay of our agent-based computation offloading framework by comparing it with other offloading frameworks.

In order to show the request response delay, we utilize the Ping instruction to test the request transmission status between the mobile device and the agent. For each measurement, the Ping instruction first sends ICMP-Request packets from the mobile device to the agent or cloud and then receives the packets sent back by the agent or cloud [35]. We compare the response delay of three frameworks:

• Traditional offloading framework (Framework 1): There are no agents in the framework. Each device sends its uploading request to the remote cloud, which makes the offloading decision and then returns it to the device.
• Agent-based offloading framework (Framework 2): There are agents in the framework. Requests are sent to the agent, instead of the remote cloud, to make the offloading decision.
• Our framework: There are also agents in our framework. The devices run a filtering process to check whether the tasks satisfy the energy constraint before uploading requests to the agent. Thus, only beneficial offloading tasks are permitted to send requests to the agent for remote execution.

We set the size of an XML request to 1 KB and the number of requests sent out to 100. Figs. 11 and 12 depict the average request delay and the total energy consumption of 100 requests for the three frameworks, respectively. It is not difficult to observe from Fig. 11 that the offloading delay of the requests sent to the agent in Framework 2 and our framework is much shorter than that to the center cloud in Framework 1. In particular, the average request delay to the agent is less than 10 ms, whereas the request delay to the remote cloud is about 50 ms. The reason is that the agent is one hop away from the devices, whereas access to the cloud needs to go through complex networks.

Fig. 11. The comparison of the request delay.

Besides, we can observe from Fig. 12 that the energy consumption of Framework 1 is about five times more than that of Framework 2. This is because the longer round trip time (RTT) in Framework 1 leads to more energy consumption [7]. In particular, our framework achieves the least average request delay and total energy consumption compared to the other two frameworks, because the tasks that cannot meet the beneficial offloading requirement do not upload their requests to the agent, so that time and energy are saved. Therefore, our proposed framework not only achieves quick response to offloading requests for mobile devices but also reduces the energy consumption during the requesting period.

Fig. 12. The comparison of energy consumption for request.

7. Conclusions and future work

In this paper, we propose an agent-based computation offloading framework with short request response latency. In the framework, we consider the task completion time constraint and the link bandwidth constraint, aiming to maximize the energy saving for all devices (MESA), which makes the offloading scheme more realistic. Furthermore, we formulate the MESA problem as a classical 0–1 Knapsack problem. To solve the problem, we propose a DPAF algorithm, which reduces the energy consumption as much as possible under the constraints of completion time and link bandwidth. In particular, the filtering process we adopt greatly reduces the number of offloading requests, which is helpful in reducing the complexity of the DPAF algorithm. Simulation results show that our offloading framework outperforms the other offloading schemes, including the all-local execution scheme, in terms of energy saving and request response delay. Also, our DPAF algorithm achieves more energy saving than other offloading algorithms.

Our future work includes two aspects. First, we intend to consider a cloudlet system to maximize energy saving for multiple users. Second, we will study more complicated offloading task relationships.

Acknowledgments

This work was supported by the National Natural Science Foundation of China (No. 61373179, 61373178, 61402381, 61503309, 61772432), the Natural Science Key Foundation of Chongqing (cstc2015jcyjBX0094), the Natural Science Foundation of Chongqing (CSTC2016JCYJA0449), the China Postdoctoral Science Foundation (2016M592619), the Chongqing Postdoctoral Science Foundation (XM2016002), and the Fundamental Research Funds for the Central Universities (XDJK2015C010, XDJK2015D023, XDJK2016A011, XDJK2016D047, XDJK201710635069).

References

[1] T. Soyata, R. Muraleedharan, C. Funai, M. Kwon, W. Heinzelman, Cloud-vision: Real-time face recognition using a mobile-cloudlet-cloud acceleration architecture, in: 2012 IEEE Symposium on Computers and Communications (ISCC), 2012, pp. 59–66.
[2] M.R. Palacín, Recent advances in rechargeable battery materials: a chemist's perspective, Chem. Soc. Rev. 38 (9) (2009) 2565.
[3] J. Kwak, Y. Kim, J. Lee, S. Chong, DREAM: Dynamic resource and task allocation for energy minimization in mobile cloud systems, IEEE J. Sel. Areas Commun. 33 (12) (2015) 2510–2523.
[4] B.-G. Chun, S. Ihm, P. Maniatis, M. Naik, A. Patti, CloneCloud: Elastic execution between mobile device and cloud, in: Proceedings of the Sixth Conference on Computer Systems, 2011, pp. 301–314.
[5] M. Satyanarayanan, P. Bahl, R. Cáceres, N. Davies, The case for VM-based cloudlets in mobile computing, IEEE Pervasive Comput. 8 (4) (2009) 14–23.
[6] D. Duolikun, T. Enokido, M. Takizawa, An energy-aware algorithm to migrate virtual machines in a server cluster, IJSSC 7 (1) (2017) 32–42.

[7] E. Cuervo, A. Balasubramanian, D.-k. Cho, A. Wolman, S. Saroiu, R. Chandra, P. Bahl, MAUI: Making smartphones last longer with code offload, in: Proceedings of the 8th International Conference on Mobile Systems, Applications, and Services, 2010, pp. 49–62.
[8] Y. Li, W. Gao, Code offload with least context migration in the mobile cloud, in: 2015 IEEE Conference on Computer Communications (INFOCOM 2015), Kowloon, Hong Kong, April 26–May 1, 2015, pp. 1876–1884.
[9] S. Yang, D. Kwon, H. Yi, Y. Cho, Y. Kwon, Y. Paek, Techniques to minimize state transfer costs for dynamic execution offloading in mobile cloud computing, IEEE Trans. Mob. Comput. 13 (11) (2014) 2648–2660.
[10] S. Deng, L. Huang, J. Taheri, A.Y. Zomaya, Computation offloading for service workflow in mobile cloud computing, IEEE Trans. Parallel Distrib. Syst. 26 (12) (2015) 3317–3329.
[11] Y. Liu, M.J. Lee, An effective dynamic programming offloading algorithm in mobile cloud computing system, in: 2014 IEEE Wireless Communications and Networking Conference (WCNC), 2014, pp. 1868–1873.
[12] E. Meskar, T.D. Todd, D. Zhao, G. Karakostas, Energy aware offloading for competing users on a shared communication channel, IEEE Trans. Mob. Comput. 16 (1) (2017) 87–96.
[13] X. Chen, Decentralized computation offloading game for mobile cloud computing, IEEE Trans. Parallel Distrib. Syst. 26 (4) (2015) 974–983.
[14] M. Nir, A. Matrawy, M. St-Hilaire, An energy optimizing scheduler for mobile cloud computing environments, in: 2014 IEEE Conference on Computer Communications Workshops (INFOCOM WKSHPS), 2014, pp. 404–409.
[15] L. Yang, J. Cao, H. Cheng, Y. Ji, Multi-user computation partitioning for latency sensitive mobile cloud applications, IEEE Trans. Comput. 64 (8) (2015) 2253–2266.
[16] X. Liu, Y. Li, H.H. Chen, Wireless resource scheduling based on backoff for multiuser multiservice mobile cloud computing, IEEE Trans. Veh. Technol. 65 (11) (2016) 9247–9259.
[17] K.M. Sim, Agent-based cloud computing, IEEE Trans. Serv. Comput. 5 (4) (2012) 564–577.
[18] S. Choi, M. Baik, H. Kim, E. Byun, H. Choo, A reliable communication protocol for multiregion mobile agent environments, IEEE Trans. Parallel Distrib. Syst. 21 (1) (2010) 72–85.
[19] K. Kumar, Y.H. Lu, Cloud computing for mobile users: Can offloading computation save energy? Computer 43 (4) (2010) 51–56.
[20] K. Liu, J. Peng, H. Li, X. Zhang, W. Liu, Multi-device task offloading with time-constraints for energy efficiency in mobile cloud computing, Future Gener. Comput. Syst. 64 (2016) 1–14.
[21] T. Liu, F. Chen, Y. Ma, Y. Xie, An energy-efficient task scheduling for mobile devices based on cloud assistant, Future Gener. Comput. Syst. 61 (2016) 1–12.
[22] X. Chen, L. Jiao, W. Li, X. Fu, Efficient multi-user computation offloading for mobile-edge cloud computing, IEEE/ACM Trans. Netw. 24 (5) (2016) 2795–2808.
[23] C. Wang, C. Liang, F.R. Yu, Q. Chen, L. Tang, Computation offloading and resource allocation in wireless cellular networks with mobile edge computing, IEEE Trans. Wireless Commun. 16 (8) (2017) 4924–4938.
[24] I. Behera, C.R. Tripathy, Performance modelling and analysis of mobile grid computing systems, Int. J. Grid Util. Comput. 5 (1) (2014) 11–20.
[25] T.M. Lynar, R.D. Herbert, S. Chivers, W.J. Chivers, Resource allocation to conserve energy in distributed computing, Int. J. Grid Util. Comput. 2 (1) (2011) 1–10.
[26] T.M. Lynar, R.D. Herbert, W.J. Chivers, Simon, Reducing energy consumption in distributed computing through economic resource allocation, Int. J. Grid Util. Comput. 4 (4) (2013) 231–241.
[27] G. Feng, R. Buyya, Maximum revenue-oriented resource allocation in cloud, Int. J. Grid Util. Comput. 7 (1) (2016) 12–21.
[28] S.C. Shah, M.-S. Park, F.H. Chandio, Resource allocation scheme to reduce communication cost in mobile ad hoc computational grids, IJSSC 1 (4) (2011) 270–280.
[29] Y. Liu, M.J. Lee, Y. Zheng, Adaptive multi-resource allocation for cloudlet-based mobile cloud computing system, IEEE Trans. Mob. Comput. 15 (10) (2016) 2398–2410.
[30] L. Yang, J. Cao, S. Tang, T. Li, A.T.S. Chan, A framework for partitioning and execution of data stream applications in mobile cloud computing, in: 2012 IEEE 5th International Conference on Cloud Computing (CLOUD), 2012, pp. 794–802.
[31] K. Elgazzar, P. Martin, H.S. Hassanein, Cloud-assisted computation offloading to support mobile services, IEEE Trans. Cloud Comput. 4 (3) (2016) 279–292.
[32] D. Huang, P. Wang, D. Niyato, A dynamic offloading algorithm for mobile computing, IEEE Trans. Wireless Commun. 11 (6) (2012) 1991–1995.
[33] S. Guo, B. Xiao, Y. Yang, Y. Yang, Energy-efficient dynamic offloading and resource scheduling in mobile cloud computing, in: IEEE INFOCOM 2016 - The 35th Annual IEEE International Conference on Computer Communications, 2016, pp. 1–9.
[34] M.-R. Ra, A. Sheth, L. Mummert, P. Pillai, D. Wetherall, R. Govindan, Odessa: Enabling interactive perception applications on mobile devices, in: Proceedings of the 9th International Conference on Mobile Systems, Applications, and Services (MobiSys'11), 2011, pp. 43–56.
[35] Y.D. Lin, E.T.H. Chu, Y.C. Lai, T.J. Huang, Time-and-energy-aware computation offloading in handheld devices to coprocessors and clouds, IEEE Syst. J. 9 (2) (2015) 393–405.

Zhikai Kuang received the B.S. degree in telecommunications engineering from Southwest University, Chongqing, China, in 2015. She is currently working toward the Ph.D. degree in signal and information processing at Southwest University. Her research interests include stream scheduling in data center networks and software defined networking.

Songtao Guo received the B.S., M.S., and Ph.D. degrees in computer software and theory from Chongqing University, Chongqing, China, in 1999, 2003, and 2008, respectively. He was a professor from 2011 to 2012 at Chongqing University. He is currently a full professor at Southwest University, China. He was a senior research associate at the City University of Hong Kong from 2010 to 2011, and a visiting scholar at Stony Brook University, New York, from May 2011 to May 2012. His research interests include wireless sensor networks, wireless ad hoc networks and parallel and distributed computing. He has published more than 30 scientific papers in leading refereed journals and conferences. He has received many research grants as a principal investigator from the National Science Foundation of China and Chongqing and the Postdoctoral Science Foundation of China.

Jiadi Liu received the B.S. degree in computer science and technology from Jiangnan University in 2009 and the M.S. degree in computer application from Jiangnan University, Wuxi, Jiangsu, China, in 2013. He is currently pursuing the Ph.D. degree in the College of Electrical Information and Engineering, Southwest University, Chongqing, China. His current research interests include game theory, mobile cloud computing, machine learning, convex optimization theory and its applications.

Yuanyuan Yang received the B.Eng. and M.S. degrees in computer science and engineering from Tsinghua University, Beijing, China, and the M.S.E. and Ph.D. degrees in computer science from Johns Hopkins University, Baltimore, Maryland. She is a professor of computer engineering and computer science at Stony Brook University, New York. Her research interests include wireless networks, data center networks, optical networks and high-speed networks. She has published over 300 papers in major journals and refereed conference proceedings and holds seven US patents in these areas. She has served as an Associate Editor-in-Chief and an Associate Editor for IEEE Transactions on Computers and an Associate Editor for IEEE Transactions on Parallel and Distributed Systems. She has also served as a general chair, program chair, or vice chair for several major conferences and a program committee member for numerous conferences. She is an IEEE Fellow.
