
Journal Pre-proof

Adaptive delay-energy balanced partial offloading strategy in Mobile Edge Computing networks

Shumei Liu, Yao Yu, Lei Guo, Phee Lep Yeoh, Branka Vucetic, Yonghui Li

PII: S2352-8648(22)00122-5
DOI: https://doi.org/10.1016/j.dcan.2022.05.029
Reference: DCAN 454

To appear in: Digital Communications and Networks

Received Date: 14 April 2021


Revised Date: 24 May 2022
Accepted Date: 29 May 2022

Please cite this article as: S. Liu, Y. Yu, L. Guo, P.L. Yeoh, B. Vucetic, Y. Li, Adaptive delay-energy
balanced partial offloading strategy in Mobile Edge Computing networks, Digital Communications and
Networks (2022), doi: https://doi.org/10.1016/j.dcan.2022.05.029.

This is a PDF file of an article that has undergone enhancements after acceptance, such as the addition
of a cover page and metadata, and formatting for readability, but it is not yet the definitive version of
record. This version will undergo additional copyediting, typesetting and review before it is published
in its final form, but we are providing this version to give early visibility of the article. Please note that,
during the production process, errors may be discovered which could affect the content, and all legal
disclaimers that apply to the journal pertain.

© 2022 Chongqing University of Posts and Telecommunications. Production and hosting by Elsevier
B.V. on behalf of KeAi Communications Co. Ltd.
Digital Communications and Networks (DCN)

journal homepage: www.elsevier.com/locate/dcan

Adaptive delay-energy balanced partial offloading strategy in Mobile Edge Computing networks

Shumei Liu a,b, Yao Yu *a,b, Lei Guo c, Phee Lep Yeoh d, Branka Vucetic d, Yonghui Li d

a School of Computer Science and Engineering, Northeastern University, Shenyang 110819, China
b Key Laboratory of Intelligent Computing in Medical Image, Ministry of Education, Northeastern University, Shenyang 110819, China
c School of Communication and Information Engineering, Chongqing University of Posts and Telecommunications, Chongqing 400065, China
d School of Electrical and Information Engineering, University of Sydney, Sydney NSW 2006, Australia
Abstract

Mobile Edge Computing (MEC)-based computation offloading is a promising application paradigm for serving large numbers of users with various delay and energy requirements. In this paper, we propose a flexible MEC-based requirement-adaptive partial offloading model to accommodate each user's specific preference regarding delay and energy consumption. To address the dimensional differences between time and energy, we introduce two normalized parameters and then derive the computational overhead of processing tasks. Different from existing works, this paper considers practical variations in the user request patterns, and exploits a flexible partial offloading mode to minimize computation overheads subject to tolerable delay, task workload and power constraints. Since the resulting problem is non-convex, we decouple it into two convex subproblems and present an iterative algorithm to obtain a feasible offloading solution. Numerical experiments show that our proposed scheme achieves a significant improvement in computation overheads compared with existing schemes.

KEYWORDS:
Mobile Edge Computing (MEC), Delay, Energy consumption, Dynamic balance, Partial computation offloading

1. Introduction

With the emerging Internet of Things (IoT) era, a large number of mobile users will be connected to wireless networks, resulting in new applications and services such as connected vehicles, interactive gaming and Augmented Reality (AR) [1, 2]. Traditionally, mobile users need to send their application data to the remote cloud for computing and processing due to their limited computation capability [3]. Nevertheless, the explosive growth of participating users and computationally intensive IoT applications will bring challenging burdens to the cloud server and transmission links, resulting in significant performance degradations in terms of high delay and energy consumption [4, 5].

Mobile Edge Computing (MEC) is expected to address the above challenges. It aims at pushing rich computation resources and network functions to the network edge by deploying MEC servers at Access Points (APs) [6, 7]. MEC enables mobile devices to offload computation tasks to nearby APs for efficient execution via high data-rate wireless links [8, 9]. By doing so, MEC can support a variety of emerging computation-intensive applications, and bring mobile users significant benefits, such as lower service delay, lower energy consumption, and better Quality of Experience (QoE) [10]. There are two offloading modes in MEC networks, namely, binary and partial computation offloading [11]. In binary mode [11], the computation tasks are indivisible, and each user can either complete them using local computing or offload them entirely to the MEC server for processing. In partial mode [12], each user can divide their computation tasks into two parts, which can be processed in parallel by offloading one part to the MEC server and completing the other part using local computing. Obviously, binary offloading is a special case of partial offloading. Hence, in this paper, we consider an MEC network with partial offloading due to its general flexibility and practicability.

Computation offloading in MEC networks requires the efficient allocation of various wireless resources (e.g., transmit power, bandwidth, and computing resources) and of the offloading rate [13, 14]. It is known that different task offloading requirements correspond to strikingly different resource allocation solutions [15]. For users, service delay and device energy consumption greatly affect task satisfaction and long-term QoE [16]. Therefore, delay and energy consumption are the two main concerns of users in the computation offloading process. In reality, the degree and urgency of delay and energy consumption requirements vary across scenarios and situations. For example, applications such as 3D games and the tactile Internet usually have very stringent requirements to guarantee low delay performance, while their requirements on energy consumption are relatively less strict [16]. However, a device with a short battery life always prioritizes energy consumption over its delay requirements. Therefore, to satisfy the QoE of different participating users, it is of great importance to dynamically balance the tradeoff between delay and energy consumption according to the specific requirements of users.

Recently, the delay-energy tradeoff in wireless networks has been extensively studied [5, 17-19]. Through investigating the existing research, we find that several critical limitations still remain to be solved in practical MEC-based offloading applications. First, most existing research considers fixed delay-energy tradeoff services for users under specific requirement scenarios. In reality, users always have different tasks that correspond to various requirements for delay and energy consumption. Meanwhile, most studies focus on minimizing the weighted sum of the processing delay of the task and the energy consumption of the device, but the units and dimensions of time and energy are quite different. A simple weighted combination of delay and energy consumption cannot reflect the specific preferences of individual users. Second, the related techniques adopt the binary offloading mode, and there are few works considering the more practical partial offloading mode. Given this background, how to adaptively and fairly balance the tradeoff requirement by adopting the partial offloading mode in MEC networks is a challenging open problem, and therefore is the focus of this paper.

In this paper, motivated by the above, we propose and analyze a practical MEC-based requirement-adaptive partial offloading system. We first introduce two normalized parameters and a control weight to fairly balance each user i's preference between delay and energy consumption. Then the computation overhead of processing user i's tasks is defined as the sum of the weighted parameters. The weight value is determined by user i according to his/her preference for energy saving relative to delay reduction. Based on these, we aim to provide a unique computation offloading strategy regarding delay and energy consumption for each user's tasks, thus minimizing the computation overheads of processing tasks. In this context, we aim to design the optimal partial offloading MEC solution that satisfies dynamic delay-energy balanced requirements and addresses the following two questions: 1) What volume of the computation tasks should be offloaded? 2) How much offloading power should be allocated to the offloading tasks? Specifically, the main technical contributions of this paper are as follows:

• We consider a requirement-adaptive partial offloading system based on MEC networks, and propose a Delay-Energy Balanced Partial Offloading (DEB-PO) scheme. Our goal is to minimize computation overheads by jointly optimizing offloading workloads and offloading power under tolerable delay constraints.

• Since the resulting optimization problem is non-convex, we explore its layered structure and decouple it into two easily solvable subproblems, namely, the power allocation problem and the workload allocation problem. Then a low-complexity iterative algorithm is presented to obtain an approximate offloading solution.

• By conducting exhaustive experimental simulations, we verify the advantages of the proposed DEB-PO scheme in terms of computation overhead; the proposed scheme is shown to significantly outperform the existing ones in [6], [8] and [33].

The structure of the remainder of this paper is as follows. Section 2 introduces the relevant state-of-the-art research. Section 3 details the MEC-based requirement-adaptive partial offloading system. In Section 4, we introduce the DEB-PO scheme and develop an iterative algorithm. In Section 5, we evaluate our proposed scheme by experimental simulation. We conclude and discuss the contributions of the paper in Section 6.

* Corresponding author.
E-mail addresses: liusmneu@163.com (S. Liu), yuyao@mail.neu.edu.cn (Y. Yu), guolei@cqupt.edu.cn (L. Guo), phee.yeoh@sydney.edu.au (P. L. Yeoh), branka.vucetic@sydney.edu.au (B. Vucetic), yonghui.li@sydney.edu.au (Y. Li).
This work was supported in part by the National Natural Science Foundation of China under Grants 62171113 and 61941113, and in part by the Fundamental Research Funds for the Central Universities under Grants N2116003 and N2116011.
2. Related work

MEC plays a significant role in next-generation wireless networks, and has recently gained tremendous attention [20, 21]. With the popularity of IoT applications, many complex and resource-hungry mobile applications, such as connected vehicles, interactive gaming, and AR, will pose significant challenges on wireless network infrastructures [22, 23]. Given this background, MEC-based computation offloading is an important approach to enhance the QoE of mobile users. By offloading computational applications to the network edge, MEC can support enormous computation-intensive and delay-critical applications for nearby users [24, 25].

In recent years, computation offloading in MEC-based systems has been considered in [13, 26, 27]. Initially, researchers focused on minimizing the processing delay of tasks or the energy consumption of devices [28, 29]. In terms of delay reduction, the authors in [30] proposed a delay-sensitive scheduling mechanism to meet the heterogeneous Quality-of-Service (QoS) requirements of mobile users in MEC-based computation offloading. In [28], the authors proposed a computation offloading scheme to reduce users' average offloading delay by jointly optimizing the offloading decision and resource allocation. Focusing on energy saving, the authors in [31] proposed a multi-task multi-access MEC system to minimize the total energy consumption for completing all tasks subject to the delay tolerance of each task. The authors in [32] investigated how to achieve an energy-efficient MEC-based IoT network by maximizing the energy efficiency for offloading, while simultaneously satisfying the maximum tolerable delay constraints of IoT devices. The aforementioned studies consider that all participating mobile users have a single goal of minimizing either delay or energy consumption. Obviously, these solutions do not address the more general scenario where users may have both delay and energy consumption requirements.

The tradeoff between service delay and energy consumption in MEC-based computation offloading systems has been another important area of research [33]. In [17], the authors formulated an Unmanned Aerial Vehicle (UAV) energy minimization problem and a task delay minimization problem in UAV-enabled MEC systems. Furthermore, they derived a Pareto-optimal solution to balance the tradeoff between delay and UAV energy consumption. The authors in [16] proposed a delay-energy balanced task scheduling algorithm to minimize the overall energy consumption while reducing the average service delay and delay jitter. In [18], the authors formulated an average weighted sum of energy consumption and execution delay minimization problem for a mobile device with the stability of buffer queues and the battery level as constraints. Besides, a dynamic online task offloading strategy was developed to modify the data backlogs of the queues. In [19], the authors designed a QoE model jointly considering service delay, energy consumption, and task success rate. Then a Deep Deterministic Policy Gradient (DDPG) algorithm was adopted to obtain the QoE-maximizing solution. The above research considered fixed delay-energy tradeoff services for users. In realistic social IoT networks, users will have a wide range of requirements for delay and energy consumption. Since the units and dimensions of time and energy are quite different, a simple weighted combination of delay and energy consumption obviously cannot accurately reflect users' requirements. In view of this situation, the authors in [34] aimed to maximize the users' task offloading gains, which were measured by a weighted sum of reductions in task completion time and energy consumption. The authors proposed to normalize delay and energy consumption, and to dynamically balance them according to users' task requirements. However, they considered the binary offloading mode. Thus users have only two task processing styles, i.e., performing tasks locally or offloading them entirely to MEC servers. To sum up, when both various delay-energy balance requirements and flexible offloading modes are considered, there remain several open problems, which will be addressed in this paper.

3. MEC-based requirement-adaptive partial offloading system

In this section, we introduce our MEC-based requirement-adaptive partial offloading system. Specifically, we describe the network model, the local computing model, and the partial offloading and edge computing model. Based on these, we design a requirement-adaptive computation overhead model for computation offloading.

3.1. Network model

As shown in Fig. 1, we consider an illustrative MEC-based partial offloading network model where user i offloads its partial or complete computation tasks to an AP. The AP is integrated with an MEC server, thus it has plenty of computation and energy resources. User i, with limited computation and energy resources, has some computation-intensive and delay-aware tasks to be completed. The computation tasks of user i are characterized by (Li, Di). Li is the total number of bits to be processed. Di is the tolerable delay, which means the tasks must be completed within this time. We focus on partial offloading in this paper. Similar to [6, 8-10, 17], it is assumed that the computation tasks of each user i can be split arbitrarily. Each subtask is independent and can be processed in parallel. Thus, i's computation tasks can be partitioned into two parts with li and Li − li bits, where the first part is computed locally at the user while the second part is offloaded to the AP.
Fig. 1: MEC-based partial computation offloading process. (The figure shows user i and an AP with an integrated MEC server; user i's computation tasks are split into a local computing part and a partial offloading part.)

Table 1: List of Notations

  Li            Computation task workload (in bits)
  li            Task workload processed by local computing
  Di            Tolerable delay of user i's tasks
  hiA           Channel power gain from user i to the AP
  Ci            CPU cycles required for computing one bit of user i's task
  fi            Average CPU frequency of user i
  ς             Effective capacitance coefficient
  riA, pi       Offloading rate and offloading power
  pimax         Maximum transmission power of user i
  B, nA         Bandwidth and background noise
  CAP           CPU cycles required for computing one bit of task at the AP
  fAP           Average CPU frequency at the AP
  γiE, γiD      Normalized energy consumption and delay
  Oi            Computation overhead for processing user i's tasks
  αi            Control weight to reflect user i's preference
  δ, K          Tolerance for convergence and maximum number of iterations
  li0, pi0      Feasible resource allocation solution
  li(k), pi(k)  Solution of the kth iteration
  li*, pi*      Optimal resource allocation solution
  tiloc, tiAP   Computing time for local computing and edge computing
  Eiloc, Eioff  User i's energy consumption for local computing and partial offloading
  Ei            Total energy consumption of user i

We consider a quasi-static Rayleigh fading channel model, in which channels remain constant within each offloading process. Then, the channel power gain from user i to the AP is denoted by hiA. Since the AP performs real-time control signal interactions with its served users, we further assume that the task information and the instantaneous Channel State Information (CSI) hiA are known by the AP. For ease of reference, we list the important notations in Table 1.

3.2. Local computing model

As for the local computing at user i, we use Ci to denote the number of Central Processing Unit (CPU) cycles required for computing one bit of task data. Hence, the total number of CPU cycles required for computing li bits is Ci li. We assume that fi is the average CPU frequency (in CPU cycles per second) of user i. Then, the total computing time used for local computing is Ci li / fi. Due to the delay limitation of the computation tasks, the delay constraint of local computing at user i is given by

tiloc = Ci li / fi ≤ Di   (1)

Next, the energy consumption for local computing at user i is given by

Eiloc = ς Ci li fi²   (2)

where ς > 0 is the effective capacitance coefficient that depends on the chip architecture of user i.

3.3. Partial offloading and edge computing model

Apart from local computing, user i can offload the remaining computation tasks to the AP for computing. According to the principles of Shannon's channel capacity theory [35], the offloading rate riA from user i to the AP can be characterized as

riA = B log2(1 + pi hiA / nA)   (3)

where B denotes the channel bandwidth, pi denotes the transmission power of user i for offloading the partial computation tasks, and nA is the background noise power at the AP. Then the offloading energy consumption of user i is given by

Eioff = pi (Li − li) / riA   (4)

As for the edge computing at the AP MEC server, we use CAP and fAP to represent the number of CPU cycles required at the AP for computing one bit of input tasks and its average CPU frequency, respectively. The time consumption at the AP is

tiAP = CAP (Li − li) / fAP   (5)

To meet the delay limitation of user i's computation tasks, the delay constraint of edge computing at the AP is defined as

tiAP + (Li − li) / riA ≤ Di   (6)

3.4. Requirement-adaptive computation overhead model

As we know, energy saving and delay reduction are two basic user requirements in MEC-based computation offloading systems. For energy consumption, we focus on the user perspective, since the MEC server is typically powered by grid energy and therefore has sufficient energy resources. The consumed energy of each user can be expressed in two parts given by local computing and partial computation offloading. Based on (2) and (4), the total energy consumption Ei of user i using the partial offloading mode is given by

Ei = Eiloc + Eioff = ς Ci li fi² + pi (Li − li) / riA   (7)

If user i does not offload the partial computation tasks to the AP, the total energy consumption Eiall-loc can be calculated as

Eiall-loc = ς Ci Li fi²   (8)

As mentioned before, most existing MEC-based computation offloading works on delay-energy balance directly take a weighted sum of delay and energy consumption, and aim to minimize this sum. But the units and dimensions of time and energy are quite different. A simple weighted combination cannot provide users with a preference index to fairly balance delay and power consumption. In realistic IoT networks, users tend to have various demands for delay reduction and energy saving. Therefore, it is necessary to address the dimensional differences between time and energy and provide on-demand services. To this end, we introduce a normalized energy consumption γiE and a normalized delay γiD for user i when using partial computation offloading, which are given by

γiE = Ei / Eiall-loc   (9)

γiD = max(tiloc, tiAP + (Li − li)/riA) / Di,  γiD ≤ 1   (10)

where γiD should be less than or equal to one to satisfy user i's delay limitation for the computation tasks. As such, γiE and γiD can intuitively reflect the degrees of energy saving and delay reduction due to partial computation offloading, respectively.

Based on γiE and γiD, we then define the computation overhead of processing user i's tasks as

Oi = αi γiE + (1 − αi) γiD   (11)

where 0 ≤ αi ≤ 1 is a control weight, and its value is decided by user i according to its own computation task requirements. The offloading decisions can be greatly affected by αi. To meet user-specific demands, we allow each user to assign their own weight when they make decisions. To illustrate this, we present the following three-user example:

• User 1 has some delay-sensitive computation tasks, thus he/she places a higher priority on his/her tasks' delay than on energy consumption. Then, he/she sets 0 ≤ α1 < 1/2, and α1 = 0 corresponds to the problem of minimizing delay [6];

• User 2 places an equal priority on energy consumption and delay, thus he/she sets α2 = 1/2 since both the energy consumption and the delay should be minimized;

• User 3 has some energy-intensive computation tasks, but the device is in a low battery state, thus he/she places a higher priority on his/her energy consumption than on delay. Then, he/she sets 1/2 < α3 ≤ 1, and α3 = 1 means that the goal is to minimize energy consumption while meeting the delay constraint [8].

We highlight that each user i can adaptively adjust their own weight value αi according to their task requirements and device status. We consider that the style of "user decision" in this paper is reasonable and necessary. This is generally because different users have diverse computation tasks, and each user's tasks at different periods may be quite different. More importantly, even for the same tasks at the same period, different users will have widely varying task preferences due to differences in individual habits and the current status of the holding devices. A user with a low-battery device can choose a larger αi to save more energy. Another user running delay-sensitive applications (e.g., online movies) may prefer to set a smaller αi to reduce the delay. As such, our approach can improve the QoE of all participating users.
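As a concrete illustration of equations (1)-(11), the following Python sketch (ours, not the authors' implementation; function and variable names are illustrative) evaluates the local computing delay and energy, the offloading rate and energy, the edge computing time, and the resulting normalized overhead Oi for a given split (li, pi). The numeric values in the usage example mirror the simulation settings listed later in Table 2, and the conversion of −43 dBm to watts is our assumption.

```python
import math

def computation_overhead(L_i, D_i, C_i, f_i, varsigma, h_iA, alpha_i,
                         l_i, p_i, B=20e6, n_A=10**(-43/10)/1000,
                         C_AP=100, f_AP=3e9):
    """Evaluate O_i = alpha_i*gamma_E + (1-alpha_i)*gamma_D for a split (l_i, p_i)."""
    # Local computing: eq. (1) delay and eq. (2) energy
    t_loc = C_i * l_i / f_i
    E_loc = varsigma * C_i * l_i * f_i**2

    # Offloading: eq. (3) Shannon rate, eq. (4) energy, eq. (5) AP computing time
    r_iA = B * math.log2(1.0 + p_i * h_iA / n_A)
    E_off = p_i * (L_i - l_i) / r_iA if l_i < L_i else 0.0
    t_AP = C_AP * (L_i - l_i) / f_AP
    t_edge = t_AP + (L_i - l_i) / r_iA if l_i < L_i else 0.0

    # Normalization: eq. (8) all-local energy, eqs. (9)-(10)
    E_all_loc = varsigma * C_i * L_i * f_i**2
    gamma_E = (E_loc + E_off) / E_all_loc          # eq. (9)
    gamma_D = max(t_loc, t_edge) / D_i             # eq. (10)

    # eq. (11): requirement-adaptive computation overhead
    return alpha_i * gamma_E + (1 - alpha_i) * gamma_D

# Usage example: half of a 3 Mbit task offloaded with 1 W transmit power
O_i = computation_overhead(L_i=3e6, D_i=0.5, C_i=100, f_i=1e9, varsigma=1e-27,
                           h_iA=2.5e-7, alpha_i=0.3, l_i=1.5e6, p_i=1.0)
print(f"Computation overhead O_i = {O_i:.3f}")
```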
4. Proposed delay-energy balanced partial offloading scheme

In this section, we propose the DEB-PO scheme to provide a specific offloading strategy for user i's current tasks. Specifically, we first formulate an optimization problem to minimize computation overheads under various constraints. Since the resulting problem is non-convex, we explore its layered structure and decouple it into two subproblems. Finally, we develop an iterative algorithm to obtain the offloading solution.

4.1. Problem formulation and equivalent transformation

Our goal is to provide computation offloading solutions for participating users according to their preference requirements. Specifically, we aim to find an optimal allocation decision set {pi, li} for user i's computation tasks that minimizes the computation overhead Oi. Then, based on equations (1)-(11), the optimization problem can be formulated as

(P1): min_{pi,li} Oi = αi γiE + (1 − αi) γiD
      = min_{pi,li} [ αi li / Li + αi pi (Li − li) / (ς Ci Li fi² riA) + (1 − αi) max(tiloc, tiAP + (Li − li)/riA) / Di ]   (12a)
s.t.  tiloc ≤ Di   (12b)
      tiAP + (Li − li)/riA ≤ Di   (12c)
      0 ≤ li ≤ Li   (12d)
      0 ≤ pi ≤ pimax   (12e)

where (12b) and (12c) enforce the delay limitation Di, (12d) ensures the offloading size limit, and (12e) restricts the transmission power of user i; pimax is the maximum transmission power of user i. Problem (P1) is a non-convex problem due to the coupling of variables; it belongs to the class of NP-hard problems and it is troublesome to find the optimal solution by employing traditional optimization methods. Before tackling the nonconvexity, we introduce an auxiliary variable zi as

zi ≜ max(tiloc, tiAP + (Li − li)/riA)   (13)

Based on zi, equation (11) can be written as

Õi = αi li / Li + αi (Li − li) pi / (ς Ci Li fi² riA) + (1 − αi) zi / Di   (14)

Then, based on zi and (14), problem (P1) can be transformed as

(P2): min_{pi,li,zi} Õi   (15a)
s.t.  tiloc ≤ zi ≤ Di   (15b)
      zi ≥ tiAP + (Li − li)/riA   (15c)
      0 ≤ li ≤ Li   (15d)
      0 ≤ pi ≤ pimax   (15e)

where (15b) and (15c) are the equivalent forms of (12b) and (12c), respectively. However, problem (P2) is still a non-convex optimization problem because the objective function (15a) and constraint (15c) are non-convex.

4.2. Layered structure of problem (P2)

Fortunately, it is evident from problem (P2) that once the offloading power pi is determined, the optimization problem is a general convex problem. Inspired by this idea, in this section, we explore the layered structure of problem (P1) to efficiently solve it. Specifically, we build a two-layer structure and decouple it into a power allocation problem and a workload allocation problem.

• Power allocation problem

First, for a given offloading workload li, problem (P2) becomes a power allocation problem with respect to the variable pi. Next, we introduce the first-layer optimization under a given feasible li0.

First-Layer Optimization: Given li0 satisfying 0 ≤ li0 ≤ min(Di fi / Ci, Li) (i.e., the equivalent transformation of (12b) and (12d)), the first subproblem (P2-1) is expressed as

(P2-1): min_{pi,zi} [ αi li0 / Li + αi (Li − li0) pi / (ς Ci Li fi² riA) + (1 − αi) zi / Di ]   (16a)
s.t.  Ci li0 / fi ≤ zi ≤ Di   (16b)
      zi ≥ CAP (Li − li0) / fAP + (Li − li0) / riA   (16c)
      0 ≤ pi ≤ pimax   (16d)

Note that problem (P2-1) is still non-convex because the term pi/riA in the objective function (15a) is non-convex with respect to pi. Let f(pi) = pi/riA; we then convert f(pi) into a convex term by using Successive Convex Approximation (SCA) [36]. We specifically introduce the following lemma for non-convex gradient approximations from [37].

Lemma 1. If no convexity is present in U(x), we can apply proximal-gradient methods for an equivalent convex transformation, i.e., a first-order approximation. Hence, for any y in the domain of U, a convex approximation Ũ(x; y) of U(x) can be written as

Ũ(x; y) ≜ ∇x U(y)(x − y) + (τ/2)(x − y)²   (17)

where τ > 0 is a positive constant, and ∇x U(y) denotes the partial gradient of U with respect to the argument x evaluated at y.

Based on Lemma 1, a convex approximation f(pi; pi(k)) of f(pi) can be written as

f(pi; pi(k)) = B [ f′(pi(k))(pi − pi(k)) + (τ/2)(pi − pi(k))² ]   (18)

f′(pi(k)) = [ log2((nA + pi(k) hiA)/nA) − pi(k) hiA / ((nA + pi(k) hiA) ln 2) ] / [ log2((nA + pi(k) hiA)/nA) ]²   (19)

where τ > 0 is a positive constant, and pi(k) is the solution of the kth iteration in subproblem (P2-1).

Based on equations (18) and (19), we approximately transform problem (P2-1) into a convex problem (P2-1*) as

(P2-1*): min_{pi,zi} [ αi li0 / Li + αi (Li − li0) / (ς Ci Li fi²) · f(pi; pi(k)) + (1 − αi) zi / Di ]   (20)
s.t.  (16b), (16c), (16d)
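The following short Python sketch (ours, not the authors' code) shows how the surrogate in (18)-(19) can be evaluated: it computes f′(pi(k)) from (19) and the convexified term f(pi; pi(k)) from (18) around the current iterate. All function names are illustrative, and the dBm-to-watt conversion of nA is our assumption.

```python
import math

def f_prime(p_k, h_iA, n_A):
    """Gradient term f'(p_i(k)) from eq. (19)."""
    log_term = math.log2((n_A + p_k * h_iA) / n_A)
    numerator = log_term - p_k * h_iA / ((n_A + p_k * h_iA) * math.log(2))
    return numerator / (log_term ** 2)

def f_surrogate(p, p_k, h_iA, n_A, B, tau):
    """Convex approximation f(p_i; p_i(k)) of p_i / r_iA from eq. (18)."""
    return B * (f_prime(p_k, h_iA, n_A) * (p - p_k) + 0.5 * tau * (p - p_k) ** 2)

# Example: evaluate the surrogate around p_i(k) = 1 W with Table 2-style parameters
n_A = 10 ** (-43 / 10) / 1000     # -43 dBm in watts (assumed conversion)
print(f_surrogate(p=1.2, p_k=1.0, h_iA=2.5e-7, n_A=n_A, B=20e6, tau=1e-3))
```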
Problem (P2-1*) is convex, and can be solved by CVX. To facilitate the understanding, we develop an iterative power allocation algorithm for solving (P2-1*), which is shown in Algorithm 1.

Algorithm 1 Power Allocation Algorithm for Solving (P2-1*)
Input:
  δ: tolerance for convergence
  K: maximum number of iterations
  li0: feasible partial offloading size
  pi0: feasible power allocation solution
Output:
  pi*: optimal power allocation strategy
1: Initialize: k1 = 1, pi(0) = −δ, |pi(k1) − pi(k1 − 1)| = 2δ and pi(k1 − 1) = pi0;
2: while |pi(k1) − pi(k1 − 1)| > δ and k1 ≤ K do
3:   Solve problem (P2-1*) based on li0 and pi(k1 − 1) by using CVX and obtain the power allocation pi(k1);
4:   k1 = k1 + 1
5: end while
6: Return pi* = pi(k1)

In Algorithm 1, δ and K are the tolerance for convergence and the maximum number of iterations, respectively. We iteratively approximate the optimal solution until the termination condition is met, i.e., |pi(k1) − pi(k1 − 1)| ≤ δ or k1 > K.

• Workload allocation problem

Second, for a given offloading power pi, problem (P2) becomes a workload allocation problem with respect to the variable li. Next, we introduce the second-layer optimization under a given feasible pi0.

Second-Layer Optimization: Given pi0 satisfying 0 ≤ pi0 ≤ pimax (i.e., (12e)) and riA0 = B log2(1 + pi0 hiA / nA), the second subproblem (P2-2) is expressed as

(P2-2): min_{li,zi} [ αi li / Li + αi (Li − li) pi0 / (ς Ci Li fi² riA0) + (1 − αi) zi / Di ]   (21a)
s.t.  Ci li / fi ≤ zi ≤ Di   (21b)
      zi ≥ CAP (Li − li) / fAP + (Li − li) / riA0   (21c)

In problem (P2-2), both the objective function and the constraints are convex with respect to li and zi. Therefore, problem (P2-2) is a convex problem, and can be easily solved by CVX.

4.3. An iterative algorithm for the offloading solution

Based on the above problem decomposition, the original non-convex problem (P2) is approximately transformed into two convex problems (P2-1*) and (P2-2). Then, we present an optimal resource allocation algorithm for solving (P2), and the algorithm process is summarized in Algorithm 2.

Algorithm 2 Optimal Resource Allocation Algorithm for Solving (P2)
Input:
  δ: tolerance for convergence
  K: maximum number of iterations
Output:
  (li*, pi*): optimal allocation strategy
1: Initialize: k2 = 1, Õi(0) = −δ and |Õ(k2) − Õ(k2 − 1)| = 2δ;
2: while |Õ(k2) − Õ(k2 − 1)| > δ and k2 ≤ K do
3:   k2 = k2 + 1
4:   Given the feasible partial offloading size li(k2), perform Algorithm 1 to update pi(k2) = pi*;
5:   Based on pi(k2), solve problem (P2-2) by using CVX to update li(k2);
6: end while
7: Return pi* and li* = li(k2)

In Algorithm 2, δ and K are the tolerance for convergence (i.e., the desired accuracy) and the maximum number of iterations, respectively. We iteratively approximate the optimal solution until the termination condition is satisfied, i.e., |Õ(k2) − Õ(k2 − 1)| ≤ δ or k2 > K, where Õ(k2) is the objective value of the k2th iteration, and li* and pi* are the optimal partial offloading workload and offloading power of user i's computation tasks.

Complexity analysis: The complexity of Algorithm 2 is theoretically analyzed as follows. As mentioned above, through the proposed two-layer optimization algorithm, problem (P2) can be decomposed into the inner power allocation problem (P2-1*) and the outer workload allocation problem (P2-2). Both (P2-1*) and (P2-2) are convex optimization problems, and are solved by the interior point method of the CVX toolkit. Therefore, the overall complexity of solving problem (P2) in Algorithm 2 can be written as O((K1 log(1/δ))(K2 log(1/δ))), where K1 and K2 are respectively the numbers of iterations for solving problems (P2-1*) and (P2-2) using CVX, and δ denotes the accuracy required for the convergence of Algorithms 1 and 2. Therefore, our proposed Algorithm 2 can solve the formulated non-convex problem in polynomial time complexity.
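To illustrate how the two layers fit together, here is a simplified Python sketch of the alternating iteration (ours, not the authors' MATLAB/CVX implementation). A coarse grid search stands in for the CVX solves of (P2-1*) and (P2-2), purely for illustration; the objective is the normalized overhead of eq. (11), and the convergence parameters follow Table 2.

```python
import math

# Parameters (Table 2 values; task profile as in the Section 3 sketch)
B, n_A = 20e6, 10 ** (-43 / 10) / 1000      # bandwidth, noise (-43 dBm in watts, assumed conversion)
C_i, C_AP, f_i, f_AP = 100, 100, 1e9, 3e9
varsigma, h_iA, p_max = 1e-27, 2.5e-7, 2.0
L_i, D_i, alpha = 3e6, 0.5, 0.3
delta, K = 1e-4, 50

def overhead(l_i, p_i):
    """Normalized overhead O_i of eq. (11) for a given split (l_i, p_i)."""
    r = B * math.log2(1 + p_i * h_iA / n_A)
    E = varsigma * C_i * l_i * f_i ** 2 + p_i * (L_i - l_i) / r
    t_edge = C_AP * (L_i - l_i) / f_AP + (L_i - l_i) / r
    gamma_E = E / (varsigma * C_i * L_i * f_i ** 2)
    gamma_D = max(C_i * l_i / f_i, t_edge) / D_i
    return alpha * gamma_E + (1 - alpha) * gamma_D if gamma_D <= 1 else float("inf")

def argmin_over(grid, fun):
    return min(grid, key=fun)

# Algorithm 2-style alternating optimization (grid search replaces CVX here)
l_i, p_i, prev = 0.5 * L_i, 0.5 * p_max, float("inf")
for k2 in range(K):
    # First layer: update offloading power for a fixed workload (role of Algorithm 1)
    p_i = argmin_over([j * p_max / 200 for j in range(1, 201)], lambda p: overhead(l_i, p))
    # Second layer: update the local/offloaded workload split for fixed power (problem (P2-2))
    l_i = argmin_over([j * L_i / 200 for j in range(201)], lambda l: overhead(l, p_i))
    obj = overhead(l_i, p_i)
    if abs(prev - obj) <= delta:
        break
    prev = obj
print(f"l_i* = {l_i:.0f} bits locally, p_i* = {p_i:.3f} W, overhead = {obj:.4f}")
```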
5. Performance evaluation

In this section, we evaluate the performance of the proposed DEB-PO scheme via extensive numerical experiments. All the experiments are implemented in MATLAB using CVX on a desktop computer with an Intel Core i7-4790 3.60 GHz CPU and 16 GB RAM. The simulation parameter settings are summarized in Table 2 unless otherwise stated.

Table 2: Simulation Parameter Settings

  Maximum number of iterations: K             50
  Tolerance for convergence: δ                0.0001
  Channel bandwidth: B                        20 MHz
  Channel power gain: hiA                     2.5×10⁻⁷
  Background noise at the AP: nA              −43 dBm
  CPU cycles of user i: Ci                    100 cycles/bit
  CPU cycles of the AP: CAP                   100 cycles/bit
  Maximum transmission power: pimax           2 W
  Average CPU frequency of user i: fi         1 GHz
  Average CPU frequency of the AP: fAP        3 GHz

In the following simulation experiments, we use L, D and ς to describe the characteristics of the various computation tasks belonging to different users. Here, Li is the workload of the computation tasks required by user i, Di is the tolerable delay of i's tasks, and ς is the effective capacitance coefficient of i's device. A larger ς represents more energy consumed by computing one bit of tasks.

5.1. Convergence analysis

Fig. 2: Convergence of the proposed DEB-PO scheme (computation overhead versus the number of iterations for the DEB-PO scheme with α = 0.4, 0.5 and 0.6).

In this subsection, we verify the convergence of the proposed Algorithm 2 for the DEB-PO scheme, as shown in Fig. 2. In this case, we set Li = 3 × 10⁶ bits, Di = 0.5 s and ς = 10⁻²⁷ to describe the current tasks of user i. From Fig. 2, we can see that the computation overhead increases significantly with the number of iterations in the DEB-PO schemes with three different values of α, and the optimal computation overhead is achieved within 13 iterations at most. Therefore, the proposed Algorithm 2 for the DEB-PO scheme converges to a stable value with a fast convergence speed, thus verifying the convergence and feasibility of our solution. Moreover, we observe that the computation overhead in DEB-PO decreases with an increasing α value. That is, greater attention to energy consumption achieves less computational overhead in this case. This phenomenon indicates that the tasks with the above settings of Li, Di, and ς are relatively more energy-consuming.

5.2. Performance comparison

In this subsection, we evaluate the performance of DEB-PO in terms of the computation overhead of processing tasks. To verify the advantages of our proposed scheme, we compare our solution with the existing partial offloading schemes configured as follows:

(1) Delay Minimization Partial Offloading (DMPO): This scheme is derived by modifying the offloading method in [6]. In this scheme, user i adopts the partial offloading mode in the MEC-based system, and the AP aims to provide solutions with minimum delay for user i's tasks.

(2) Energy consumption Minimization Partial Offloading (EMPO): This scheme is derived from the offloading method in [8]. In this scheme, user i also adopts the partial offloading mode in the MEC-based system, and the AP aims to provide solutions with minimum energy consumption for user i's tasks.

(3) Delay-Energy Tradeoff Partial Offloading (DETPO): This scheme is derived from the offloading method in [33]. In this scheme, user i also adopts the partial offloading mode in the MEC-based system, and the AP aims to provide solutions with the minimum weighted sum of delay and energy consumption for user i's tasks. (For a fair comparison, in each simulation experiment, we give the DETPO scheme the same tradeoff weight as the preference weight αi in our proposed DEB-PO scheme.)
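Assuming the overhead model sketched earlier is available, the following illustrative Python fragment (ours, not from the paper) expresses how the DMPO and EMPO baselines correspond to the corner preference weights αi = 0 and αi = 1 of the DEB-PO model, while DETPO directly mixes raw delay and energy with one weight; this mirrors how the baselines are described above.

```python
# Hypothetical baseline configurations, expressed with the normalized overhead model.
# DMPO and EMPO are the alpha = 0 and alpha = 1 corner cases of DEB-PO,
# while DETPO mixes raw energy (J) and delay (s) without normalization.

def deb_po_cost(gamma_E, gamma_D, alpha):
    return alpha * gamma_E + (1 - alpha) * gamma_D   # eq. (11)

def dmpo_cost(gamma_E, gamma_D):
    return deb_po_cost(gamma_E, gamma_D, alpha=0.0)  # delay minimization [6]

def empo_cost(gamma_E, gamma_D):
    return deb_po_cost(gamma_E, gamma_D, alpha=1.0)  # energy minimization [8]

def detpo_cost(E_i, T_i, alpha):
    # Plain weighted sum without normalization, as in the tradeoff scheme of [33]
    return alpha * E_i + (1 - alpha) * T_i
```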
First, we set αi = 0.3, Li = 3 × 10⁶ bits and ς = 10⁻²⁷ to describe the characteristics of user i's current tasks. Obviously, the tasks are delay-sensitive for user i because he/she sets αi = 0.3 < 0.5. Fig. 3 shows the computation overhead Oi of processing user i's tasks changing with the tasks' tolerable delay Di. We find that EMPO has the worst performance among the above four schemes. This is because the tasks are delay-sensitive, but EMPO only focuses on minimizing i's energy consumption without considering the processing delay, which increases the computational overhead of processing the tasks. Moreover, it can be observed that the computation overhead decreases with increasing tolerable delay Di for DEB-PO, DETPO and DMPO, while EMPO shows an increasing trend. We next analyze the reason for this phenomenon. The EMPO scheme aims to minimize energy consumption by offloading more tasks and allocating less offloading power. When the task workload Li is constant, a larger tolerable delay Di in EMPO leads to a larger offloading part and smaller offloading power, which in turn results in longer processing and offloading delays. Conversely, DEB-PO, DETPO and DMPO achieve lower processing and offloading delays because they place a greater focus on minimizing delay than EMPO does. Therefore, we conclude that for delay-sensitive tasks, the solution of minimizing energy consumption increases the computational overhead of processing the tasks.

Fig. 3: Computation overhead versus tolerable delay (DEB-PO, DMPO, EMPO and DETPO schemes; tolerable delay from 0.35 s to 0.95 s).

Furthermore, it is obvious in Fig. 3 that our proposed DEB-PO scheme outperforms the DETPO, DMPO, and EMPO schemes. First, the reason for the improvement over DETPO is that in DEB-PO we design two normalized parameters for delay and energy consumption, which can flexibly balance user i's preference regarding the degrees of energy saving and delay reduction. DETPO, in contrast, only considers a simple weighted combination of delay and energy consumption. The units and dimensions of time and energy are quite different, and a simple weighted combination cannot reflect the specific preferences for delay reduction and energy saving of individual users. As such, DETPO exhibits a larger computational overhead than our proposed DEB-PO scheme. Second, the reason for the improvement over DMPO and EMPO is that DEB-PO can accommodate i's requirement preference, while DMPO and EMPO can only provide a fixed strategy of minimizing delay or energy consumption. When αi = 0.3, our proposed DEB-PO scheme pays more attention to delay reduction than to energy saving for user i's tasks. In other words, instead of ignoring energy saving as DMPO does, we give energy consumption relatively low attention. The DMPO and EMPO schemes default to αi = 0 and αi = 1, respectively. Thus they display a larger computational overhead than do DEB-PO and DETPO.

Next, we set αi = 0.8, Di = 0.9 s and ς = 2 × 10⁻²⁷ to describe the characteristics of user i's current tasks. This describes user i having energy-intensive tasks, or the residual energy of his/her device being low, due to the settings of αi = 0.8 > 0.5 and ς = 2 × 10⁻²⁷. Fig. 4 shows the computation overhead Oi of processing user i's tasks changing with the task workload Li. The figure shows that the computation overhead increases with increasing task workload for DEB-PO, DETPO and DMPO because of the positive correlation between the computation overhead and the task workload in these schemes. Meanwhile, the computation overhead in EMPO increases at first and then decreases with increasing computation task workload. This is because EMPO aims to offload as much task workload as possible to the MEC server for processing, thereby minimizing i's energy consumption. When the task workload is relatively small, as the workload increases, the EMPO solution increases the task delay while contributing only a small reduction to energy consumption due to the consumption of offloading energy. With a larger task workload, the EMPO solution contributes a larger reduction in both delay and energy consumption. For the above reasons, the trend line of the EMPO scheme in Fig. 4 increases first and then decreases.

Fig. 4: Computation overhead versus computation task workload (DEB-PO, DMPO, EMPO and DETPO schemes; workload from 1×10⁶ to 4×10⁶ bits).

Moreover, we observe from Fig. 4 that EMPO performs far better than the DMPO scheme because the current tasks are energy-intensive for user i's device. In DMPO, to rigidly provide low-delay service for user i, the offloading solution forces part of the tasks to be computed locally, which consumes more of user i's energy and increases the computation overhead. We further find that DETPO outperforms DMPO and EMPO. This is because DETPO sets a weight factor and then achieves a tradeoff between delay and energy consumption, but DMPO and EMPO can only provide a fixed strategy of minimizing delay or energy consumption. More importantly, our proposed DEB-PO scheme outperforms DETPO, and performs far better than DMPO and EMPO. The reason for the improvement over DETPO is the same as that in Fig. 3. The reason for the improvement over DMPO and EMPO is that user i does not completely ignore the processing delay of the current tasks, but delay reduction is of relatively low concern compared to energy saving. As such, he/she does not set αi = 1 as EMPO does. Therefore, DEB-PO always provides a specific offloading solution based on each user's requirement preference.
Next, we evaluate the performance of DEB-PO with different preference degrees for energy consumption (i.e., different values of αi). The settings of L, D and ς are the same as those in Fig. 2. Fig. 5 shows the computation overhead Oi of processing user i's tasks changing with the value of αi. Obviously, the DEB-PO and DETPO schemes are equivalent to the DMPO scheme when αi = 0, and equivalent to the EMPO scheme when αi = 1. We also observe that our proposed DEB-PO scheme always achieves a lower computation overhead than the other schemes regardless of the value of αi. This is because DEB-PO always accommodates the varying requirements of users regarding delay reduction and energy saving, and then provides a unique strategy for each user's specific requirement.

Fig. 5: Computation overhead versus the value of αi (DEB-PO, DMPO, EMPO and DETPO schemes; αi from 0 to 1).

Finally, we verify the advantages of our proposed DEB-PO scheme when there are multiple participating users in the MEC-based offloading system; the settings of L, D and ς are the same as those in Fig. 2. We aim to reflect the positive impact of considering each user's specific preference. The αi value of each user i is randomly generated between 0 and 1, which reflects the different users' preferences regarding the energy consumption of their current tasks. Fig. 6 shows the computation overhead Oi of processing the users' tasks changing with the number of users. We observe that our proposed DEB-PO scheme outperforms the other schemes. This is because DEB-PO not only focuses on the balanced service between delay and energy consumption but also considers the users' emphasis degrees on delay reduction and energy saving. Moreover, the EMPO scheme has better performance than DMPO because relatively more users consider the tasks with the above settings of L, D and ς to be energy-consuming, further supporting the conclusion in Fig. 2.

Fig. 6: Computation overhead versus the number of users (DEB-PO, DMPO, EMPO and DETPO schemes; 10 to 100 users).

Based on the above analysis, we can conclude that it is of great significance to develop the corresponding offloading solution according to the specific preference of each user. Our balanced approach is effective in reducing computation overheads and thus improving the QoE of mobile users.

6. Conclusion

Recently, IoT and MEC networks have emerged as promising technologies for supporting ubiquitous services to large numbers of mobile users. Based on this, a large number of computation-sensitive social applications have emerged. Computation offloading is regarded as a critical solution for the ever-increasing requirements of energy-intensive and delay-critical applications. Unfortunately, most existing research can only provide rigid offloading services. To this end, we design a requirement-adaptive MEC-based partial offloading system. Specifically, we propose a Delay-Energy Balanced Partial Offloading (DEB-PO) scheme and formulate the corresponding optimization problem, which aims to minimize the total computation overhead of processing tasks subject to tolerable delay, task workload, and power constraints. To tackle the original non-convex problem, we decouple it into two subproblems, and develop an efficient iterative algorithm to obtain suboptimal solutions. Finally, we conduct extensive simulation experiments, and it is verified that our proposed DEB-PO scheme provides better performance than the baseline schemes.

References

[1] Y. Ai, M. Peng, K. Zhang, Edge computing technologies for Internet of Things: a primer, Digital Communications and Networks, 4 (2) (2018) 77-86.
[2] Y. Yu, L. Guo, S. Liu, J. Zheng, H. Wang, Privacy protection scheme based on CP-ABE in crowdsourcing-IoT for smart ocean, IEEE Internet Things J., 7 (10) (2020) 10061-10071.
[3] Y. Li, S. Xia, M. Zheng, B. Cao, Q. Liu, Lyapunov optimization based trade-off policy for mobile cloud offloading in heterogeneous wireless networks, IEEE Trans. Cloud Comput., 10 (1) (2022) 491-505.
[4] F. Fowley, C. Pahl, P. Jamshidi, D. Fang, X. Liu, A classification and comparison framework for cloud service brokerage architectures, IEEE Trans. Cloud Comput., 6 (2) (2018) 358-371.
[5] L. Huang, X. Feng, C. Zhang, L. Qian, Y. Wu, Deep reinforcement learning-based joint task offloading and bandwidth allocation for multi-user mobile edge computing, Digital Communications and Networks, 5 (2019) 10-17.
[6] Y. Wu, L.P. Qian, K. Ni, C. Zhang, X. Shen, Delay-minimization nonorthogonal multiple access enabled multi-user mobile edge computation offloading, IEEE J. Sel. Topics Signal Process., 13 (3) (2019) 392-407.
[7] Y. Li, H. Ma, L. Wang, S. Mao, G. Wang, Optimized content caching and user association for edge computing in densely deployed heterogeneous networks, IEEE Trans. Mobile Comput., 21 (6) (2022) 2130-2142.
[8] M. Sheng, Y. Wang, X. Wang, J. Li, Energy-efficient multiuser partial computation offloading with collaboration of terminals, radio access network, and edge server, IEEE Trans. Commun., 68 (3) (2019) 1524-1537.
[9] Y. Zhou, L. Tian, L. Liu, Y. Qi, Fog computing enabled future mobile communication networks: a convergence of communication and computing, IEEE Commun. Mag., 57 (5) (2019) 20-27.
[10] S. Safavat, N.N. Sapavath, D.B. Rawat, Recent advances in mobile edge computing and content caching, Digital Communications and Networks, 6 (2) (2020) 189-194.
[11] F. Wang, J. Xu, Z. Ding, Multi-antenna NOMA for computation offloading in multiuser mobile edge computing systems, IEEE Trans. Commun., 67 (3) (2019) 2450-2463.
[12] Y. Qi, Y. Zhou, Y.F. Liu, L. Liu, Z. Pan, Traffic-aware task offloading based on convergence of communication and sensing in vehicular edge computing, IEEE Internet Things J., 8 (24) (2021) 17762-17777.
[13] Z. Yu, Y. Gong, S. Gong, Y. Guo, Joint task offloading and resource allocation in UAV-enabled mobile edge computing, IEEE Internet Things J., 7 (4) (2020) 3147-3159.
[14] S. Xia, Z. Yao, Y. Li, S. Mao, Online distributed offloading and computing resource management with energy harvesting for heterogeneous MEC-enabled IoT, IEEE Trans. Wireless Commun., 20 (10) (2021) 6743-6757.
[15] H. Ye, G. Lim, L.J. Cimini, Z. Tan, Energy-efficient scheduling and resource allocation in uplink OFDMA systems, IEEE Commun. Lett., 19 (3) (2015) 439-442.
[16] Y. Yang, S. Zhao, W. Zhang, Y. Chen, X. Luo, J. Wang, DEBTS: Delay energy balanced task scheduling in homogeneous fog networks, IEEE Internet Things J., 5 (3) (2018) 2094-2106.
[17] C. Zhan, H. Hu, X. Sui, Z. Liu, D. Niyato, Completion time and energy optimization in the UAV-enabled mobile-edge computing system, IEEE Internet Things J., 7 (8) (2020) 7808-7822.
[18] G. Zhang, W. Zhang, Y. Cao, D. Li, L. Wang, Energy-delay tradeoff for dynamic offloading in mobile-edge computing system with energy harvesting devices, IEEE Trans. Ind. Informat., 14 (10) (2018) 4642-4655.
[19] H. Lu, X. He, M. Du, X. Ruan, Y. Sun, K. Wang, Edge QoE: Computation offloading with deep reinforcement learning for Internet of Things, IEEE Internet Things J., 7 (10) (2020) 9255-9265.
[20] H. Guo, J. Liu, Collaborative computation offloading for multiaccess edge computing over fiber-wireless networks, IEEE Trans. Veh. Technol., 67 (5) (2018) 4514-4526.
[21] H. Li, F. Fang, Z. Ding, Joint resource allocation for hybrid NOMA-assisted MEC in 6G networks, Digital Communications and Networks, 6 (2020) 241-252.
[22] Y. Li, J. Liu, B. Cao, C. Wang, Joint optimization of radio and virtual machine resources with uncertain user demands in mobile cloud computing, IEEE Trans. Multimedia, 20 (9) (2018) 2427-2438.
[23] Y. Yu, F. Li, S. Liu, J. Huang, L. Guo, Reliable fog-based crowdsourcing: a temporal-spatial task allocation approach, IEEE Internet Things J., 7 (5) (2020) 3968-3976.
[24] P. Mach, Z. Becvar, Mobile edge computing: A survey on architecture and computation offloading, IEEE Commun. Surveys & Tuts., 19 (3) (2017) 1628-1656.
[25] G. Gao, Y. Wen, Video transcoding for adaptive bitrate streaming over edge-cloud continuum, Digital Communications and Networks, 7 (2020) 598-604.
[26] J. Zhao, Q. Li, Y. Gong, K. Zhang, Computation offloading and resource allocation for cloud assisted mobile edge computing in vehicular networks, IEEE Trans. Veh. Technol., 68 (8) (2019) 7944-7956.
[27] S. Zhong, S. Guo, H. Yu, Q. Wang, Cooperative service caching and computation offloading in multi-access edge computing, Digital Communications and Networks, 189 (2021) 10716.
[28] M. Sheng, Y. Dai, J. Liu, N. Cheng, X. Shen, Q. Yang, Delay-aware computation offloading in NOMA MEC under differentiated uploading delay, IEEE Trans. Wireless Commun., 19 (4) (2020) 2813-2826.
[29] Y. Dai, K. Zhang, S. Maharjan, Y. Zhang, Edge intelligence for energy-efficient computation offloading and resource allocation in 5G beyond, IEEE Trans. Veh. Technol., 69 (10) (2020) 12175-12186.
[30] C. Yi, J. Cai, Z. Su, A multi-user mobile computation offloading and transmission scheduling mechanism for delay-sensitive applications, IEEE Trans. Mobile Comput., 19 (1) (2019) 29-43.
[31] Y. Wu, B. Shi, L.P. Qian, F. Hou, J. Cai, X.S. Shen, Energy-efficient multi-task multi-access computation offloading via NOMA transmission for IoTs, IEEE Trans. Ind. Informat., 16 (7) (2020) 4811-4822.
[32] B. Liu, C. Liu, M. Peng, Resource allocation for energy-efficient MEC in NOMA-enabled massive IoT networks, IEEE J. Sel. Areas Commun., 39 (4) (2021) 1015-1027.
[33] C. Huang, J. Zhang, H.V. Poor, S. Cui, Delay-energy tradeoff in multicast scheduling for green cellular systems, IEEE J. Sel. Areas Commun., 34 (5) (2016) 1235-1249.
[34] T.X. Tran, D. Pompili, Joint task offloading and resource allocation for multi-server mobile-edge computing networks, IEEE Trans. Veh. Technol., 68 (1) (2020) 856-868.
[35] N. Alon, E. Lubetzky, The Shannon capacity of a graph and the independence numbers of its powers, IEEE Trans. Inf. Theory, 52 (5) (2006) 2172-2176.
[36] G. Scutari, F. Facchinei, L. Lampariello, S. Sardellitti, P. Song, Parallel and distributed methods for constrained nonconvex optimization - Part II: Applications in communications and machine learning, IEEE Trans. Signal Process., 65 (8) (2017) 1945-1960.
[37] G. Scutari, F. Facchinei, L. Lampariello, Parallel and distributed methods for constrained nonconvex optimization - Part I: Theory, IEEE Trans. Signal Process., 65 (8) (2017) 1929-1944.
Conflict of interest statement

We declare that we have no conflicts of interest related to this work.
