Abstract—Advanced communication technologies (e.g., 5G) will probably elicit a complete change of networks and their applications. For example, a growing number of services begin shifting from central clouds to vast numbers of mobile devices, as the hybrid use of cloud and fog computing technologies can provide enhanced quality of service and efficient utilization of resources. However, when designing such complex hybrid cloud-fog (HCF) systems, the early design stage still suffers from time-consuming modeling and inefficient evaluation based on conventional simulation or practical experimentation. Therefore, how to reduce design cost and improve development efficiency becomes a crucial issue in the process of designing large-scale HCF systems. To address this issue, this paper proposes a novel modeling framework for large-scale HCF systems based on a high-level formal language, i.e., performance evaluation process algebra (PEPA). Toward the key components of an HCF system, the proposed framework includes three crucial model prototypes: a compositional architecture model, an abstract communication model and a scheduling model. Moreover, the scheduling model is designed with a novel smart scheduling scheme that integrates two atomic scheduling algorithms and a decision module to make an efficient algorithm selection. The smart scheduling algorithm can adapt well to HCF systems by yielding stable and fast responses to end-users, particularly under dynamic system conditions. Finally, this framework is the first research to achieve the full potential of formal methods for industry-level modeling and evaluation.

Index Terms—Fluid flow approximation, formal modeling, hybrid cloud-fog systems, PEPA, scheduling.

Manuscript received July 16, 2020; revised October 6, 2020 and November 6, 2020; accepted November 20, 2020. Date of publication November 24, 2020; date of current version March 17, 2021. The research was supported by the National Natural Science Foundation of China (NSFC) under Grants 61702233, 61472343, 61773220, and 61976117, the State Key Laboratory of Software Development Environment under Grant SKLSDE-2017KF-03, and the Natural Science Foundation of Jiangsu Province under Grants BK20191409 and 19KJA360001. Recommended for acceptance by Prof. Kun Yang. (Corresponding author: Xiao Chen.)

Xiao Chen was jointly with the Jiangsu Key Laboratory of Meteorological Observation and Information Processing, Nanjing University of Information Science and Technology, Nanjing, China and is with the School of Informatics, University of Edinburgh, Edinburgh, Midlothian EH8 9AB, U.K. (e-mail: xiao.chen@ed.ac.uk).

Jie Ding is with the China Institute of FTZ Supply Chain, Shanghai Maritime University, Shanghai 201306, China (e-mail: jding@shmtu.edu.cn).

Zhenyu Lu is with the School of Artificial Intelligence, Nanjing University of Information Science and Technology, Nanjing 210044, China (e-mail: luzhenyu76@163.com).

Tianming Zhan is with the School of Information and Engineering, Nanjing Audit University, Nanjing 211815, China (e-mail: ztm@nau.edu.cn).

Digital Object Identifier 10.1109/TNSE.2020.3040215

2327-4697 © 2020 IEEE. Personal use is permitted, but republication/redistribution requires IEEE permission. See https://www.ieee.org/publications/rights/index.html for more information.

I. INTRODUCTION

Since 2019, 5G communication networks have begun to be commercially deployed for mobile services. In the near future, not only mobile devices but also other appliances, such as vehicles or industrial equipment, will join the 5G network for more convenient and extensive services. Taking the Internet of vehicles for example, with 5G or even future 6G connections [1], vehicles can provide a greater variety of services through the networks [2]. As most smart vehicles integrate more powerful devices such as high-performance CPUs, massive storage and various sensors (e.g., video cameras), they can behave as not only service consumers but also producers. Therefore, many conventional service providers, particularly cloud-service merchants, might consider new solutions to combine their services with a mobile platform such as the vehicular cloud [20] and vehicular fog [4] platforms. In fact, fog computing technologies have been developed to offload data and computation from central clouds to a group of virtual servers formed by mobile devices [5], [6]. Fog computing offers supplementary physical resources (e.g., computation, network, and storage) closer to end users for better services [7]. Conventional cloud service providers thus have to face the challenge of new fog computing-based service platforms. As a result, a growing number of applications will be built on such a hybrid platform by merging cloud and fog systems, i.e., the hybrid cloud-fog (HCF) system. Our recent research [8] proposed a blockchain-based trust management system that conducts trust evaluation and preserves trust data based on all distributed participating nodes. Compared to the conventional cloud-centralised system, the blockchain-based system needs to join all distributed end nodes together to perform trust evaluation and blockchain consensus operations by aggregating nodes' resources into a virtual fog server [9]. This design relies on an HCF system framework that provides efficient and scalable services without a cost increase of central cloud resources.

To shift services away from clouds, new systems need to be designed in terms of specific application requirements. Large-scale system design usually suffers from a time-consuming process for the preliminary design, as well as design evaluation and several rounds of design refinement. To facilitate system design work, this paper presents a reasonable solution for the key issue, i.e., how to generate efficient modeling and reliable evaluation to facilitate the preliminary design. To address these issues, we propose a novel modeling framework to efficiently build models and generate performance analysis for HCF-based systems.

The framework uses a novel formal modeling language, i.e., performance evaluation process algebra (PEPA) [10], to model all system designs. PEPA is a high-level formal modeling
Authorized licensed use limited to: UNIVERSITY OF CONNECTICUT. Downloaded on May 17,2021 at 04:38:43 UTC from IEEE Xplore. Restrictions apply.
448 IEEE TRANSACTIONS ON NETWORK SCIENCE AND ENGINEERING, VOL. 8, NO. 1, JANUARY-MARCH 2021
language for stochastic models, which describes a concurrent system as an interaction of components engaging in activities. In contrast to conventional process algebras, the activity duration in PEPA is assumed to be exponentially distributed. The PEPA language has a few combinators, and its structured operational semantics can be represented as:

S ::= (a, r).S | S + S | C_S
P ::= P ⋈_L P | P/L | C

where S denotes a sequential component, P denotes a model component that executes in parallel, and ⋈_L is the cooperation combinator over the action set L. Here, C stands for a constant that denotes either a sequential component or a model component as introduced by a definition, and C_S stands for constants that denote sequential components. The effect of this syntactic separation between these types of constants is to constrain legal PEPA components to be cooperations of sequential processes. More details of the PEPA syntax can be obtained in [10].

PEPA is selected as our main tool due to its key features, which are not available in other well-known performance modeling paradigms (e.g., stochastic Petri nets, queueing theory and discrete event simulation). The most important features of PEPA are: compositionality, the ability to efficiently model the system as the interaction of subsystems or components, which clearly defines the architecture of a comprehensive system as well as the interactive system behaviors; formality, defining all terms in the PEPA language with precise meaning, which can yield an accurate formal model specification of the system; and abstraction, the ability to build up complex system models from detailed components and to separate the details into a set of submodels that can be combined with the abstract model for analysis. These features allow PEPA to efficiently build models for large-scale systems and yield reliable numerical analysis based on stochastic simulation and fluid-flow approximation [11], compared to time-consuming discrete event simulation [18], [19], [24] and practical experiments [28]. The high-efficiency modeling and analysis process based on PEPA models can reduce the cost and time span of the preliminary design.

With the PEPA language, we propose the modeling framework for HCF systems by considering three detailed HCF-design issues. First, to generate a complete model of a specific HCF system, we need to define the system architecture that consists of all subsystems/components and their internal behaviors or external interactions. Based on the feature of compositionality, PEPA can effectively define the compositional structure of these components, including their behaviors and interactions. In this research, we adopt the Openstack cloud platform for the top cloud-layer of the HCF system. A compositional model is developed to demonstrate how a cloud-layer system model is efficiently built in the PEPA language. This compositional model can be considered a universal prototype for various HCF-based systems which integrate an Openstack-like cloud platform. In addition, we also introduce stochastic analysis based on PEPA models, in which all parameters are observed directly from running an Openstack platform.

Second, with a multicomponent design, most HCF-based systems usually adopt different communication protocols between their subsystems (e.g., cloud and fog subsystems) or among components of the same subsystem [29]. Thus, it is inefficient to include all protocol-based communication behaviors in a single model, which makes the model overly complicated to define and maintain. Therefore, this research uses abstraction modeling to demonstrate a communication model prototype that defines the offloading process of the HCF system. The abstract communication model moves protocol behaviors from a high-level offloading model to several submodels that can be analyzed together with the high-level model. Abstraction modeling can enable efficient model construction and maintenance without compromising correctness.

Third, to obtain better offloading performance, HCF systems usually use a smart scheduling mechanism to adapt to changing system conditions. The smart scheduling scheme is proposed by using multiple dynamic scheduling algorithms and a decision model to achieve the optimal selection of the scheduling algorithm. Scheduling algorithms and decision models are usually investigated with simulation tools due to their complex operations. To improve the modeling ability of the formal language, this part aims to build a prototype for scheduling models and the corresponding decision model. Finally, we demonstrate this scheduling model prototype and prove its correctness by comparing its analysis results with those of an equivalent simulation analysis. The main contributions can be highlighted as follows:

- The critical contribution of this work is to demonstrate an efficient modeling framework using a novel formal language (i.e., PEPA) to achieve comprehensive performance modeling and analysis of complex systems (e.g., HCF systems). The framework includes three core model prototypes. The first one is the compositional model prototype, representing the compositionality feature of the PEPA language, which can be efficiently used to define system components and their interactions in a clear and layered structure.
- The second prototype is the abstraction model, which is usually used to aggregate unobserved system components and behaviors for efficient model creation without losing analysis accuracy, and which indicates the abstraction feature of PEPA models. To introduce this model prototype, we use PEPA to model the system communication of the offloading process based on an HCF system scenario and also introduce an efficient numerical analysis method (i.e., fluid flow analysis) through presenting a group of results.
- The last prototype, the scheduling model, presents a novel design of the smart scheduling scheme by integrating two atomic scheduling algorithms to adapt to changing system conditions. This design combines a novel time-preemptive scheduling algorithm with the classic queue-preemptive algorithm and provides a smart decision support module to aid the selection of the two atomic scheduling algorithms by observing the changing system environment, so as to improve system performance and quality of service.
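As a concrete illustration of the compositional syntax above, the following toy fragment (in the textual notation used by PEPA tools) sketches clients and servers cooperating on a shared activity. The component names, rates and population sizes here are illustrative only and are not taken from the paper's cloud-layer model:

```
// Hypothetical client-server sketch in textual PEPA
r_req = 1.0; r_srv = 5.0; r_rst = 8.0;

Client     = (request, r_req).ClientWait;
ClientWait = (serve, r_srv).Client;

Server     = (serve, r_srv).ServerBusy;
ServerBusy = (reset, r_rst).Server;

// 100 clients and 10 servers cooperate on the shared activity "serve"
System = Client[100] <serve> Server[10]
```

Solving such a model, either through its underlying CTMC or by fluid-flow approximation, then yields throughput, utilization and response-time measures for each population, which is the analysis style used throughout this paper.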
CHEN et al.: EFFICIENT FORMAL MODELING FRAMEWORK FOR HYBRID CLOUD-FOG SYSTEMS 449
Besides, we present a PEPA prototype for modeling a scheduling algorithm and the related numerical analysis, which can be considered an alternative method of performance modeling and evaluation besides conventional simulation approaches.

To the best of our knowledge, this is the first research that investigates a universal framework based on the formal modeling method which can effectively promote the use of formal modeling in industry-level design, since the proposed framework is able to meet more complex modeling demands with those basic model prototypes. The rest of this paper will provide the details of those three model prototypes and the related performance analysis approaches.

II. RELATED WORK

Cloud architecture modeling has been implemented through various techniques, such as stochastic Petri nets (SPNs) and stochastic reward nets (SRNs). Silva [14] and Ghosh [15] apply SPNs and SRNs to model mobile cloud and cloud computing systems, respectively. SPNs/SRNs support a formal representation of system architecture and behaviors; nevertheless, their syntax confines the representation of compositional system architectures, which entails an inferior compositionality in model representation. In addition, another modeling technique, i.e., queueing theory, can be used for similar performance modeling and analysis. Chang [16] developed a novel approximate analytical model to conduct performance evaluation in IaaS clouds with a queueing model, and Li [17] also uses the M/M/c/r queueing system to propose an approximate analytical model for cloud computing centers by representing the rendering service platform as a multistation, multiserver system. These queueing models can efficiently represent a compositional architecture; however, they lack a formalized definition for such compositional models. In contrast to these modeling techniques, PEPA can provide both formality and compositionality in terms of its features stated in Section I.

Regarding the communication models, Su's research [18] presents a device-to-device-based communication framework that is used for content delivery by employing the resources of parked vehicles. Moreover, Chen [19] proposes a novel hybrid task offloading framework in fog computing so that end users have the flexibility of selecting among multiple options for task execution, including local mobile execution, device-to-device (D2D) offloading, and cloud offloading. Both Su and Chen use simulation for performance evaluation. Most related research (e.g., [5], [6]) also utilizes simulation as the main tool for performance modeling and numerical analysis, though simulation is informal and quite time-consuming, particularly for large-scale models. To improve modeling efficiency, some other researchers [20] have explored an offloading solution by enabling it for real-time traffic management in fog-based Internet of Vehicles (IoV). In this research, parked and moving vehicle-based fog nodes are modeled as an M/M/1 queue. Similarly, Wu [23] builds partial and full offloading models based on a modulated M/M/1 queue in a two-phase (fast and slow) Markov random environment. Moreover, Mehmeti [21] proposed a queueing analytic model for delayed offloading to analyze mean delay, offloading efficiency, and other metrics of interest, and Cheng [22] investigates WiFi offloading performance by establishing an explicit relation between offloading effectiveness and average service delay with an M/G/1/K queueing model in order to minimize the trade-off between those two aspects. As we noted before, a queueing model can represent the compositional architectures of systems but cannot formalize the model definition. Thus, this work defines the proposed D2D offloading scheme in a formal high-level PEPA model to achieve both formality and compositionality and adopts fluid-flow approximation to generate numerical analysis.

Scheduling efficiency is an important issue in cloud/fog-based systems, which has been explored in many recent research endeavors. Tan [24] has studied the approach of minimizing the total weighted response time for all tasks by constructing a general model, where the tasks are generated in arbitrary order and times at the mobile devices and are offloaded to servers with delays. In that research, the scheduling issue is investigated in a steady condition without considering its diversity. Other researchers [25] have had a similar goal of minimizing task completion time by designing an efficient task scheduling and resource management strategy through extensive simulation-based studies. However, these researchers neglect the diverse system conditions while using nonformal simulation tools for performance evaluation. Moreover, some other researchers, such as [26] and [27], also overlook the influence on the performance of the proposed schemes caused by changing system conditions. To address this issue, we will focus on developing a novel smart dynamic scheduling scheme to adapt to this changeable system environment, which can be integrated with HCF systems to obtain higher performance. Moreover, we have conducted some initial research (see [35]) on modeling the join-the-shortest-queue scheduling algorithm, which helped us gain experience in modeling scheduling algorithms under fog computing environments. This research will explore a new PEPA-based modeling framework and a novel smart scheduling algorithm for HCF systems.

In summary, the most recent research commonly utilizes either noncompositional (e.g., SPNs and SRNs) or nonformal techniques (e.g., queueing models and simulations) to construct system models; meanwhile, none of them has considered the realistic system environment (i.e., a system with changing conditions) that can cause unexpected influence on the performance of an offloading or scheduling scheme. All related literature is summarized in Table I in order to compare key features. Hence, this research aims to facilitate modeling work in system design and proposes a novel formal modeling framework to address several key issues of HCF-based systems by considering a more realistic system environment.

III. COMPOSITIONAL MODEL OF CLOUD SYSTEMS

This section introduces a PEPA-based compositional model prototype that is suitable for modeling systems with multiple components and a layered structure due to PEPA's key feature,
TABLE I
SUMMARY OF RELATED WORK
Fig. 4. Response Time Varying against the Number of Users.
Fig. 5. Response Time Varying against the Number of CTNs.
Fig. 6. Utilization of Userreset Varying against the Number of Users (1).
Fig. 7. Utilization of Userreset Varying against the Number of Users (2).
TABLE II
RELATED PARAMETERS IN 5G ENVIRONMENTS.
Fig. 12. Average Throughput of Root/Neighbour Base Stations Varying against Number of Vehicles and Request Rates.
Fig. 13. Average Response Time from Root Base Stations Varying against Number of Vehicles and Request Rates.
activity (b, s) is an entry activity if a derivative D′ enables (b, s), such as D′ −(b, s)→ D. En(D) denotes the set of entry activities of D.

After specifying the concepts of the exit activity and entry activity, the movements of the numerical state vector of the PEPA model are represented with these concepts. Here, we define v_ij(t) = N(C_ij, t) for the jth entry of the ith subvector at time t; N(C_ij, t) denotes the number of instances of the jth local derivative of sequential component C_i. In a very short time interval δt, the change of the vector entry v_ij(t) can be expressed as:

N(C_ij, t + δt) − N(C_ij, t)
  = − Σ_{(a,r) ∈ Ex(C_ij)} r · min_{C_kl ∈ Ex(a,r)} (N(C_kl, t)) δt   (exit activities)
  + Σ_{(a,r) ∈ En(C_ij)} r · min_{C_kl ∈ Ex(a,r)} (N(C_kl, t)) δt,   (entry activities)   (1)

where the first sum runs over the exit activities of C_ij, which decrease its population, and the second over its entry activities, which increase it.

…stations (shown in Fig. 11) by comparing the D2D-based offloading scheme to the non-offloading scheme. It is clear that Surf 2 & 4, which represent the D2D offloading scheme, show lower throughput in contrast to Surf 1 & 3 for the non-offloading scheme. This indicates that the communication overhead of the base stations is offloaded to D2D communications. With the increase of communication traffic, the difference between the two schemes grows until the communication channels reach full capacity. In addition, Fig. 13 represents the average response time from the root base station. The offloading scheme (Surf 2) generates a faster response to vehicle requests than the non-offloading scheme (Surf 1) at the root base station due to the reduced communication overhead.

In conclusion, the PEPA language can generate simplified abstraction models by aggregating model details that are not observed in the analysis. Abstraction modeling and fluid-flow approximation allow PEPA to generate efficient performance modeling and analysis, particularly for large-scale modeling systems.
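The structure of Eq. (1) can be checked numerically with a few lines of code. The sketch below integrates fluid equations of exactly this shape by forward Euler for a hypothetical two-population client–server model (not one of the paper's HCF models; all rates and population sizes are illustrative): each activity contributes an exit term to one component and an entry term to another, and the rate of the shared activity is bounded by the minimum over the cooperating populations, as in the min term of Eq. (1).

```python
def fluid_derivs(state, rates):
    """Fluid-flow ODEs in the shape of Eq. (1): each component count
    loses its exit-activity rates and gains its entry-activity rates,
    with shared activities bounded by min() over cooperating parts."""
    T, W, I, B = state            # Think/Wait clients, Idle/Busy servers
    r_req, r_srv, r_rst = rates
    request = r_req * T           # individual activity of Think clients
    serve = r_srv * min(W, I)     # shared activity: min over cooperating components
    reset = r_rst * B             # individual activity of Busy servers
    return (-request + serve,     # dT/dt: exit "request", entry "serve"
            request - serve,      # dW/dt
            -serve + reset,       # dI/dt
            serve - reset)        # dB/dt

def integrate(state, rates, t_end=20.0, dt=1e-3):
    """Forward-Euler integration of the fluid equations."""
    for _ in range(int(t_end / dt)):
        d = fluid_derivs(state, rates)
        state = tuple(s + dt * x for s, x in zip(state, d))
    return state

# 100 clients, 10 servers; illustrative rates
T, W, I, B = integrate((100.0, 0.0, 10.0, 0.0), (1.0, 5.0, 8.0))
```

Because every exit term reappears as an entry term elsewhere, the client and server populations are conserved over time, which is a quick consistency check on any fluid translation of a PEPA model.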
Fig. 14. Smart Scheduling Framework in Fog Computing.

to more flexible fog-side servers, i.e., service offloading. To generate efficient offloading, scheduling is a crucial issue that has been investigated in much research. However, most previous research aims to improve the scheduling process through refining an individual scheduling policy (e.g., R-JSQ scheduling [31], Dynamic Task Offloading and Scheduling [32], the Dynamic Switching Algorithm [33] and Energy-aware Task Allocation [34]). These approaches usually provide a more powerful but complex algorithm design, which might not be readily employed for industrial deployment. In contrast, our solution aims to develop a novel smart scheduling scheme with hybrid scheduling algorithms that are designed on the basis of classic scheduling policies. These classic scheduling policies usually benefit from high reliability and usability in practice.

Fig. 14 depicts a vehicular fog environment, in which a group of vehicles can be organized to perform as a Vehicular Fog Server (VFS) by sharing their available physical resources. In each VFS group, the vehicle nodes can be either static nodes (parking vehicles) or mobile nodes (moving vehicles) according to their mobility feature. Therefore, VFSs usually suffer from unstable computation capability caused by the changing physical resources of vehicular nodes. For this reason, this section proposes a novel smart scheduling scheme (SSS) that can adaptively conduct scheduling operations based on the real-time VFS conditions. In the scheme, a novel time-preemptive scheduling (TS) algorithm is developed for the unstable VFS groups, along with a smart scheduling scheme with both TS and a queue-preemptive scheduling (QS) algorithm that is derived from the classic join-the-shortest-queue (JSQ) algorithm [30]. In addition, a decision module is designed to perform the selection of the scheduling algorithm (TS or QS) according to real-time VFS conditions.

The main goal of the smart scheduling scheme is to combine two atomic scheduling algorithms (i.e., TS and QS) into a hybrid scheme and apply the optimal algorithm under a changing system environment. Such a design can take full advantage of each scheduling algorithm based on the system condition and improve the adaptivity of the scheduling scheme, particularly in diverse system conditions. As a result, end-users can have improved quality of service such as faster system response.

B. Definition of Smart Scheduling

With the aim of improving resource utilization and QoS, a smart scheduling scheme is proposed for such a diverse fog environment. In the smart scheduling scheme, the algorithm set S includes different atomic scheduling algorithms that can be selected by the scheme. Join-the-Shortest-Queue (JSQ) is a classic scheduling algorithm that assigns a new arrival to the queue with the least number of unfinished jobs. However, in the fog computing environment, both the incoming task stream and the server capability encounter unstable situations, and thus it is difficult to determine whether the classic JSQ policy still performs well in such a diverse fog environment. Meanwhile, recent task streams have an obvious heavy-tailed distribution feature. A larger number of waiting jobs in the queue does not imply a longer waiting time, as a large-sized job may consume much more time than many other small-sized tasks. In this situation, the classic JSQ algorithm may not be the most efficient solution for scheduling. Thus, the smart scheduling scheme aims to combine the classic JSQ algorithm with a new time-preemptive algorithm to improve scheduling adaptivity by selecting the optimal scheduling algorithm based on the system environment through a decision function D(f). The D(f) takes the status of the current servers (f ∈ F) as input and makes a decision to select a scheduling algorithm (s ∈ S) that is considered the output of the function D(f).

With this in mind, the classic JSQ algorithm is formalised as a queue-preemptive scheduling algorithm, i.e., the QS algorithm. Meanwhile, a new scheduling algorithm is built based on the predicted response time of jobs in the queue, which is named the time-preemptive scheduling algorithm, i.e., the TS algorithm. The decision function, D(f), is designed to select the optimal algorithm from QS and TS on the basis of the server status (F), which indicates the real-time system environment.

First, the definitions of QS and TS must be specified, and the performance of each scheduling algorithm needs to be evaluated in various system conditions to confirm the best operating conditions.

Definition 2: Queue-preemptive Scheduling (QS)

CF_QS = (1 / L_trans(t)) · C.   (3)

Generally, the queue-preemptive scheduling (QS) algorithm means that task scheduling is conducted on the basis of queue length. In other words, the server with the shortest waiting queue will receive the next scheduled task.

In Definition 2, the value of the QS algorithm is defined to be a controlling factor, CF_QS. The CF_QS value is governed by the transient queue length (L_trans(t)) of the VFS group. A longer queue length means a smaller value of CF_QS, which indicates the rate of sending tasks to a target server. Moreover, C
is specified as a constant value used for adjusting the sending rate. The value of CF_QS can be directly obtained by solving a PEPA model in which the QS process is defined.

Definition 3: Time-preemptive Scheduling (TS)

CF_TS = (R_avg(N) / R_trans(t)) · C.   (4)

The time-preemptive scheduling (TS) algorithm controls task scheduling based on the response time, which means that if the transient response time (i.e., R_trans(t)) of a server increases, the rate of scheduling tasks to that server (i.e., CF_TS) must be reduced. To control the rate of task scheduling, the TS algorithm designs a controlling factor represented in Definition 3, which is formed by a ratio of the average response time (i.e., R_avg(N)) to the transient response time (i.e., R_trans(t)) of a server. As a controlling factor, the ratio quantifies the variation of the specific transient response time of a task at the server compared to the average response time at the server. Usually, the average response time is a fixed but predicted value for each server, while the transient response time can be directly calculated from system observation. The following section will introduce how to obtain the transient response time and predict the average response time.

In Definition 3, the transient response time R_trans(t) represents the value of the response time based on the tasks in a server queue, which can be obtained by calculating the summation of the transient waiting time (W_trans(t)) of all tasks in the queue and the transient service time (S_trans(t)) of the task in the server. In other words, R_trans(t) indicates the duration for a task to wait before gaining access to the server.

As the value of R_trans(t) can be easily calculated with L_trans(t), we have the formula for calculating R_trans(t) as:

R_trans(t) = W_trans(t) + S_trans(t) = l/μ + 1/μ = (l + 1) · v(t),   (5)

in which l = L_trans(t) and μ is the average service rate of a fog server (v(t) = 1/μ).

However, the average response time R_avg(N) is defined as an average value for all tasks in a server system with N users. Actually, it is impossible to obtain the accurate value of R_avg(N) before completing all tasks. Thus, an approximation will be considered to estimate the value of R_avg(N). The fog server system can be theoretically considered as a closed M|M|1||K terminal model, in which system users can be thought of as sitting behind their terminals, and their sent tasks queue for the server system. These users send their task requests in every thinking timeslot τ that is exponentially distributed with a mean f(t) = 1/λ. The more users in the thinking state, the higher the effective completion rate of the thinkers. In this situation, we can consider that the actual arrival rate in the system must be proportional to the number of users in the thinking state. After sending a task request, a user needs to wait for a response. The task can be completed within a mean time v(t) = 1/μ.

According to the terminal model, every task has its own server in an infinite-server queueing station; thus, there will be no queueing or waiting. As a result, the average response time at the terminals (R̄_ter) equals f(t), which is independent of the number of users (N). However, the average response time of the server system (R̄_ser = R_avg(N)) depends on the number of users (N). Hence, R_avg(N) can be obtained by calculating the average cycle time, which is defined as:

C(N) = R̄_ter + R̄_ser = f(t) + R_avg(N).   (6)

Hence, C(N) represents the mean time of a customer going through the terminal-server cycle once. The throughput T(N) can be represented as N/C(N). Based on the above two equations, we can obtain:

T(N) = N / C(N) = N / (f(t) + R_avg(N)).   (7)

Thereafter, from Eq. (7), the average response time of the server system can be represented as:

R_avg(N) = N / T(N) − f(t).   (8)

As is known, the throughput T(N) can be represented as the product of the server-busy probability and the system service rate, which is formulated as:

T(N) = (1 − p_0) · μ = (1 − p_0) / v(t),   (9)

in which p_0 is the probability of the server being in the idle state. As the value of p_0 changes against N, p_0 can be considered a function of N, namely, p(N). Therefore, Eq. (8) can be converted to a new expression:

R_avg(N) = N · v(t) / (1 − p(N)) − f(t).   (10)

In a large-scale system, the value of N is set to a large value, which means that the probability of the server-idle state is quite small. As a result, 1 − p(N) in Eq. (10) can be approximated as 1. Therefore, Eq. (10) is now approximated to the following expression:

R_avg(N) ≈ N · v(t) − f(t).   (11)

Finally, according to Eqs. (5) and (11), the TS algorithm in Definition 3 (CF_TS) can be represented as:

CF_TS = (R_avg(N) / R_trans(t)) · C = ((N · v(t) − f(t)) / ((l + 1) · v(t))) · C.   (12)

In Eq. (12), all parameters have been attained except the transient queue length l = L_trans(t), which is measured from real-
of thinking users. Hence, in such a system model, users are time model operation.
represented as infinite servers with each user configured with Lastly, for the decision function DðfÞ, it is designed to
its own server. Once the user submits a task request, it only select an atomic scheduling algorithm (either QS or TS)
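The chain from Eq. (5) to Eq. (12) can be checked with a few lines of code. The following is a minimal sketch, not the authors' implementation; the function name and the treatment of the base rate C as a plain constant are our own assumptions.

```python
def cf_ts(queue_len, mu, n_users, think_time, c=1.0):
    """Controlling factor CF_TS of the time-preemptive scheduler.

    queue_len  -- transient queue length l = L_trans(t), observed at run time
    mu         -- average service rate of the fog server
    n_users    -- number of users N sitting behind their terminals
    think_time -- mean thinking time f(t) = 1/lambda
    c          -- base scheduling rate C (assumed constant in this sketch)
    """
    v = 1.0 / mu                      # mean service time v(t) = 1/mu
    r_trans = (queue_len + 1) * v     # Eq. (5): R_trans(t) = (l + 1) * v(t)
    r_avg = n_users * v - think_time  # Eq. (11): R_avg(N) ~ N * v(t) - f(t)
    return (r_avg / r_trans) * c      # Eq. (12): (R_avg / R_trans) * C
```

For example, with μ = 10, N = 100, f(t) = 1, and an observed queue of 9 tasks, the factor evaluates to about 9: the scheduler may dispatch to this lightly loaded server at roughly nine times the base rate, and the factor shrinks as the observed queue grows.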
Authorized licensed use limited to: UNIVERSITY OF CONNECTICUT. Downloaded on May 17,2021 at 04:38:43 UTC from IEEE Xplore. Restrictions apply.
458 IEEE TRANSACTIONS ON NETWORK SCIENCE AND ENGINEERING, VOL. 8, NO. 1, JANUARY-MARCH 2021
Lastly, the decision function D(f) is designed to select an atomic scheduling algorithm (either QS or TS) by comparing the dispersion of the probability distributions representing the varying arrivals and server capabilities. In D(f), the coefficient of variation (CV), which equals the ratio of the standard deviation to the absolute mean value, is used to estimate the dispersion of a probability distribution. D(f) first calculates the CVs of both parameters, i.e., the arrival rates (CV_λ) and the service rates (CV_μ), representing the dynamic arrivals and servers, and then compares the two CV values to determine which is larger. The parameter with the larger CV value has a greater impact on the system's changes. Thus, the selection of an algorithm is based on this comparison, which will be confirmed after the first stage of the performance evaluation in Section V-D.

C. PEPA-Based Scheduling Model

In this subsection, the PEPA language is used to model the QS and TS algorithms in order to evaluate their performance under various system conditions. As described in Section V-A, a diverse fog computing environment is assumed with respect to the following three conditions: a varying arrival condition, which indicates an unstable incoming task stream with varying arrival rates defined as λ = f(t); a varying service condition, which indicates unstable fog server capability with a varying service rate μ = g(t); and a condition with both varying arrival and service rates.

Fig. 15. State Diagram of Smart Scheduling Framework.

A general scheduling framework is modeled as shown in Fig. 15. In the model, the system is defined with two types of serving activities, specified as Sev1 and Sev2, which represent the serving activities of the two VFS groups previously shown in Fig. 14. According to the serving types, user tasks are specified in three groups: Tk'_S, a simple-task group that needs the Sev1 service only; Tk''_S, another simple-task group that needs the Sev2 service only; and Tk_M, a complex-task group requiring both Sev1 and Sev2. The scheduling process can be considered as the selection operation on candidate fog servers. For Tk_M, the serving action flow determined by the scheduling algorithm is either in the order Sev1 → Sev2 or Sev2 → Sev1. Thereafter, the framework is defined in PEPA as follows:

Step 1: Task generation is achieved by a branching process with a given probability p ∈ <p0, p1, p2>. Thus, the PEPA model of task generation is specified with a sending action whose rate λ is multiplied by the associated probability, after which the component behaves as one of the task types in its initial idle state, i.e., Tk_M_idle, Tk_S'_idle, or Tk_S''_idle, respectively:

Tk_all =def (sending, p0·λ).Tk_M_idle + (sending, p1·λ).Tk_S'_idle + (sending, p2·λ).Tk_S''_idle,

in which Sev1 and Sev2 are defined to be cooperatively used by both the server components (FSgrp_1 and FSgrp_2) and the task component (Tk_all).

Step 3: With the state diagram of Fig. 15, the task serving process can be modeled as the cooperative interaction between the task component and the fog server component.

Simple-task Group Serving Model:

Tk_S'_idle =def (Sev1, μ1).Tk_S'_end,
Tk_S''_idle =def (Sev2, μ2).Tk_S''_end.

Each simple-task reaches its end state (Tk_S'_end or Tk_S''_end) after a server processing action (Sev1 or Sev2).

Complex-task Group Serving Model:

Tk_M_idle =def (Sev1, μ1).(Sev2, μ2).Tk_M_end + (Sev2, μ2).(Sev1, μ1).Tk_M_end.

Each complex-task completes its processing on the fog servers in either action sequence Sev1 → Sev2 or Sev2 → Sev1.

Step 4: For the complex-tasks, a scheduling decision is made to select the first VFS group, which determines the processing action sequence. Thus, the scheduling algorithm is formally represented by the following expression:

P = H(s) ∘ D(c),   (13)

in which the function D is the decision function that selects an optimal scheduling algorithm based on the current server conditions (c), and the function H is a composite scheduling function based on the selected scheduling algorithm s = D(c). Finally, Eq. (13) outputs the selected server for the current task given a probabilistic value P.

Step 5: To model the scheduling process, we design a separate scheduler component (SCH) in the PEPA model, which is represented as:

SCH =def (Sch_QS, p_QS).SCH + (Sch_TS, p_TS).SCH,

in which Sch_QS executes the queue-preemptive scheduling (QS) and Sch_TS implements the time-preemptive scheduling (TS). p_QS and p_TS are the rates of the scheduling processes, obtained from Eq. (13) for the QS and TS algorithms, respectively.
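The CV comparison at the heart of D(f) is easy to prototype. Below is a hedged sketch; the function name and the use of observed rate samples are our own illustration, not code from the paper:

```python
import statistics

def decide_algorithm(arrival_samples, service_samples):
    """Sketch of the decision function D(f): compute the coefficient of
    variation (standard deviation / |mean|) of the observed arrival and
    service rates, and pick QS when the arrivals vary more, TS otherwise."""
    cv = lambda xs: statistics.pstdev(xs) / abs(statistics.fmean(xs))
    return "QS" if cv(arrival_samples) > cv(service_samples) else "TS"
```

For instance, `decide_algorithm([100, 400, 700], [118, 120, 122])` returns "QS", since the arrival samples are far more dispersed relative to their mean than the service samples.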
CHEN et al.: EFFICIENT FORMAL MODELING FRAMEWORK FOR HYBRID CLOUD-FOG SYSTEMS 459
Fig. 16. Queue Length of Servers Based on Three Scheduling Algorithms under Three Different System Conditions.

Hence, the previous complex-task serving model should be cooperatively implemented with the scheduler component SCH. Thus, the defined Tk_M_idle component needs to be updated by adding a new scheduling action:

Tk_M_idle =def (Sch_QS, p_QS).(Sev1, μ1).(Sev2, μ2).Tk_M_end + (Sch_QS, p_QS).(Sev2, μ2).(Sev1, μ1).Tk_M_end,

which implements the QS algorithm; the alternative Tk_M_idle using the TS algorithm is denoted as:

Tk_M_idle =def (Sch_TS, p_TS).(Sev1, μ1).(Sev2, μ2).Tk_M_end + (Sch_TS, p_TS).(Sev2, μ2).(Sev1, μ1).Tk_M_end.

Step 6: Finally, a complete scheduling model of a VFS system is represented by joining these components as Sys:

Sys =def Tk_all[n] ⋈_L (FSgrp_1[k1] ‖ FSgrp_2[k2] ‖ SCH[m]),
L = {pros1, pros2, Sch_QS, Sch_TS},

in which <n, m, k1, k2> denotes the number of instances of each component.

D. Performance Evaluation With Scheduling Model

In the performance analysis, to clarify the advantages of the two atomic scheduling algorithms, we first compare the proposed time-preemptive scheduling algorithm to the conventional JSQ algorithm, i.e., the formulated queue-preemptive scheduling given in Definition 2. As per the general service-level agreement between customers and service providers, the total resolution time (TRT) is a primary concern in representing the quality of service. In performance engineering, TRT is similar to the response time, which defines the amount of time between when a client sends a request and when the request is answered. Hence, the following analysis is based on this performance metric, i.e., the response time.

To conduct the performance evaluation, a set of parameters needs to be specified for the PEPA model defined in the previous section. Each task is modelled as a text message, and the workload is represented by the message size (e.g., in megabytes). The performance metric, i.e., the response time, is defined as the sum of the transmission delay and the processing delay, which are measured by altering the workload and the task arrival rates. Thereafter, the number of task senders is set to 4,000 for each fog server unit. As there are three types of task senders (i.e., Tk_S'_idle, Tk_S''_idle, and Tk_M_idle) defined in the model, 1,000 sender instances are set for each simple-task component, i.e., Tk_S'_idle and Tk_S''_idle, respectively, with 2,000 senders for the complex-task component, i.e., Tk_M_idle. The system assumes two independent server groups (i.e., FSgrp_1 and FSgrp_2) in the model, and the number of server instances in each group is set to one. The arrival rates of all task types are set to the same value of 400 (tasks/time unit). The service rates are set to 120 and 80 for FSgrp_1 (μ1) and FSgrp_2 (μ2), respectively. Furthermore, to achieve the diverse system conditions, trigonometric functions (e.g., sin and cos) are used to govern the varying means of the arrival and service rates. Three sets of experiments are conducted against the different system conditions: 1) varying task arrivals only; 2) varying server capability only; and 3) a comprehensive condition with both varying task arrivals and server capability. In addition, a classic random scheduling algorithm is applied for comparison with the QS and TS algorithms.

Figs. 16(a) and 17(a) present the average queue length of the three scheduling algorithms. As shown in the figures, both QS and TS exhibit substantially better performance than the random algorithm due to their shorter waiting-queue lengths. In detail, the QS algorithm generates a more stable and shorter queue length than TS. Hence, QS is the better choice under system condition 1).

Figs. 16(b) and 17(b) depict the performance of the three scheduling algorithms under varying service-capability conditions. It is clear that the random algorithm has inferior performance due to the large movement of its queue length; furthermore, the TS algorithm performs better than QS, with a more stable and shorter queue length. It is concluded that the TS algorithm becomes better under system condition 2).

To validate these conclusions, the complex system condition 3) is used to observe the scheduling performance. Figs. 16(c) and 17(c) present the performance under a complex condition with varying arrival rates and service rates, in which the varying arrival rates dominate the influence to a greater extent than the service rates. In this case, QS is better than TS, with both a shorter average queue length and reduced fluctuation of the queue length. Conversely, when the service rates vary more than the arrival rates, as shown in Figs. 16(d) and 17(d), the TS algorithm becomes superior to QS.

Based on the above analysis, we find that the QS algorithm performs better under an intensively varying-arrival environment, whereas the TS algorithm is more suitable for intensive changes of the server condition.
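As a concrete illustration of the setup above, time-varying rate functions governed by sin and cos could look as follows; the means follow the stated parameters (400 tasks/time unit, μ1 = 120), while the amplitudes and frequencies are assumed values for the sketch, not the paper's exact settings:

```python
import math

def arrival_rate(t, mean=400.0, amp=0.5, freq=1.0):
    """Varying arrival rate lambda = f(t), oscillating around its mean."""
    return mean * (1.0 + amp * math.sin(freq * t))

def service_rate(t, mean=120.0, amp=0.5, freq=1.0):
    """Varying service rate mu = g(t), oscillating around its mean."""
    return mean * (1.0 + amp * math.cos(freq * t))
```

Condition 1) would drive only `arrival_rate` while holding the service rates constant, condition 2) the opposite, and condition 3) both at once.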
Fig. 17. Queue Length of Servers Based on Two Dynamic Scheduling Algorithms under Three Different System Conditions.

Fig. 18. Response Time Comparison Based on Fluid-flow Analysis.

Fig. 19. Response Time Comparison Against Arrival Rate or Job Size.

Therefore, if we combine the two basic scheduling algorithms into a hybrid scheduling algorithm and apply a decision component to support the selection between them, such a smart scheduling scheme should gain improved performance under dynamic environments with both varying arrivals and servers. According to the design principles of D(f) and the above conclusion, if CV_λ > CV_μ, representing an intensively varying-arrival environment, D(f) selects the QS algorithm; conversely, it selects the TS algorithm.

Next, to verify the performance of the smart scheduling scheme, we compare it to the two randomized join-the-shortest-queue scheduling algorithms defined in [31], i.e., the SQ(2) and HSQ(2) schemes. In the SQ(2) scheme, a subset of two servers is selected uniformly at random from the set of N servers at each arrival instant. The job is scheduled to the server with the least number of unfinished jobs among the two chosen servers; SQ(2) assigns jobs to either of the two servers with equal probability. The HSQ(2) scheme, in contrast, first chooses a capacity value C_j with probability p_j upon the arrival of a new job. Two servers having the selected capacity are then chosen uniformly from the set of available servers with that capacity, and the job is finally assigned to the one with the least number of unfinished jobs among the two. Compared to the conventional JSQ algorithm, these two schemes do not need to observe all servers to find the one with the shortest waiting queue, which reduces the observation cost, particularly for systems with large node sets. In this experiment, we compare our smart scheduling scheme with the two randomized JSQ schemes under a heterogeneous system environment (i.e., with varying rates of task arrivals and varying VFS capability). All scheduling schemes are modeled in PEPA and analyzed with the fluid-flow approximation. Moreover, to verify the results of the fluid-flow approximation, we also build equivalent simulation models to enable a comparison.

Fig. 18 shows that our smart scheduling scheme yields less response delay than both SQ(2) and HSQ(2); moreover, under the complex system environment, the smart scheduling scheme exhibits stable response-time performance, which results from the joint use of the two algorithms, i.e., QS and TS. This hybrid scheme allows the scheduling algorithm to be altered with the changing system environment, which minimizes the influence on system performance. However, both the SQ(2) and HSQ(2) algorithms are affected by the varying environment, which causes fluctuations of the response time, as shown in Fig. 18.

To build a simulation-based analysis, a Java-based modeling tool named SimJava [36] is applied to generate a discrete-event simulation model equivalent to the PEPA model design. In the simulation model, the system components/behaviors and action rates are defined in line with the PEPA model: the numbers of instances of request senders and servers as well as their rates (e.g., arrival rates and service rates) are set the same as in the PEPA models, i.e., as stated before, 4,000 task senders per server covering the three types, one instance for each fog server group, a task arrival rate of 400, and service rates of 120 and 80 for the two server groups. Moreover, the network parameters are also consistent with the PEPA parameters: mean delays of 10 and 5 (simulation time units) from the scheduling node to the senders and the fog servers, respectively; 1 Gbps bandwidth; and a zero packet-loss rate.

Fig. 19 presents the mean response time of the three scheduling algorithms against varying arrival rate and job size. In the experiment, both the arrival rate and the job size are set from 0.1 to 1.0 with a step size of 0.1, and the service rate is also set to fluctuating values as stated before. The left-side sub-figure shows the change of the mean response time with varying arrival rate while the job size is set to 0.1.
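The SQ(2) selection rule compared above can be sketched in a few lines; `queue_lengths` (unfinished jobs per server) and the function name are illustrative, and ties are broken by picking either sampled server with equal probability, following the description above:

```python
import random

def sq2_select(queue_lengths):
    """SQ(2): sample two distinct servers uniformly at random and return
    the index of the one with fewer unfinished jobs."""
    i, j = random.sample(range(len(queue_lengths)), 2)
    if queue_lengths[i] == queue_lengths[j]:
        return random.choice((i, j))  # tie: either server, equal probability
    return i if queue_lengths[i] < queue_lengths[j] else j
```

HSQ(2) would differ only in first sampling a capacity class and then restricting `random.sample` to the servers of that capacity.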
Compared to SQ(2) and HSQ(2), the line representing the smart scheduling algorithm shows a smoother and lower-level change of the mean response time with the growth of the arrival rate. The right-side sub-figure shows the mean response time with varying job size while the arrival rate is 0.1. Similarly, the smart scheduling scheme presents a relatively more stable change and a slower increase of the response time with the growth of the job size. These analysis results demonstrate the better performance of our smart scheduling scheme under a complex system environment.

In summary, this section illustrates a scheduling model prototype based on the proposed smart scheduling scheme and indicates that PEPA exhibits a powerful model-specification capability and accurate performance-analysis precision, which can be applied to support the performance evaluation of large-scale and complex systems.

VI. CONCLUSION

This paper proposed a novel modeling framework for HCF systems based on a formal modeling technique, i.e., performance evaluation process algebra. We developed the framework with the aim of providing a cost-effective formal modeling solution for supporting the design of large-scale systems such as HCF systems. In comparison with conventional simulation and other modeling techniques (e.g., queueing models and Petri nets), the PEPA-based framework offers the following crucial advantages: 1) a strong capability for compositional and abstract modeling, which helps to quickly build models from an elaborate design scheme; 2) various analysis approaches (e.g., stochastic simulation and fluid-flow analysis) that support performance evaluation by quickly yielding reliable analysis results. In detail, this work presented how to use the proposed framework by introducing three core model prototypes: the compositional model illustrated how to build models of system structures and components as well as system behaviors; the abstraction model introduced how to aggregate model details in order to focus on the components and behaviors that need to be observed. Finally, we proposed a scheduling model that defined a novel smart scheduling scheme. The scheme integrated two atomic scheduling algorithms and used a decision-support module to select the right algorithm on the basis of the changing system environment. This design can improve scheduling performance and service quality by making the scheduler smarter. Furthermore, this model also demonstrated how to achieve algorithm modeling based on the PEPA model framework. In our future work, we will apply this framework to more modeling and analysis tasks and explore other techniques to make the framework more adaptable.

REFERENCES

[1] M. Giordani, M. Polese, M. Mezzavilla, S. Rangan, and M. Zorzi, "Toward 6G networks: Use cases and technologies," IEEE Commun. Mag., vol. 58, no. 3, pp. 55–61, Mar. 2020.
[2] L. Chettri and R. Bera, "A comprehensive survey on Internet of Things (IoT) toward 5G wireless systems," IEEE Internet Things J., vol. 7, no. 1, pp. 16–32, Jan. 2020.
[3] X. Wang and Y. Li, "Content retrieval based on vehicular cloud in Internet of Vehicles," IEEE Trans. Comput. Social Syst., vol. 6, no. 3, pp. 582–591, Jun. 2019.
[4] Y. Yao, X. Chang, J. Misic, and V. Misic, "Reliable and secure vehicular fog service provision," IEEE Internet Things J., vol. 6, no. 1, pp. 734–743, Feb. 2019.
[5] Z. Zhao et al., "On the design of computation offloading in fog radio access networks," IEEE Trans. Veh. Technol., vol. 68, no. 7, pp. 7136–7149, Jul. 2019.
[6] T. T. Nguyen, V. N. Ha, L. B. Le, and R. Schober, "Joint data compression and computation offloading in hierarchical fog-cloud systems," IEEE Trans. Wireless Commun., vol. 19, no. 1, pp. 293–309, Jan. 2020.
[7] N. Choi, D. Kim, S. J. Lee, and Y. Yi, "A fog operating system for user-oriented IoT services: Challenges and research directions," IEEE Commun. Mag., vol. 55, no. 8, pp. 44–51, Aug. 2017.
[8] X. Chen, J. Ding, and Z. Lu, "A decentralized trust management system for intelligent transportation environments," IEEE Trans. Intell. Transp. Syst., to be published, doi: 10.1109/TITS.2020.3013279.
[9] Y. Jiao, P. Wang, D. Niyato, and K. Suankaewmanee, "Auction mechanisms in cloud/fog computing resource allocation for public blockchain networks," IEEE Trans. Parallel Distrib. Syst., vol. 30, no. 9, pp. 1975–1989, Sep. 2019.
[10] J. Hillston, A Compositional Approach to Performance Modelling. Cambridge, U.K.: Cambridge Univ. Press, 1996.
[11] J. Hillston, "Fluid flow approximation of PEPA models," in Proc. QEST'05, 2005, pp. 33–42.
[12] R. Yu, G. Xue, and X. Zhang, "Application provisioning in fog computing-enabled Internet-of-Things: A network perspective," in Proc. INFOCOM'18, Honolulu, HI, USA, 2018, pp. 783–791.
[13] J. He, J. Wei, K. Chen, Z. Tang, Y. Zhou, and Y. Zhang, "Multitier fog computing with large-scale IoT data analytics for smart cities," IEEE Internet Things J., vol. 5, no. 2, pp. 677–686, Apr. 2018.
[14] F. A. Silva et al., "Mobile cloud performance evaluation using stochastic models," IEEE Trans. Mobile Comput., vol. 17, no. 5, pp. 1134–1147, May 2018.
[15] R. Ghosh, F. Longo, F. Frattini, S. Russo, and K. S. Trivedi, "Scalable analytics for IaaS cloud availability," IEEE Trans. Cloud Comput., vol. 2, no. 1, pp. 57–70, 2014.
[16] X. Chang, B. Wang, J. K. Muppala, and J. Liu, "Modeling active virtual machines on IaaS clouds using an M/G/m/m+K queue," IEEE Trans. Serv. Comput., vol. 9, no. 3, pp. 408–420, 2016.
[17] X. Li et al., "Performance analysis of cloud computing centers serving parallelizable rendering jobs using M/M/c/r queuing systems," in Proc. ICDCS'17, Atlanta, GA, USA, 2017, pp. 1378–1388.
[18] Z. Su, Y. Hui, and S. Guo, "D2D-based content delivery with parked vehicles in vehicular social networks," IEEE Wireless Commun., vol. 23, no. 4, pp. 90–95, Aug. 2016.
[19] X. Chen and J. Zhang, "When D2D meets cloud: Hybrid mobile task offloadings in fog computing," in Proc. ICC'17, Paris, France, 2017, pp. 1–6.
[20] X. Wang, Z. Ning, and L. Wang, "Offloading in Internet of Vehicles: A fog-enabled real-time traffic management system," IEEE Trans. Ind. Informat., vol. 14, no. 10, pp. 4568–4578, Oct. 2018.
[21] F. Mehmeti and T. Spyropoulos, "Performance modeling, analysis, and optimization of delayed mobile data offloading for mobile users," IEEE/ACM Trans. Netw., vol. 25, no. 1, pp. 550–564, 2017.
[22] N. Cheng, N. Lu, N. Zhang, X. S. Shen, and J. W. Mark, "Opportunistic WiFi offloading in vehicular environment: A queueing analysis," in Proc. IEEE GLOBECOM'14, Austin, TX, USA, 2014, pp. 211–216.
[23] H. Wu and K. Wolter, "Stochastic analysis of delayed mobile offloading in heterogeneous networks," IEEE Trans. Mobile Comput., vol. 17, no. 2, pp. 461–474, Feb. 2018.
[24] H. Tan, Z. Han, X. Y. Li, and F. C. M. Lau, "Online job dispatching and scheduling in edge-clouds," in Proc. INFOCOM'17, Atlanta, GA, USA, 2017, pp. 1557–1566.
[25] D. Zeng, L. Gu, S. Guo, Z. Cheng, and S. Yu, "Joint optimization of task scheduling and image placement in fog computing supported software-defined embedded systems," IEEE Trans. Comput., vol. 65, no. 12, pp. 3702–3712, Dec. 2016.
[26] J. Wan, B. Chen, S. Wang, M. Xia, D. Li, and C. Liu, "Fog computing for energy-aware load balancing and scheduling in smart factory," IEEE Trans. Ind. Informat., vol. 14, no. 10, pp. 4548–4556, Oct. 2018.
[27] L. F. Bittencourt, J. Diaz-Montes, R. Buyya, O. F. Rana, and M. Parashar, "Mobility-aware application scheduling in fog computing," IEEE Cloud Comput., vol. 4, no. 2, pp. 26–35, Mar./Apr. 2017.
[28] T. Wang, L. Qiu, A. K. Sangaiah, G. Xu, and A. Liu, "Energy-efficient and trustworthy data collection protocol based on mobile fog computing in Internet of Things," IEEE Trans. Ind. Informat., vol. 16, no. 5, pp. 3531–3539, May 2020.
[29] S. Zeadally, J. Guerrero, and J. Contreras, "A tutorial survey on vehicle-to-vehicle communications," Telecommun. Syst., vol. 73, pp. 469–489, 2020.
[30] H.-C. Lin and C. S. Raghavendra, "An approximate analysis of the join the shortest queue (JSQ) policy," IEEE Trans. Parallel Distrib. Syst., vol. 7, no. 3, pp. 301–307, 1996.
[31] A. Mukhopadhyay and R. R. Mazumdar, "Analysis of randomized join-the-shortest-queue (JSQ) schemes in large heterogeneous processor-sharing systems," IEEE Trans. Control Netw. Syst., vol. 3, no. 2, pp. 116–126, Jun. 2016.
[32] H. A. Alameddine, S. Sharafeddine, S. Sebbah, S. Ayoubi, and C. Assi, "Dynamic task offloading and scheduling for low-latency IoT services in multi-access edge computing," IEEE J. Sel. Areas Commun., vol. 37, no. 3, pp. 668–682, Mar. 2019.
[33] T. Wang et al., "An intelligent dynamic offloading from cloud to edge for smart IoT systems with big data," IEEE Trans. Netw. Sci. Eng., vol. 7, no. 4, pp. 2598–2607, Oct.–Dec. 2020.
[34] T. Liu, J. Li, F. Shu, and Z. Han, "Optimal task allocation in vehicular fog networks requiring URLLC: An energy-aware perspective," IEEE Trans. Netw. Sci. Eng., vol. 7, no. 3, pp. 1879–1890, Jul./Sep. 2020.
[35] X. Chen and L. Wang, "Exploring fog computing-based adaptive vehicular data scheduling policies through a compositional formal method - PEPA," IEEE Commun. Lett., vol. 21, no. 4, pp. 745–748, Apr. 2017.
[36] F. Howell and R. McNab, "SimJava: A discrete event simulation package for Java with applications in computer systems modelling," in Proc. 1st Int. Conf. Web-Based Modelling Simulation, 1998.

Jie Ding received the B.S. degree in mathematical education from Yangzhou University, Yangzhou, China, in 2001, the M.S. degree in mathematical statistics from Southeast University, Nanjing, China, in 2004, and the Ph.D. degree in communication from The University of Edinburgh, U.K., in 2010. He is currently a Professor with Shanghai Maritime University. His research interests include performance modelling for communication and computer systems.

Zhenyu Lu received the B.Sc. degree in electricity and the M.Sc. degree in information and communication from the Nanjing Institute of Meteorology, Nanjing, China, in 1999 and 2002, respectively, and the Ph.D. degree in optics engineering from the Nanjing University of Science and Technology, Nanjing, in 2008. He was a Research Associate with the Department of Mathematics and Statistics, University of Strathclyde, Glasgow, U.K., from 2012 to 2013. He is currently a Professor with the School of AI, Nanjing University of Information Science and Technology. He has authored or coauthored more than seven international journal papers. His current research interests include neural networks, stochastic control, and artificial intelligence.