This is the author's version which has not been fully edited and
content may change prior to final publication. Citation information: DOI 10.1109/TSC.2024.3354296
Abstract—Serverless edge computing is a specialized system design tailored for Internet of Things (IoT) applications. It leverages serverless computing to minimize operational management and enhance resource efficiency, and utilizes the concept of edge computing to allow code execution near the data sources. However, edge devices powered by renewable energy face challenges due to energy input variability, resulting in imbalances in their operational availability. As a result, high-powered nodes may waste excess energy, while low-powered nodes may frequently experience unavailability, impacting system sustainability. Addressing this issue requires energy-aware resource schedulers, but existing cloud-native serverless frameworks are energy-agnostic. To overcome this, we propose an energy-aware scheduler for sustainable serverless edge systems. We introduce a reference architecture for such systems and formally model energy-aware resource scheduling, treating the function-to-node assignment as an imbalanced energy-minimizing assignment problem. We then design an optimal offline algorithm and propose faasHouse, an online energy-aware scheduling algorithm that utilizes resource sharing through computation offloading. Lastly, we evaluate faasHouse against benchmark algorithms using real-world renewable energy traces and a practical cluster of single-board computers managed by Kubernetes. Our experimental results demonstrate significant improvements in balanced operational availability (by 46%) and throughput (by 44%) compared to the Kubernetes scheduler.
1 INTRODUCTION

SERVERLESS edge computing [10], [11] is an emerging technological innovation that combines the flexibility and scalability of serverless computing with the proximity and low latency of edge devices. This approach has the potential to revolutionize edge computing and drive real-time Internet of Things (IoT) applications. By distributing computational workloads to the edge of the network, serverless edge computing enables real-time processing, reduces data transfer costs, and enhances user experience. It effectively bridges the gap between cloud computing and localized processing, empowering developers to leverage […]

Serverless computing, implemented through Function-as-a-Service (FaaS), is a paradigm initially designed for cloud computing to relieve developers from the burdens of managing resources and backend components [9]. Similarly, serverless at the edge extends this concept to IoT software development, allowing developers to focus on the core business logic of their applications instead of complex operational practices [6]. By leveraging serverless at the edge, developers can write independent stateless […]

The scalability and portability offered by serverless computing enable applications to dynamically scale functions based on real-time demand [9]. This flexibility optimizes resource utilization and efficiently accommodates varying workloads. The statelessness and lightweight virtualization used to deploy serverless functions enable efficient execution, making it conceivable to have an application running with as little as 5 MB of memory.

Extreme edge computing refers to the practice of performing computing tasks and data processing at the extreme edge of a network, often directly on the devices or sensors themselves. An extreme edge network configuration is […] computer functioning as the gateway at the network's edge [1]. Extreme edge computing finds applications in various fields, particularly in remote/inaccessible areas such as smart farming, smart forestry, and the Oil & Gas Industry's Industrial IoT [10]. While efforts have been made to adopt the serverless computing model in edge environments to leverage the advantages of reduced operational complexity and latency [5], [13], [32]–[34], its adoption in extreme edge computing poses unique […] computing models for extreme edge computing lies in the reliance on energy harvesting methods and battery-powered edge devices, such as a cluster of single-board […]

• Mohammad Sadegh Aslanpour is with Monash University and CSIRO's DATA61. Email: mohammad.aslanpour@monash.edu
• Adel N. Toosi and Muhammad Aamir Cheema are with Monash Univer- […]
© 2024 IEEE. Personal use is permitted, but republication/redistribution requires IEEE permission. See https://www.ieee.org/publications/rights/index.html for more information.
This article has been accepted for publication in IEEE Transactions on Services Computing.
[…] operational availability, throughput, and reliability, thereby […] Such consequences pose challenges to the sustainability of edge computing, as they undermine key aspects such as efficient utilization of renewable energy inputs, uniform operational availability, reduced failure rates, and uninterrupted operation over extended periods of time.

While several well-established serverless frameworks such as OpenFaaS, KubeEdge, AWS Lambda@Edge, and Azure Functions on IoT Edge are designed to meet the demands of edge computing and address its unique challenges, it is worth noting that these frameworks are primarily designed to be energy-agnostic. They are focused on leveraging the advantages of the serverless edge computing paradigm without specifically addressing energy-related considerations. To address this challenge, we propose faasHouse, a dynamic resource scheduling algorithm designed specifically to tackle the energy and load variability issues in serverless edge computing. In this paper, we make the following key contributions:

• […] energy-aware function scheduling algorithm that exploits computation offloading to address the imbalanced energy availability and the system's throughput challenges. faasHouse—inspired by the Kubernetes design—follows a rigorous scoring scheme to rank edge nodes for function placements and then incorporates an assignment algorithm modeled as a House Allocation problem to decide on the placements.
• We provide a practical implementation of faasHouse using a real testbed and prototype, extending PiAgent [12]. We empirically evaluate faasHouse's performance where experimental results demonstrate that faasHouse significantly improves the balanced operational availability (by 46%) and throughput (by 44%) of edge nodes.

2 SUSTAINABLE SERVERLESS EDGE COMPUTING

Fig. 1 depicts an overall system overview in which the edge nodes are wirelessly connected to each other and to a controller node, forming a cluster. Power is unevenly supplied to the edge nodes by renewable energy sources and rechargeable storage devices (e.g., batteries). Unique to […] irregular rates—and a computation resource to execute tasks. A controller plays the role of the gateway and scheduler simultaneously. As a gateway, it distributes tasks/requests to their corresponding functions for execution. The scheduler monitors the edge nodes' State of Charge (SoC), which is the battery level of charge relative to its capacity, and utilizes offloading opportunities to migrate/place functions of low-powered nodes to well-powered nodes. Our earlier research shows that computation offloading allows a node to save a considerable amount of energy without imposing significant performance overhead [2].

The challenge in this system is that while low-powered and/or overloaded nodes are prone to run out of energy and become operationally unavailable, well-powered and/or underutilized nodes are likely to waste their excess energy. The scheduler is responsible for handling this imbalance to achieve a more balanced energy distribution and improved throughput. A node is deemed operationally available if it has enough battery charge; otherwise, it is unavailable and […]

[…] this research. Assume a farming field enabled with edge computing for crop monitoring and precision agriculture to improve crop productivity, as demonstrated in [3], [16]. Edge nodes, like SBCs [16], are positioned across the farm and are connected through WiFi or LoRaWAN. Sensors such as camera, temperature, and humidity are attached to each node, so that each node can collect data from the surrounding area, e.g., estimating the bee population. Edge nodes are equipped with actuators to enable automated actions such as pest deterrence or water disconnection.

In use cases like these, edge devices are meant to run without or with limited internet connection and are powered by solar panels and batteries in remote and vast farming areas [3]. Such constraints give rise to particular challenges. For instance, an edge node may have to per- […] the node's energy. Alternatively, a node may be positioned […] perform minimal computations and thus waste its excess input energy. All of these events have the potential to result in a severe energy imbalance among the edge nodes, leading to operational unavailability. In such scenarios, some […]
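The scheduler behavior sketched so far—reading each node's SoC and steering work from low-powered toward well-powered nodes—can be illustrated with a minimal, self-contained sketch. The pairing heuristic and data layout here are illustrative assumptions, not the paper's actual algorithm:

```python
# Sketch: classify nodes by State of Charge (SoC) and propose offloading
# pairs. The half-split heuristic is an illustrative assumption.

def classify(soc):
    """Split nodes into low-powered and well-powered halves by SoC."""
    ranked = sorted(soc, key=soc.get)      # ascending battery charge
    half = len(ranked) // 2
    return ranked[:half], ranked[half:]

def offloading_pairs(soc):
    """Greedily pair the lowest-powered node with the highest-powered one."""
    low, high = classify(soc)
    return list(zip(low, reversed(high)))

soc = {"node1": 0.05, "node2": 0.90, "node3": 0.30, "node4": 0.65}
print(offloading_pairs(soc))  # lowest-SoC nodes offload to highest-SoC nodes
```

In a real deployment, the SoC readings would come from the controller's monitoring channel rather than a static dictionary.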
2.2 System Architecture
Fig. 2 depicts the software architecture of our proposed sustainable serverless edge computing platform as well as the application workflow. A cluster of edge devices constituted of SBCs is enabled with container virtualization such as Docker or containerd for resource efficiency and portability [14]. An orchestrator tool such as Kubernetes resides on edge nodes to enable container deployment, container/node failure handling, high availability, etc. [14]. Orchestration tools generally operate in a centralized master/worker architecture, where the controller node within the cluster assumes the master role and resides on the master node. Edge nodes responsible for executing serverless functions act as worker nodes.

Fig. 2: Architectural view of the system (stack, top to bottom: Serverless Platform, e.g., OpenFaaS; Container Orchestration, e.g., Kubernetes; Container Virtualization, e.g., Docker or containerd; Edge Devices, e.g., Single-board Computers).

The controller hosts three main components: (a) database, (b) serverless platform, and (c) scheduler. A key-value database such as Redis serves data and state persistence so that services freely migrate and scale. A […]

2.3 System Model and Problem Formulation
[…] each timeslot and is defined by S = {s_t^i | i ∈ [1, n], t ∈ T, 0 ≤ s_t^i ≤ ϑ}, where ϑ denotes the maximum battery […] Note that S, R, E, and C represent values per d_i at timeslot t ∈ T.
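To ground the notation, a one-line battery model keeps each SoC value s_t^i within [0, ϑ], as required by the set S above. The harvest and consumption inputs in this sketch are illustrative assumptions:

```python
# Sketch: evolution of a node's State of Charge s_t, clipped to the battery
# capacity theta, matching the bound 0 <= s_t <= theta in the set S.

def next_soc(soc, harvested, consumed, theta):
    """Charge gained from harvesting minus consumption, bounded by [0, theta]."""
    return max(0.0, min(theta, soc + harvested - consumed))

soc = 800.0
theta = 1000.0  # maximum battery capacity (e.g., mWh)
for harvested, consumed in [(300.0, 50.0), (0.0, 400.0)]:
    soc = next_soc(soc, harvested, consumed, theta)
print(soc)  # first step saturates at theta; second step drains to 600.0
```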
[…] workload. In an optimal offline model with future knowledge, Γ is assumed to be known. If node d_i is down at t, then γ_t^{i,j} = 0 replicas are deployed in the cluster. In serverless, […]

[…] usage, (b) direct usage, and (c) offloading overhead. For the base usage, b_i determines a static rate of energy use for the node's hardware and OS to operate. For the direct usage, […]
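The three energy components can be combined in a small sketch. The per-task cost and the way overheads scale with it are assumptions made for illustration; the send/receive overhead factors (0.02 and 0.01) are the values reported empirically later in Section 4.1:

```python
# Sketch: a node's energy use per timeslot as the sum of (a) base usage
# (static rate b_i for hardware/OS), (b) direct usage from executing tasks,
# and (c) offloading send/receive overheads. The per-task cost model is an
# illustrative assumption.

def node_energy(base, executed, sent, received,
                cost_per_task=1.0, o_send=0.02, o_recv=0.01):
    direct = executed * cost_per_task
    overhead = (sent * o_send + received * o_recv) * cost_per_task
    return base + direct + overhead

# 10 units base usage, 100 tasks executed, 20 offloaded out, 5 received
print(node_energy(10.0, 100, 20, 5))
```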
Therefore, the design principle of faasHouse is to dynamically operate in every timeslot, according to the IBM MAPE loop [17]—Monitoring, Analysis, Planning, and Execution—to address the imbalance in nodes' availability. Algorithm 1 shows the faasHouse design, which is discussed in the following:
Monitoring: In each timeslot, the latest SoC of nodes is read through HTTP-based communication (Lines 2–3).
Analysis: The SoC value of each node can undergo a rigorous analysis. Here, we simply use the actual observation of the SoC of a node. Predictive analysis is out of the scope of this work, but it can expand this phase further.
Planning: As the decision-making phase in the context of serverless edge computing for IoT applications, it requires special considerations such as practicality and extensibility. We have developed a novel planner inspired by the principles of the Kubernetes scheduler, the first-class scheduling model in containerized edge computing environments [2], [5], [12]–[14]. Our scheduler follows a two-step process consisting of a scoring phase followed by an assignment phase. The detailed design of the scoring and assignment phases (Lines 6–8) is explained in the subsequent subsections.

[…] function's replica to determine its preference for nodes. The preference is quantified using scores, where a higher score indicates a stronger preference. Algorithm 2 first initializes the score s_q^{i,j,k} of the function f_t^{i,j,k} for each node d_q to zero (Line 1). Then, it determines the desirability of placing the function on each node (Lines 2–4) according to constraints. Without loss of generality, this determination is made extensible by designing a Plugin module that allows scoring using customized requirements. A Plugin gets an implemented plugin's name and its weight as a tuple (⟨name⟩, ⟨weight⟩), a function, and a candidate node. Next, it fetches the corresponding plugin and determines the node's score for the given function.
Remark: Extensibility is essential for a scheduler as it allows the inclusion of non-energy factors, such as soft QoS constraints like data locality, and customization to meet diverse IoT application requirements.
In this work, we have implemented the following plugins: energy, locality, and stickiness. The energy plugin, defined as p(energy, weight) | p ∈ P, weight ≥ 0, determines the function's preference on nodes based on the energy benefit […]
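The MAPE phases described above can be sketched as a control loop. The helper functions below are simplified placeholders (static SoC readings, a naive best-SoC plan) standing in for the real monitoring channel and the scoring/assignment phases:

```python
# Sketch of faasHouse's MAPE-style round: Monitor SoC, Analyze, Plan
# (score then assign), Execute. All helpers are illustrative placeholders.

def monitor(nodes):
    # In the real system, SoC is read over HTTP; here we fake static readings.
    return {n: 0.5 for n in nodes}

def analyze(soc):
    return soc  # the actual SoC observation is used directly (no prediction)

def plan(soc, functions, nodes):
    scores = {(f, n): soc[n] for f in functions for n in nodes}  # scoring phase
    return {f: max(nodes, key=lambda n: scores[(f, n)]) for f in functions}

def execute(placement):
    return placement  # would call the orchestrator (e.g., Kubernetes) here

def mape_round(nodes, functions):
    soc = analyze(monitor(nodes))
    return execute(plan(soc, functions, nodes))

print(mape_round(["n1", "n2"], ["f1"]))
```

One such round runs per timeslot; the next subsections refine the plan step into scoring plugins and a House-Allocation assignment.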
[…] where the desirability of a candidate node z_t^q | q ∈ [1, n] relative to the function's local node z_t^i is measured and weighted. Measuring the difference instead of using the actual z_t^q not only sets higher preferences for better-powered nodes but also judiciously deters lower-powered nodes. Multiplying by x_t^q and x_t^i excludes down nodes and functions of down nodes from scoring as they are not meant for placements.

The locality plugin, defined as p(locality, weight) | p ∈ […], where the desirability of placing a function on its local node, if operationally available, is measured and weighted proportional to that of the node's z_t^i. This is important to avoid excess offloading and to preserve QoS by letting functions stay closer to the data source.

The third plugin is named stickiness, defined as p(sticky, weight) | p ∈ P, weight ≥ 0, to prevent recurring replacements of a function that deteriorate QoS [12]. The stickiness plugin determines a weighted preference for a function on the node that hosted the function in the previous scheduling round as:

    s_q^{i,j,k} = { weight × x_t^q × x_t^i,  if f_t^{i,j,k} ≠ d_i ∧ f_t^{i,j,k} = d_q
                  { 0,                       otherwise.                          (10)

[…] Generalized Assignment Problems (GAP) [19], such that it attempts to find an optimal assignment of tasks (functions) to agents (nodes), subject to the agent's capacity and task size constraints. A linear programming model of the problem is as follows:

    maximize  Σ_{i=1}^{n} Σ_{j=1}^{m} Σ_{k=1}^{λ_t^{i,j}} Σ_{q=1}^{n} s_q^{i,j,k} × y_q^{i,j,k}    (11)

subject to:

    Σ_{q=1}^{n} y_q^{i,j,k} = 1    ∀i ∈ [1, n], j ∈ [1, m], k ∈ [1, λ_t^{i,j}]    (12)

    Σ_{i=1}^{n} Σ_{j=1}^{m} Σ_{k=1}^{λ_t^{i,j}} v^j × y_q^{i,j,k} ≤ c_t^q    ∀q ∈ [1, n]    (13)

    y_q^{i,j,k} ∈ {0, 1}    (14)

The objective (11) is to maximize the total score of assigning functions to nodes. The assignment is subject to the following constraints. All functions are required to be assigned, and each to only one node, as in (12), excluding down nodes and their functions from each side. Nodes' capacity constraints are enforced by (13), where v^j, y_q^{i,j,k}, and c_t^q denote the function size, function assignment, and node capacity, respectively. Given integral constraints, the problem can be solved using Integer Programming techniques, which are not scalable for […] science. House allocation is the problem of assigning houses (nodes) to people (functions) considering people's preferences. Examples of this problem are school allocation for Boston schools [20] and office allocation for Harvard University professors [18]. The mechanism to solve the House Allocation problem is that all functions request their most preferred nodes and the allocation occurs if no conflict exists. Conflicts are resolved by a supplementary mechanism. A challenge in solving the assignment at hand is the need to prioritize the local functions of a node over offloading functions. This is to avoid situations where a function of […] out a host. This also helps reduce the offloading effects. Thus, we employ the special case of House Allocation with […]

A cycle list is used to resolve conflicts (Line 2). Functions are sorted in descending order, based on the sum of scores they have given to nodes. This brings functions with higher preferences to the front of the queue (Line 3). The […] another function, a conflict occurs and the function in hand joins the cycle list. This list holds functions and preferences […] a cycle clearance. This means the assignment accepts all requests and every function receives the requested node. Hence, a cycle clearance verification is always conducted before searching nodes for a function (Line 6). If the function is not involved in the cycle list, the assignment proceeds (Lines 8–14); otherwise, the cycle clearance is triggered that […] owns any unassigned function (Line 10). If not, the assign- […]
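To make the two-phase planner concrete, the following is a minimal, self-contained sketch. The data layout and plugin weights are illustrative assumptions, and the cycle-list conflict resolution of Algorithm 3 is replaced here by a simple capacity-aware greedy pass serving the highest-preference functions first:

```python
# Sketch of the scoring + assignment phases. Scoring follows the plugin
# scheme: energy scores the desirability difference between a candidate and
# the function's local node; locality rewards staying local; stickiness
# follows Eq. (10). The assignment is a simplified greedy stand-in for the
# House-Allocation mechanism of Algorithm 3.

def energy_plugin(f, q, ctx):
    i = ctx["local"][f]
    return (ctx["z"][q] - ctx["z"][i]) * ctx["x"][q] * ctx["x"][i]

def locality_plugin(f, q, ctx):
    return ctx["z"][q] * ctx["x"][q] if q == ctx["local"][f] else 0.0

def stickiness_plugin(f, q, ctx):
    # Eq. (10): reward the node that hosted f last round, if f was offloaded.
    i, prev = ctx["local"][f], ctx["host"][f]
    return ctx["x"][q] * ctx["x"][i] if (prev != i and prev == q) else 0.0

PLUGINS = [(energy_plugin, 1.0), (locality_plugin, 0.5), (stickiness_plugin, 0.2)]

def score(f, q, ctx):
    return sum(w * p(f, q, ctx) for p, w in PLUGINS)

def assign(functions, nodes, ctx, capacity, size):
    """Serve functions in descending total-score order; each takes its
    best-scoring node with enough free capacity (cf. constraint (13))."""
    remaining = dict(capacity)
    order = sorted(functions,
                   key=lambda f: sum(score(f, q, ctx) for q in nodes),
                   reverse=True)
    placement = {}
    for f in order:
        for q in sorted(nodes, key=lambda q: score(f, q, ctx), reverse=True):
            if remaining[q] >= size[f]:
                placement[f] = q
                remaining[q] -= size[f]
                break
    return placement

ctx = {"z": {"a": 0.2, "b": 0.9}, "x": {"a": 1, "b": 1},
       "local": {"f1": "a", "f2": "a"}, "host": {"f1": "a", "f2": "b"}}
print(assign(["f1", "f2"], ["a", "b"], ctx, {"a": 1, "b": 1}, {"f1": 1, "f2": 1}))
```

Here f2 wins node b (energy benefit plus stickiness), and f1 falls back to its local node a once b's capacity is exhausted, mirroring how local functions and capacities bound the offloading decisions.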
Algorithm 3: Assignment
Data: D, F, S, C, V
Result: F, updated functions' assignment
1 foreach f_t^{i,j,k} ∈ F do f_t^{i,j,k} ← ∅
[…]

ment is allowed (Line 11); otherwise, the local functions of the selected node are queued to the front, respecting their priority (Line 13), and the function under investigation is added to the cycle list (Line 14). A sample scenario is provided in Appendix B for further information.
Computational Complexity: We analyze the computational complexity of the assignment algorithm, which is the main computation required for the scheduler. We use R to denote the total number of functions to be assigned. Note that R = |F| and is bounded by O(nmℵ). The computation complexity is bounded by O(R²). Please refer to Appendix C for a detailed analysis.

4.1 Experimental Setup
We implement a highly practical and real-world distributed software system, extending PiAgent [12], written in ∼5000 lines of Python 3.9 code to control nodes, generate workload, perform scheduling, etc. We conduct a 24-hour experiment and repeat each experiment five times per scheduler. The implemented system and key settings, extensively following [12], are as follows.
◦ Edge Nodes: A cluster of 10 Raspberry Pi Model 3 B+ (n = 10) is used. Although we expect our system to work well in heterogeneous settings, we have deliberately opted for a homogeneous setup to better analyze the performance evaluation results, given the system is tested under heterogeneous energy input and workload. This could be […]

[…]ing UM25C USB power meters, which are highly precise [2]. A 4-core Pi gives ω = 4000, but we keep a safety net of 10% to avoid overheating. We empirically set the send (o^{j,send}) and receive (o^{j,recv}) overheads for function offloading at 0.02 and 0.01, according to benchmarks in [2]. containerd is enabled on Pis for containerization, Kubernetes K3s (version v1.21) is deployed for orchestration, and OpenFaaS (version 8.0.4) enables serverless computing. Given their edge-friendly design, they impose a negligible amount of computing overhead, i.e., 1.5% CPU use on Raspberry Pis and around 50 MB of memory footprint, which will not affect the execution of functions [2]. A containerized Redis […] of edge computing [2], [13], [14].
◦ Application and Workload: A generic workflow that builds a commonly-used pipeline [2], [3], [22], i.e., sensing data, generating requests, invoking an execution unit, processing, storing results, and triggering actuators, is developed for evaluations. This pipeline applies to a wide range of event-driven IoT applications [22], from Smart City and Autonomous Vehicles to Industrial IoT and Smart Farming. We acknowledge that, although our system design can accommodate a broad spectrum of applications, attempting to host an excessive number of applications on a single edge node may prove impractical due to the inherent resource limitations of these devices. For instance, a Raspberry Pi 3, equipped with only 1 GB of RAM, has its capabilities
constrained. Therefore, we have carefully selected three microservices (m = 3) with distinct characteristics of CPU-, data-, and I/O-intensive tasks, as motivated in Section 2.1. This selection ensures a diverse and representative sample for our comprehensive evaluations. It is worth noting that adhering to this range aligns with practical use cases as supported by existing literature [22].

[…] (1) soil moisture controlling—it constantly reads the soil sensor data, calculates the moisture level, and stores results in Redis; (2) irrigation management—it monitors the soil moisture level and applies the irrigation policies to connect/disconnect water through the attached actuators; and (3) a pest-repellent AI-enabled application—it captures an image using an embedded camera, triggers an HTTP request, the image is fetched from the generating node by the executing function, and a Single Shot Detector (SSD) [23] machine learning model performs object detection on the image to find pest birds in the area [12]. All microservices are developed in Python and are packaged as serverless […] using Helm, an automation tool for Kubernetes deployments, which allows easy and fast (re-)deployments [12].

4.1.1 Benchmark algorithms
◦ Optimal: A baseline offline algorithm that requires in-advance knowledge of renewable energy input and incoming workload for all future timeslots under consideration. It solves the constrained optimisation problem described in Section 2.3.1, modeled in MiniZinc (version 2.5.5) exploiting the Gurobi solver (version 9.1.1).
◦ Kubernetes: This is the built-in default scheduler of Kubernetes […]
◦ Local: A baseline algorithm that always deploys functions locally, i.e., f_t^{i,j,k} ← d_i. This is worth evaluating to understand the impact of computation offloading.
◦ Random: Another baseline that randomly places functions across the cluster. It demonstrates the worst-case scenario where no intelligence is considered in the scheduler's decisions.
◦ Zonal: A dynamic energy-aware scheduler, as detailed in [12], that forms zones of nodes having different ranges of SoC. The scheduler aims to assign functions within a zone to equally- or better-powered zones, under special considerations such as stickiness and warm scheduling.

We conducted five sets of experiments, which are detailed below, and analyzed the results obtained. Each experiment was repeated five times, and the averages and standard deviations are reported.
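The Local and Random baselines above admit near one-line implementations. A sketch (seeded for reproducibility; names and layout are illustrative):

```python
# Sketch of the two simplest baselines: Local pins every function to its
# local node (f <- d_i); Random scatters functions across the cluster.

import random

def local_scheduler(functions, local):
    return {f: local[f] for f in functions}

def random_scheduler(functions, nodes, seed=0):
    rng = random.Random(seed)
    return {f: rng.choice(nodes) for f in functions}

local = {"f1": "n1", "f2": "n2"}
print(local_scheduler(["f1", "f2"], local))  # every function stays local
print(random_scheduler(["f1", "f2"], ["n1", "n2", "n3"]))
```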
◦ First set of experiments, using the setting in Section 4.1, gives overall results obtained by schedulers.
◦ Second set evaluates the impact of varying the battery size of edge nodes (ϑ) from 1000 to 1250, 1500, and 1750 […]
◦ Third set evaluates the impact of varying workload intensity from 25 to 50, 75, and 100%.
◦ Fourth set evaluates the impact of varying the scheduling interval times (∆t) from 15 to 30, 60, and 180 minutes. The results are in Appendix G.
◦ Fifth set evaluates the impact of varying the CPU governor, influencing resource capacity (ω), from powersave to conservative, ondemand, and performance. The results are in Appendix H.

Operational Availability: As shown in Fig. 4, we observe that the maximum operational availability of the minimum available node is achieved by the optimal (17.1%), faasHouse (15.9%), and zonal (12.2%), respectively, over the Kubernetes (11.7%) scheduler, demonstrating a 46%, 36%, and 4% improvement. In contrast, the local and random schedulers impair the value to 8.6% and 9.4%.
In Fig. 4, the range, i.e., the max–min difference, of operational availability shows that the optimal, faasHouse, and zonal improve it to 36.63%, 42.5%, and 48.23%, respectively, over the Kubernetes scheduler. This minimization represents a more balanced utilization of energy by nodes despite skewed energy supply and load generation.
Another indication of achieving balanced energy usage is the reduction in the SD of the cluster-wide operational availability, where the optimal (11.04), followed by faasHouse (13.4), minimize it by up to 45% and 33%, respectively, over the Kubernetes scheduler (20.02).
The key factor for such differences between schedulers is that well-performing schedulers utilize the well-powered nodes, particularly their wasted energy, to help low-powered nodes. In detail, faasHouse exploits the enabled resource sharing to offload functions of low-powered nodes to well-powered nodes. This intelligence is augmented with future knowledge in the optimal scheduler.
On the weak side, the local scheduler exhausts the low-powered nodes' energy in isolation and fails to exploit resource sharing and offloading. The Kubernetes scheduler is enabled with resource sharing, but its performance-driven scheduling fails to satisfy sustainability requirements. The random scheduler is enabled with resource sharing, but there is no rationale behind its re-scheduling decisions; hence, not only is no improvement observed for low-powered nodes by random, but the experiments' logs also report that it is highly likely to offload functions of well-powered nodes to low-powered nodes. Lastly, while the zonal algorithm respects the energy status of nodes and exhibits better performance […]

[…] the local scheduler (35.55%). This finding confirms that the improvement in minimum availability through computation offloading has not resulted in a significant energy loss or energy overhead. The similarity in operational availability […] throughput for schedulers such as optimal and faasHouse, which incur higher energy consumptions.
Failure: Lowering failure is crucial for sustainability, and it is important for the scheduler to actively prevent nodes from reaching a low battery state, minimizing the chances of failure. Fig. 5 shows the number of times, on average, nodes have experienced a failure (off state) per scheduler. Compared to the Kubernetes scheduler (#73), the optimal (53), faasHouse (59), and zonal (68) reduced the failures by 27%, […] energy state of the node and excessively pushes the node back into the failed and booting-up state. This behavior appears so extreme that the local scheduler is less practical than the random scheduler in this metric.
Longest Availability: Fig. 6 shows that the optimal (520 minutes) and faasHouse (470 minutes), followed by the zonal (450 minutes) schedulers, succeed in prolonging the operational time of nodes on average by up to 24%, 11%, and 7% against the Kubernetes scheduler, respectively. The local scheduler performed poorly since it fails to recognize the low-power state on the nodes and keeps utilizing them even when they have just been turned on. This on-and-off situation can continue for a long period of time in some cases, especially if the energy input and consumption of a low-power node remain relatively at the same rate.
Throughput: Fig. 7 illustrates the system's overall throughput for different schedulers. The optimal (1.02 tasks/sec) and faasHouse (0.92) schedulers enhance the throughput by up to 44% and 30%, respectively, compared to the Kubernetes scheduler (0.71). This improved throughput confirms that nodes, particularly low-powered ones, can execute more tasks when they remain operationally available for a longer duration. Conversely, the reduced throughput observed in the random scheduler indicates that nodes experience interruptions in their serviceability due to energy insufficiency. It is worth noting that since requests are sent synchronously, meaning a new request is sent only after the response for the previous one is received, nodes that are available for a longer duration generate more requests, ultimately increasing the overall throughput.
Wasted Energy: Energy waste occurs when a node's battery is fully charged, resulting in the underutilization of the excess energy input. Fig. 8 shows that the optimal (14 mWh) and faasHouse (103 mWh) managed to minimize the energy wastage significantly compared to the Kubernetes scheduler (433 mWh), demonstrating a 97% and 76% improvement, […]ness, coupled with proper placement of functions, allows […]
728 to perform as efficiently as the faasHouse, mainly due to optimal and faasHouse to utilize well-powered nodes more 789
729 its dependence on sticky offloading and warm scheduling frequently, thereby reducing the likelihood of full battery 790
730 features that can make it perform over-cautiously. and mitigating energy wastage. This intelligence also ben- 791
731 Note that the average operational availability of nodes efits the low-powered nodes to survive longer, as shown 792
732 is observed to be approximately identical for all schedulers: before. However, the local scheduler, while continuously 793
733 optimal (41.8%), faasHouse (42.09%), Kubernetes (42.29%), ran- utilizing all nodes, fails to achieve such performance, since it 794
734 dom (40.61%), and zonal (39.98%), with the exception of fails to minimize the wastage of the energy of a node having 795
Authorized licensed use limited to: KIIT University. Downloaded on January 28,2024 at 12:54:54 UTC from IEEE Xplore. Restrictions apply.
© 2024 IEEE. Personal use is permitted, but republication/redistribution requires IEEE permission.See https://www.ieee.org/publications/rights/index.html for more information.
This article has been accepted for publication in IEEE Transactions on Services Computing. This is the author's version which has not been fully edited and
content may change prior to final publication. Citation information: DOI 10.1109/TSC.2024.3354296
Fig. 4: Operational availability of nodes per scheduler

Fig. 6: Longest availability of nodes per scheduler

Fig. 8: Nodes' wasted energy per scheduler
a low rate of workload.

By keeping a low-powered node available longer, as in the optimal and faasHouse, more tasks are generated in the cluster. More generated tasks mean more task execution as well, thereby increasing the throughput. This behavior arises from the dual role of nodes: being both load generators and executors. This factor is also a major reason for the optimal and faasHouse not to increase the average availability of the cluster when utilizing computation offloading. That is, the increased load in the system also escalates the overall energy consumption, which decelerates the increase in average operational availability. The advantage of this increased load generation is that the IoT application serves more tasks and experiences a more desirable Quality of Experience (QoE).

Response Time: Fig. 9 shows the CDF of response time per scheduler. All schedulers present a relatively similar trend, mainly due to the local-like network in the extreme edge. The minor difference in the obtained response times confirms that the balanced energy usage achieved by the optimal and faasHouse is not at the expense of significant QoS degradation (see Fig. 9). The main source of the difference lies in how much a scheduler promotes the local placement of functions. Given this, the local scheduler appears to benefit more. faasHouse can encourage local placements via its locality plugin if a shorter response time is desired. Furthermore, for offloading-enabled schedulers, another potential source of difference can be the trade-off between throughput and response time. In other words, having fewer hosted functions on under-loaded nodes can lead to a relatively shorter response time. This approach, however, leads to lower throughput, which is in line with the resource-aware nature of the Kubernetes scheduler.

4.2.2 Impact of Battery Size

Figure 10 shows the gradual increase in availability for the node with minimum availability as the battery size transitions from the smallest to the largest. This observation is evident in the increasing trends of both the optimal and faasHouse measurements. The larger battery storage capacity effectively minimizes energy input wastage, providing a justification for the observed trend. Note that the largest battery size, i.e., 1750 mWh, while consistent in improving the minimum available node, fails to improve the average availability of nodes significantly, since the energy input to batteries is limited and not scaled proportionally. The nodes' failure metric in Figure 11 demonstrates improvement across all schedulers as the battery size increases. However, it is worth noting that the energy-aware schedulers, such as optimal, faasHouse, and zonal, outperform the other schedulers in this regard.

The longest availability metric depicted in Figure 12 demonstrates an initial substantial increase followed by a more gradual improvement for the energy-aware schedulers upon increasing the battery size. This slow-down in the increase can be associated with the fact that at some point onward the size of the battery does not matter, since
the energy intake remains unchanged due to the unchanged renewable energy inputs.

Fig. 9: Response time per scheduler (CDF of response times in seconds; 90th-percentile tail latency: optimal=1.19, faasHouse=1.36, Kubernetes=0.97, local=1.00, random=1.59, zonal=1.49)

The throughput measure, shown in Fig. 13, demonstrates a linear increase with the enlarged batteries, since low-powered nodes can remain available longer and tasks are continuously executed, with the random and local schedulers presenting the least improvement. This can be associated with the increased opportunity to utilize peers' excess energy. That is, with larger batteries, not only well-powered (and/or under- [...] more opportunity to benefit.

For further analysis in terms of other metrics and observations, please refer to Appendix E.

4.2.3 Impact of Workload Intensity

In the main series of experiments, the workload is generated by nodes for the entire experiment time, i.e., intensity=100%, although the request rate differs per node and per application. It is important, however, to observe the performance under varying workload intensity, i.e., 25%, 50%, 75%, and 100%. To achieve intensity=50%, for example, a node's workload generator triggers the function during half the experiment time, in different timeslots that last for a certain time. The trigger times and timeslots are generated using Poisson and exponential distributions. The overall decreasing trend in Fig. 14 is attributed to the increase in both the workload and, consequently, the overall energy consumption. Maximizing the availability of the minimum available node, shown in Fig. 14, demonstrates the persistent dominance of the optimal and faasHouse schedulers, followed by the zonal scheduler. According to Fig. 14, the minimum available node is observed by the optimal scheduler at 22.1%, although the nodes experienced the workload only 25% of the time. This is because the energy input limitation is a key factor, and the base energy usage of nodes appears inevitable. Also, with the increase in workload intensity, the importance of a dynamic energy-aware scheduler is highlighted further. For example, with 25% intensity, the Kubernetes scheduler performs only 39% less efficiently than the optimal, but this gap increases to 46% when the intensity is 100%.

The nodes' failure results in Fig. 15 show that by increasing the workload intensity, the number of failures increases for all schedulers. This confirms a more challenging situation for schedulers when experiencing intensive workloads. It is observable that despite such challenges, the optimal and faasHouse retain their dominance by handling intensive workloads more desirably.

Tail Latency (90th): The faasHouse remains performant, with an 11% distance from the optimal. The local scheduler is the most affected scheduler when the workload is intense, since bombarding a recently booted-up node with intensive workloads can more quickly cause the node to fail and restart the timer for measuring the longest availability, which happens to be a recurring occurrence for nodes when using the local scheduler.

[...] downward direction by the increase in the workload intensity, [...] between different schedulers' throughput is increased as the workload becomes more intense. This once again dictates the necessity of a dynamic energy-aware scheduler for challenging workloads. While the local scheduler consistently maintains a reasonable throughput, it falls short in meeting sustainability metrics such as nodes' failure and longest availability.

For further analysis in terms of other metrics and observations, please refer to Appendix F.

[...] Additionally, we analyzed the impact of varying scheduling intervals and resource capacity (i.e., the CPU governor) on faasHouse. For a detailed account of this sensitivity analysis, [...] resources to functions, e.g., powersave, provide better performance.

5 Discussion

In this paper, we introduced an energy-aware scheduler called faasHouse to enhance the sustainability of serverless edge computing in extreme edge environments. It is worth noting the following key innovations:

• The problem of imbalanced energy at the edge is handled by scheduler software design, which appears much more affordable than dealing with hardware upgrades.

• faasHouse, unlike Kubernetes, adopts collective placements: the decisions and placements for all nodes and functions are made at once, as opposed to the one-by-one strategy that ignores the requirements of other functions in the scheduling queue, resulting in major conflicts.

• faasHouse supports both hard and soft constraints and features an extensible design inspired by the Kubernetes scheduler. This adaptability allows it to meet the evolving requirements of edge applications. Unlike faasHouse, the state of the art, e.g., the zonal scheduler, generally requires a re-design to either accommodate a different hard/soft constraint than energy or apply to a different application. For example, the zonal scheduler requires the system admin to empirically determine zones' ranges in advance, which is a non-trivial task, making it challenging to reuse for diverse IoT applications.

Key Insights: While the scheduler proves the applicability of software-based adjustments to address environment- and hardware-related challenges, which is
Fig. 14: Workload intensity vs. minimum available nodes

Fig. 15: Workload intensity vs. nodes' failure

Fig. 16: Workload intensity vs. longest availability

Fig. 17: Workload intensity vs. throughput
much more affordable than hardware-based solutions, there are certain interesting observations worthy of consideration.

• The computation offloading allowed using well-powered nodes to host peers' functions. However, we observed that the operational availability of well-powered nodes is not reduced proportionally to the increased available energy of low-powered nodes. This insight shows that the offloaded functions tend to utilize [...] increase in the overall operational availability of the cluster. The rationale behind this lies in the improved [...]

[...] handling and high-availability mechanisms in Kubernetes such that the leadership can be shared among multiple controllers. For saturation concerns that can occur due to heavy communication between nodes, or a lack of coverage for extended areas, leveraging multiple clusters of Kubernetes appears a reasonable solution, as supported by Liqo. Lastly, a decentralization of the scheduler appears worthy of consideration as a potential avenue for future work.

6 Related Work

[...] of sustainable serverless edge, we broadly discuss the literature on the resource scheduling of sustainable edge.
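The collective, Kubernetes-inspired filter/score design described in the Discussion can be illustrated with a small sketch. This is not the paper's faasHouse implementation: all names and values below (Node, filter_hard, score_soft, min_battery_mwh) are hypothetical, and the greedy one-pass assignment merely stands in for the paper's assignment-problem formulation.

```python
# Illustrative sketch only (hypothetical names, not the faasHouse code):
# a collective placement pass with a hard energy/CPU filter and a soft
# battery-headroom score, in the style of Kubernetes filter/score plugins.
from dataclasses import dataclass

@dataclass
class Node:
    name: str
    battery_mwh: float   # currently stored energy
    capacity_mwh: float  # battery capacity
    free_cpu: float      # schedulable CPU share

def filter_hard(node: Node, cpu_req: float, min_battery_mwh: float) -> bool:
    """Hard constraints: enough free CPU and a minimum energy reserve."""
    return node.free_cpu >= cpu_req and node.battery_mwh >= min_battery_mwh

def score_soft(node: Node) -> float:
    """Soft constraint: prefer well-powered nodes (high stored energy,
    normalized by capacity) so excess energy is used instead of wasted."""
    return node.battery_mwh / node.capacity_mwh

def collective_place(functions: dict, nodes: list, min_battery_mwh: float = 100.0) -> dict:
    """Place ALL functions in one pass (collective placement) rather than
    one by one; the most demanding functions are placed first, while the
    most options remain, to reduce conflicts in the queue."""
    placement = {}
    for fn_name, cpu_req in sorted(functions.items(), key=lambda kv: -kv[1]):
        feasible = [n for n in nodes if filter_hard(n, cpu_req, min_battery_mwh)]
        if not feasible:
            placement[fn_name] = None      # unschedulable in this round
            continue
        best = max(feasible, key=score_soft)
        best.free_cpu -= cpu_req           # account for the new assignment
        placement[fn_name] = best.name
    return placement
```

In these terms, the filter step enforces hard constraints, while the score step steers load toward well-powered nodes so that their otherwise wasted energy serves low-powered peers, in the spirit of the offloading behavior evaluated above.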
(e) Microservices are treated differently in our work than in the literature, by adopting the FaaS model. This facilitates the migration process (i.e., moving from one node to another), as encouraged by [7]. Also, to minimize the interruptions over migrations, our solution practices graceful terminations (launching the new container and then terminating the old one), which is not typically practiced in the literature [4], [26].

Finally, (f) computation offloading, a key contribution of our work, is an ideal opportunity to achieve energy awareness at the edge [7]. A simulation-based solution is [...]

6.2 Scheduling in Serverless Cloud Computing

Studies on resource scheduling of serverless functions in clouds are primarily focused on non-energy matters [28]–[31]. For instance, memory and cache improvements have been implemented by a scheduler to enhance the cache hit ratio [28], [29]. Contrary to these approaches, we focus on energy challenges at the extreme edge; for instance, we simply eliminate container caching effects by pre-caching. Lastly, Xtract [31] is a scalable high-performance platform, which is an extension of funcX [30], and adopts a random placement of functions.

6.3 Scheduling in Serverless Edge Computing

Studies on scheduling problems in serverless edge computing have primarily focused on resource allocation, QoS, and cost efficiency [5], [13], [32]–[34]. However, energy considerations have been overlooked to a large extent. LaSS [34] takes care of latency-sensitive workflows by vertically adjusting the functions' allocated resources to the demand. Similarly, LETO [32], a theoretical solution, considers the horizontal scaling of resources. In [33], QoS and cost efficiency are approached through task offloading in a cloud-to-edge continuum, where they adopt the Pi 3 B+ as edge nodes, as we did. With cost efficiency in mind, AuctionWhisk [5] [...]

There is a significant gap in the existing literature regarding the energy awareness of serverless platforms. Discussions on sustainable serverless at the edge were commenced by [6], and we initiated practical investigations in [12]. The present work builds upon our previous study in [12] and introduces significant advancements in several key aspects. Respectively, we introduce a practical architecture for the serverless edge. The problem context is defined on top of [12], where we here incorporate new factors into (a) the edge nodes, i.e., the inclusion of CPU governors, (b) the application and workload, i.e., the inclusion of bandwidth and data transfer, and (c) node availability, i.e., the inclusion of standby energy considerations. The algorithm design presents a completely novel and distinctive approach, prioritizing practicality and extensibility, inspired by the design of the Kubernetes scheduler, a first-class scheduling tool at the edge [2], [5], [12]–[14]. The prototype platform of [12] is extended to incorporate our new algorithms. In addition, this work encompasses a completely new set of experiments, focusing on sustainability-related metrics and parameters for evaluation. Furthermore, Spillner et al. [6] suggested the need for energy-aware serverless edge computing. They theoretically analyzed viable techniques such as resource scheduling, server consolidation, dynamic resource scaling, and utilizing artificial intelligence. However, the suggestions lack an implementation or simulation.

7 Conclusion

[...] augmented with serverless through resource scheduling, in the presence of variable renewable energy input and workload. A realistic architecture and system model for sustainable edge was presented, where an energy-aware scheduler called faasHouse was proposed to address the challenge of imbalanced energy supply for renewable-energy- and battery-powered edge clusters. The experimental results demonstrate notable improvements in various aspects: (a) a 46% increase in the utilization of the least available nodes, (b) a 27% reduction in node failure rates, (c) a 24% improvement in the reliability of edge nodes, and [...] scheduling. This approach eliminates the need for hardware adjustments while yielding substantial benefits. Moreover, it promotes the environmentally friendly deployment of modern computing models, such as serverless computing, at the edge. Future work includes extending the system model to support a continuum of cloud-to-edge nodes for a broader range of IoT applications, including cost-aware ones. Additionally, exploring other adaptation techniques like load balancing and resource consolidation, as well as incorporating heterogeneous resources (e.g., GPUs, TPUs), can enhance the scheduler's capabilities.

References

[1] [...] survey," Journal of Systems Architecture, vol. 98, pp. 289–330, 2019.
[2] M. S. Aslanpour, A. N. Toosi, and R. Gaire, "WattEdge: A Holistic Approach for Empirical Energy Measurements in Edge Computing," in ICSOC, Springer, 2021, pp. 531–547.
[3] T. Ojha, S. Misra, and N. S. Raghuwanshi, "Internet of Things for Agricultural Applications: The State-of-the-art," IEEE Internet Things J., 2021.
[4] A. Karimiafshar, M. R. Hashemi, M. R. Heidarpour, and A. N. Toosi, "Effective Utilization of Renewable Energy Sources in Fog Computing Environment via Frequency and Modulation Level Scaling," IEEE Internet Things J., vol. 7, no. 11, pp. 10912–10921, 2020.
[5] D. Bermbach, J. Bader, J. Hasenburg, T. Pfandzelter, and L. Thamsen, "AuctionWhisk: Using an Auction-Inspired Approach for Function Placement in Serverless Fog Platforms," pp. 1–48, 2021.
[6] P. Patros, J. Spillner, A. V. Papadopoulos, B. Varghese, O. Rana, and S. Dustdar, "Toward Sustainable Serverless Computing," IEEE Internet Computing, vol. 25, no. 6, pp. 42–50, 2021.
[7] C. Jiang, T. Fan, H. Gao, W. Shi, L. Liu, C. Cérin, and J. Wan, "Energy aware edge computing: A survey," Computer Communications, vol. 151, pp. 556–580, 2020.
[8] Q. Luo, S. Hu, C. Li, G. Li, and W. Shi, "Resource Scheduling in Edge Computing: A Survey," IEEE Communications Surveys and Tutorials, pp. 1–36, 2021.
[9] Y. Li, Y. Lin, Y. Wang, K. Ye, and C.-Z. Xu, "Serverless Computing: State-of-the-Art, Challenges and Opportunities," IEEE Trans. Services Comput., 2022.
[10] M. Aslanpour, A. Toosi, C. Cicconetti, B. Javadi, P. Sbarski, D. Taibi, M. Assuncao, S. Gill, R. Gaire, and S. Dustdar, "Serverless Edge Computing: Vision and Challenges," in Australasian Computer Science Week Multiconference, 2021.
[11] P. Castro, V. Ishakian, V. Muthusamy, and A. Slominski, "The [...]
[...]
[15] [...] survey," European Journal of Operational Research, pp. 774–793, 2007.
[16] R. Mahmud and A. N. Toosi, "Con-Pi: A Distributed Container-Based Edge and Fog Computing Framework," IEEE Internet Things J., vol. 9, no. 6, pp. 4125–4138, 2022.
[17] M. S. Aslanpour, A. N. Toosi, R. Gaire, and M. A. Cheema, "Auto-scaling of Web Applications in Clouds: A Tail Latency Evaluation," in 2020 IEEE/ACM 13th UCC, IEEE, Dec. 2020, pp. 186–195.
[18] T. Sönmez and M. U. Ünver, "House allocation with existing tenants: A characterization," Games and Economic Behavior, vol. 69, no. 2, pp. 425–445, 2010.
[19] T. Öncan, "A survey of the generalized assignment problem and its applications," Infor, vol. 45, no. 3, pp. 123–141, 2007.
[20] T. Sönmez and M. U. Ünver, "Matching, allocation, and exchange of discrete resources," Handbook of Social Economics, vol. 1, no. 1B, pp. 781–852, 2011.
[21] "Bureau of Meteorology." [Online]. Available: http://www.bom.gov.au/climate/data-services/solar-information.shtml
[22] S. Eismann, J. Scheuner, E. Van Eyk, M. Schwinger, J. Grohmann, N. Herbst, C. Abad, and A. Iosup, "The State of Serverless Applications: Collection, Characterization, and Community Consensus," IEEE Trans. Softw. Eng., 2021.
[23] W. Liu, D. Anguelov, D. Erhan, C. Szegedy, S. Reed, C. Fu, and A. Berg, "SSD: Single Shot MultiBox Detector," in Computer Vision – ECCV 2016, pp. 21–37, 2016.
[24] S. Tuli, G. Casale, and N. R. Jennings, "PreGAN: Preemptive Migration Prediction Network for Proactive Fault-Tolerant Edge Computing," in Proceedings - IEEE INFOCOM, 2022.
[25] Q. Zhang, M. Lin, L. T. Yang, Z. Chen, S. U. Khan, and P. Li, "A Double Deep Q-Learning Model for Energy-Efficient Edge Scheduling," IEEE Trans. Services Comput., vol. 12, no. 5, pp. 739–749, 2019.
[26] S. Ghanavati, J. H. Abawajy, and D. Izadi, "An Energy Aware Task Scheduling Model Using Ant-Mating Optimization in Fog Computing Environment," IEEE Trans. Services Comput., 2020.
[27] W. Chen, D. Wang, and K. Li, "Multi-User Multi-Task Computation Offloading in Green Mobile Edge Cloud Computing," IEEE Trans. Services Comput., vol. 12, no. 5, pp. 726–738, 2019.
[28] P. Andreades, K. Clark, P. M. Watts, and G. Zervas, "Experimental demonstration of an ultra-low latency control plane for optical packet switching in data center networks," Optical Switching and Networking, vol. 32, pp. 51–60, 2019.
[29] G. Aumala, E. Boza, L. Ortiz-Aviles, G. Totoy, and C. Abad, "Beyond load balancing: Package-aware scheduling for serverless platforms," in Proceedings - 19th IEEE/ACM CCGrid, pp. 282–291, 2019.
[30] R. Chard, A. Woodard, I. Foster, and K. Chard, "funcX: A Federated Function Serving Fabric for Science," pp. 65–76, 2020.
[31] T. J. Skluzacek, R. Wong, Z. Li, R. Chard, K. Chard, and I. Foster, "A Serverless Framework for Distributed Bulk Metadata Extraction," in HPDC 2021 - Proceedings of the 30th HPDC, pp. 7–18, 2021.
[32] H. Ko, S. Pack, and V. C. M. Leung, "Performance Optimization of Serverless Computing for Latency-Guaranteed and Energy-Efficient Task Offloading in Energy Harvesting Industrial IoT," IEEE Internet Things J., 2021.
[33] A. Das, S. Imai, S. Patterson, and M. P. Wittie, "Performance Optimization for Edge-Cloud Serverless Platforms via Dynamic Task Placement," in Proceedings - 20th IEEE/ACM CCGRID, pp. 41–50, 2020.
[34] B. Wang, A. Ali-Eldin, and P. Shenoy, "LaSS: Running Latency Sensitive Serverless Computations at the Edge," in HPDC 2021 - Proceedings of the 30th HPDC, pp. 239–251, 2021.

Adel N. Toosi [...] degree from the University of Melbourne, in 2015. He is currently a senior lecturer with the Department of Software Systems and Cybersecurity, Monash University, Australia. From 2015 to 2018, he was a postdoctoral research fellow with the University of Melbourne. Dr Toosi has published over 70 peer-reviewed publications in top-tier venues such as IEEE TCC, IEEE TSC, and IEEE TSUSC. His publications have received over 3,500 citations with a current h-index of 28 (Google Scholar). His research interests include cloud, fog, and edge computing, software-defined networking, and sustainable computing. For further information, visit his homepage: http://adelnadjarantoosi.info.

Muhammad Aamir Cheema is an ARC Future Fellow and Associate Professor at the Faculty of Information Technology, Monash University, Australia. He obtained his PhD from UNSW Australia in 2011. He is the recipient of the 2012 Malcolm Chaikin Prize for Research Excellence in Engineering, the 2013 Discovery Early Career Researcher Award, the 2014 Dean's Award for Excellence in Research by an Early Career Researcher, a 2018 Future Fellowship, the 2018 Monash Student Association Teaching Award, and the 2019 Young Tall Poppy Science Award. He has also won two CiSRA best research paper of the year awards, two invited papers in the special issue of IEEE TKDE on the best papers of ICDE, and three best paper awards at ICAPS 2020, WISE 2013, and ADC 2010, respectively.

Mohan Baruwal Chhetri is a Senior Research Scientist with CSIRO's Data61, Australia, where he leads the Human-Machine Collaboration for Cybersecurity research theme. He received his PhD degree from Swinburne University of Technology, Australia. His research focus is on developing technologies to facilitate decision support, automation, and optimization in cyber-physical-social ecosystems.