
Efficient Energy Aware Task Scheduling for Parallel Workflow

Tasks on Cloud Environment


First Author*1, Second Author2, Third Author2, Fourth Author3
*1Department, University/Institute/Company, City, State, Country
Example@ijsrset.com1
2Department, University/Institute/Company, City, State, Country
Example@ijsrset.com2
3Department, University/Institute/Company, City, State, Country
Example@ijsrset.com3

ABSTRACT
In cloud computing environments, energy efficiency is especially important when dealing with concurrent
workflow jobs that have different energy requirements and different degrees of priority. In this research, we
propose an algorithm called Prioritized Power-Aware Scheduling (PPAS), which optimizes task scheduling by
taking energy consumption limitations and job priorities into account. Our methodology sets priorities for
critical tasks to guarantee timely completion while consuming the least amount of energy. We evaluate PPAS through rigorous testing in a cloud simulation setting, contrasting it with a baseline
First-Come-First-Served (FCFS) scheduling strategy. Our findings show that PPAS outperforms FCFS by attaining
greater completion rates for significant tasks within specified power restrictions, thus striking an appropriate
compromise between task completion and energy efficiency. The results highlight PPAS's effectiveness in cloud
environments for parallel workflow task management, indicating its applicability to real-world applications that
demand energy-conscious job scheduling.

Keywords: task scheduling; energy-aware; cloud computing; prioritized scheduling

I. INTRODUCTION

A new development in the IT sector is cloud computing, which defines computing as a public service similar to telecommunications and electricity services. NIST describes it as "a model that enables on-demand network access to a shared set of configurable computing resources that can be rapidly deployed and disabled with minimal administrative, vendor, or service interaction." Cloud computing features such as the pay-as-you-go model, hidden default implementation, simplicity and flexibility create demand among service providers. As a result, resources are distributed efficiently.[1]

Data centers are the biggest offenders because they expand quickly and use a lot of capacity. An important consideration for the widespread use of cloud data centers is capacity utilization. Data centers use as much power as up to 25,000 homes and emit harmful CO2 by burning fossil fuels. Servers account for 2% of carbon dioxide emissions, contributing to global warming. Recent studies have shown that energy consumption in data centers now reaches 0.5% of the total energy used in the world. To reduce environmental problems, it is important to focus on energy efficiency over clean processes. However, organizing these large and complex applications is a difficult task. Effective job scheduling can not only increase system utilization but also reduce energy consumption, resulting in significant cost savings.[2]

The main purpose of this document is to provide an overview of efficient, energy-aware job scheduling for workflow tasks. We use proposed models such as EHEFT, ECPOP, and DVFS to provide the cloud context.
II. RELATED WORK

Analogous research in energy-efficient cloud scheduling has concentrated on job scheduling and resource allocation optimization to reduce energy usage and meet performance requirements. In the past, methods for allocating resources based on the features of the workload and server usage were frequently heuristics and algorithms. In more recent research, resource allocation and scheduling have been dynamically adjusted based on real-time data by utilizing machine learning and optimization algorithms. By striking a balance between performance and energy efficiency, these methods seek to maximize cloud resource utilization while reducing their negative effects on the environment. All things considered, this field of research is still developing, with an increasing focus on sustainable cloud infrastructures and green computing.

A paper published by Salim Hariri et al. [3] proposes a performance-effective and low-complexity task scheduling algorithm for heterogeneous computing systems. The algorithm considers both the execution time and the communication cost of tasks when making scheduling decisions.

Thanawut Thanavanich and Puchong Uthayopas [2] published a paper proposing two energy-aware task scheduling algorithms, EHEFT and ECPOP, to schedule parallel applications in cloud environments. Reducing energy use while preserving high scheduling quality is the aim. To conserve energy, the algorithms detect inefficient processors and turn them off; tasks are then rescheduled on fewer processors to increase energy efficiency. According to simulation findings, the suggested algorithms lower energy consumption without appreciably affecting scheduling quality. As a result, cloud systems can be used effectively as large-scale computing platforms.

In another paper, published by Mallari Harish Kumar and Sateesh K. Peddoju [1], the main objective is to reduce server energy consumption, a major contributor, in cloud environments running parallel applications. The suggested technique lowers server voltage and frequency during activity pauses by utilizing Dynamic Voltage Frequency Scaling (DVFS). Energy is saved in this way, and concurrent application deadlines are unaffected. Simulations comparing the approach with current methods show considerable energy savings.

A paper published by Rami G. Melhem et al. [4] proposes two power-aware scheduling algorithms for multiprocessor systems, focusing on reducing power consumption. By recovering unused task time, these methods exploit processor slack sharing to lower task speeds in the future. The study looks at the effect of discrete voltage/speed levels on energy savings and includes task sets with and without precedence constraints. The simulation findings show significant energy savings, particularly for systems with variable-voltage processors. With negligible impact from voltage/speed adjustment overhead, the algorithms provide roughly equal energy reductions for processors with discrete voltage/speed levels compared to continuous ones.

Another paper, by Sanjay Ranka et al. [5], presents an algorithm for energy-efficient scheduling of applications on machines that can change voltage. The algorithm is intended for distributed and parallel embedded systems. The authors compare their approach against another algorithm and arrive at nearly ideal results; compared to the other algorithm, theirs requires less memory and time.

A paper published in 2022 by Parul Agarwal et al. [6] discusses task scheduling techniques for energy efficiency in cloud computing. The difficulties with energy efficiency in cloud data centers are covered. Computing resources are assigned tasks using scheduling algorithms; the objective is to meet customer requirements and deadlines while consuming the least amount of energy possible. Although machine learning holds great potential for task scheduling, its current application is in resource utilization prediction rather than scheduling. For job scheduling, heuristic or meta-heuristic algorithms are increasingly employed. A model to lower energy usage and CO2 emissions in cloud environments is put forward by the authors.

Guy Martin Tchamgoue et al. [7], in their paper, focused on applying DVS within real-time scheduling frameworks, which manage tasks in these systems. The authors propose a system that allows DVS to be used at different levels (system-wide, component-based, and individual tasks). This approach optimizes energy savings. The study also presents dynamic DVS techniques, which further minimize energy use by utilizing idle processing time. They also design power-aware rules to guarantee that jobs are completed by the deadline despite changes in voltage and speed. According to simulations, these methods can cut energy consumption by up to 96%. This strategy could save energy expenses in big data centers and extend the battery life of portable electronics.

B. R. Childers et al. [8], in their paper, discussed scheduling with dynamic voltage/speed adjustment using slack reclamation in multiprocessor real-time systems. It discusses how current processors use a lot of power. To cut down on energy usage, the authors suggest two new power-aware scheduling algorithms. The algorithms recover time that is not used by a task in order to slow down subsequent jobs. This method lowers the system's overall energy usage. The impact of discrete voltage/speed levels on energy savings is also examined in this study. According to the authors' findings, processors with a few discrete voltage/speed levels save almost as much energy as those with continuous voltage/speed levels.

R. K. Jena [9], in a paper, also addressed energy-efficient task scheduling in cloud environments. The paper discusses cloud computing and the issues of makespan and energy usage. To optimize processing time and energy, it suggests a method known as Task Scheduling using Clonal Selection Algorithm (TSCSA). CloudSim was used to model the findings, which demonstrated that TSCSA offers the best possible balance among several goals.
P. Anandhakumar and K. Kalai Arasan [10], in their paper, addressed energy-efficient task scheduling and resource management in a cloud environment. Task scheduling techniques and resource allocation are covered. To increase energy efficiency, the authors suggest a technique dubbed HSRLBA that combines SARSA and BWA. uRank-TOPSIS is used by HSRLBA for task scheduling. As per the document's conclusion, HSRLBA performs better than other current methods.

III. MODELS

A. Cloud System Models

This study [2] models cloud computing resources as a collection of services. There are P different types of processors in the data centre. Every CPU pj ∈ P has DVS capability, allowing it to function at various voltage levels and clock frequencies. We designate the set of supply voltage levels of processor pj as Yj and its set of clock frequencies as ωj. When the supply voltage operates at level vl, the clock runs at the corresponding frequency fl. When a processor is idle, it runs at its lowest voltage setting, vlow. In this work, we assume that the frequency transition overhead is negligible (equal to zero). Furthermore, all processors are assumed to communicate with one another at the same speed, and information is assumed to flow between processors while a task is being executed.
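For concreteness, a minimal Python sketch of this processor model follows (the class and field names are ours, and the operating points are placeholders rather than values from the study):

from dataclasses import dataclass
from typing import List, Tuple

@dataclass
class DvsProcessor:
    """A DVS-capable processor pj with discrete (voltage, frequency) levels."""
    levels: List[Tuple[float, float]]  # (supply voltage in Volts, clock frequency in Hz)

    def idle_level(self) -> Tuple[float, float]:
        # An idle processor runs at its lowest voltage setting, v_low.
        return min(self.levels, key=lambda vf: vf[0])

# Example: three operating points; frequency transition overhead is taken as zero.
p1 = DvsProcessor(levels=[(0.9, 1.0e9), (1.1, 1.5e9), (1.3, 2.0e9)])
v_low, f_low = p1.idle_level()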
B. Energy Consumption Model

Of all the components in a cloud data center, servers are the primary culprit for energy consumption. This research [1] tackles this challenge by specifically focusing on reducing server energy use.

The total energy a processor uses to complete a task has two components: dynamic energy and static energy. Dynamic Energy (Edynamic) is the energy the processor uses to carry out the task at hand; it depends on variables such as the rate of processing and the volume of data being handled. Static Energy, or Idle Energy (Estatic), is the energy the processor uses even while it is not actively working on a job.

For our purposes, we primarily focus on dynamic power consumption, as it dominates the overall energy usage of cloud servers; static consumption is considered negligible.

To compute dynamic power consumption:

Dynamic Power (DPow) = α * F * Y² * z

where,
α (alpha): Activity Factor (0 ≤ α ≤ 1)
F: Capacitance (in Farads)
Y: Supply Voltage (in Volts)
z: Operating Frequency (in Hertz)

Now, to compute Edynamic:

Edynamic_ni,j = DPow_j * Δw_ij * t_ni,j

where,
Edynamic_ni,j: dynamic energy consumption of task ni on processor j (in Joules)
DPow_j: dynamic power consumption of processor j (in Watts); this value might be constant or vary depending on the system and workload
Δw_ij: computation cost of task ni on processor j, which could be measured in execution cycles, instructions, or operations (consistent units across tasks)
t_ni,j: execution time of task ni on processor j (in seconds)

We need to consider real-world scenarios where processors can dynamically adjust voltage (Y) and frequency (z) to optimize performance and energy consumption. If we can access this information, we can incorporate it into the formula by modifying DPow_j: DPow_j can be related to Y and z through a formula specific to the processor architecture (often modeled as Pdynamic_j ≈ Y² * z). Also, the units for Δw_ij need to be consistent with the chosen unit for t_ni,j.

The above formula provides a more comprehensive estimation of dynamic energy consumption for a specific task on a processor. However, it is important to consider the limitations of any model and the specific context of the system.

Overall energy is the total energy consumption, which is the sum of the dynamic energy and the static energy:[2]

Overall Energy (Eoverall) = Edynamic + Estatic
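To make the mapping from symbols to numbers explicit, a small Python sketch follows (the numeric values are placeholders for illustration, not measurements):

def dynamic_power(alpha: float, capacitance_f: float, voltage_v: float, freq_hz: float) -> float:
    """DPow = alpha * F * Y^2 * z, in Watts."""
    return alpha * capacitance_f * voltage_v ** 2 * freq_hz

def dynamic_energy(dpow_w: float, comp_cost: float, exec_time_s: float) -> float:
    """Edynamic_ni,j = DPow_j * Δw_ij * t_ni,j, in Joules (as in the model above)."""
    return dpow_w * comp_cost * exec_time_s

def overall_energy(e_dynamic_j: float, e_static_j: float) -> float:
    """Eoverall = Edynamic + Estatic."""
    return e_dynamic_j + e_static_j

# Placeholder values purely for illustration:
dpow = dynamic_power(alpha=0.5, capacitance_f=1e-9, voltage_v=1.1, freq_hz=2.0e9)  # about 1.21 W
e_dyn = dynamic_energy(dpow, comp_cost=1.0, exec_time_s=3.0)                       # about 3.63 J
e_all = overall_energy(e_dyn, e_static_j=0.0)  # static energy treated as negligible here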


C. Cloud Application Model

As shown in Fig. 1, a DAG task set is considered in this example. The tasks are represented by circles with the task number inside the circle, and the numbers on the edges indicate the data transfer time in seconds.[1]
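One simple way to represent such an application model in code is a weighted adjacency map; the sketch below is ours, with placeholder tasks and edge weights rather than the actual values of Fig. 1:

# DAG application model: each key is a task, and each value maps a successor
# task to the data transfer time (in seconds) on that edge.
dag = {
    "T1": {"T2": 3.0, "T3": 2.0},
    "T2": {"T5": 4.0},
    "T3": {"T5": 1.0},
    "T4": {},
    "T5": {},
}

def predecessors(task: str, graph: dict) -> list:
    # A task becomes ready only after all of its predecessors have finished.
    return [u for u, succs in graph.items() if task in succs]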
P1 executes the critical path of the schedule and therefore has no slack time; slack time is the time during which a processor is not busy during scheduling. P2 performs non-critical tasks and has a limited amount of slack. This short idle period can be used to reduce the frequency and the corresponding supply voltage, and thus reduce energy consumption. To exploit this slack, many scheduling algorithms have been developed; they use neighbouring tasks to determine a global frequency for all tasks on the processor. The main disadvantage, as the figure shows, is that slack time cannot be used completely: while a long-running task such as T5 is executing, a parallel, independent task such as T4 cannot use the idle time between 7 and 9 sec (2 sec) once the overall operating frequency has been fixed.
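As a schematic illustration of how reclaimed slack translates into energy savings (our own simplification, not the specific algorithms of [4] or [8]): if a task finishes early, the next task can stretch over the reclaimed time at a lower normalized speed, and dynamic energy falls roughly with the square of that speed.

def reclaimed_speed(wcet_next: float, slack: float) -> float:
    """Run the next task over (wcet_next + slack) instead of wcet_next,
    i.e. at a lower normalized speed in (0, 1]."""
    return wcet_next / (wcet_next + slack)

# Example: 2 sec of slack is reclaimed for a task whose full-speed WCET is 4 sec.
speed = reclaimed_speed(wcet_next=4.0, slack=2.0)  # 2/3 of full speed
# With dynamic power roughly proportional to speed cubed (voltage scaling with
# frequency), energy = power * time falls with the square of the speed:
energy_full = 1.0 ** 3 * 4.0              # normalized energy at full speed
energy_slow = speed ** 3 * (4.0 / speed)  # = speed^2 * 4.0, roughly 44% of full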
D. Task Model

This paper [11] assumes that the system is frame-based real-time, where a frame of length D is executed repeatedly. A set of tasks Q = {Q1, Q2, ..., Qn} must execute and finish within each frame before the frame completes. A graph G represents the precedence restrictions among the tasks in Q. Due to the periodic nature of the schedule, we only consider the problem of scheduling Q in a single frame with deadline D.
We assume that the system has N identical processors that share a common memory. The tuple (e'i, a'i), where e'i represents the estimated worst-case execution time and a'i the actual execution time, describes how a task is executed. For a given task Qi, we assume that a'i is decided only at run time, while e'i is known before the task is executed.
The precedence constraint is written as G = (Q, E), where E is the set of edges. An edge Qi -> Qj ∈ E exists if and only if Qi is an immediate predecessor of Qj.
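A compact Python rendering of this task model is given below (class and field names are ours; the numbers are placeholders):

from dataclasses import dataclass
from typing import Optional

@dataclass
class FrameTask:
    """Task Qi of the frame-based real-time model."""
    name: str
    wcet: float                          # e'i: estimated worst-case execution time, known offline
    actual_time: Optional[float] = None  # a'i: actual execution time, known only at run time

# All tasks in Q must finish within each frame of length D; their precedence
# is given by the graph G = (Q, E) described above.
D = 10.0
frame = [FrameTask("Q1", wcet=3.0), FrameTask("Q2", wcet=4.0), FrameTask("Q3", wcet=2.0)]
assert sum(t.wcet for t in frame) <= D  # on one processor, the frame fits only if the total WCET is at most D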
IV. PRIORITIZED POWER-AWARE SCHEDULING

This work prioritizes jobs and schedules them only while sufficient power remains within the allocated budget. This method avoids overloading the system and guarantees that important jobs are completed, but it does not explicitly compute the total execution time or consider task dependencies. It provides a starting point for energy-conscious scheduling and can be further optimized according to particular requirements.
The following pseudo code demonstrates a heuristic scheduling method:
1. Priority sorting: Tasks are sorted so that those with the highest priority come first. This ensures that important work is more likely to be scheduled.
2. Power-aware check: Before scheduling a task, the code checks whether there is enough power to support its power usage. This prevents the total energy budget from being exceeded.
3. Iterative scheduling: Tasks are scheduled one at a time for as long as sufficient energy remains available.

Algorithm: Prioritized Power-Aware Scheduling
Input:  taskList: list of tasks with properties (priority, energyConsumption)
        availablePower: total energy budget
Output: scheduledTasks: list of tasks scheduled for execution
Phase 1: # Task Prioritization
    Sort taskList in descending order by priority (higher-priority tasks first).
Phase 2: # Scheduling with Power Check
    Initialize an empty list scheduledTasks and set remainingPower = availablePower.
    Iterate through the sorted taskList. For each task:
        if remainingPower is greater than or equal to the energyConsumption of the task:
            1. Add the task to scheduledTasks.
            2. Subtract the task's energyConsumption from remainingPower.
        else:
            1. Stop iterating through the list.
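For illustration, a minimal Python sketch of this pseudocode follows (the class and function names are ours; it is a direct transcription, not an optimized implementation):

from dataclasses import dataclass
from typing import List

@dataclass
class Task:
    name: str
    priority: int              # higher value means higher priority
    energy_consumption: float  # energy the task needs, in units

def ppas_schedule(task_list: List[Task], available_power: float) -> List[Task]:
    """Prioritized Power-Aware Scheduling: admit tasks in priority order
    while the remaining energy budget can cover them."""
    # Phase 1: higher-priority tasks first.
    ordered = sorted(task_list, key=lambda t: t.priority, reverse=True)
    # Phase 2: greedy admission under the power budget.
    scheduled: List[Task] = []
    remaining_power = available_power
    for task in ordered:
        if remaining_power >= task.energy_consumption:
            scheduled.append(task)
            remaining_power -= task.energy_consumption
        else:
            break  # stop once the next task would exceed the budget
    return scheduled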
V. EXPERIMENTAL DISCUSSION WITH AN EXAMPLE

This section explains the Prioritized Power-Aware Scheduling (PPAS) algorithm using an example; it is the example case that prompted the creation of the suggested algorithm.
In this instance, the DAG task set displayed in Figure 1 is examined, which shows the tasks as circles with the task number within and the data transfer time in seconds shown by the numbers on the edges. Here we hypothetically assume that the tasks and the power used by each, in units, are as displayed in Table 1.

Table 1: List of tasks and the power used by each task in units
Task:           1    2    3    4    5    6    7
Power (units):  10   10   30   10   50   10   20

For the above example, we assume that all tasks have been numbered according to their priority, that is, task 1 has the highest priority while task 7 has the lowest. In the given example, task 1 is therefore selected first, followed by tasks 2, 3, 4, 5, 6 and 7.

Say we assume the initially available power to be P.

On execution of task 1,
Remaining Power (P') = Available Power - Power for task 1
Thus, P' = P - 10
Following our proposed algorithm, if P' < 0 we would stop the iteration, as scheduling further tasks would exceed the budget.
Similarly, on execution of task 2,
P' = (P - 10) - 10
Thus, P' = P - 20
Again, if P' < 0 we would stop the iteration.
On execution of task 3,
P' = (P - 20) - 30
Thus, P' = P - 50
Similarly, we would check whether P' > 0; if not, further iteration is stopped.
On execution of task 4,
P' = (P - 50) - 10
Thus, P' = P - 60
Assuming P' is still positive, we would check again whether P' < 0.
On execution of task 5,
P' = (P - 60) - 50
Thus, P' = P - 110
The process continues if P' > 0.
On execution of task 6,
P' = (P - 110) - 10
Thus, P' = P - 120
Assuming the remaining power still exceeds the power consumed, we continue with further scheduling.
On execution of task 7,
P' = (P - 120) - 20
Thus, P' = P - 140
All the tasks would be carried out only if the remaining power is greater than the power consumed so far by all the tasks. The algorithm considers both task priority and power consumption when scheduling, hence the name Prioritized Power-Aware Scheduling.
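The running totals in this walkthrough can be checked with a few lines of Python (the power values are those assumed in Table 1):

# Power used by tasks 1..7 (units), in priority order, as assumed in Table 1.
powers = [10, 10, 30, 10, 50, 10, 20]
consumed = 0
for k, p in enumerate(powers, start=1):
    consumed += p
    print(f"after task {k}: P' = P - {consumed}")
# Prints P - 10, P - 20, P - 50, P - 60, P - 110, P - 120, P - 140,
# matching the values derived above.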

VI. CONCLUSION
The results of our experiments suggest that the Prioritized
Power-Aware Scheduling (PPAS) algorithm is an effective
method for managing tasks with varying importance levels
and energy requirements. Our evaluation demonstrates that
PPAS achieves a good balance between completing tasks and
minimizing energy consumption. By prioritizing essential
tasks and operating within the power constraints, PPAS
ensures a higher completion rate for important tasks compared
to a basic FCFS approach.

VII. REFERENCES
[1] Kumar, M. H., & Peddoju, S. K. (2014, July). Energy efficient task scheduling for parallel workflows in cloud environment. In 2014 International Conference on Control, Instrumentation, Communication and Computational Technologies (ICCICCT) (pp. 1298-1303). IEEE.
[2] Thanavanich, T., & Uthayopas, P. (2013, September). Efficient energy aware task scheduling for parallel workflow tasks on hybrids cloud environment. In 2013 International Computer Science and Engineering Conference (ICSEC) (pp. 37-42). IEEE.
[3] Topcuoglu, H., Hariri, S., & Wu, M. Y. (2002). Performance-effective and low-complexity task scheduling for heterogeneous computing. IEEE Transactions on Parallel and Distributed Systems, 13(3), 260-274.
[4] Zhu, D., Melhem, R., & Childers, B. R. (2003). Scheduling with dynamic voltage/speed adjustment using slack reclamation in multiprocessor real-time systems. IEEE Transactions on Parallel and Distributed Systems, 14(7), 686-700.
[5] Kang, J., & Ranka, S. (2008, April). DVS based energy minimization algorithm for parallel machines. In 2008 IEEE International Symposium on Parallel and Distributed Processing (pp. 1-12). IEEE.
[6] Kak, S. M., Agarwal, P., & Alam, M. A. (2022). Task scheduling techniques for energy efficiency in the cloud. EAI Endorsed Transactions on Energy Web, 9(39), e6.
[7] Tchamgoue, G. M., Kim, K. H., & Jun, Y. K. (2015). Power-aware scheduling of compositional real-time frameworks. Journal of Systems and Software, 102, 58-71.
[8] Zhu, D., Melhem, R., & Childers, B. R. (2003). Scheduling with dynamic voltage/speed adjustment using slack reclamation in multiprocessor real-time systems. IEEE Transactions on Parallel and Distributed Systems, 14(7), 686-700.
[9] Jena, R. K. (2017). Energy efficient task scheduling in cloud environment. Energy Procedia, 141, 222-227.
[10] Kalai Arasan, K., & Anandhakumar, P. (2023). Energy-efficient task scheduling and resource management in a cloud environment using optimized hybrid technology. Software: Practice and Experience, 53(7), 1572-1593.
[11] Zhu, D., Melhem, R., & Childers, B. R. (2003). Scheduling with dynamic voltage/speed adjustment using slack reclamation in multiprocessor real-time systems. IEEE Transactions on Parallel and Distributed Systems, 14(7), 686-700.
