ABSTRACT
In cloud computing environments, energy efficiency is especially important when dealing with concurrent
workflow jobs that have different energy requirements and different degrees of priority. In this research, we
propose an algorithm called Prioritized Power-Aware Scheduling (PPAS), which optimizes task scheduling by
taking energy consumption limitations and job priorities into account. Our methodology sets priorities for
critical tasks to guarantee timely completion while consuming the least amount of energy. We evaluate
PPAS through rigorous testing in a cloud simulation environment, comparing it against a baseline
First-Come-First-Served (FCFS) scheduling strategy. Our findings show that PPAS outperforms FCFS by attaining
greater completion rates for significant tasks within specified power restrictions, thus striking an appropriate
compromise between task completion and energy efficiency. The results highlight PPAS's effectiveness in cloud
environments for parallel workflow task management, indicating its applicability to real-world applications that
demand energy-conscious job scheduling.
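The scheduling policy the abstract describes, admitting the highest-priority tasks first while staying under a power cap, can be illustrated with a minimal sketch. The task fields, the greedy admission rule, and all names here are illustrative assumptions, not the paper's exact algorithm:

```python
from dataclasses import dataclass

@dataclass
class Task:
    name: str
    priority: int      # higher value = more critical
    power_draw: float  # watts consumed while the task runs

def ppas_schedule(tasks, power_cap):
    """Greedy sketch: admit tasks in descending priority order while
    the combined power draw stays within the cap; the rest wait."""
    admitted, deferred = [], []
    used = 0.0
    for task in sorted(tasks, key=lambda t: t.priority, reverse=True):
        if used + task.power_draw <= power_cap:
            admitted.append(task)
            used += task.power_draw
        else:
            deferred.append(task)
    return admitted, deferred

tasks = [Task("render", 1, 60.0), Task("billing", 3, 50.0),
         Task("backup", 2, 40.0)]
admitted, deferred = ppas_schedule(tasks, power_cap=100.0)
# billing (priority 3) and backup (priority 2) fit under the 100 W cap;
# render is deferred even though it arrived first.
```

An FCFS baseline, by contrast, would admit tasks purely in arrival order, which is why low-priority tasks can crowd out critical ones under a tight power budget.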
Thanawut Thanavanich and Puchong Uthayopas [2] published a paper proposing two energy-aware task scheduling algorithms, EHEFT and ECPOP, to schedule parallel applications in cloud environments. The aim is to reduce energy use while preserving high scheduling quality. To conserve energy, the algorithms detect inefficient processors and turn them off; tasks are then rescheduled on fewer processors to increase energy efficiency. According to simulation findings, the suggested algorithms lower energy consumption without appreciably affecting scheduling quality. As a result, cloud systems can be used effectively as large-scale computing platforms.

In another paper, published by Mallari Harish Kumar and Sateesh K. Peddoju [1], the main objective is to reduce server energy consumption, a major contributor to overall energy use in cloud environments running parallel applications. The suggested technique lowers server voltage and frequency during activity pauses by utilizing Dynamic Voltage Frequency Scaling (DVFS). Energy is saved in this way, and concurrent application deadlines are unaffected. Simulations comparing the technique against current methods show considerable energy savings.

A paper published by Rami G. Melhem et al. [4] proposes two power-aware scheduling algorithms for multiprocessor systems, focusing on reducing power consumption. By recovering unused task time, these methods exploit processor slack sharing to lower the speeds of future tasks. The study examines the effect of discrete voltage/speed levels on energy savings and includes task sets with and without precedence constraints. The simulation findings show significant energy savings, particularly for systems with variable-voltage processors. With negligible impact from voltage/speed adjustment overhead, the algorithms provide roughly equal energy reductions for processors with discrete voltage/speed levels compared to continuous ones.

Guy Martin Tchamgoue et al. [7] in their paper focused on applying DVS within real-time scheduling frameworks. The authors propose a system that allows DVS to be applied at different levels (system-wide, component-based, and individual tasks), which optimizes energy savings. The study also presents dynamic DVS techniques that further minimize energy use by utilizing idle processing time, and designs power-aware rules to guarantee that jobs are completed by their deadlines despite changes in voltage and speed. According to simulations, these methods can cut energy consumption by up to 96%. This strategy could reduce energy expenses in big data centers and extend the battery life of portable electronics.

B. R. Childers et al. [8] in their paper discussed scheduling with dynamic voltage/speed adjustment using slack reclamation in multiprocessor real-time systems, noting how much power current processors consume. To cut down on energy usage, the authors suggest two new power-aware scheduling algorithms that recover the time left unused by a task in order to slow down subsequent jobs, lowering the system's overall energy usage. The impact of discrete voltage/speed levels on energy savings is also examined in this study. According to the authors' findings, processors with a few discrete voltage/speed levels save almost as much energy as those with continuous voltage/speed levels.

R. K. Jena [9] in a paper also addressed energy-efficient task scheduling in cloud environments, discussing cloud computing and the issues of makespan and energy usage. To optimize processing time and energy, the paper suggests a method known as Task Scheduling using Clonal Selection Algorithm (TSCSA). CloudSim was used to model the findings, which demonstrated that TSCSA offers the best possible balance across several goals.
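The slack-reclamation idea surveyed in [4] and [8], where a job that finishes early donates its unused time so the next job can run at a lower speed, can be sketched as follows. The greedy single-processor formulation and the normalized-speed model are illustrative assumptions, not the papers' exact algorithms:

```python
def reclaim_slack(worst_case, actual):
    """Greedy slack reclamation on one processor: each job inherits
    the time the previous job left unused and slows down just enough
    to fill its enlarged window. Times are full-speed execution times;
    returned speeds are normalized (1.0 = full speed)."""
    slack = 0.0
    speeds = []
    for wcet, act in zip(worst_case, actual):
        window = wcet + slack   # time budget: worst case plus inherited slack
        speed = wcet / window   # slowest speed that still fits the window
        speeds.append(speed)
        elapsed = act / speed   # actual running time at the reduced speed
        slack = window - elapsed
    return speeds

# Two jobs with worst-case time 4; the first finishes in 2 at full speed.
speeds = reclaim_slack([4.0, 4.0], [2.0, 4.0])
# The first job runs at speed 1.0 and leaves 2 units of slack, so the
# second job can run at 4/6 ≈ 0.67 of full speed and still meet its budget.
```

Since dynamic power grows superlinearly with frequency under DVS, running the second job at two-thirds speed costs substantially less dynamic energy per cycle, which is the source of the savings both papers report.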
P. Anandhakumar and K. Kalai Arasan [10] in their paper discussed energy-efficient task scheduling and resource management in a cloud environment. Task scheduling techniques and resource allocation are covered. To increase energy efficiency, the authors suggest a technique dubbed HSRLBA, which combines SARSA and BWA and uses uRank-TOPSIS for task scheduling. As per the paper's conclusion, HSRLBA performs better than other current methods.

The total energy a processor uses to complete a task has two components: dynamic energy and static energy.

Dynamic Energy (Edynamic) is the energy the processor uses to carry out the task at hand. It is contingent upon variables such as the rate of processing and the volume of data being handled. The dynamic power DPow_j of processor j follows the standard CMOS relation

DPow_j = C * V^2 * f

where,
C: Capacitance (in Farads)
V: Supply Voltage (in Volts)
f: Operating Frequency (in Hertz)

Now, to compute Edynamic for a task n_i executed on processor j:

Edynamic(n_i, j) = DPow_j * Δw_i,j * t(n_i, j)

Static Energy, or Idle Energy (Estatic), is the energy the processor consumes even while it is not actively working on a job.

C. Cloud Application Model
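As a numeric sketch of the dynamic/static energy split in the energy model, the following computes dynamic power from capacitance, voltage, and frequency, then totals both energy components. The concrete values, the function names, and the `workload_factor` stand-in for the Δw term (which the model leaves undefined here) are illustrative assumptions:

```python
def dynamic_power(capacitance, voltage, frequency):
    # Standard CMOS dynamic-power relation: DPow = C * V^2 * f
    return capacitance * voltage**2 * frequency

def task_energy(dpow, static_power, busy_time, idle_time, workload_factor=1.0):
    """Total energy = Edynamic + Estatic: dynamic power over the busy
    interval plus static (idle) power over the idle interval.
    `workload_factor` stands in for the Δw term (an assumption)."""
    e_dynamic = dpow * workload_factor * busy_time
    e_static = static_power * idle_time
    return e_dynamic + e_static

# Example: C = 1 nF, V = 1.2 V, f = 2 GHz gives DPow = 2.88 W.
dpow = dynamic_power(1e-9, 1.2, 2e9)
# 10 s busy at 2.88 W plus 5 s idle at 0.5 W: 28.8 J + 2.5 J = 31.3 J.
total = task_energy(dpow, static_power=0.5, busy_time=10.0, idle_time=5.0)
```

The example makes the trade-off behind DVFS visible: lowering V and f shrinks the dominant C * V^2 * f term, while the static term accrues regardless of activity.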
VI. CONCLUSION
The results of our experiments suggest that the Prioritized
Power-Aware Scheduling (PPAS) algorithm is an effective
method for managing tasks with varying importance levels
and energy requirements. Our evaluation demonstrates that
PPAS achieves a good balance between completing tasks and
minimizing energy consumption. By prioritizing essential
tasks and operating within the power constraints, PPAS
ensures a higher completion rate for important tasks compared
to a basic FCFS approach.
VII. REFERENCES
[1] Kumar, M. H., & Peddoju, S. K. (2014, July). Energy efficient task
scheduling for parallel workflows in cloud environment. In 2014 International
Conference on Control, Instrumentation, Communication and Computational
Technologies (ICCICCT) (pp. 1298-1303). IEEE.
[2] Thanavanich, T., & Uthayopas, P. (2013, September). Efficient energy
aware task scheduling for parallel workflow tasks on hybrids cloud
environment. In 2013 International Computer Science and Engineering
Conference (ICSEC) (pp. 37-42). IEEE.
[3] Topcuoglu, H., Hariri, S., & Wu, M. Y. (2002). Performance-effective and low-complexity task
scheduling for heterogeneous computing. IEEE Transactions on Parallel and Distributed Systems, 13(3),
260-274. doi:10.1109/71.993206.
[4] Zhu, D., Melhem, R., & Childers, B. R. (2003). Scheduling with dynamic
voltage/speed adjustment using slack reclamation in multiprocessor real-time
systems. IEEE transactions on parallel and distributed systems, 14(7),
686-700.
[5] Kang, J., & Ranka, S. (2008, April). DVS based energy minimization
algorithm for parallel machines. In 2008 IEEE International Symposium on
Parallel and Distributed Processing (pp. 1-12). IEEE.