Intelligent Computing and Optimization
Pandian Vasant ·
Mohammad Shamsul Arefin ·
Vladimir Panchenko · J. Joshua Thomas ·
Elias Munapo · Gerhard-Wilhelm Weber ·
Roman Rodriguez-Aguilar Editors
Intelligent Computing and Optimization
Proceedings of the 6th International Conference on Intelligent Computing and Optimization 2023 (ICO2023), Volume 2
Lecture Notes in Networks and Systems 852
Series Editor
Janusz Kacprzyk, Systems Research Institute, Polish Academy of Sciences, Warsaw, Poland
Advisory Editors
Fernando Gomide, Department of Computer Engineering and Automation—DCA,
School of Electrical and Computer Engineering—FEEC, University of
Campinas—UNICAMP, São Paulo, Brazil
Okyay Kaynak, Department of Electrical and Electronic Engineering, Bogazici
University, Istanbul, Türkiye
Derong Liu, Department of Electrical and Computer Engineering, University of
Illinois at Chicago, Chicago, USA
Institute of Automation, Chinese Academy of Sciences, Beijing, China
Witold Pedrycz, Department of Electrical and Computer Engineering, University of
Alberta, Alberta, Canada
Systems Research Institute, Polish Academy of Sciences, Warsaw, Poland
Marios M. Polycarpou, Department of Electrical and Computer Engineering, KIOS
Research Center for Intelligent Systems and Networks, University of Cyprus, Nicosia,
Cyprus
Imre J. Rudas, Óbuda University, Budapest, Hungary
Jun Wang, Department of Computer Science, City University of Hong Kong, Kowloon,
Hong Kong
The series “Lecture Notes in Networks and Systems” publishes the latest developments
in Networks and Systems—quickly, informally and with high quality. Original research
reported in proceedings and post-proceedings represents the core of LNNS.
Volumes published in LNNS embrace all aspects and subfields of, as well as new
challenges in, Networks and Systems.
The series contains proceedings and edited volumes in systems and networks, span-
ning the areas of Cyber-Physical Systems, Autonomous Systems, Sensor Networks,
Control Systems, Energy Systems, Automotive Systems, Biological Systems, Vehicular
Networking and Connected Vehicles, Aerospace Systems, Automation, Manufacturing,
Smart Grids, Nonlinear Systems, Power Systems, Robotics, Social Systems, Economic
Systems and others. Of particular value to both the contributors and the readership are the
short publication timeframe and the world-wide distribution and exposure which enable
both a wide and rapid dissemination of research output.
The series covers the theory, applications, and perspectives on the state of the art
and future developments relevant to systems and networks, decision making, control,
complex processes and related areas, as embedded in the fields of interdisciplinary and
applied sciences, engineering, computer science, physics, economics, social, and life
sciences, as well as the paradigms and methodologies behind them.
Indexed by SCOPUS, INSPEC, WTI Frankfurt eG, zbMATH, SCImago.
All books published in the series are submitted for consideration in Web of Science.
For proposals from Asia, please contact Aninda Bose ([Link]@[Link]).
Pandian Vasant · Mohammad Shamsul Arefin ·
Vladimir Panchenko · J. Joshua Thomas ·
Elias Munapo · Gerhard-Wilhelm Weber ·
Roman Rodriguez-Aguilar
Editors
Intelligent Computing and Optimization
Proceedings of the 6th International Conference on Intelligent Computing and Optimization 2023 (ICO2023), Volume 2
Editors
Pandian Vasant
Faculty of Electrical and Electronics Engineering, Modeling Evolutionary Algorithms Simulation and Artificial Intelligence, Ton Duc Thang University, Ho Chi Minh City, Vietnam

Mohammad Shamsul Arefin
Department of Computer Science, Chittagong University of Engineering and Technology, Chittagong, Bangladesh

Vladimir Panchenko
Laboratory of Non-traditional Energy Systems, Department of Theoretical and Applied Mechanics, Federal Scientific Agroengineering Center VIM, Russian University of Transport, Moscow, Russia

J. Joshua Thomas
Department of Computer Science, UOW Malaysia KDU Penang University College, George Town, Malaysia

Elias Munapo
School of Economics and Decision Sciences, North West University, Mmabatho, South Africa

Gerhard-Wilhelm Weber
Faculty of Engineering Management, Poznań University of Technology, Poznan, Poland

Roman Rodriguez-Aguilar
Facultad de Ciencias Económicas y Empresariales, School of Economic and Business Sciences, Universidad Panamericana, Mexico City, Mexico
© The Editor(s) (if applicable) and The Author(s), under exclusive license to Springer Nature
Switzerland AG 2023
This work is subject to copyright. All rights are solely and exclusively licensed by the Publisher, whether
the whole or part of the material is concerned, specifically the rights of translation, reprinting, reuse of
illustrations, recitation, broadcasting, reproduction on microfilms or in any other physical way, and transmission
or information storage and retrieval, electronic adaptation, computer software, or by similar or dissimilar
methodology now known or hereafter developed.
The use of general descriptive names, registered names, trademarks, service marks, etc. in this publication
does not imply, even in the absence of a specific statement, that such names are exempt from the relevant
protective laws and regulations and therefore free for general use.
The publisher, the authors, and the editors are safe to assume that the advice and information in this book
are believed to be true and accurate at the date of publication. Neither the publisher nor the authors or the
editors give a warranty, expressed or implied, with respect to the material contained herein or for any errors
or omissions that may have been made. The publisher remains neutral with regard to jurisdictional claims in
published maps and institutional affiliations.
This Springer imprint is published by the registered company Springer Nature Switzerland AG
The registered company address is: Gewerbestrasse 11, 6330 Cham, Switzerland
Preface

The sixth edition of the International Conference on Intelligent Computing and Optimization (ICO'2023) was held during April 27–28, 2023, at G Hua Hin Resort and Mall, Hua Hin, Thailand. The objective of the conference is to bring together research scholars, experts, and scientists working in intelligent computing and optimization from all over the world to share their knowledge and experience of current research achievements in these fields. The conference provides a golden opportunity for the global research community to interact and share novel research results, findings, and innovative discoveries among colleagues and friends. The proceedings of ICO'2023 is published by SPRINGER (in the book series Lecture Notes in Networks and Systems) and indexed by SCOPUS.
Almost 70 authors submitted their full papers for the 6th ICO’2023. They repre-
sent more than 30 countries, such as Australia, Bangladesh, Bhutan, Botswana, Brazil,
Canada, China, Germany, Ghana, Hong Kong, India, Indonesia, Japan, Malaysia, Mau-
ritius, Mexico, Nepal, the Philippines, Russia, Saudi Arabia, South Africa, Sri Lanka,
Thailand, Turkey, Ukraine, UK, USA, Vietnam, Zimbabwe and others. This worldwide
representation clearly demonstrates the growing interest of the global research commu-
nity in our conference series. The organizing committee would like to sincerely thank all the authors and reviewers for their wonderful contributions to this conference. The best, high-quality papers will be selected and reviewed by the International Program Committee so that extended versions can be published in international journals indexed by SCOPUS and ISI WoS.
This conference could not have been organized without the strong support and help
from LNNS SPRINGER NATURE, Easy Chair, IFORS and the Committee of ICO’2023.
We would like to sincerely thank Prof. Roman Rodriguez-Aguilar (Universidad Panamericana, Mexico), Prof. Mohammad Shamsul Arefin (Daffodil International University,
Bangladesh), Prof. Elias Munapo (North West University, South Africa) and Prof. José
Antonio Marmolejo Saucedo (National Autonomous University of Mexico, Mexico) for
their great help and support for this conference.
We also appreciate the wonderful guidance and support from Dr. Sinan Melih Nigdeli
(Istanbul University—Cerrahpaşa, Turkey), Dr. Marife Rosales (Polytechnic Univer-
sity of the Philippines, Philippines), Prof. Rustem Popa (Dunarea de Jos University,
Romania), Prof. Igor Litvinchev (Nuevo Leon State University, Mexico), Dr. Alexan-
der Setiawan (Petra Christian University, Indonesia), Dr. Kreangkri Ratchagit (Maejo
University, Thailand), Dr. Ravindra Boojhawon (University of Mauritius, Mauritius),
Prof. Mohammed Moshiul Hoque (CUET, Bangladesh), Er. Aditya Singh (Lovely Pro-
fessional University, India), Dr. Dmitry Budnikov (Federal Scientific Agroengineering
Center VIM, Russia), Dr. Deepanjal Shrestha (Pokhara University, Nepal), Dr. Nguyen
Tan Cam (University of Information Technology, Vietnam) and Dr. Thanh Dang Trung
(Thu Dau Mot University, Vietnam). The ICO’2023 committee would like to sincerely
thank all the authors, reviewers, keynote speakers (Prof. Roman Rodriguez-Aguilar,
Prof. Kaushik Deb, Prof. Rolly Intan, Prof. Francis Miranda, Dr. Deepanjal Shrestha,
Prof. Sunarin Chanta), plenary speakers (Prof. Celso C. Ribeiro, Prof. José Antonio
Marmolejo, Dr. Tien Anh Tran), session chairs and participants for their outstanding
contribution to the success of the 6th ICO’2023 in Hua Hin, Thailand.
Finally, we would like to sincerely thank Prof. Dr. Janusz Kacprzyk, Dr. Thomas Ditzinger, Dr. Holger Schaepe, and Ms. Varsha Prabakaran of LNNS SPRINGER NATURE for their great support, motivation, and encouragement in making this event successful on the global stage.
1 Introduction
The Traveling Salesman Problem (TSP) is one of the most famous problems in com-
binatorial optimization due to its easy formulation, theoretical significance, and many
applications. The problem can be formulated as follows: “Given a list of nodes (e.g.,
cities) and a cost metric (e.g., the distance) between each pair of nodes, find the shortest
possible route that visits each node exactly once and returns to the origin node.” This
problem has significant implications for theoretical computer science and operations
research as it is an NP-hard problem. Furthermore, solving the TSP substantially reduces costs in various areas such as chip manufacturing, task scheduling, DNA sequencing, and path planning [12]. The worst-case time complexity of an optimal TSP algorithm is known to grow exponentially with the number of cities [28]. Despite the problem's simple formulation, no algorithm with polynomial time complexity is known to date that solves the TSP optimally.
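To make the formulation concrete, the following minimal Python sketch (the instance data and function names are illustrative, not from this paper) evaluates the length of a closed tour and constructs a quick nearest-neighbor tour:

import math

def tour_length(cities, tour):
    # Total Euclidean length of the closed tour (returns to the origin node).
    n = len(tour)
    return sum(math.dist(cities[tour[i]], cities[tour[(i + 1) % n]])
               for i in range(n))

def nearest_neighbor(cities, start=0):
    # Greedy construction heuristic: always move to the closest unvisited node.
    unvisited = set(range(len(cities))) - {start}
    tour = [start]
    while unvisited:
        last = tour[-1]
        nxt = min(unvisited, key=lambda j: math.dist(cities[last], cities[j]))
        tour.append(nxt)
        unvisited.remove(nxt)
    return tour

cities = [(0, 0), (3, 0), (3, 4), (0, 4)]   # hypothetical 4-city instance
tour = nearest_neighbor(cities)
print(tour, tour_length(cities, tour))      # e.g. [0, 1, 2, 3] 14.0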
2 Related Work
The TSP has received significant attention from the exact and heuristic communities.
Many algorithms have been proposed; for example, linear programming techniques have been extensively exploited to deal with small and medium-sized instances, and the Concorde solver [3] has successfully solved substantial TSP instances, including one with 85,900 cities available in the well-known TSPLIB [25].
Due to its NP-hard nature, many researchers have applied heuristic and metaheuristic approaches to solve TSP instances. For instance, the authors of [13] surveyed six heuristic algorithms, namely Nearest Neighbor (NN), Genetic Algorithm (GA), Simulated Annealing (SA), Tabu Search (TS), Ant Colony Optimization (ACO), and Tree Physiology Optimization (TPO), for solving several small TSPLIB instances. Through
simulations, the authors found that the computation times of TS and ACO are six and three times longer, respectively, than those of the other algorithms. The fastest algorithm is NN, followed by TPO and GA. In the statistical comparison, the authors showed that TS, TPO, and GA obtain higher-quality solutions than the other algorithms. In
[6], the authors introduced SCGA, a hybrid Cellular Genetic Algorithm with Simulated Annealing (SA), to solve thirteen small instances from TSPLIB. SCGA is motivated by the GA's fast convergence but insufficient optimization precision. Compared to a classical GA and SA, the results show that SCGA shortens the distance by a mean of 7% and has good robustness. Similarly, the authors of [9] introduced a new optimization model named the multiple bottlenecks TSP (MBTSP) to handle multiple salesmen and tasks, and they proposed a novel hybrid genetic algorithm with variable neighborhood search (VNSGA) for the multi-scale MBTSP. The experiments show that VNSGA achieves better solution quality than the state-of-the-art algorithms for MBTSP problems, demonstrating the superiority of hybrid methods.
Differential evolution techniques have substantially impacted the quality of solutions compared to other evolutionary methods. These methods have essentially been used to deal with continuous optimization problems. However, there have been many pertinent attempts to devise differential evolution methods for combinatorial optimization problems such as the TSP. For example, the authors in [2] introduced a novel discrete
differential evolution algorithm for improving the performance of the standard differ-
ential evolution algorithm for TSP. The authors used a mapping mechanism between
continuous and discrete variables, a k-means method to enhance the initial population,
and an ensemble of mutation strategies to increase diversity. Interestingly, the approach
was compared with 27 state-of-the-art algorithms for solving 45 TSP instances of dif-
ferent sizes. The experimental results demonstrated the efficiency of the approach in
terms of the average error to the optimal solution. In [4], the authors studied a new TSP variant called the profitable tour problem (PTP), which maximizes the total profit minus the total
travel cost. The paper proposed three methods, including a multi-start hyper-heuristic
(MSHH), a multi-start iterated local search (MS-ILS), and a multi-start general VNS
(MS-GVNS) to solve the PTP. MSHH uses eight low-level heuristics, whereas MS-ILS
and MS-GVNS use five different neighborhoods for local search. A set of TSPLIB
instances was solved to prove the effectiveness of the various combinations.
Nature-based metaheuristics have been very popular recently in dealing with large-
scale optimization problems [24]. For example, the authors of [15] proposed an improved
Artificial Bee Colony algorithm with multiple update rules and K-opt operation to solve
the TSP. The authors used eight rules to update solutions in the algorithm via an employed
bee or an onlooker bee. The proposed method was tested on benchmark problems from TSPLIB, and the algorithm's efficiency was observed to be adequate concerning the accuracy and consistency of solving standard TSPs. In addition, the authors of [30] pre-
sented a discrete Pigeon-inspired optimization (PIO) algorithm (DPIO), which uses the
Metropolis acceptance criterion of simulated annealing for the TSP. To enhance explo-
ration and exploitation ability, the authors proposed a new map and compass operator
with comprehensive learning ability and a landmark operator with cooperative learning
ability to learn from the heuristic information. Systematic experiments were performed
on 33 large-scale TSP instances from TSPLIB, and the results validated the advantage
of DPIO compared with most state-of-the-art meta-heuristic algorithms. Moreover, the
authors of [20] solved the rich vehicle routing problem (RVRP) using a Discrete and
Improved Bat Algorithm (DaIBA). Two neighborhood structures are used and explored depending on the bat's distance to the best individual of the swarm. DaIBA was
compared with evolutionary simulated annealing and a firefly algorithm. Based on sta-
tistical analysis and a benchmark of 24 datasets from 60 to 1000 customers, the authors
concluded that the proposed DaIBA is a promising technique for addressing RVRP.
Ant Colony Optimization (ACO) has been a prevalent method for solving many
variants of optimization problems. However, like GAs, ACO tends to fall into local min-
ima prematurely. Therefore, there have been many attempts to improve ACO through
various hybridization techniques. In [27], the authors discuss a rank-based ant system (ASrank) for the TSP. In ASrank, the ant agents that have found elite solutions are selected to update the pheromone on a specific route. As a result, the computational time is reduced, but the algorithm becomes more prone to falling into a local optimum due to the concentration of pheromones on a specific route. To improve diversity, the authors proposed a new ant system based on individual memories (ASIM). Another ACO-based hybrid approach was proposed by the authors of [17]: a Slime Mold-Ant Colony Fusion Algorithm (SMACFA), in which an optimized path is first obtained by the Slime Mold Algorithm (SMA) for the TSP; then, the high-quality pipelines are selected, and their ends are directly applied to the ACO by the principle of fixed selection. Several techniques
have been proposed in [29], such as entropy-weighted learning, nucleolus game, and
mean filtering techniques to diversify the population and avoid early convergence. To
reduce the search space in ACO, some attempts have been made to restrict the candidate set of nodes to the k nearest cities [14]. Despite its local nature, this assumption works well in practice, driven by the observation that in the TSP reasonable solutions are often found via local transitions. Likewise, the authors of [21] proposed a restricted variant of the ACO pheromone matrix with linear memory complexity to reduce ACO's memory footprint.
Motivated by the need to efficiently explore the search space in medium and large-scale optimization problems, we study several combinations of hybrid metaheuristics for the
traveling salesman problem in this paper. We aim to balance the exploitation and explo-
ration of metaheuristics through careful hybridization of single solutions and population-
based metaheuristics [18, 19]. We consider ACO, GA, SA, and TS and several coop-
eration techniques between them to improve the convergence, reduce the computation
time, and decrease the sensitivity toward initial settings.
3 Hybrid Metaheuristics
Marco Dorigo, the proposer of ACO, and many subsequent studies have shown that ACO exhibits better capabilities than genetic algorithms when applied to the Traveling Salesman Problem (TSP) [10, 11, 23]. Therefore, this paper focuses on combining ACO with single-solution metaheuristics.
3.1 ACO + TS
Due to the pheromone mechanism, during the iteration process the ants have a high probability of choosing a route with a high pheromone concentration, which allows the algorithm to converge at a faster rate. However, it also makes the algorithm lose the ability to jump out of local optima. After a certain number of iterations, the pheromone matrix will likely promote one route due to pheromone concentration on specific edges. The ant agents will then select that route with almost 100% probability and ignore other, possibly better, paths. Therefore, to improve the exploitation and exploration ability of ACO and to deal with premature convergence toward local optima, we integrate tabu search into the ACO search process. Adding TS improves the exploration ability of the algorithm without compromising exploitation, as TS relies heavily on intense local search and on short-term and long-term tabu memories.
Combining ACO and TS inevitably increases the computational time of the hybrid approach, so we follow a selective process to reduce the algorithm's time consumption. Whenever the ACO completes k iterations (e.g., 5), the best route S0 found via
the ACO search mechanism will be used as the initial solution for the Tabu Search. The
Tabu Search is used to check whether the neighbor region of the incumbent one contains
more promising solutions, which helps the ACO to jump out of the local optima’s area.
While performing the TS process, information about the search space is progressively
retrieved and shared with the ACO ant agents by updating the pheromone matrix. There-
fore, both ACO and TS collectively tackle the search space and successively orchestrate
the exploitation and exploration of the algorithm. Performing TS on the ant with the best fitness is not the only strategy; an alternative option is to let TS help the ant agent with the worst fitness or, more interestingly, to follow a dynamic approach and select a random ant every time. However, that might significantly delay the convergence and increase the computation time.
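The following self-contained sketch illustrates the cooperation scheme just described; all parameter values (number of ants, the trigger k, evaporation rate, tabu tenure) are illustrative rather than the paper's settings:

import math, random

def tour_len(cities, tour):
    return sum(math.dist(cities[tour[i]], cities[tour[(i + 1) % len(tour)]])
               for i in range(len(tour)))

def ant_route(cities, tau, alpha=1.0, beta=3.0):
    # Probabilistic route construction guided by pheromone and distance.
    n = len(cities)
    cur = random.randrange(n)
    route, left = [cur], set(range(n)) - {cur}
    while left:
        weights = [(j, (tau[cur][j] ** alpha) *
                       ((1.0 / (math.dist(cities[cur], cities[j]) + 1e-9)) ** beta))
                   for j in left]
        r = random.uniform(0, sum(w for _, w in weights))
        acc = 0.0
        for j, w in weights:
            acc += w
            if acc >= r:
                cur = j
                break
        route.append(cur)
        left.discard(cur)
    return route

def tabu_search(cities, tour, iters=50, tenure=10, samples=40):
    # Short 2-opt tabu search seeded with the incumbent ACO route.
    best, cur, tabu = tour[:], tour[:], {}
    for it in range(iters):
        n, cand, move = len(cur), None, None
        for _ in range(samples):                   # sample 2-opt moves
            i, j = sorted(random.sample(range(1, n), 2))
            if tabu.get((i, j), -1) >= it:
                continue                           # move is tabu
            nxt = cur[:i] + cur[i:j][::-1] + cur[j:]
            if cand is None or tour_len(cities, nxt) < tour_len(cities, cand):
                cand, move = nxt, (i, j)
        if cand is None:
            continue
        cur, tabu[move] = cand, it + tenure        # accept best non-tabu move
        if tour_len(cities, cur) < tour_len(cities, best):
            best = cur[:]
    return best

def tsaco(cities, ants=10, max_iter=100, k=5, rho=0.5, Q=100.0):
    n = len(cities)
    tau = [[1.0] * n for _ in range(n)]
    best, best_len = None, float("inf")

    def deposit(route):
        # Reinforce the pheromone along a route and track the best-so-far.
        nonlocal best, best_len
        L = tour_len(cities, route)
        for a in range(n):
            i, j = route[a], route[(a + 1) % n]
            tau[i][j] += Q / L
            tau[j][i] += Q / L
        if L < best_len:
            best, best_len = route[:], L

    for it in range(1, max_iter + 1):
        routes = [ant_route(cities, tau) for _ in range(ants)]
        for row in tau:                            # pheromone evaporation
            for c in range(n):
                row[c] *= (1 - rho)
        for route in routes:
            deposit(route)
        if it % k == 0:                            # every k iterations, run TS
            deposit(tabu_search(cities, best))     # share TS findings via tau
    return best, best_len

random.seed(1)
pts = [(random.random(), random.random()) for _ in range(20)]
print(tsaco(pts))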
3.2 ACO + SA
The ACO used in this paper adopts a primitive pheromone update strategy, i.e., all ants
in the colony release pheromones along their visited paths based on the length of the
route. Alternatively, only the ant with the best fitness could control the pheromone concentration. Compared with keeping only the pheromone of the best ant, the collective strategy reduces the probability of falling into local minima, effectively improving the
algorithm’s global search capability. However, this strategy also makes the pheromone distribution so dispersed that the algorithm’s convergence is slow. Therefore, to balance the
convergence speed and fitness of the algorithm, we introduce a simulated annealing
mechanism in the ACO so that the individuals in the ant colony converge to the optimal
solution faster while maintaining better fitness.
The hybrid SAACO starts with an initial solution S0 of length L0 computed via the
simulated annealing algorithm. Then, the pheromones associated with S0 are used to initialize the pheromone matrix of the ACO. With that, the algorithm will likely avoid
premature convergence due to the uniform distribution of pheromones in the earlier
optimization stage. When m ants complete one iteration, the distance set L and path
set S of the obtained solutions are examined. Notably, we denote the sets Sk and Lk as
the routes and distances traveled by an ant k. The best solutions in the candidate set
are characterized by Smin and Lmin . According to the simulated annealing mechanism,
the candidate set is filtered upon Lk , Lmin , a random number ζ ∈ [0, 1), and the current
temperature value. The solutions fulfilling the SA requirements will eventually be used
to update the pheromone matrix.
After computing the acceptance probability Pk of ant k (obtained, per the SA mechanism above, from Lk, Lmin, and the current temperature), the following is applied. If Pk = 1, then Sk and Lk are added to the updating list. Otherwise, the algorithm generates a random ζ in the interval [0, 1) following a uniform distribution. If Pk > ζ, then Sk and Lk join the update set; otherwise, Smin and Lmin join the update set instead of Sk and Lk.
By following the above process, all solutions can participate in the definition of the search direction. That is, the pheromone update and evaporation are not controlled solely by the behavior of the ant with the best fitness. Instead, the SA mechanism involves all ants, taking into consideration their quality, the stage of the search, and a random factor. As in ACO with TS, the rates of exploitation and exploration are dynamic and automatically controlled, taking advantage of the respective features of ACO and SA.
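A compact sketch of the SA-based filtering step follows; the Metropolis form of Pk below is assumed, since the text only states that Pk depends on Lk, Lmin, ζ, and the current temperature:

import math, random

def sa_filter(routes, lengths, T):
    # Metropolis-style filtering of an ant colony's candidate set: each ant's
    # own route is accepted into the pheromone-update set with probability Pk,
    # otherwise the best route of the iteration is used in its place.
    L_min = min(lengths)
    S_min = routes[lengths.index(L_min)]
    update_set = []
    for S_k, L_k in zip(routes, lengths):
        P_k = min(1.0, math.exp(-(L_k - L_min) / T))   # assumed Metropolis form
        if P_k == 1.0 or P_k > random.random():        # zeta ~ U[0, 1)
            update_set.append((S_k, L_k))              # keep the ant's route
        else:
            update_set.append((S_min, L_min))          # replace with the best
    return update_set

routes = [[0, 1, 2, 3], [0, 2, 1, 3], [0, 3, 1, 2]]    # toy candidate set
print(sa_filter(routes, [100.0, 105.0, 160.0], T=20.0))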
4 Computational Results

In this section, we present the computational results of the four classical metaheuristics Ant Colony Optimization (ACO), Genetic Algorithm (GA), Tabu Search (TS), and Simulated Annealing (SA), as well as the two novel hybrid metaheuristics TSACO and SAACO. For each algorithm, we repeat each simulation 15 times to address any bias related to the structure of the initial state or to random steps in the algorithm's search process. A total of 10 benchmark instances of different sizes and difficulties have been solved, and the average of the 15 simulations is used to compare the algorithms. The hardware and software specifications of the assessment platform are as follows: Operating System: Windows 10; CPU: Intel(R) Core(TM) i7-8750H; RAM: 16 GB; Platform: MATLAB 2022a; Network: Gigabit Ethernet.
In bold, we also highlight the best algorithm based on the average value indicator. The parameter settings for each algorithm on the small and medium instances are tabulated in Table 2.
The experimental results for the large instances are shown in Table 6. For each dataset, the performance of the best algorithm is highlighted in bold. The parameter settings for the large instances are tabulated in Table 5. Compared with the parameters for the small and medium instances, the candidate set sizes (i.e., the number of inner loops) of TS, SA, TSACO, and SAACO are increased along with the maximum number of iterations, enabling the algorithms to handle a larger objective space and potentially find better solutions.
In terms of running time, the single-solution metaheuristics (TS and SA) have a definite advantage over the population-based metaheuristics (GA, ACO, SAACO, TSACO). For a given instance, a single-solution metaheuristic often takes only one-fifth to one-hundredth of the time required by a population-based metaheuristic. Furthermore, as the instance size increases, the running-time advantage of the single-solution metaheuristics becomes even more apparent. Of the two single-solution metaheuristics covered in this paper, TS runs slightly faster than SA on small and medium-sized instances. In contrast, on large instances TS takes nearly twice as long as SA, because TS doubles its number of candidate solutions on large instances relative to small and medium-sized ones in order to improve its solution quality.
For the two hybrid metaheuristics proposed in this paper (SAACO and TSACO), SAACO and ACO are almost identical regarding running time. However, SAACO shows an unusually high efficiency on medium-sized instances, which indicates that providing an initial solution to ACO can, to some extent, guide the algorithm to converge to the optimal solution faster and thereby improve the algorithm's efficiency. Like SAACO, TSACO outperforms ACO in terms of running time on two medium-sized instances, pr226 and lin318, demonstrating that introducing TS into ACO can also speed up the algorithm's convergence to the optimal solution to some extent. However, for the small instances and two large instances (pr439 and rat575), TSACO requires significantly more runtime than ACO, whereas TSACO requires significantly less runtime than ACO on rat783, which may be related to the distribution of the nodes of rat783.
Based on Fig. 1, it can be seen that the larger the instance, the more average running time the algorithms demand. Moreover, each algorithm exhibits a distinct pattern of computing time with respect to the input size: GA shows a strong exponential characteristic, while ACO, TSACO, and SAACO show weak exponential behavior.
The results of each algorithm on each instance are tabulated in Tables 3, 4, and 6. The values highlighted in bold represent the best results for each instance and category. The relative error metric is used to calculate the gap between the average result of each algorithm and the optimal solution of each instance (listed under the name of each instance).
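Concretely, the relative error is presumably the standard percentage gap,

relative error = (Lavg − Lopt) / Lopt × 100%

where Lavg is the average tour length over the 15 runs and Lopt is the optimal value listed under each instance name.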
Based on the experimental results in the tables above, TSACO shows better average results when tackling small instances, showing that introducing dynamic tabu lists into ACO can help the search jump out of local optima regions and find better solutions. However, for kroA150 and the medium instances, SA outperforms TSACO in general, even though the average fitness of TSACO on instance pr226 is slightly better than SA's. This result shows that TSACO does not perform as well as SA on medium-sized instances. It is nevertheless undeniable that TSACO, with the introduction of the tabu search mechanism, still outperforms the original ACO in its ability to find the global optimum.
For the large instances, TSACO, SA, and SAACO each achieve the best performance once. Therefore, it is difficult to conclude which algorithm is more suitable for large instances in terms of accuracy when the average case is considered. Nonetheless, it is verified that ACO performs better than GA for all instance sizes, and that SAACO, with its annealing mechanism, and TSACO, with its tabu search mechanism, both outperform the traditional ACO.
4.7 Robustness
In this paper, a robustness metric is also assessed. Robustness can be interpreted as the stability of an algorithm's results, i.e., the magnitude of the error between the solutions obtained for the same instance over multiple experiments. The smaller the error between solutions, the more robust the algorithm, and vice versa. Figure 2 shows the robustness exhibited by the four metaheuristics and the two hybrid metaheuristics on ten instances of different sizes. The length of each box reflects the algorithm's robustness: the longer the box, the weaker the robustness of the corresponding algorithm.
Based on Fig. 2, TS shows weak robustness on all ten instances, indicating that TS has difficulty consistently obtaining good solutions. The robustness of GA is generally better than that of TS, but GA is still weak compared to the other four algorithms. The robustness of ACO and SA is stronger; however, on instance kroA200, ACO shows unusually weak robustness. Of the two hybrid metaheuristics, TSACO outperforms SAACO in terms of overall robustness. Both exhibit strong robustness when computing small instances of size less than 150. However, when facing instances of size greater than or equal to 150, the robustness of SAACO decreases sharply, while TSACO consistently maintains excellent robustness; the sketch below shows one way to quantify this.
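One concrete (assumed) measure of the box lengths in Fig. 2 is the spread of the fitness values over the repeated runs; the numbers below are illustrative:

import statistics

def robustness(fitness_runs):
    # Spread of the fitness values obtained for the same instance over
    # repeated runs; a smaller value indicates a more robust algorithm.
    return statistics.pstdev(fitness_runs)

print(robustness([683, 690, 702, 688, 695] * 3))   # tight spread: robust
print(robustness([683, 760, 640, 820, 700] * 3))   # wide spread: weak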
Fig. 2. Box plots of the fitness achieved by GA, ACO, TS, SA, SAACO, and TSACO on each instance: st70 (Opt. Fitness = 675), lin105 (Opt. Fitness = 14379), xqf131 (Opt. Fitness = 564), kroA150 (Opt. Fitness = 26524), kroA200 (Opt. Fitness = 29368), pr226 (Opt. Fitness = 80369), lin318 (Opt. Fitness = 42029), pr439 (Opt. Fitness = 107217), and rat575 (Opt. Fitness = 36905).
5 Conclusions
This paper considers six algorithms, including two novel hybrid approaches, to solve ten TSP instances of different sizes and difficulties. Each algorithm is presented and assessed in terms of its average, best, and worst fitness accuracy relative to the optimal solution, its average running time, and its robustness across multiple simulations. The results show that TS and SA outperform the other four algorithms in terms of running time. Comparing the accuracy of the algorithms' fitness, the average performance of both TSACO and SAACO exceeds that of ACO, demonstrating that introducing a tabu search mechanism into ACO for local search can effectively compensate for ACO's lack of ability to jump out of
local optima. Meanwhile, SAACO outperforms ACO in average running time while its average fitness is better than ACO's, which indicates that introducing a simulated annealing mechanism into ACO can effectively accelerate the algorithm's convergence and obtain similar or even better fitness. The hybrid metaheuristics yield more accurate results on small and large instances; for medium-sized instances, however, SA is a better choice. Considering running time, accuracy, and robustness together, SA is a suitable trade-off method for solving traveling salesman problems of arbitrary size.
References
1. Aarts, E.H., Korst, J.H., van Laarhoven, P.J.: A quantitative analysis of the simulated annealing
algorithm: a case study for the traveling salesman problem. J. Stat. Phys. 50(1), 187–206
(1988)
2. Ali, I.M., Essam, D., Kasmarik, K.: A novel design of differential evolution for solving
discrete traveling salesman problems. Swarm Evol. Comput. 52, 100607 (2020)
3. Applegate, D., Bixby, R., Chvatal, V., Cook, W.: Concorde TSP solver (2006). [Link]/concorde
4. Dasari, K.V., Pandiri, V., Singh, A.: Multi-start heuristics for the profitable tour problem.
Swarm Evol. Comput. 64, 100897 (2021)
5. Deb, K., Agrawal, S., et al.: Understanding interactions among genetic algorithm parameters.
Found. Genetic Alg. 5(5), 265–286 (1999)
6. Deng, Y., Xiong, J., Wang, Q.: A hybrid cellular genetic algorithm for the traveling salesman
problem. Math. Probl. Eng. 2021 (2021)
7. Dib, O.: Novel hybrid evolutionary algorithm for bi-objective optimization problems. Sci.
Rep. 13(1), 4267 (2023)
8. Dib, O., Moalic, L., Manier, M.A., Caminada, A.: An advanced ga-vns combination for
multicriteria route planning in public transit networks. Expert Syst. Appl. 72, 67–82 (2017)
9. Dong, X., Zhang, H., Xu, M., Shen, F.: Hybrid genetic algorithm with variable neighborhood
search for multi-scale multiple bottleneck traveling salesmen problem. Future Gener. Comput.
Syst. 114, 229–242 (2021)
10. Dorigo, M., Gambardella, L.M.: Ant colony system: a cooperative learning approach to the
traveling salesman problem. IEEE Trans. Evol. Comput. 1(1), 53–66 (1997)
11. Dorigo, M.: The ant system: optimization by a colony of cooperating agents. IEEE Trans. Syst.
Man Cybern. Part B 26(1), 1–13 (1996)
12. Erol, M.H., Bulut, F.: Real-time application of travelling salesman problem using Google Maps API. In: 2017 Electric Electronics, Computer Science, Biomedical Engineerings’ Meeting
(EBBT), pp. 1–5. IEEE (2017)
13. Halim, A.H., Ismail, I.: Combinatorial optimization: comparison of heuristic algorithms in
travelling salesman problem. Arch. Comput. Methods Eng. 26(2), 367–380 (2019)
14. Ismkhan, H.: Effective heuristics for ant colony optimization to handle large-scale problems.
Swarm Evol. Comput. 32, 140–149 (2017)
15. Khan, I., Maiti, M.K.: A swap sequence based artificial bee colony algorithm for traveling
salesman problem. Swarm Evol. Comput. 44, 428–438 (2019)
16. Knox, J.: Tabu search performance on the symmetric traveling salesman problem. Comput.
Oper. Res. 21(8), 867–876 (1994)
17. Liu, M., Li, Y., Li, A., Huo, Q., Zhang, N., Qu, N., Zhu, M., Chen, L.: A slime mold-ant colony
fusion algorithm for solving traveling salesman problem. IEEE Access 8, 202508–202521
(2020)
18. Luo, Y., Dib, O., Zian, J., Bingxu, H.: A new memetic algorithm to solve the stochastic tsp. In:
2021 12th International Symposium on Parallel Architectures, Algorithms and Programming
(PAAP), pp. 69–75. IEEE (2021)
19. Nan, Z., Wang, X., Dib, O.: Metaheuristic enhancement with identified elite genes by machine
learning. In: Knowledge and Systems Sciences, pp. 34–49. Springer, Singapore (2022)
20. Osaba, E., Yang, X.S., Fister, I., Jr., Del Ser, J., Lopez-Garcia, P., Vazquez-Pardavila, A.J.: A
discrete and improved bat algorithm for solving a medical goods distribution problem with
pharmacological waste collection. Swarm Evol. Comput. 44, 273–286 (2019)
21. Peake, J., Amos, M., Yiapanis, P., Lloyd, H.: Scaling techniques for parallel ant colony
optimization on large problem instances. In: Proceedings of the Genetic and Evolutionary
Computation Conference, pp. 47–54 (2019)
22. Peker, M., Şen, B., Kumru, P.Y.: An efficient solving of the traveling salesman problem: the
ant colony system having parameters optimized by the Taguchi method. Turk. J. Electr. Eng.
Comput. Sci. 21(7), 2015–2036 (2013)
23. Putha, R., Quadrifoglio, L., Zechman, E.: Comparing ant colony optimization and genetic
algorithm approaches for solving traffic signal coordination under oversaturation conditions.
Comput.-Aided Civ. Infrastruct. Eng. 27(1), 14–28 (2012)
24. Qiu, Y., Li, H., Wang, X., Dib, O.: On the adoption of metaheuristics for solving 0–1 knapsack
problems. In: 2021 12th International Symposium on Parallel Architectures, Algorithms and
Programming (PAAP), pp. 56–61. IEEE (2021)
25. Reinelt, G.: TSPLIB: a library of sample instances for the TSP (and related problems) from various sources and of various types. [Link] (2014)
26. Stodola, P., Otřísal, P., Hasilová, K.: Adaptive ant colony optimization with node clustering
applied to the travelling salesman problem. Swarm Evol. Comput. 70, 101056 (2022)
27. Tamura, Y., Sakiyama, T., Arizono, I.: Ant colony optimization using common social
information and self-memory. Complexity 2021 (2021)
28. Tang, Z., Hoeve, W.J.v., Shaw, P.: A study on the traveling salesman problem with a drone. In:
International Conference on Integration of Constraint Programming, Artificial Intelligence,
and Operations Research, pp. 557–564. Springer (2019)
29. Yang, K., You, X., Liu, S., Pan, H.: A novel ant colony optimization based on game for
traveling salesman problem. Appl. Intell. 50(12), 4529–4542 (2020)
30. Zhong, Y., Wang, L., Lin, M., Zhang, H.: Discrete pigeon-inspired optimization algorithm
with metropolis acceptance criterion for large-scale traveling salesman problem. Swarm Evol.
Comput. 48, 134–144 (2019)
Evolutionary Optimization of Entanglement Distillation Using Chialvo Maps
1 Introduction
In quantum computing and quantum information theory, entanglement distillation is
a critical component for designing quantum computer networks as well as quantum
repeaters. The central idea of entanglement distillation is to restore the quality of diluted
entanglement states of photons transmitted over long distances. This dilution of entan-
glement states is a direct result of inevitable decoherence effects. Many theoretical and
empirical research works have focused on investigating quantum distillation frameworks [1, 2]. In the recent work [3], the authors experimentally employed pairs of single
photons entangled in multiple degrees of freedom to determine the domain of distillable
states (and their relative fidelity). In that work, comparative analyses and benchmark-
ing studies were also carried out on different distillation schemes to understand the
design of resilient quantum networks. Similarly, in [4] the authors designed a proof-of-
concept experiment to investigate the implementation of filtering protocols (in atomic
ensembles) for constructing quantum repeater nodes. The experiment was conducted in
a rare-earth-ion-doped crystal, where the entanglement was prepared. In [5], the authors
theoretically investigated the relations between entanglement distillation, bit thread and
entanglement purification within the holographic framework. The authors provided a bit
thread interpretation of the one-shot entanglement distillation tensor network where they
demonstrated that the holographic entanglement purification process could be thought
of as a special case of a type of surface growth scheme. The authors in [5] were aiming to
provide an effective framework that accurately describes physical entanglement struc-
tures. Recent works show that many optimization-based research efforts have also been
directed towards quantum information systems. For instance, in [6], the authors showed
that the read-out procedure of local unitaries of a high-retrieval efficiency quantum
memory could be optimized in an unsupervised manner. The signal-to-noise ratio and
the retrieval efficiency of the high-retrieval efficiency quantum memory were examined.
This work reformulates the practical entanglement distillation problem in a bilevel
optimization framework. The objective of this work is to propose an effective optimiza-
tion technique that combines evolutionary algorithms and the Chialvo map for solving the bilevel practical entanglement distillation problem. The primary idea is to leverage the complex behavior of Chialvo maps to improve the optimization capabilities of the evolutionary algorithm, in this case the differential evolution (DE) algorithm [7]. This paper is organized as follows: the second section describes the model formulation for the bilevel practical entanglement distillation problem. The third section provides details on the Chialvo map and its integration with the evolutionary algorithm. The fourth section presents an analysis of the results generated by the computational experiments. This paper
ends with some concluding remarks and recommendations for future research works.
2 Methods
In this work, a bipartite entanglement distillation model is considered, where the central idea is to convert a state ρAB (in density matrix form) into a state that is close to a maximally entangled state, utilizing only local operations and classical communication. This communication takes place between two nodes, A and B, of a communication network. This is represented mathematically as follows:
F = ⟨Φd| ηÂB̂ |Φd⟩ such that |Φd⟩ = (1/√d) Σ_{i=0}^{d−1} |i⟩Â |i⟩B̂ (1)
where F ∈ (0, 1) is the fidelity, i.e., the closeness of the converted state to the maximally entangled state. A and B are the input registers, while Â and B̂ are the output registers; d is the dimension of the quantum state, and ηÂB̂ is the converted state (ρAB → ηÂB̂). |Φd⟩ is the maximally entangled state across the output registers Â and B̂. Therefore, as an example, if the dimension d = |A| = |B| = 2, then ρAB would be as follows:
Ĉ1,ÂA ≥ 0, Ĉ1,B̂B ≥ 0

Ĉ1,A ≤ IA / |A|, Ĉ1,B ≤ IB / |B|, |A| = |B| = d ≥ 0 : d ∈ ℕ
where the Ĉ terms are Choi-state equivalents for the output registers: Ĉ1,ÂA, Ĉ1,B̂B, Ĉ1,A, and Ĉ1,B are matrices depicting Choi states which correspond to quantum channels. The symbol ⊗ represents the Kronecker product, and the dimensions of the identity matrices IA and IB depend on the dimensions of the registers A and B.
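As a numerical illustration of Eq. (1), the short sketch below builds |Φd⟩ for d = 2 and evaluates the fidelity of an assumed, Werner-like converted state (the state and the mixing parameter p are illustrative, not from this paper):

import numpy as np

d = 2
phi = np.zeros(d * d)
for i in range(d):
    phi[i * d + i] = 1.0          # |i>_A |i>_B components
phi /= np.sqrt(d)                 # maximally entangled |Phi_d>

p = 0.9                           # illustrative mixing parameter
eta = p * np.outer(phi, phi) + (1 - p) * np.eye(d * d) / (d * d)

F = float(phi @ eta @ phi)        # fidelity <Phi_d| eta |Phi_d>
print(F)                          # 0.925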
The high computational cost of detailed neuron models provided the motivation for researchers to explore simpler models that are equally accurate. One such model is the coupled map lattice (CML) formulation, which maps a dynamical system with continuous state variables onto discrete space and time. Map-based neuronal models such as coupled map lattices have been observed to be robust, computationally efficient, and effective.
The Chialvo map is a two-dimensional map-based model employed for modeling
neurons as well as the dynamics of excitable systems [11, 12]. The Chialvo map has been shown to mimic single neurons as well as interconnected neuronal networks using only three (or four) parameters, and it has been proven to show diverse behavior, from oscillatory to chaotic dynamics. The iterative Chialvo map for a single neuron takes the standard two-dimensional form:

yn+1 = yn² exp(zn − yn) + k

zn+1 = a1 zn − a2 yn + a3

where z is the recovery variable and y is the activation variable, k is the bias constant, a1 is the constant of recovery, a2 describes the activation-dependence of the recovery process, and a3 serves as an offset constant.
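A minimal simulation of the map, using the parameter values reported later in this paper (the initial conditions are illustrative):

import math

def chialvo_step(y, z, a1=0.5, a2=0.5, a3=0.8, k=0.02):
    # One iteration of the map above; parameters follow the settings
    # reported at the end of this section.
    return y * y * math.exp(z - y) + k, a1 * z - a2 * y + a3

y, z = 0.1, 0.1                    # illustrative initial conditions
trajectory = []
for _ in range(100):
    y, z = chialvo_step(y, z)
    trajectory.append(y)
print(trajectory[:5])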
Inspired by the development of evolutionary algorithms, differential evolution (DE) is a type of meta-heuristic algorithm introduced to solve multidimensional, nonlinear, nonconvex, real-valued optimization problems [7, 13]. The DE algorithm is a direct result of assimilating perturbative techniques into evolutionary/meta-heuristic algorithms. The DE algorithm begins by spawning a population of candidate solutions (a minimum of four), which are real-coded N-dimensional vectors. From the candidate solutions, one is designated as the principal parent and three others, chosen randomly, as auxiliary parents. A mutated vector, generated via differential mutation from the auxiliary parents, is recombined with the principal parent to generate child trial vectors. The survival of a child trial vector depends on its performance with respect to the fitness criterion. At the next iteration, the selection of the principal parent is repeated, and the process continues until no further improvement of the fitness function occurs.
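A minimal, self-contained sketch of the classic DE/rand/1/bin scheme just described (the toy sphere objective and bounds are illustrative; the population size, factors, and iteration count mirror the settings reported below):

import random

def de(fitness, dim, pop=15, F=0.7, CR=0.7, iters=300):
    # One principal parent per population slot; three auxiliary parents
    # drive the differential mutation, followed by binomial recombination.
    xs = [[random.uniform(-1, 1) for _ in range(dim)] for _ in range(pop)]
    for _ in range(iters):
        for i in range(pop):
            r1, r2, r3 = random.sample([j for j in range(pop) if j != i], 3)
            trial = [xs[r1][d] + F * (xs[r2][d] - xs[r3][d])
                     if random.random() < CR else xs[i][d]
                     for d in range(dim)]
            if fitness(trial) < fitness(xs[i]):    # greedy survivor selection
                xs[i] = trial
    return min(xs, key=fitness)

print(de(lambda v: sum(t * t for t in v), dim=3))  # toy sphere objective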
The bilevel optimization problem in Eq. (3) is solved within a Stackelberg game
theoretic framework [14]. The upper level is the fidelity objective function F (the follower), while the lower level is the probability of success P(δ) (the leader). In this sense, the strategy
played by the leader involves optimization of the sub-problem (P(δ)) which will then
influence the strategy played by the follower – optimization of the overall problem (F).
Thus, the computational techniques employed in this work iteratively solves each level
of the optimization problem as a Stackelberg game (until the most optimal solution
is attained). In this work, the entanglement distillation problem was solved using two
approaches: (i) Combined Chialvo-map and Differential Evolution strategy (Chialvo-
DE) and (ii) Differential Evolution technique using pseudo-random number generators
(PRNG-DE).
In the PRNG-DE approach, the sub-problem is solved by searching for the optimal dimension d that maximizes P(δ). The quantum state ρATB and the Choi
states: C 1,A and C 1,B are generated using the PRNG. Consequently, using the obtained
dimension, d , the density state ρAT B and P(δ), the upper-level problem is solved by
Step 2: Solve P(δ) using DE to find optimal quantum state, ρAT B and Choi states: C 1,A
and C 1,B .
Step 3: Initiate and run simulation on Chialvo single neuron model.
Step 4: Determine statistical moments on two-dimensional simulation data – mean and
variance.
Step 5: Using statistical moments on a Gaussian distribution, simulate random values
for quantum state ρAT B .
Step 6: Using standard PRNG, simulate random values for the Choi states Ĉ1,ÂA and Ĉ1,B̂B.
Step 7: Solve for F in the upper-level problem.
Step 8: Re-initialize Stackelberg game framework until fitness function cannot be further
improved.
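The following sketch illustrates Steps 3 to 5 under the stated parameter settings (the initial conditions and the number of Gaussian samples are illustrative):

import math, random

# Iterate the Chialvo map, take the mean and variance of the trajectory,
# and sample Gaussian values for the quantum-state entries.
y, z, traj = 0.1, 0.1, []
for _ in range(100):                       # maximum iteration = 100
    y, z = y * y * math.exp(z - y) + 0.02, 0.5 * z - 0.5 * y + 0.8
    traj.append(y)

mu = sum(traj) / len(traj)
var = sum((v - mu) ** 2 for v in traj) / len(traj)
samples = [random.gauss(mu, math.sqrt(var)) for _ in range(16)]
print(round(mu, 4), round(var, 4), samples[:3])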
The parameter settings used in this work for the DE segment are population size = 15, mutation factor = 0.7, recombination factor = 0.7, and maximum iterations = 300. The parameter settings for the Chialvo map are lattice length = 25, maximum iterations = 100, constant of recovery (a1) = 0.5, activation-dependence (a2) = 0.5, offset constant (a3) = 0.8, and bias constant (k) = 0.02.
3 Results
In this work, the Stackelberg game-theoretic framework was employed to solve the
entanglement distillation problem using: the combined Chialvo-map and DE strategy
(Chialvo-DE) and the standard DE with pseudo-random number generators (PRNG-
DE). The computational experiments were conducted using the Python programming
language on Google Collaboratory platform using Python 3 Google Compute Engine
(RAM: 12.68 GB, disk space: 107.72 GB). Each approach was executed a total of 40 times, where each execution consisted of running the technique 3 times and taking the best solution. Therefore, both techniques were individually run a total of 120 times. The computational results obtained using both techniques were measured
using the weighted hypervolume indicator: wHVI = w1(x∗ − x) + w2(x∗o − xo). The optimal solution candidate is denoted (x∗, x∗o) and the nadir point is (x, xo). The weights w1 and w2 set the relative importance of the contributions of the upper-level and lower-level problems; in these experiments, the weights are w1 = 0.7 and w2 = 0.3. The nadir point for the upper-level problem (the fidelity objective) and the lower-level subproblem (the probability of success) is x = 1 and xo = 1. The larger the value of the
wHVI metric, the better the optimization performance. The graphical depictions of the optimal quantum states of the best solutions obtained using the Chialvo-DE and PRNG-DE approaches are shown in Fig. 1(a) and (b), respectively.
Fig. 1. The quantum state, ρAT B for the best individual solution produced using the Chialvo-DE
(a) and PRNG-DE (b) approaches
The ranked individual solutions obtained using the PRNG-DE and Chialvo-DE
approaches are given in Tables 1 and 2.
It can be observed in Fig. 1 that the best individual solutions reached by the PRNG-DE and Chialvo-DE approaches have quantum states ρATB with dimensions d = 9 and d = 4, respectively. In addition, the optimality of the best individual solutions achieved by the two approaches differs only minimally (about 0.673%) when measured with the wHVI metric. The overall optimization performance over all generated solutions achieved by both techniques was measured by taking the sum of the individual solutions' wHVI values. The wHVI totals for the PRNG-DE and Chialvo-DE approaches are 28.64 and 30.42, respectively. Therefore, in terms of overall optimization performance,
the Chialvo-DE approach outperforms the PRNG-DE approach by about 6%. This is due to the generic, wide-range dynamical behavior of the Chialvo map. This output behavior enables the Chialvo-DE to explore larger regions of the objective space and hence obtain better candidate solutions while avoiding stagnation in certain local optima. In terms of computational time, the implementation of the Chialvo-DE approach was 163.16% more computationally costly than the PRNG-DE approach, because additional computational complexity is incurred in simulating the Chialvo neuronal segment of the Chialvo-DE algorithm. Both approaches were robust, performed stable computations, and generated results for the bilevel quantum entanglement distillation problem.
References
1. Li, M., Fei, S., Li-Jost, X.: Bell inequality, separability and entanglement distillation. Chin.
Sci. Bull. 56(10), 945–954 (2011)
2. Ruan, L., Dai, W., Win, M.Z.: Adaptive recurrence quantum entanglement distillation for
two-Kraus-operator channels. Phys. Rev. A 97(5), 052332 (2018)
3. Ecker, S., Sohr, P., Bulla, L., Huber, M., Bohmann, M., Ursin, R.: Experimental single-copy
entanglement distillation. Phys. Rev. Lett. 127(4), 040506 (2021)
4. Liu, C., et al.: Towards entanglement distillation between atomic ensembles using high-fidelity
spin operations. Commun. Phys. 5(1), 1–9 (2022)
5. Lin, Y.Y., Sun, J.R., Sun, Y.: Bit thread, entanglement distillation, and entanglement of
purification. Phys. Rev. D 103(12), 126002 (2021)
6. Gyongyosi, L., Imre, S.: Optimizing high-efficiency quantum memory with quantum machine
learning for near-term quantum devices. Sci. Rep. 10(1), 1–24 (2020)
7. Raghul, S., Jeyakumar, G.: Investigations on distributed differential evolution framework with
fault tolerance mechanisms. In: Kumar, B.V., Oliva, D., Suganthan, P.N. (eds.) Differential
Evolution: From Theory to Practice. SCI, vol. 1009, pp. 175–196. Springer, Singapore (2022).
[Link]
8. Rozpędek, F., Schiet, T., Elkouss, D., Doherty, A.C., Wehner, S.: Optimizing practical
entanglement distillation. Phys. Rev. A 97(6), 062333 (2018)
9. Jiang, M., Luo, S., Fu, S.: Channel-state duality. Phys. Rev. A 87(2), 022310 (2013)
10. Girardi-Schappo, M., Tragtenberg, M.H.R., Kinouchi, O.: A brief history of excitable map-
based neurons and neural networks. J. Neurosci. Methods 220(2), 116–130 (2013)
11. Muni, S.S., Fatoyinbo, H.O., Ghosh, I.: Dynamical effects of electromagnetic flux on Chialvo
neuron map: nodal and network behaviors. Int. J. Bifurc Chaos 32(09), 2230020 (2022)
12. Bashkirtseva, I., Tsvetkov, I.: Noise-induced excitement and mixed-mode oscillatory regimes
in the Chialvo model of neural activity. In: AIP Conference Proceedings, vol. 2522, No. 1,
p. 050002. AIP Publishing LLC (2022)
13. Vasant, P., Weber, G.W., Marmolejo-Saucedo, J.A., Munapo, E., Thomas, J.J. (eds.): Intel-
ligent Computing & Optimization: Proceedings of the 5th International Conference on
Intelligent Computing and Optimization 2022 (ICO2022), vol. 569. Springer (2022)
14. Ganesan, T., Vasant, P., Litvinchev, I.: Chaotic simulator for bilevel optimization of virtual
machine placements in cloud computing. J. Oper. Res. Soc. China 10(4), 703–723 (2022)
Optimization of Seismic Isolator Systems via Teaching-Learning Based Optimization
1 Introduction
Seismic base isolators are control devices that break the link between the ground and the energy transmitted from the base into the structure. Isolators work by responding flexibly to a load transmitted from the base, rapidly reducing the acceleration reaching the structure. These systems, which are generally rigid vertically and flexible horizontally, come in rubber-based and slip-based types, in mixed types where these two features are combined, and in spring types that allow vertical movement [1]. In general, control is provided through the flexibility of the isolator. One way to adjust the stiffness of the isolator is to increase the mass, while another option is to adjust the period of the isolator. The ductility level can be balanced by the correct adjustment of the damping ratio of the isolator system. Although ductile behavior of the isolator is desirable, highly ductile isolator designs cause excessive damage to the architectural elements of the building even in small earthquakes. Therefore, the correct setting of the isolator's movement capability, and thus of the parameters affecting it, requires an optimization process to increase the efficiency obtained from the control.
Optimization is a method that provides the best suitable result for a problem by means of various algorithms. In recent years, metaheuristic algorithms, which are easy to apply and are inspired by the remarkable balance in nature and the instinctive behavior of living things, have frequently been used in optimization. These algorithms formulate natural events and living behaviors, such as the echolocation used by bats to find direction or the path followed by bees searching for food. There are many types, such as the Genetic Algorithm, Flower Pollination Algorithm, Adaptation Search Algorithm, Ant Colony Optimization, Bat Algorithm, Artificial Bee Colony Algorithm, and the Teaching-Learning Based Optimization Algorithm [2–8]. There are many studies on their use in civil engineering [9–13]. The Teaching-Learning Based Optimization Algorithm (TLBO) is a two-phase metaheuristic, comprising a teaching and a learning process, developed by Rao et al. [8]. Studies have demonstrated its remarkable effectiveness in predicting concrete compressive strength, in the size and shape optimization of structures, in the sizing of truss structures, and in minimizing the construction cost of shallow foundations [14–18]. Today, studies are also being carried out on the design optimization of control systems used for the vibration control of structures in systems where structure-soil interaction is taken into account [19–22].
In this study, a single-degree-of-freedom (SDOF) structure model with a base isolator, subject to a 30% damping limit, is optimized under the FEMA P-695 far-fault earthquake records [23]. Using TLBO, the damping ratio and period of the rigid SDOF system with isolator were optimized within the given limits. The FEMA P-695 earthquake records were applied to the structure via Matlab Simulink, and optimum parameters were determined to minimize the total acceleration of the structure [24]. For the earthquake record that is critical for the structure without an isolator, the displacement and total acceleration values of the isolated system are presented, and the results are compared according to the isolator's displacement capability.
2 Methodology
In this section, the equations of motion of the structure with seismic isolators and the algorithm equations of TLBO, the optimization algorithm used in the study, are given. Seismic isolators are control systems that break the bond between the structure and the vibrations transmitted to it from the ground. The isolator is usually placed at the base of the building, with a weight of about one story of the building. For an SDOF system with an isolator, the total mass (mtotal) is obtained by adding the isolator mass (mb) and the structure mass (mstructure). Equation 1 shows the total mass calculation:

mtotal = mb + mstructure (1)
For a single-degree-of-freedom (SDOF) system isolated at the base, the structure and the isolator act together, showing a common period, damping, and rigidity. For the SDOF system, the system period Tb is calculated as in Eq. 2, the system stiffness kb as in Eq. 3, and the system damping coefficient cb as in Eq. 4:

Tb = 2π/wb (2)

kb = mtotal × wb² (3)

cb = 2 × ζb × mtotal × wb (4)
wb in the equations denotes the natural angular frequency of the system, and ζb denotes
the damping ratio of the system. The basic equation of motion of the SDOF system is
shown in Eq. 5:

mtotal × Ẍ + cb × Ẋ + kb × X = −mtotal × Ẍg (5)

In the equation of motion of the SDOF system, X is the displacement of the system, Ẋ its velocity, Ẍ its acceleration, and Ẍg the ground acceleration.
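As a rough, self-contained illustration of these dynamics (not the authors' Matlab Simulink model), the following Python sketch integrates Eq. 5 for an assumed isolator; the mass, period, and the harmonic excitation standing in for a FEMA P-695 record are hypothetical placeholders.

# Minimal sketch: integrates the base-isolated SDOF equation of motion (Eq. 5)
# for an assumed isolator period and damping ratio (hypothetical values).
import numpy as np
from scipy.integrate import solve_ivp

m_total = 100e3                      # total mass in kg (hypothetical)
T_b = 3.0                            # isolator period in s (hypothetical)
zeta_b = 0.30                        # damping ratio (the paper's 30% limit)

w_b = 2 * np.pi / T_b                # natural angular frequency (Eq. 2 rearranged)
k_b = m_total * w_b**2               # stiffness (Eq. 3)
c_b = 2 * zeta_b * m_total * w_b     # damping coefficient (Eq. 4)

def ground_accel(t):
    # Placeholder excitation; the paper uses FEMA P-695 records instead.
    return 0.5 * 9.81 * np.sin(2 * np.pi * t) * np.exp(-0.2 * t)

def rhs(t, y):
    x, v = y                         # displacement and velocity
    a = -(c_b * v + k_b * x) / m_total - ground_accel(t)   # Eq. 5 solved for Ẍ
    return [v, a]

sol = solve_ivp(rhs, (0.0, 30.0), [0.0, 0.0], max_step=0.01)
total_acc = np.gradient(sol.y[1], sol.t) + ground_accel(sol.t)  # Ẍ + Ẍg
print(f"peak displacement: {np.max(np.abs(sol.y[0])):.4f} m")
print(f"peak total acceleration: {np.max(np.abs(total_acc)):.4f} m/s^2")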
In metaheuristic optimization, the optimum result is reached by updating old solutions with better ones according to the objective function.
The teaching-learning-based optimization algorithm (TLBO) is a metaheuristic algorithm inspired by the teaching and learning process, developed by Rao et al. [8]. It models the sharing of knowledge within a group of students after they are trained by a teacher. The algorithm consists of two phases: the first represents the education of the students, and the second the interaction among the students. In optimization with TLBO, initial values are generated randomly within limits, taking into account the design constants and design constraints introduced to the system. Substituting the generated values into the objective function produces as many results as the population size, and these form the objective function vector shown in Eq. 8. The population size is indicated by "pn" in the equations.
f(x) = [ f(x1), f(x2), …, f(xpn−1), f(xpn) ]ᵀ (8)
The Xmean value, the average of the population, is calculated by averaging the candidate solutions substituted into the objective function, and the first phase is completed. The candidate that gives the minimum value in the objective function vector is called Xteacher. Equation 9 shows Xteacher.
Xteacher = Xminf(x) (9)
While the first phase models a teacher training the group of students, the second phase models the interaction among the students. Each new solution in the first phase is produced depending on the teaching factor (TF), which randomly takes the value 1 or 2. The random assignment of the teaching factor is given in Eq. 10, and the solution generation equation of the teacher phase in Eq. 11:

TF = round[1 + rand(0, 1)] ∈ {1, 2} (10)

Xnew = Xold + rand(0, 1) × (Xteacher − TF × Xmean) (11)
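For illustration, below is a compact Python sketch of the two TLBO phases described above, run on a stand-in test function; the paper's real objective (the total acceleration of the isolated structure) and its variable limits would replace the hypothetical sphere function and bounds used here.

# TLBO sketch (teacher + learner phases) on a stand-in objective.
import numpy as np

def tlbo(objective, lower, upper, pn=20, iterations=200, seed=0):
    rng = np.random.default_rng(seed)
    dim = len(lower)
    X = rng.uniform(lower, upper, size=(pn, dim))   # random initial population
    f = np.apply_along_axis(objective, 1, X)        # objective vector, Eq. 8
    for _ in range(iterations):
        teacher = X[np.argmin(f)]                   # Eq. 9: best solution
        mean = X.mean(axis=0)                       # Xmean
        for i in range(pn):
            TF = rng.integers(1, 3)                 # Eq. 10: TF in {1, 2}
            # Teacher phase, Eq. 11
            cand = np.clip(X[i] + rng.random(dim) * (teacher - TF * mean), lower, upper)
            if (fc := objective(cand)) < f[i]:
                X[i], f[i] = cand, fc
            # Learner phase: step toward a better random classmate, away from a worse one
            j = rng.integers(pn)
            step = rng.random(dim) * ((X[j] - X[i]) if f[j] < f[i] else (X[i] - X[j]))
            cand = np.clip(X[i] + step, lower, upper)
            if (fc := objective(cand)) < f[i]:
                X[i], f[i] = cand, fc
    best = np.argmin(f)
    return X[best], f[best]

sphere = lambda x: float(np.sum(x**2))              # stand-in objective
print(tlbo(sphere, np.array([-5.0, -5.0]), np.array([5.0, 5.0])))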
Fig. 1. (a) SDOF structure model with base isolator (b) 3D view of the model.
Table 3. Displacement and total acceleration values of the system for critical earthquake
recording.
The graphs of the system displacement and total acceleration values obtained in the
critical earthquake recording are given in Figs. 2, 3, and 4, respectively, for the isolator
system with 30, 40, and 50 cm mobility.
Fig. 2. Displacement and total acceleration graphs under critical earthquake analysis for a 30%
damping ratio and 30 cm displacement limit.
Fig. 3. Displacement and total acceleration graphs under critical earthquake analysis for a 30%
damping ratio and 40 cm displacement limit.
Fig. 4. Displacement and total acceleration graphs under critical earthquake analysis for a 30%
damping ratio and 50 cm displacement limit.
Table 4. Structure displacement and total acceleration reduction percentages with isolator for a
10-story structure [26].
According to Table 4, for the 30 cm case, which has the lower displacement limit, the adaptive harmony search (AHS) optimization shows almost the same success as TLBO, though AHS is more successful by a margin of about 2.5% in acceleration reduction. For the 40 cm range of motion, TLBO gives a better result by about 2%, and its performance in acceleration reduction is higher by 4.5%. The same level of success was achieved by both optimizations for the 50 cm range of motion.
The results obtained from the design optimization for acceleration reduction of the isolator system can be summarized as follows:
i. Both algorithms are successful in systems with low isolator movement limits, with no notable performance difference. As the isolator flexibility increased, TLBO optimization proved more successful, especially in reducing acceleration, while a further increase in flexibility yielded the same performance for both algorithms. Considering this, the TLBO algorithm comes to the forefront for acceleration minimization in isolators with medium mobility, such as 40 cm.
ii. When the movement of the isolator is very limited or very mobile, both algorithms converge to very close optimum results. In addition, the best optimization results for the 30% damping limit were obtained at the highest defined mobility. In light of the given limit values, it can be said that increasing the flexibility of the isolator system will increase the optimization efficiency.
iii. As the allowable movement limit of the isolator increases, the optimum system period gets longer. An increase of approximately 1 s in period was observed between the 30 and 40 cm movement limits, but only about a 0.5 s increase at 50 cm compared to 40 cm. Although the period is a parameter tied to the stiffness, the optimum period increment pattern deviated by 0.5 s as the movement limit increased. In this case, it is understood that systems with high ductility can provide successful control with a lower period.
In light of all the data, the optimization of systems with isolators can provide acceleration control at a level of over 95%, and the two-phase TLBO algorithm gives successful results for the isolated system parameters among similar metaheuristic algorithms.
References
1. Bakkaloğlu, E.: Seismic isolator systems in hospital buildings the effects of the use decision on
the building manufacturing process. Master’s thesis, Istanbul Technical University, Institute
of Science, Istanbul, Turkey (2018)
2. Holland, J.H.: Adaptation in Natural and Artificial Systems. The University of Michigan
Press, Ann Arbor, MI (1975)
3. Yang, X.S.: Flower pollination algorithm for global optimization. In: Durand-Lose, J.,
Jonoska, N. (eds.) Lecture Notes in Computer Science, vol. 7445, pp. 240–249. Springer,
London (2012)
4. Geem, Z.W., Kim, J.H., Loganathan, G.V.: A new heuristic optimization algorithm: harmony
search. SIMULATION 76(2), 60–68 (2001)
5. Dorigo, M., Maniezzo, V., Colorni, A.: The ant system: an autocatalytic optimizing process.
IEEE Trans. Syst. Man Cybern. B 26, 29–41 (1996)
6. Yang, X.S.: A new metaheuristic bat-inspired algorithm. In: Nature-Inspired Cooperative
Strategies for Optimization (NICSO 2010), pp. 65–74. Springer, Berlin, Heidelberg (2010)
7. Karaboğa, D.: An idea based on honey bee swarm for numerical optimization, vol. 200, pp. 1–
10. Technical report-tr06, Erciyes University, Engineering Faculty, Computer Engineering
Department (2005)
8. Rao, R.V., Savsani, V.J., Vakharia, D.P.: Teaching-Learning-based optimization: a novel
method for constrained mechanical design optimization problems. Comput. Aided Des. 43,
303–315 (2011)
9. Atmaca, B.: Determination of proper post-tensioning cable force of cable-stayed footbridge
with TLBO algorithm. Steel Compos. Struct. 40(6), 805–816 (2021)
10. Kaveh, A., Hosseini, S.M., Zaerreza, A.: Improved Shuffled Jaya algorithm for sizing opti-
mization of skeletal structures with discrete variables. In: Structures, vol. 29, pp. 107–128.
Elsevier (2021)
11. Zhang, H.Y., Zhang, L.J.: Tuned mass damper system of high-rise intake towers optimized
by the improved harmony search algorithm. Eng. Struct. 138, 270–282 (2017)
12. Yahya, M., Saka, M.P.: Construction site layout planning using multi-objective artificial bee
colony algorithm with Levy flights. Autom. Constr. 38, 14–29 (2014)
13. Bekdaş, G., Niğdeli, S.M., Aydın, A.: Optimization of tuned mass damper for multi-
storey structures by using impulsive motions. In: 2nd International Conference on Civil and
Environmental Engineering (ICOCEE 2017), Cappadocia, Turkey (2017)
14. Öztürk, H.T.: Modeling of concrete compressive strength with Jaya and teaching-learning
based optimization (TLBO) algorithms. J. Invest. Eng. Technol. 1(2), 24–29 (2018)
15. Zhao, Y., Moayedi, H., Bahiraei, M., Foong, L.K.: Employing TLBO and SCE for optimal
prediction of the compressive strength of concrete. Smart Struct. Syst. 26(6), 753–763 (2020)
16. Degertekin, S.O., Hayalioglu, M.S.: Sizing truss structures using teaching-learning-based
optimization. Comput. Struct. 119, 177–188 (2013)
17. Dede, T., Ayvaz, Y.: Combined size and shape optimization of structures with a new meta-
heuristic algorithm. Appl. Soft Comput. 28, 250–258 (2015)
18. Gandomi, A.H., Kashani, A.R.: Construction cost minimization of shallow foundation using
recent swarm intelligence techniques. IEEE Trans. Ind. Inf. 14(3), 1099–1106 (2017)
19. Bekdaş, G., Kayabekir, A.E., Nigdeli, S.M., Toklu, Y.C.: Transfer function amplitude mini-
mization for structures with tuned mass dampers considering soil-structure interaction. Soil
Dyn. Earthq. Eng. 116, 552–562 (2019)
20. Ocak, A., Bekdaş, G., Nigdeli, S.M.: A metaheuristic-based optimum tuning approach for
tuned liquid dampers for structures. Struct. Des. Tall Spec. Build. 31(3), e1907 (2022)
21. Kaveh, A., Ardebili, S.R.: A comparative study of the optimum tuned mass damper for
high-rise structures considering soil-structure interaction. Period. Polytech. Civ. Eng. 65(4),
1036–1049 (2021)
22. Bekdaş, G., Nigdeli, S.M., Yang, X.S.: Metaheuristic-based optimization for tuned mass
dampers using frequency domain responses. In: International Conference on Harmony Search
Algorithm, pp. 271–279. Springer, Singapore (2017)
23. FEMA P-695: Quantification of Building Seismic Performance Factors. Federal Emergency Management Agency, Washington, DC (2009)
24. Singh, M.P., Singh, S., Moreschi, L.M.: Tuned mass dampers for response control of torsional
buildings. Earthq. Eng. Struct. Dynam. 31(4), 749–769 (2002)
25. The MathWorks, Matlab R2018a. Natick, MA (2018)
26. Ocak, A., Nigdeli, S.M., Bekdaş, G., Kim, S., Geem, Z.W.: Optimization of seismic base
isolation system using adaptive harmony search algorithm. Sustainability 14(12), 7456 (2022)
TOPSIS Based Optimization of Laser Surface
Texturing Process Parameters
ishwer.s@[Link]
1 Introduction
Laser beam machining is widely used in today's manufacturing processes for operations such as drilling, cutting, welding, marking, and surface texturing. Various methods have been utilized to produce a texture on the surface of a material, and laser surface texturing (LST) has been referred to as the most advanced of them [1]. Owing to its greater flexibility, good accuracy, and controllability, researchers have shown keen interest in LST [2]. LST can be performed in several ways, such as texturing by ablation and by laser interference [3]. It can modify the surface topography of a material by altering various surface properties, such as its optical and tribological behavior [4]. The selection of suitable parameters is important and has become a thriving area in precision manufacturing. Multi-criteria decision-making (MCDM) methods show potential in determining the appropriate parametric combination for improving process efficiency, and researchers across the globe have utilized various MCDM methods for this purpose. In the present paper, the Technique for Order Preference by Similarity to Ideal Solution (TOPSIS) was employed to identify the best possible parametric combination during surface texturing using lasers. It is one of the most extensively utilized MCDM techniques, owing to which it receives a lot of attention from researchers [5]. Chodha et al. utilized the TOPSIS and entropy MCDM methods to select an industrial arc welding robot and found the TOPSIS method to be simple and accurate for this task [6]. Kannan et al. successfully employed the TOPSIS method to find the optimal machining parameters in laser beam machining for generating elliptical profiles [7]. Rao et al. demonstrated a hybrid AHP-TOPSIS approach to select the optimal levels of EDM parameters while machining AISI D2 steel [8]. Tran et al., in their experimental research, utilized GRA-based TOPSIS with entropy weights to find suitable parameters and concluded that the hybrid approach of grey theory and TOPSIS is useful for MCDM problems involving vagueness and uncertainty [9]. Banerjee et al. demonstrated the usefulness of the AHP-TOPSIS hybrid approach to select USM parameters for producing quality zirconia bioceramic parts with high productivity [10]. Based on the literature review, systematic experimentation was planned using an L25 Taguchi design of experiments. The experiments were conducted accordingly, and the TOPSIS-based MCDM method was employed to select the parameters that achieve excellent process capability.
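As background for the procedure applied in this paper, the following Python sketch outlines the generic TOPSIS steps (vector normalization, weighting, ideal and negative-ideal solutions, relative closeness); the decision matrix and the equal weights are hypothetical stand-ins, not the measured Ra/Rz data.

# Generic TOPSIS sketch: ranks alternatives by closeness to the ideal solution.
import numpy as np

def topsis(matrix, weights, benefit):
    # matrix: alternatives x criteria; benefit[j] is True if larger is better
    M = np.asarray(matrix, dtype=float)
    V = M / np.sqrt((M**2).sum(axis=0)) * np.asarray(weights, dtype=float)
    ideal = np.where(benefit, V.max(axis=0), V.min(axis=0))   # positive ideal
    anti = np.where(benefit, V.min(axis=0), V.max(axis=0))    # negative ideal
    d_pos = np.sqrt(((V - ideal)**2).sum(axis=1))
    d_neg = np.sqrt(((V - anti)**2).sum(axis=1))
    return d_neg / (d_pos + d_neg)      # relative closeness; higher is better

# Hypothetical Ra and Rz values (um) for three runs; both are cost criteria here.
scores = topsis([[1.2, 6.5], [0.9, 5.8], [1.5, 7.1]],
                weights=[0.5, 0.5], benefit=np.array([False, False]))
print("best run:", int(np.argmax(scores)) + 1, scores)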
2 Experiment Details
The experiments were conducted with a multi-diode-pumped fibre laser with a wavelength of 1060 ± 10 nm and an average power of 100 W. The workpiece for LST was a zirconia ceramic of size 10 mm × 10 mm × 3 mm manufactured by a powder metallurgy process. Based on a literature survey and sufficient pilot experiments, the process parameters, namely transverse feed, pulse frequency, average power, and scanning speed, and their levels were selected and are presented in Table 1. The surface roughness (Ra and Rz) was considered as the process response.
Based on the design, the experiments were executed with several different combinations of parameters, and the performance criteria Ra and Rz were measured with a Mitutoyo 178-561-02A Surftest SJ-210 surface roughness tester.
The experiments were conducted at room temperature in a normal atmosphere. Single-pass horizontal laser scanning was executed to produce a micro-grooved pattern with area
Sl. No. | Average power (W) | Pulse frequency (kHz) | Scanning speed (mm/s) | Transverse feed (mm)
01 | 20 | 15 | 2 | 0.04
5 Conclusion
In the present research work, an attempt has been made to create a texture on zirconia ceramic with the help of a fibre laser. Four parameters, namely average power, pulse frequency, scanning speed, and transverse feed, and the performance criterion surface roughness (Ra and Rz) have been considered. A Taguchi L25 orthogonal array was adopted for the experimental design, and the TOPSIS-based MCDM method was utilized for the selection of suitable parametric combinations. Based on this work, it was found that TOPSIS may be utilized for determining the optimal levels of the parameters. According to TOPSIS, experimental run 18 is the best choice to obtain the desired roughness on the surface of the zirconia ceramic. The corresponding parametric combination is an average power of 20 W, a pulse frequency of 15 kHz, a scanning speed of 2 mm/s, and a transverse feed of 0.04 mm.
Furthermore, the present results may provide a guideline for researchers working in LST to use TOPSIS-based MCDM in their research work.
References
1. Etsion, I.: State of the art in laser surface texturing. J. Tribol. 127(1), 248–253 (2005). https://
[Link]/10.1115/1.1828070
2. Mao, B., Siddaiah, A., Liao, Y., Menezes, P.L.: Laser surface texturing and related techniques
for enhancing tribological performance of engineering materials. A review. J. Manuf. Process.
53, 153–173 (2020). [Link]
3. Shivakoti, I., Kibria, G., Cep, R., Pradhan, B.B., Sharma, A.: Laser surface texturing for
biomedical applications: a review. Coatings 11(2), 124 (2021). [Link]
gs11020124
4. Han, J., Zhang, F., Van Meerbeek, B., Vleugels, J., Braem, A., Castagne, S.: Laser surface
texturing of zirconia-based ceramics for dental applications: a review. Mater. Sci. Eng.: C
123, 112034 (2021). [Link]
5. Çelikbilek, Y., Tüysüz, F.: An in-depth review of theory of the TOPSIS method: an exper-
imental analysis. J. Manag. Anal. 7(2), 281–300 (2020). [Link]
2020.1748528
6. Chodha, V., Dubey, R., Kumar, R., Singh, S., Kaur, S.: Selection of industrial arc welding
robot with TOPSIS and Entropy MCDM techniques. Mater. Today: Proc. 50, 709–715 (2022).
[Link]
7. Kannan, V.S., Navneethakrishnan, P.: Machining parameters optimization in laser beam
machining for micro elliptical profiles using TOPSIS method. Mater. Today: Proc. 21,
727–730 (2020). [Link]
8. Rao, K.M., Kumar, D.V., Shekar, K.C., Singaravel, B.: Optimization of EDM process param-
eters using TOPSIS for machining AISI D2 steel material. Mater. Today: Proc. 46, 701–706
(2021). [Link]
9. Tran, Q.P., Nguyen, V.N., Huang, S.C.: Drilling process on CFRP: multi-criteria decision-
making with entropy weight using grey-TOPSIS method. Appl. Sci. 10(20), 7207 (2020).
[Link]
10. Banerjee, B., Mondal, K., Adhikary, S., Paul, S.N., Pramanik, S., Chatterjee, S.: Optimization
of process parameters in ultrasonic machining using integrated AHP-TOPSIS method. Mater.
Today: Proc. 62, 2857–2864 (2022). [Link]
11. Odu, G.O.: Weighting methods for multi-criteria decision-making technique. J. Appl. Sci.
Environ. Manag. 23(8), 1449–1457 (2019). [Link]
PCBA Solder Vision Inspection Using Machine
Vision Algorithm and Optimization Process
Graduate School, Polytechnic University of the Philippines, Valencia St. Near Ramon
Magsaysay Blvd. Sta. Mesa, Manila, Philippines
[Link]@[Link], {rgdeluna,
marosales}@[Link]
1 Introduction
In-process inspection plays a very critical role in producing high-quality products that meet the standards set by the company. Implementing in-process inspection systems for all critical processes in production lines helps to improve productivity, quality, and customer satisfaction [1]. It also helps to contain a problem within the specific affected process so that the issue does not flow down to subsequent processes. The operator randomly collects samples from all identified critical processes, and these samples undergo 100% quality inspection to check whether any abnormalities occurred in the product after being processed by the machine.
The disadvantage of human inspection is that product judgment is very subjective, depending on the operator's experience and physical condition [2]. The judgment of a highly skilled, experienced operator may differ from that of a newly trained operator who is not yet familiar with some of the critical inspection criteria. Unstable judgment may let product defects escape, leading to product failures at the customer's end.
Once the image is detected, it is compared and judged against defined standards to decide whether the captured image is acceptable. The sensor amplifier serves as the central processing unit responsible for monitoring the screen operation and the information from the camera setup parameters to output the image result. The image history is stored in a limited memory capacity and can be retrieved from the sensor amplifier using USB memory or an SD card. All images are directly projected onto the control panel to display the output image.
In order to achieve optimum image capture and avoid false rejection judgments (either over-rejecting or under-rejecting), the inspection camera position must be optimized with respect to the target object of inspection, the region of interest. The X (horizontal), Y (vertical), and Z (focus) positions are manually adjusted during installation to move and position the camera. This is one of the critical steps in installing an automated optical inspection system: the sensor head is very sensitive to the X, Y, and Z positions, and a wrong position on any axis will result in incorrect image capture, such as an out-of-focus image, a blurred image, or failure to capture the target at all.
Once the camera hardware configuration has been installed, camera sensor head optimization is the next step to capture good-quality, accurate images. The field of view (FOV) is one factor that needs to be considered, since ambient light affects image quality. Ambient light, including sunlight, the lights of other devices, and photoelectric sensors, are some of the factors that may affect the image during capture [5]. Light from nearby ambient sources may interfere with the light produced by the camera sensor head. A light shield around the camera is recommended if there are too many ambient light sources near the installed camera (Fig. 2).
3 Methodology
Industry 4.0 (IR4) automated optical inspection (AOI) technology is commonly used to provide high-quality image inspection [6]. It helps the manufacturing industry to improve overall equipment effectiveness (OEE) measures such as productivity, process yield, and product quality. Image segmentation is one of the machine vision methodologies widely used in digital image processing, where the target image is divided into multiple layers or regions based on the characteristics of its pixels. Image segmentation can separate the image foreground from the background or cluster regions of pixels based on similarities in color or shape [7].
The proposed system captures all the image data collected from automated optical inspection using sensor image detection. All captured images undergo a machine vision MATLAB programming algorithm in which image segmentation and boundary extraction are executed. The region of interest of the image is measured to classify whether the image is within specification, in which case it is considered a "Good" sample, or does not meet the required specification, in which case it is classified as a "No Good" sample. All image measurement data are then trained and validated using the supervised machine learning K-Nearest Neighbor classifier to ensure the accuracy of the image judgment (Fig. 3).
The image is captured by the camera head when an imaging start-up signal is synchronized to the target position by a photoelectric switch of the programmable logic controller; the imaging sensor triggers at regular intervals and uses a built-in light to obtain an image of the target with the CMOS image sensor. Once the image has been captured, it is saved in allocated machine memory storage and processed by the MATLAB machine vision image segmentation program. The image area is measured and judged, based on the measurement, as meeting the criteria (passed) or not (failed) (Fig. 4; Table 1).
A set of data points is created within the identified target area to specify the region of interest (ROI) [8]. roipoly is a popular MATLAB function in which a binary image serves as the mask for the masked filtering process [9].
Image boundaries are the traces of the outer boundaries of the active region of the image after segmentation. The MATLAB bwboundaries function returns the row and column coordinates of the border pixels of the object. Binary pixels with a value of 1 (one) belong to the object, while all pixels with a value of 0 (zero) represent the background [11].
Once the image has been segmented and its boundaries obtained using the bwboundaries function, the area of the target image is calculated using the MATLAB regionprops function. This function measures different kinds of image quantities and features in a black-and-white image. It automatically determines the properties of each contiguous white region connected to data in the masking process, and it can measure the centroid, or center of mass, from the given image boundaries [12].
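As a rough Python analogue of this MATLAB pipeline (bwboundaries and regionprops), the sketch below labels the white regions of a binary image, traces their boundaries, and measures area, centroid, and equivalent diameter; the synthetic disc stands in for a solder joint and is not the paper's image data.

# Python (scikit-image) analogue of the bwboundaries/regionprops steps.
import numpy as np
from skimage import measure

binary = np.zeros((64, 64), dtype=np.uint8)        # 1 = object, 0 = background
rr, cc = np.ogrid[:64, :64]
binary[(rr - 32)**2 + (cc - 32)**2 <= 15**2] = 1   # a disc standing in for solder

labels = measure.label(binary)                     # connected white regions
contours = measure.find_contours(binary, 0.5)      # outer boundaries (cf. bwboundaries)
print(f"{len(contours)} boundary trace(s) found")
for region in measure.regionprops(labels):         # cf. MATLAB regionprops
    print(f"area={region.area} px, centroid={region.centroid}, "
          f"equivalent diameter={region.equivalent_diameter:.1f} px")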
The image measurement data from machine vision undergo training and validation using different machine learning classifier algorithms, namely Logistic Regression (LR), K-Nearest Neighbor (KNN), and Support Vector Machine (SVM). The data set was divided in an 80:20 ratio for training and testing, respectively (Tables 2 and 3).
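A sketch of this comparison with scikit-learn is given below; the synthetic feature vectors are hypothetical stand-ins for the measured image data, while the split ratio and the classifier settings mirror those reported here.

# Classifier comparison sketch: LR vs. KNN vs. SVM with an 80:20 split.
import numpy as np
from sklearn.model_selection import train_test_split, cross_val_score
from sklearn.linear_model import LogisticRegression
from sklearn.neighbors import KNeighborsClassifier
from sklearn.svm import SVC

rng = np.random.default_rng(0)
X = rng.normal(size=(510, 2))                        # e.g., measured area and diameter
y = (X[:, 0] + 0.1 * rng.normal(size=510) > 0).astype(int)   # Good / No Good labels

X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.2, random_state=0)

models = {
    "LR": LogisticRegression(solver="lbfgs"),
    "KNN": KNeighborsClassifier(n_neighbors=5),      # default k = 5
    "SVM": SVC(kernel="rbf"),                        # default RBF kernel
}
for name, model in models.items():
    scores = cross_val_score(model, X_train, y_train, cv=5)
    acc = model.fit(X_train, y_train).score(X_test, y_test)
    print(f"{name}: cv mean={scores.mean():.4f} std={scores.std():.4f} hold-out={acc:.4f}")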
Model | Setting | Cross-validation accuracy (mean) | Cross-validation accuracy (deviation)
Logistic regression (LR) | Solver = lbfgs | 0.800000 | 0.478680
K nearest neighbour (KNN) | Default, k = 5 | 0.954901 | 0.017647
Support vector machine (SVM) | Default, kernel = rbf | 0.949019 | 0.019996
Based on the quality standard specification defined by the company to ensure the continuity of the electronic components attached to the PCB, the diameter of the solder must be greater than 0.80 mm to be considered a "Good" solder condition, while a smaller diameter is judged "No Good" (reject). 510 PCBA samples with a combination of good and no-good conditions were run on the prototype machine vision system using an automated optical inspection (AOI) camera. All captured images underwent the machine vision MATLAB programming algorithm, in which the segmentation and boundaries were defined to calculate the diameter of the PCBA solder.
The machine vision MATLAB algorithm was able to classify the image samples as either good or no good based on the calculated diameter. Out of 510 samples, 291 were judged good while 219 were judged no good.
All image samples were split in an 80:20 ratio into training and testing samples, respectively. The K-Nearest Neighbor (KNN) classifier is recommended for the proposed system, as its accuracy reaches the 95% level in both cross-validation and hold-out validation. It also shows little variation in the process, with a standard deviation of 0.0176.
5 Conclusion
References
1. Martinez, P.A., Ahmad, R.: Quantifying the impact of inspection processes on production
lines through stochastic discrete-event simulation modeling. Modelling 2(4), 406–424 (2021).
[Link]
2. Yang, F., Ho, C., Chen, L.: Automated optical inspection system for O-ring based on photo-
metric stereo and machine vision. Appl. Sci. 11(6), 2601 (2021). [Link]
11062601
3. Jalayer, M., Jalayer, R., Kaboli, A., Orsenigo, C., Vercellis, C.: Automatic visual inspection
of rare defects: a framework based on GP-WGAN and enhanced faster R-CNN. In: 2021
IEEE International Conference on Industry 4.0, Artificial Intelligence, and Communications
Technology (IAICT) (2021). [Link]
4. Vitoriano, P., Amaral, T.: 3D solder joint reconstruction on SMD based on 2D images. SMT
Surface Mount Technol. Mag. 31 (2016)
5. Keyence IV2 Series User Manual, “Mounting the Sensor”, p. 34
6. Copyright (C) 2023 Keyence Corporation [Online]
7. Moru, D.K., Borro, D.: A machine vision algorithm for quality control inspection of gears.
Int. J. Adv. Manufact. Technol. 106(1–2), 105–123 (2019). [Link]
019-04426-2
8. The Math Work, Inc.: 1994–2021, Image Processing Toolbox Documentation Image
Segmentation
9. Brinkmann, R.: The art and science of digital compositing. Choice Rev. Online 37(07), 37–
3935 (2000). [Link]
10. Roipoly (Image Processing Toolbox). [Link] [Online]
11. Minaee, S., Boykov, Y., Porikli, F., Plaza, A., Kehtarnavaz, N., Terzopoulos, D.: Image seg-
mentation using deep learning: a survey. IEEE Trans. Pattern Anal. Mach. Intell. 1 (2021).
[Link]
12. The Math Works, Inc.: 1994–2021, Image Processing Toolbox Documentation Boundary
Tracing in Images
13. Stack Overflow Website: Explanation of Matlab’s bwlabel, regionprops and centroid functions
[Online]
AI-Based Air Cooling System in Data Center
Abstract. The increasing demand for storage, networking, and computing power has increased the number, size, complexity, and power density of data centers, creating significant energy challenges. Cooling accounts for a large share of total data center energy consumption, as much as 40% in inefficient cooling systems. The objective of our research is to present the effectiveness, as well as the pros and cons, of an Artificial Intelligence (AI)-based cooling system for a data center that is currently controlled and monitored manually. We focus mainly on the effectiveness of AI-based cooling systems and how they can support monitoring while being much greener in terms of cost, power, and human resources. To carry out this research, a Support Vector Machine (SVM) was implemented. The SVM provides more precise and accurate results, which help the cooling system regulate much more efficiently in terms of power and cost. The accuracy of the implemented model is 82%. This paper aims to help convert currently manually controlled, less efficient data centers of similar condition, dimensions, and equipment.
1 Introduction
Data centers are among the largest energy consumers, especially in many developed countries. Rapid advances in information technology have created massive data centers around the world, driven by the increased use of cloud systems and the exchange of information between users. The number of data centers has increased year by year, exceeding 100 million by 2014 [1]. This massive data management has increased power consumption, and the trend in data center development is toward densification, with IT devices contributing to excessive energy consumption and costs. Temperature is a key factor for data center devices: overheating can cause malfunction or failure, yet the devices run mission-critical services requiring continuous operation (24 h a day, 7 days a week). As a result, controlling airflow and the cooling system, supplying cold air to the data center and ensuring that all equipment is properly cooled, is vital. IT devices account for 50% of total energy consumption and cooling systems for about 37% [1, 2]. The focus of this paper is efficiency, because energy efficiency programs are meant to reduce the energy consumption of the data center cooling system. There are numerous methods to improve cooling system efficiency, which can result in energy savings of up to 45% [2]. We introduce intelligent data center cooling as a dynamic cooling system that splits cooling and computing distributions through modeling and measurement, by recording the actual and expected temperature distributions in the data center in real time [3]. The optimization of the cooling system is a compromise between these two goals [4, 5]. The primary contributions of this paper are:
(i) Formulas are customized to suit the conditions of the data center, including the size, equipment, models, and layout of the entire data center.
(ii) An SVM model was implemented according to these conditions, with data collected from the data center in a much greener way; it provides higher accuracy and precision, making the installed cooling system more sensitive to changes, which may decrease power consumption and improve cost efficiency.
The biggest challenge in data center operations and maintenance (O&M) is figuring out which components of the system need to be changed and then finding the best combination after doing so. In current O&M practice, there are no formulas or algorithms to refer to. A prediction model (the SVM model) is sent to the framework for inference, and potential cooling settings are scanned and simulated with an evolutionary algorithm using the inference platform's powerful inference and computational capabilities. Moreover, researchers have covered a wide range of effective cooling aspects in data centers and have figured out ways to make cooling more efficient in terms of power and energy [11]. Additionally, researchers often focus not only on graphical presentations but also on the confusion matrix [18]. Besides that, it is important to highlight how the efficiency of data centers is measured, which can be determined by observing the Power Usage Effectiveness (PUE) [7–9, 16, 17].
2 Literature Review
In this section, we focus on current and related work, discussing the analyses, observations, contributions, and an overview of the works. In [1], the authors explained why airflow analysis is important for data centers cooled with air-cooling systems. The thermal environment and energy efficiency of a data center with a raised floor are impacted by the flow path and distribution. However, the research was done in a state-of-the-art data center, so a proper understanding of how a small-scale data center might operate is difficult.
In [2], the authors discussed and explained how to optimize the power consumption due to the cooling system in the data center. They provided a detailed, in-depth analysis of Power Usage Effectiveness (PUE), used to determine suitable cooling methods.
In [4], the authors shared their in-depth research on established approaches and how they can be re-implemented as a better, smarter system by redesigning the layout and aisle positions. However, the data center in that paper was also state of the art, hence similar conclusions cannot be drawn for a smaller-scale data center. On the other hand, we have addressed the shortcomings found in that paper by implementing much greener models.
In [5], the authors considered the primary factors in the data center's room-level thermal environment: layout, raised-floor plenum and ceiling heights, and perforated tiles. Two major air distribution problems have been identified in data centers: bypass air and recirculated air. Recirculation occurs when there is insufficient airflow to the equipment and some of the hot air is recirculated, which can lead to large inlet temperature differences between the bottom and top of a rack. Cold bypass is caused by high flow or leakage from the cold air path.
In [11], the authors covered a wide range of aspects by thoroughly analyzing cooling methods, configurations, models, airflow, IT load, equipment, power consumption, etc. Mathematical formulas and their uses are clearly given, and readers are encouraged to save energy and use power efficiently.
In [18], the authors discussed the importance of the confusion matrix as a metric for evaluating machine learning and deep learning models, and how important it is for researchers to understand and use it to measure their models. The authors also noted the importance of the confusion matrix in the data science field and of understanding its aspects when using it in practice.
In [19], the authors carried out a very detailed analysis of a specific sector of the cooling system. The method used in that paper focused mostly on how natural cold resources can be employed to make cooling more cost-effective and to reduce the power consumption due to cooling. Different seasons in different regions were also covered; however, tropical regions such as Bangladesh and India were not taken into account. Researchers have tried to address many issues related to green computing [20–29] to make our planet safe for living.
In this research paper, we have considered and analyzed the above-mentioned papers, as well as other similar works, to construct the proposed methodology. We propose an AI-based cooling system that is much more efficient for a small-scale data center and can be implemented for data centers in every country and region.
For this research, we collected all the datasets required for our research aims from the officers in charge of the data center at East West University, via interviews over several days. We did not use any datasets from outside the university due to various limitations arising from the terms and conditions of data centers.
The datasets collected contain various types of data, including the following:
(i) Power room and server room temperature and humidity status: This includes the temperature status inside the power room, with a plot graph showing the change in temperature, as well as a graphical representation of the temperature/humidity status. The power room humidity status was also collected along with its graph. All of the mentioned data are collected from all the connectors of the server room. Table 1 shows the temperature and humidity readings of both the server and power rooms, as provided by the data center officer:
Table 1. Dataset of power room and server room of EWU data center
The dataset was then assembled into a CSV file so that it could be used for the AI models later on.
(ii) Current model dataset: Table 2 is the dataset chart provided by the officers of the EWU data center. It includes the floor area, the number of servers and racks, server power consumption, UPS and battery power consumption, and lighting. All the information in the columns was provided, not calculated by us.
Both models are standard choices, and they were selected because neither has seen much use in small-scale data centers globally.
• LSTM Model:
Long Short-Term Memory (LSTM) is a machine learning model, a special kind of recurrent neural network that is able to learn dependence over time in data. We implemented an LSTM model for prediction and classification purposes.
Algorithm of the model:
Input: Datasets.
Output: Power consumption graph and energy distribution graph.
1: Import the libraries.
2: Read the dataset.
3: Extract the data, store the dates in DateTime format, and collect the data from all date-based columns.
4: Print the unique years.
5: Import the style from the libraries.
6: Plot the power consumption graph.
7: Generate the subplot of energy per week versus date.
8: Plot the energy distribution graph.
Both graphs are explained and provided in the results section, in Figs. 2 and 3.
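A rough Python sketch of these plotting steps is given below; the file and column names ("data_center_readings.csv", "timestamp", "power_kwh") are hypothetical placeholders for the EWU dataset.

# Sketch of the listing above: read the CSV, parse dates, plot both graphs.
import pandas as pd
import matplotlib.pyplot as plt
from matplotlib import style

style.use("ggplot")
df = pd.read_csv("data_center_readings.csv", parse_dates=["timestamp"])
df = df.set_index("timestamp")
print("years in dataset:", sorted(df.index.year.unique()))   # step 4

weekly = df["power_kwh"].resample("W").sum()                 # energy per week

fig, (ax1, ax2) = plt.subplots(2, 1, figsize=(8, 6))
df["power_kwh"].plot(ax=ax1, title="Power consumption")            # cf. Fig. 2
weekly.plot(kind="bar", ax=ax2, title="Energy per week vs. date")  # cf. Fig. 3
plt.tight_layout()
plt.show()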
• SVM Model:
The Support Vector Machine (SVM) is a machine learning model widely used for classification and regression analysis. We implemented this model to complement the LSTM model and provide a more accurate and efficient output for the data center to operate with.
10: Use NumPy concatenate to stack y_true and y_test vertically and store them.
11: Predict with the SVM using the x_test dataset and store the result.
12: Reshape the predicted y and store it.
13: Import the confusion matrix from the library.
14: Plot the confusion matrix with a seaborn heatmap.
15: Generate the accuracy using the accuracy score function with the y_test and predicted values.
16: Print the SVM prediction.
The confusion matrix is explained and provided in the Results section, in Fig. 4.
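The following scikit-learn sketch mirrors steps 10–16 with synthetic stand-in data; the features and labels are hypothetical, not the EWU readings.

# Sketch of steps 10-16: SVM prediction, confusion matrix heatmap, accuracy.
import numpy as np
import seaborn as sns
import matplotlib.pyplot as plt
from sklearn.svm import SVC
from sklearn.metrics import confusion_matrix, accuracy_score
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(1)
X = rng.normal(size=(400, 3))                       # e.g., temperature, humidity, load
y = (X[:, 0] + 0.2 * rng.normal(size=400) > 0).astype(int)
x_train, x_test, y_train, y_test = train_test_split(X, y, test_size=0.2, random_state=1)

svm = SVC(kernel="rbf").fit(x_train, y_train)
y_pred = svm.predict(x_test).reshape(-1)            # steps 11-12: predict and reshape

cm = confusion_matrix(y_test, y_pred)               # step 13
sns.heatmap(cm, annot=True, fmt="d")                # step 14: heatmap
plt.xlabel("predicted"); plt.ylabel("actual")
plt.show()
print("accuracy:", accuracy_score(y_test, y_pred))  # step 15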
4 Results
In this section, the plots produced by both the LSTM model and the SVM model are presented, with a description below each.
Figure 2, generated by the LSTM model, is a graphical representation of how energy is consumed on a weekly basis. A linear increase in power consumption can be seen from the first of one month to the first day of the next. The weekly calculation is easy to recognize from the drop to zero after four intervals, corresponding to the four weeks in a month; the line then keeps a constant gradient for the next four weeks. The power increases from 33.00 to 34.00 kWh.
Figure 3 plots the energy distribution throughout the week, indicating how the cooling system and the other equipment in the data center consume power on a weekly basis. The bars indicate how much power is consumed in each slot, and the curve shows the consumption in continuous terms.
Fig. 3. Energy distribution graph from LSTM model, density versus week
Figure 4 is the confusion matrix generated by the SVM model we implemented. The confusion matrix is a widely used metric for classification problems; both binary and multiclass classification can be evaluated with it [18]. The confusion matrix was used to analyze and evaluate the performance of the methods after classification was carried out.
Table 3 is the dataset that would be obtained from the proposed models if they were launched in the data center. We can see that the power consumption of the data center floor, together with the use of fewer energy-saving lights, has reduced the total power consumption. This reduction has led the PUE of the proposed model onto an efficient path compared to that of the current model.
In this section, the formulas and numeric methods used for the proposed models and methods are provided, with the reasoning behind each included below it. Equation (1) is the formula to calculate the Power Usage Effectiveness (PUE) of a data center:

PUE = Total facility power / IT equipment power (1)

PUE calculations of the current model and the proposed model:
The result of (2) above is the PUE of the current model in use in the Data Center of East West University (EWU).
Next, the Data Center Infrastructure Efficiency (DCIE) is calculated from Eq. (3); it is used to determine the efficiency of the systems used in the data center:

DCIE = (IT equipment power / Total facility power) × 100% = (1 / PUE) × 100% (3)
Now, calculating the PUE of the proposed model for the data center (data taken from
Table 3):
Now, calculating the Data Center Infrastructure Efficiency (DCIE). This is used to
figure out the efficiency of the systems used for the Data Center:
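As a small illustration of Eqs. (1) and (3), the sketch below computes PUE and DCIE from facility and IT power readings; the numbers are hypothetical placeholders, not the EWU measurements.

# PUE and DCIE helper functions (Eqs. 1 and 3) with hypothetical readings.
def pue(total_facility_kw: float, it_equipment_kw: float) -> float:
    return total_facility_kw / it_equipment_kw           # Eq. (1)

def dcie(total_facility_kw: float, it_equipment_kw: float) -> float:
    return it_equipment_kw / total_facility_kw * 100.0   # Eq. (3), in percent

total_kw, it_kw = 50.0, 30.0                             # hypothetical readings
print(f"PUE = {pue(total_kw, it_kw):.2f}, DCIE = {dcie(total_kw, it_kw):.1f}%")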
Firstly, from the two DCIE percentages alone, we can conclude that our proposal will be much more efficient. Secondly, from Table 4 we can see that the current DCIE is rated average while ours is rated efficient. Table 4 gives the general scale used to determine the efficiency level of a data center based on PUE and DCIE, and Table 5 lists the unit conversions applied to Table 3 and used throughout this paper.
Additionally, we simulated a pie chart as a statistical representation of the PUE of both the current and proposed models in Fig. 5.
To convert | Multiply by
BTU/hour into Watts | 0.293
Watts into BTU/hour | 3.41
Watts to kiloWatts | 0.001
BTU/hour into kiloWatts | 0.000293
kiloWatts into BTU/hour | 3414
5 Discussion
So far we have seen that other authors, such as Capozzoli in reference [1], have pointed out the power and cost efficiency of cooling systems mostly based on state-of-the-art data centers around the world. Their papers cover a wide range of consumption and cost analyses, describe how they approached and utilized the data they collected, and finally provide an analysis of the system each data center uses. Some authors, such as R. Kumar [3], Patel [4], and Chaoqiang Jin [6], covered the design aspect of data center cooling in papers containing complete analyses carried out with their co-authors. However, from what we have found, almost none of the papers so far contains a broader analysis of data centers that use air cooling systems together with a green AI model that can make the system much more efficient. In [21], the author performed a detailed analysis of air cooling systems in data centers and provided a well-explained account of the observations and approach. Additionally, there has been little research on how small-scale data centers operate, what cooling systems they have, and how to make small-scale data centers more efficient without installing a chilled-water cooling system. Our research covers the gap left by other researchers, providing a much wider perspective on a small-scale data center, how it operates, and what can be done to make it greener. However, there are a few limitations in our data collection: the datasets used have limitations of their own and do not contain a large amount of data, which makes them hard to use in the models. Due to the limitations of the datasets provided by the data center, the accuracy was below 90%. Figures 2 and 3 in Results Sect. 4.1 are examples of how the dataset performed and contributed to the capabilities and reliability of the model. The accuracy obtained from the model was made possible by the efficient power calculations conducted using the mathematical formulas in Sect. 4.2, Eq. 5. The applied models provided efficiently controlled outputs, which in turn gave efficient results. The models themselves are among the best for classification, regression, and accuracy prediction; thus the limitation lies with the datasets, and the models would be more reliable if more standard datasets were used.
6 Conclusion
This research met the goals and aims set at the beginning by providing sufficient data analysis and detailed explanations of which models were implemented, how and why they were used, what outcomes they provide, and how they are greener than the current setup. Both the SVM model and the LSTM model were successfully implemented and generated the required graphs. The study therefore shows that, by combining a machine learning model and a deep learning model, an efficient solution for the power consumption of data center cooling systems can be obtained, utilized, and implemented to make data centers more efficient and greener overall. Additionally, this study can be used as a guide to convert manually controlled and monitored data centers, sized similarly to the one studied, into automatically controlled, power-efficient, fast-cooling data centers. Overall, the study makes a creditable contribution by addressing the energy-related challenges faced by data centers. To conclude, the gap we focused on was conducting a study and creating a model based on small-scale data centers. As authors of this paper, we recommend collecting more complete datasets, with more features and precision, so that they can be used for further efficient studies and implementations.
References
1. Capozzoli, A., Primiceri, G.: Cooling systems in data centers: state of art and emerging
technologies. Energy Procedia 83, 484–493 (2015)
2. Kumar, R., Khatri, S.K., José Diván, M.: Effect of cooling systems on the energy effi-
ciency of data centers: machine learning optimisation. In: 2020 International Conference
on Computational Performance Evaluation (ComPE), pp. 596–600 (2020)
3. Patel, C., Bash, C., Sharma, R., Beitelmal, M., Friedrich, R.: Smart cooling of data centers,
pp. 4–5 (2003) [Link]
4. Fei, Z., Song, X.: Deep neural networks can reverse spiraling energy use in data centers &
cut PUE, pp. 5–9 (2020)
5. Jin, C., Bai, X., Yang, C.: Effects of airflow on the thermal environment and energy efficiency
in raised-floor data centers. Sci. Total Environ. 695, 133–801 (2019). ISSN 0048-9697
6. Mukaffi, A.R.I., Arief, R.S., Hendradjit, W., Romadhon, R.: Optimization of cooling system
for data center case study: PAU ITB data center. Procedia Eng. 170, 552–557 (2017). ISSN
1877-7058
7. Zuo, W., Wetter, M., Van Gilder, J., Han, X., Fu, Y., Faulkner, C., Hu, J., Tian, W., Con-
dor, M.: Improving data center energy efficiency through end-to-end cooling modeling and
optimization, pp. 1–109. Report for US Department of Energy, DOE-CUBoulder-07688
(2021)
8. Bhatia, A.: HVAC cooling systems for data centers, pp. 15–23 (2016)
9. Sharma, M., Arunachalam, K., Sharma, D.: Analyzing the data center efficiency by using
PUE to make data centers more energy efficient by reducing the electrical consumption and
exploring new strategies. Procedia Comput. Sci. 48, 142–148 (2016). [Link]
[Link].2015.04.163
10. Tschudi, W., Xu, T., Sartor, D., Stein, J.: High-Performance Data Centers: A Research
Roadmap, pp. 25–37 (2004)
11. Zhang, Y., Liu, C., Wang, L., Yang, A.: Information Computing and Applications—Third
International Conference, ICICA 2012, Chengde, China, September 14–16. Proceedings,
Part II. Volume 308 of Communications in Computer and Information Science, pp. 179–186.
Springer, Berlin (2012)
12. Van Houdt, G., Mosquera, C., Nápoles, G.: A review on the long short-term memory model.
Artif. Intell. Rev. (2020)
13. Zhang, Y.: Support vector machine classification algorithm and its application. In: Liu, C.,
Wang, L., Yang, A. (eds.) Information Computing and Applications. ICICA, Communications
in Computer and Information Science, vol. 308. Springer, Berlin (2012)
14. Pries, R., Jarschel, M., Schlosser, D., Klopf, M., Tran-Gia, P.: Power consumption analysis
of data center architectures, vol. 51 (2012). [Link]
15. Dai, J., Ohadi, M.M., Das, D., Pecht, M.: Optimum Cooling of Data Centers, pp. 47–90
(2014). ISBN 978-1-4614-5602-5
16. Borgini, J.: How to calculate data center cooling requirements (2022)
17. Deng, X., Liu, Q., Deng, Y., Mahadevan, S.: An improved method to construct basic proba-
bility assignment based on the confusion matrix for classification problem. Inf. Sci. 340–341,
250–261 (2016). ISSN 0020-0255
18. Kannan, R., Roy, M.S., Pathuri, S.H.: Artificial intelligence based air conditioner energy
saving using a novel preference map. IEEE Access 8, 206622–206637 (2020). [Link]
10.1109/ACCESS.2020.3037970
19. Riccardo Lucchese, R.C.: Cooling control strategies in data centers for energy efficiency and
heat recovery, pp. 79–97 (2019). ISBN 978-91-7790-438-0
20. Yeasmin, S., Afrin, N., Saif, K., Reza, A.W., Arefin, M.S.: Towards building a sustainable
system of data center cooling and power management utilizing renewable energy. In: Vas-
ant, P., Weber, G.W., Marmolejo-Saucedo, J.A., Munapo, E., Thomas, J.J. (eds.) Intelligent
Computing & Optimization. ICO 2022. Lecture Notes in Networks and Systems, vol. 569.
Springer, Cham (2023). [Link]
21. Liza, M.A., Suny, A., Shahjahan, R.M.B., Reza, A.W., Arefin, M.S.: Minimizing e-waste
through improved virtualization. In: Vasant, P., Weber, G.W., Marmolejo-Saucedo, J.A.,
Munapo, E., Thomas, J.J. (eds.) Intelligent Computing & Optimization. ICO 2022. Lecture
Notes in Networks and Systems, vol. 569. Springer, Cham (2023). [Link]
978-3-031-19958-5_97
22. Das, K., Saha, S., Chowdhury, S., Reza, A.W., Paul, S., Arefin, M.S.: A sustainable e-waste
management system and recycling trade for Bangladesh in green IT. In: Vasant, P., Weber,
G.W., Marmolejo-Saucedo, J.A., Munapo, E., Thomas, J.J. (eds.) Intelligent Computing &
Optimization. ICO 2022. Lecture Notes in Networks and Systems, vol. 569. Springer, Cham
(2023). [Link]
23. Rahman, M.A., Asif, S., Hossain, M.S., Alam, T., Reza, A.W., Arefin, M.S.: A sustainable
approach to reduce power consumption and harmful effects of cellular base stations. In:
Vasant, P., Weber, G.W., Marmolejo-Saucedo, J.A., Munapo, E., Thomas, J.J. (eds.) Intelligent
Computing & Optimization. ICO 2022. Lecture Notes in Networks and Systems, vol. 569.
Springer, Cham (2023). [Link]
24. Ahsan, M., Yousuf, M., Rahman, M., Proma, F.I., Reza, A.W., Arefin, M.S.: Designing a
sustainable e-waste management framework for Bangladesh. In: Vasant, P., Weber, G.W.,
Marmolejo-Saucedo, J.A., Munapo, E., Thomas, J.J. (eds.) Intelligent Computing & Opti-
mization. ICO 2022. Lecture Notes in Networks and Systems, vol. 569. Springer, Cham
(2023). [Link]
25. Mukto, M.M., Al Mahmud, M.M., Ahmed, M.A., Haque, I., Reza, A.W., Arefin, M.S.: A sus-
tainable approach between satellite and traditional broadband transmission technologies based
on green IT. In: Vasant, P., Weber, G.W., Marmolejo-Saucedo, J.A., Munapo, E., Thomas,
J.J. (eds.) Intelligent Computing & Optimization. ICO 2022. Lecture Notes in Networks and
Systems, vol. 569. Springer, Cham (2023). [Link]
26. Meharaj-Ul-Mahmmud, Laskar, M.S., Arafin, M., Molla, M.S., Reza, A.W., Arefin, M.S.:
Improved virtualization to reduce e-waste in green computing. In: Vasant, P., Weber, G.W.,
Marmolejo-Saucedo, J.A., Munapo, E., Thomas, J.J. (eds.) Intelligent Computing & Opti-
mization. ICO 2022. Lecture Notes in Networks and Systems, vol. 569. Springer, Cham
(2023). [Link]
27. Banik, P., Rahat, M.S.A., Rafe, M.A.H., Reza, A.W., Arefin, M.S.: Developing an energy
cost calculator for solar. In: Vasant, P., Weber, G.W., Marmolejo-Saucedo, J.A., Munapo,
E., Thomas, J.J. (eds.) Intelligent Computing & Optimization. ICO 2022. Lecture Notes in
Networks and Systems, vol. 569. Springer, Cham (2023). [Link]
19958-5_75
28. Ahmed, F., Basak, B., Chakraborty, S., Karmokar, T., Reza, A.W., Arefin, M.S.: Sustain-
able and profitable IT infrastructure of Bangladesh using green IT. In: Vasant, P., Weber,
G.W., Marmolejo-Saucedo, J.A., Munapo, E., Thomas, J.J. (eds.) Intelligent Computing &
Optimization. ICO 2022. Lecture Notes in Networks and Systems, vol. 569. Springer, Cham
(2023). [Link]
29. Ananna, S.S., Supty, N.S., Shorna, I.J., Reza, A.W., Arefin, M.S.: A policy framework for
improving e-waste management in Bangladesh. In: Vasant, P., Weber, G.W., Marmolejo-
Saucedo, J.A., Munapo, E., Thomas, J.J. (eds.) Intelligent Computing & Optimization. ICO
2022. Lecture Notes in Networks and Systems, vol. 569. Springer, Cham (2023). [Link]
org/10.1007/978-3-031-19958-5_95
Reinforced Concrete Beam Optimization
via Flower Pollination Algorithm by Changing
Switch Probability Parameter
Abstract. Reinforced concrete (RC) is one of the most widely used and preferred types of construction. Therefore, the optimum design of RC load-bearing elements is an important design problem. Metaheuristic algorithms are suitable for the optimum design of RC beams. In this study, the problem of minimizing the cost of reinforced concrete beams is solved with the Flower Pollination Algorithm (FPA). For this purpose, the dimensions of a single-span rectangular RC beam are optimized for minimum cost. In the analysis, the effects of defining different values of the switch probability (sp), and of defining the sp value randomly, on the optimum result were determined. The results showed that a random sp performs nearly as well as the best fixed sp cases.
1 Introduction
Optimization is the technique of finding the best solution among the set of all possible
solutions. Optimization has been widely used in engineering for a long time. Cost is one
of the most important parameters in the construction of structures. A good engineering
design must be economical and safe at the same time.
Reinforced concrete (RC) is one of the most widely used and preferred types of construction because it is rigid and economical. Consisting of a combination of concrete and steel, it also offers high fire resistance and a long service life. In the design of RC structures, the optimum design problem that minimizes cost can only be solved by iterative methods. The best and most effective methodology for iteratively exploring different design-variable combinations is provided by metaheuristic methods. There is a need for an improved method for frame members [1].
Since reinforced concrete is widely used in civil engineering applications, the opti-
mum design of reinforced concrete structures is very important. Metaheuristic algorithms
are effective in the optimization of structural engineering problems. Different methods
have been developed and used to find the optimum design for engineering design prob-
lems. Optimization enables the efficient finding of purpose-based designs for complex
problems in structural engineering [2]. In the literature, it is seen that many studies
have been done on cost optimization. Bekdaş et al. [1] proposed a modified harmony search methodology for the cost and CO2 optimization of RC frames, including the dynamic forces resulting from earthquake motion. As a result of the study, it was seen that the
use of recycled elements provides a sustainable design. Camp and Huq [3] used a big
bang-big crunch (BB-BC) algorithm for CO2 and cost optimization of reinforced con-
crete frames. The BB-BC algorithm has produced designs that reduce costs and carbon
footprint for frames. Jelušič [4] presented the cost optimization of a reinforced con-
crete section, using the optimization method of mixed-integer nonlinear programming
(MINLP). It has been observed that the use of direct crack width calculation instead
of restricting the bar spacing reduces material costs in concrete sections. Lee et al. [5]
employed Genetic Algorithm (GA) in the minimization of the cost and CO2 emissions
of RC frames. According to the results, the design with a small number of reinforcement
bars and the lowest carbon dioxide emissions has a relatively large total amount of reinforcement. Nigdeli et al. [6] proposed a Harmony Search-based methodology for the
biaxial loading of RC columns. The proposed method is effective in finding a detailed optimum result by using optimum bars of different sizes. Kayabekir et al. [7] used the Jaya
algorithm to optimize T-beams. They found that CO2 emission optimization is more
effective than cost optimization in reducing CO2 emission value. Ulusoy et al. [8] used
various metaheuristic algorithms in the design of several RC beams for cost minimiza-
tion. The results showed that, for high bending moment capacity, using high-strength materials is less costly than using low-strength concrete, since doubly-reinforced sections are not the most suitable choice. Bekdaş and Nigdeli [9] used an education-based metaheuris-
tic algorithm called Teaching-Learning-Based Optimization to investigate the optimum
design of reinforced concrete columns. It was concluded that the cost changed due to
the increase in the dimension and quality of the reinforcements. Nigdeli et al. [10] pro-
posed a metaheuristic-based methodology for the cost optimization of RC footings by
employing several algorithms to deal with non-linear optimization problems. According
to the results, detailed optimization and location optimization using a truncated pyramid
shape reduces the optimum cost.
In this study, the effect of the switch probability parameter on the optimum RC beam design was investigated. The findings inform researchers about the effects of the switch probability parameter, as well as about the most effective metaheuristic method for optimum design.
In this study, one of the optimization methods called Flower Pollination Algorithm
(FPA) is used. The flower pollination algorithm is a population-based metaheuristic
algorithm inspired by living creatures and the natural behaviors of the communities they live in. It was developed by Yang [11]. The basis of the algorithm is the pollination
activity, a natural process of flowering plants. The unique feature of the algorithm is one
of the reasons for the selection of the algorithm. FPA is efficient in terms of computa-
tional performance [12]. Yang et al. also extended FPA for multiobjective optimization
problems [13]. The effect of the FPA algorithm on structural design optimization has
been supported by studies [14–17]. The present study applies the FPA with different switch probability (sp) values to the structural design of an RC beam for minimum economic cost.
2 Methodology
In this study, the cost optimization of RC beams is examined in the Matlab [18] environment. FPA has several parameters; the FPA-specific parameter examined in this study is the switch probability. The cost of the RC beam is optimized with the flower pollination algorithm using different switch probability (sp) values. In addition, sp was chosen randomly in each iteration, which makes the algorithm free of parameter setting.
In global pollination (Rule 1), solutions are updated via Lévy flights around the current best solution (Eq. 1), where $X_i^{j,t}$ is the solution vector at iteration $t$, $X_i^{j,t+1}$ is the solution at iteration $(t+1)$, and $X_{best}^{i,*}$ is the current best solution. Because pollinators travel over long distances with various distance intervals, the Lévy flight $L$ can be drawn from a Lévy distribution as in Eq. 2.

$$L = \frac{1}{\sqrt{2\pi}} \times r^{-1.5} \times e^{-\frac{1}{2r}} \qquad (2)$$
Abiotic pollination occurs by other factors without any pollinators. The local
pollination (Rule 2) and flower constancy (Rule 3) can be represented as Eq. 3.
$$X_i^{j,t+1} = X_i^{j,t} + \varepsilon\left(X_i^{a,t} - X_i^{b,t}\right) \quad \text{if } sp \leq r_1 \qquad (3)$$
Two randomly chosen existing solutions ($a$ and $b$) are used in the modification due to self-pollination. $X_i^{a,t}$ and $X_i^{b,t}$ are two random flowers and $\varepsilon$ is a linear distribution. This equation essentially mimics flower constancy in a limited neighborhood. For a local random walk, if $X_i^{a,t}$ and $X_i^{b,t}$ come from the same species, then $\varepsilon$ is drawn from a uniform distribution on [0, 1].
Rule 4 is implemented as the probability of choosing between global and local pollination. At the
beginning of the algorithm, the initial values are randomly chosen according to a solution
range defined for the design variables. Existing and generated values are compared
according to the optimization objective. The solution matrix is updated. Results are only
updated if new solutions are better than existing ones [19].
FPA has its own specific parameters, and changing these parameters affects the search abilities of the algorithm. Flower constancy is about the tendency of specific pollinators toward specific flowers. This relationship is combined with local pollination and global pollination to define two types of optimization processes. These processes are selected via an algorithm parameter called the switch probability (sp ∈ [0, 1]). The switch probability is used to switch between common global pollination and intensive local pollination (Fig. 1).
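To make the two pollination modes and the role of sp concrete, the following is a minimal Python sketch of the FPA loop under stated assumptions: the toy objective, the bounds, and the positive range used to draw r in Eq. 2 are illustrative choices, not taken from the paper's Matlab implementation.

```python
import numpy as np

def levy_step(dim, rng):
    """Step scale L from Eq. 2, evaluated at a random positive draw r."""
    r = rng.uniform(0.05, 3.0, dim)
    return (1.0 / np.sqrt(2.0 * np.pi)) * r**-1.5 * np.exp(-1.0 / (2.0 * r))

def fpa(cost, lb, ub, n_flowers=20, n_iter=1000, sp=0.5, seed=0):
    """Minimal FPA; pass sp='random' to redraw sp each iteration (Case 2)."""
    rng = np.random.default_rng(seed)
    dim = len(lb)
    X = rng.uniform(lb, ub, (n_flowers, dim))   # random initial population
    f = np.apply_along_axis(cost, 1, X)
    best = X[f.argmin()].copy()
    for _ in range(n_iter):
        p = rng.random() if sp == "random" else sp
        for i in range(n_flowers):
            if rng.random() < p:                # global pollination via Levy flight
                cand = X[i] + levy_step(dim, rng) * (best - X[i])
            else:                               # local pollination, Eq. 3
                a, b = rng.choice(n_flowers, size=2, replace=False)
                cand = X[i] + rng.random() * (X[a] - X[b])
            cand = np.clip(cand, lb, ub)
            fc = cost(cand)
            if fc < f[i]:                       # keep only improving solutions
                X[i], f[i] = cand, fc
        best = X[f.argmin()].copy()
    return best, float(f.min())

# toy usage: minimize a sphere function with a fixed sp of 0.5
xb, fb = fpa(lambda x: float(np.sum(x**2)),
             lb=np.array([-5.0, -5.0]), ub=np.array([5.0, 5.0]), sp=0.5)
```

Passing sp='random' redraws the switch probability every iteration, which corresponds to the parameter-setting-free variant examined below.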
In the analyses made for the sample in question, the effects of defining different values of the switch probability and of defining the switch probability value randomly (rand) on the change of the optimum result were determined. As a result of the research, the performances of the algorithm were evaluated according to the different switch probability values.
The equation expressing the minimization of the cost and the objective function in
the optimization problem is given in Eq. 4. Cc is the cost of the concrete per m3 ($), Cs
is the cost of the steel per ton ($) and γs is the specific gravity of steel (t/m3 ).
The variation ranges for the width and depth dimensions of the RC beam section are
given in Eqs. 5 and 6. These are the lower and upper limits of the design variables.
In the rest of the present study, the following two cases are examined for the switch
probability.
Case 1: sp = (0, 0.1, 0.2, 0.3, 0.4, 0.5, 0.6, 0.7, 0.8, 0.9, 1).
Case 2: Random switch probability values in all iterations.
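These two cases can be encoded as switch-probability schedules. The small self-contained snippet below (names hypothetical) only builds the schedules, so it runs independently of any particular FPA implementation.

```python
import random

# Case 1: a grid of fixed switch probability values, 0.0 to 1.0 in steps of 0.1
case1_sps = [round(0.1 * k, 1) for k in range(11)]

# Case 2: a fresh random sp drawn at every iteration of a run
def case2_sp_schedule(n_iter, seed=0):
    rnd = random.Random(seed)
    return [rnd.random() for _ in range(n_iter)]

print(case1_sps)              # [0.0, 0.1, ..., 1.0]
print(case2_sp_schedule(5))   # first five random sp values of one run
```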
4 Results
The performance of the FPA was compared based on switch probability. In this section,
the main results are obtained by the different switch probability values for the RC beam
cost minimization problem. In this optimization process where the parameter effect is
observed, the population size is 20. The performance of FPA is evaluated by changing
switch probability. For Cases 1 and 2, the algorithm was run 100 times with 10000
iterations. For Case 1, the switch probability is increased from 0 to 1 in steps of 0.1. For Case 2, random switch probability values are generated.
The minimum values, mean and standard deviations of the optimized total cost for
different switch probabilities, and the number of iterations with the optimum value,
which will be examined while comparing the algorithm, are shown in Table 1.
Global and local pollination preferences change as the switch probability changes in the FPA algorithm. When the sp value is between 0 and 0.4, the dominant type of pollination is local pollination, while between 0.6 and 1 the dominant type is global pollination. When the sp value is 0.5, the probabilities of the local search and the global search are equal.
In Case 1, in the analyses where the sp value is between 0 and 0.7, the minimum value, mean and standard deviation are the same, so speed can be compared through the number of iterations at which the optimum is found. As the sp value increases from 0 to 0.5, that is, as local pollination decreases and global pollination increases, the number of iterations decreases steadily. This means that, as the weight of global pollination increases, the algorithm finds the minimum value in fewer iterations. A minimum value was found for all switch probability values. However, the mean value and the standard deviation increased for switch probability values between 0.8 and 1. The range where the result is found fastest is where the switch probability is 0.4–0.6. The smallest number of iterations, and hence the fastest analysis, was realized at a switch probability of 0.5. There is more than a 1.5-times difference between the 0 and 0.5 switch probability values; that is, the optimum value was found 1.5 times faster.
In Case 2, analyses were made with random sp values, and the averages of the minimum value, mean, standard deviation and number of iterations at the optimum were found for the different random switch probability values. Since random values are used for the switch probability, the random switch values average close to 0.5. The minimum value, mean and standard deviation for the 0.5 switch probability in Case 1 and for Case 2 are the same, but their speeds are different. When the sp value is fixed at 0.5, a lower iteration number is obtained than with the randomly generated values.
5 Conclusions
The problem of optimizing the dimensions of the single-span rectangular RC beam with
minimum cost was solved with the flower pollination algorithm. FPA is a frequently preferred optimization method because it is fast and has few parameters.
There are 2 types of pollination in FPA. When deciding which of these two types of
pollination will occur, the switch probability parameter specific to FPA is used. In this
study, different switching probability values and their effects on the performance of the
algorithm are investigated. The analysis results are divided into two cases: one in which the switch probability is increased in steps of 0.1 between 0 and 1, and one with randomly generated switch probability values. When comparing the performance of the algorithm,
the minimum value, mean, standard deviation and number of iterations with the optimum
value are taken into account.
The results from the optimization process can be summarized as follows:
• For sp values between 0 and 0.5, the algorithm gives faster results, because as the switch probability value increases, the number of iterations decreases steadily.
• In Case 1, the minimum value was found for all switch probability values. However, the mean value and standard deviation increased for switch probability values between 0.8 and 1. This shows that the minimum value is not reached in every run for these sp values.
• In Case 1, the fastest analysis was realized at a switch probability of 0.5.
• When Cases 1 and 2 are examined, it is seen that a fixed sp value of 0.5 yields a lower iteration number than randomly generated values. However, a randomly selected sp is nearly as effective as the best case and is competitive, so it may be advantageous for eliminating the only algorithm parameter that requires tuning.
Thus, the performance analysis demonstrates the effect of the switch-probability-changing technique on the optimization of the RC beam.
References
1. Bekdaş, G., Nigdeli, S.M., Kim, S., Geem, Z.W.: Modified Harmony search algorithm-based
optimization for eco-friendly reinforced concrete frames. Sustainability 14(6), 3361 (2022)
2. Arora, J.: Introduction to Optimum Design, 3rd edn. Academic Press, Waltham, MA, USA
(2012). ISBN 978-0-12-381375-6
3. Camp, C.V., Huq, F.: CO2 and cost optimization of reinforced concrete frames using a big
bang-big crunch algorithm. Eng. Struct. 48, 363–372 (2013)
4. Jelušič, P.: Cost optimization of reinforced concrete section according to flexural cracking.
Modelling 3(2), 243–254 (2022)
5. Lee, M.S., Hong, K., Choi, S.W.: Genetic algorithm based optimal structural design method
for cost and CO2 emissions of reinforced concrete frames. J. Comput. Struct. Eng. Inst. Korea
29, 429–436 (2016)
6. Nigdeli, S.M., Bekdas, G., Kim, S., Geem, Z.W.: A novel harmony search based optimization
of reinforced concrete biaxially loaded columns. Struct. Eng. Mech. 54(6), 1097–1109 (2015)
7. Kayabekir, A.E., Bekdaş, G., Nigdeli, S.M.: Optimum design of T-beams using Jaya algo-
rithm. In: 3rd International Conference on Engineering Technology and Innovation (ICETI),
Belgrade, Serbia (2019)
8. Ulusoy, S., Kayabekir, A.E., Bekdaş, G., Niğdeli, S.M.: Metaheuristic algorithms in optimum
design of reinforced concrete beam by investigating strength of concrete (2020)
9. Bekdaş, G., Nigdeli, S.M.: Optimum design of reinforced concrete columns employing
teaching learning based optimization. Challenge J. Struct. Mech. 2(4), 216–219 (2016)
10. Nigdeli, S.M., Bekdaş, G., Yang, X.S.: Metaheuristic optimization of reinforced concrete
footings. KSCE J. Civ. Eng. 22(11), 4555–4563 (2018)
11. Yang, X.S.: Flower pollination algorithm for global optimization. In: International Confer-
ence on Unconventional Computing and Natural Computation, pp. 240–249. Springer, Berlin
(2012)
12. Alyasseri, Z.A.A., Khader, A.T., Al-Betar, M.A., Awadallah, M.A., Yang, X.S.: Variants
of the flower pollination algorithm: a review. In: Nature-Inspired Algorithms and Applied
Optimization, pp. 91–118 (2018)
13. Yang, X.S., Karamanoglu, M., He, X.: Flower pollination algorithm: a novel approach for
multiobjective optimization. Eng. Optim. 46(9), 1222–1237 (2014)
14. Mergos, P.E.: Optimum design of 3D reinforced concrete building frames with the flower
pollination algorithm. J. Build. Eng. 44, 102935 (2021)
15. Mergos, P.E., Mantoglou, F.: Optimum design of reinforced concrete retaining walls with the
flower pollination algorithm. Struct. Multidiscip. Optim. 61(2), 575–585 (2019). https://doi.org/10.1007/s00158-019-02380-x
16. Nigdeli, S.M., Bekdaş, G., Yang, X.S.: Application of the flower pollination algorithm in
structural engineering. In: Metaheuristics and Optimization in Civil Engineering, pp. 28–42.
Springer, Berlin (2016)
17. Bekdaş, G.: New improved metaheuristic approaches for optimum design of posttensioned
axially symmetric cylindrical reinforced concrete walls. Struct. Design Tall Spec. Build. 27(7),
e1461 (2018)
18. The MathWorks, Matlab R2022a. Natick, Massachusetts (2022)
19. Kayabekir, A.E., Bekdaş, G., Nigdeli, S.M., Yang, X.S.: A comprehensive review of the flower
pollination algorithm for solving engineering problems. In: Nature-Inspired Algorithms and
Applied Optimization, pp. 171–188 (2018)
20. TS500: Betonarme Yapıların Tasarım ve Yapım Kuralları. Türk Standartları Enstitüsü, Ankara
(2000)
Cost Prediction Model Based on Hybridization
of Artificial Neural Network with Nature
Inspired Simulated Annealing Algorithm
Abstract. This paper proposes a cost prediction model for construction projects.
The main implication of this research is to reduce the error between the actual and predicted cost values of the cost prediction model. This is achieved by deploying an artificial neural network (ANN) and determining its optimal weight values using the nature-inspired simulated annealing algorithm. Next, performance parameters, namely root mean square error (RMSE), normalized mean absolute error (NMAE), and mean absolute percentage error (MAPE), are measured on a standard dataset to verify the performance of the cost prediction model. Besides that, the convergence rate is determined for the simulated annealing algorithm to establish how quickly it finds the optimal weight values. The results show that the proposed model achieves lower values of the performance parameters than the existing ANN model. Thus, the proposed model is very useful in construction projects for predicting cost.
1 Introduction
Although the construction industry is booming, it still faces the same old problems like
high risks, low quality, excessive costs, and delays in completion [1]. Time, money,
and quality are the “triple constraint,” and everyone in the business world agrees they are
what ultimately decide whether or not a project is successful [2–4]. It is a well-known
truth, however, that the building sector is less productive and efficient than the service
and industrial sectors. The discrepancy between the planned and real costs of major
building initiatives is one such issue. Eventually, due to the one-of-a-kind character of
building projects, the gap between their planned and real budgets expands [5]. Differ-
ences between observed and predicted values can be as high as 150%, as noted by Alzara
et al. [6]. As a result, methods for predicting future costs are required to bring this gap down to a more manageable level.
In the literature, machine learning algorithms are deployed to design cost prediction models. “Support vector machine” [7], “decision tree” [8], “random forest” [9], and “neural network” [10] are the machine learning algorithms successfully used in cost prediction models. Of these, the neural network is the most preferred algorithm [11, 12]. The weight and bias values of a neural network strongly affect its performance, and the determination of optimal weight values is a complex problem. To overcome this problem, nature-inspired algorithms have been successfully used with neural networks in a number of applications. The term “nature-inspired algorithm” describes a class of algorithms that take cues from natural surroundings, such as “swarm intelligence”, “biological systems”, and “physical and chemical systems” [13]. A nature-inspired algorithm searches for the optimal solution based on the objective function. According to the given
problem, the objective function is either maximized or minimized. In the cost prediction
models, the objective function is minimizing the error between actual and predicted cost
value. Initially, in these algorithms, a random population is generated within the lower and upper bounds. After that, the objective function is used to evaluate the population fitness and determine the fittest individuals. Next, new populations are generated based on the initial population, and their fitness is evaluated with the objective function. The fitness of the generated population is compared with the best population found so far, which is updated if required. The whole operation is iterated for a fixed number of iterations. In the literature, the genetic algorithm [14], particle swarm optimization [15], and cuckoo search algorithms [16] have been used with neural networks. This paper uses the simulated annealing algorithm because of its simple structure, its ability to handle noisy data, and its suitability for highly non-linear models.
The main contribution of this paper is to design an efficient cost prediction model
using an artificial neural network. The performance of the ANN algorithm depends on the weight values that connect the neurons; determining the optimal weight values adjusts the strength of the connections between neurons. Therefore, the nature-inspired simulated annealing algorithm is considered in the proposed model to determine the weight values. Based on the objective function, the nature-inspired algorithm determines the optimal weights of the ANN, with root mean square error (RMSE) taken as the objective function. Further, for the simulation evaluation, a dataset of the construction industry in the Gaza Strip containing 169 building projects is considered. Performance metrics, namely root mean square error (RMSE), normalized mean absolute error (NMAE), and mean absolute percentage error (MAPE), are determined for the proposed model. Finally, these metrics are compared with those of the existing ANN algorithm, and the results show that the proposed cost estimation model performs better than the ANN-based cost estimation model.
The remainder of the paper is organized as follows. Section 2 presents the related work, in which existing cost prediction models are studied. Section 3 explains the algorithms, ANN and SA, considered in the proposed model. Section 4 explains the proposed cost estimation model. Section 5 presents the results and discussion. Section 6 concludes the paper.
2 Related Work
In this section, we investigate and analyse the existing cost prediction models designed by various researchers.
where $n$ represents the total number of inputs, $x_i$ represents the data sent to each individual processing element, $w_i$ represents its weights, and $b$ represents a bias. The activation function is denoted by $f$; the outgoing signal undergoes an algebraic transformation as a result of the activation function $f$.
Additionally, MLP networks are feed-forward neural networks with multiple hidden layers. Linearly separable problems can be solved by the single-layer perceptron. The multilayer perceptron extends the single-layer perceptron with additional layers so that it can solve problems that cannot be expressed as a linear combination of simpler problems. Multiple artificial neurons, the fundamental working units of a neural network, are linked with one another in a multi-layer perceptron (MLP) neural network; each unit consists of a linear combiner followed by a transfer function. Figure 1 shows the multi-layer perceptron architecture of the artificial neural network.
$$y_k^{(n)} = f^{(n)}\!\left(\sum_{j} y_j^{(n-1)} \cdot w_{jk}^{(n)}\right) \qquad (2)$$

$$y_k^{(n-1)} = \sum_{i} x_i^{(n-1)} w_{ij}^{(n)} + b_{ij}^{(n)} \qquad (3)$$
Every unit $j$ in layer $n$ takes activations $y_j^{(n-1)} \cdot w_{jk}^{(n)}$ from the preceding layer of processing units and directs activations $y_k^{(n)}$ onward. Here, $y_k^{(n)}$ represents the predicted rate, $f^{(n)}$ represents the activation function, $x_i^{(n-1)}$ represents the rate of materials, $w_{jk}^{(n)}$ and $w_{ij}^{(n)}$ are the connection weights between the material rate and the hidden neurons and between the hidden neurons and the predicted cost, $b_{ij}^{(n)}$ represents the bias terms, and $i$, $j$ and $k$ index the neurons in each layer. In this research, we make use of the unique characteristics of a transfer function with an exponential linear unit.
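A minimal numpy sketch of this forward pass (Eqs. 2–3) follows, assuming an exponential linear unit (ELU) transfer function and a hidden layer of size 5 as used in the proposed method; all variable names and the random weights are illustrative only.

```python
import numpy as np

def elu(z, alpha=1.0):
    """Exponential linear unit transfer function."""
    return np.where(z > 0, z, alpha * (np.exp(z) - 1.0))

def forward(x, W1, b1, W2, b2):
    """Hidden activations as in Eq. 3, output combination as in Eq. 2."""
    hidden = elu(x @ W1 + b1)   # weighted sum plus bias, then transfer function
    return hidden @ W2 + b2     # linear output: the predicted cost

rng = np.random.default_rng(0)
x = rng.random(8)                              # 8 input attributes
W1, b1 = rng.normal(size=(8, 5)), np.zeros(5)  # hidden layer of size 5
W2, b2 = rng.normal(size=(5, 1)), np.zeros(1)
print(forward(x, W1, b1, W2, b2))              # one predicted value
```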
Initially, the standard dataset is read. Next, the dataset is split into input and output targets. In the cost estimation model, the input attributes are the area of a typical floor, number of floors, footing type, slab type, elevator type, air conditioning, electrical, and mechanical type, whereas the output target is the cost. Next, the ANN algorithm is applied to the input and output targets for cost prediction. A feedforward network is chosen for the ANN; it consists of a series of layers, in which the first layer is connected to the input target, the last layer is connected to the output target, and the internal layers are connected to the preceding layers. A hidden layer size of 5 is used in the proposed method. After that, the artificial neural network is trained using the input, output, and hidden layers. The weight values of the ANN are randomly initialized, after which the optimal weight values are determined using the simulated annealing algorithm. Based on the objective function, the SA algorithm computes the best possible solutions. Root mean square error (RMSE) is taken as the objective function: it calculates the error between the actual and predicted cost values, and the weight values that give the minimum error are taken as the optimal values for the ANN. After the optimal weight values are determined, they are assigned to the artificial neural network and the final cost prediction is made. Finally, the proposed method is evaluated using the performance metrics MAPE, NMAE, and RMSE.
(Flowchart of the proposed model: Start → train the artificial neural network using the input, output, and hidden layer size → determine the optimal weight values using the simulated annealing algorithm → performance analysis → End.)
Parameter                Value
Iterations               100
Objective function       RMSE
Population weight value  1
Tolerance                10^-4
Hidden layer size        5
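As one way to picture the hybrid step, the following hedged sketch flattens the MLP weights into a single vector and minimizes the RMSE objective with SciPy's dual_annealing, a generalized simulated annealing used here as a stand-in for the paper's SA routine; the synthetic data, bounds and toy labels are assumptions, with the hidden layer size of 5 taken from the parameter table above.

```python
import numpy as np
from scipy.optimize import dual_annealing

rng = np.random.default_rng(1)
X = rng.random((40, 8))                      # synthetic stand-in for the project dataset
y = X.sum(axis=1, keepdims=True) + 0.1 * rng.normal(size=(40, 1))

n_in, n_hid = 8, 5                           # 8 input attributes, hidden layer size 5
n_w = n_in * n_hid + n_hid + n_hid + 1       # W1, b1, W2, b2 flattened into one vector

def unpack(w):
    """Split the flat vector back into the network's weight and bias arrays."""
    W1 = w[:n_in * n_hid].reshape(n_in, n_hid)
    b1 = w[n_in * n_hid:n_in * n_hid + n_hid]
    W2 = w[n_in * n_hid + n_hid:n_in * n_hid + 2 * n_hid].reshape(n_hid, 1)
    b2 = w[-1:]
    return W1, b1, W2, b2

def rmse_objective(w):
    """Objective of Eq. (6): RMSE between actual and predicted cost."""
    W1, b1, W2, b2 = unpack(w)
    pred = np.tanh(X @ W1 + b1) @ W2 + b2
    return float(np.sqrt(np.mean((y - pred) ** 2)))

res = dual_annealing(rmse_objective, bounds=[(-3.0, 3.0)] * n_w, maxiter=100, seed=1)
print("optimized RMSE:", res.fun)            # res.x holds the optimal weights for the ANN
```

Here tanh stands in as a generic smooth activation; the exponential linear unit described earlier could be substituted directly.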
In Fig. 3, the actual cost values are compared with the values predicted by the proposed model (SA-ANN). The result shows that the proposed model predicts the cost efficiently. Next, we measure various performance metrics and, based on them, compare the proposed model with the existing model [19–21].
• Mean Absolute Percentage Error (MAPE): The mean absolute percentage error is determined using Eq. (4).

$$\text{MAPE} = \frac{100}{N} \sum_{i=1}^{N} \left| \frac{C_a - C_p}{C_a} \right| \qquad (4)$$

In Eq. (4), $C_a$ and $C_p$ denote the actual and predicted cost, whereas $N$ denotes the total number of samples.
Table 2 shows the comparative analysis based on the MAPE parameter. The proposed model achieves a MAPE value of 15.922, compared with the existing ANN model's MAPE value of 21.081. This reflects the fact that the proposed model achieves a lower value than the existing model.
• Normalized Mean Absolute Error (NMAE): This parameter measures the normalized mean absolute error between the actual and predicted cost of the proposed model using Eq. (5).

$$\text{NMAE} = \frac{1}{N} \sum_{i=1}^{N} \left| \frac{C_a - C_p}{C_{Peak}} \right| \qquad (5)$$
Table 3 shows the comparative analysis based on the NMAE parameter. The proposed model achieves an NMAE value of 7.0647e−05, compared with the existing ANN model's NMAE value of 0.5151. This reflects the fact that the proposed model achieves a lower value than the existing model.
• Root Mean Square Error (RMSE): This parameter measures the root mean square error between the actual and predicted cost of the proposed model using Eq. (6); a computational sketch of these three metrics follows this list.

$$\text{RMSE} = \sqrt{\frac{1}{N} \sum_{i=1}^{N} \left( C_a - C_p \right)^2} \qquad (6)$$
Table 4 shows the comparative analysis based on the RMSE parameter. The proposed model achieves an RMSE value of 51132, compared with the existing ANN model's RMSE value of 1.43e+05. This reflects the fact that the proposed model achieves a lower value than the existing model.
• Convergence Rate: The convergence rate defines how quickly the nature-inspired algorithm finds the optimal solution. It is plotted as iterations versus the fitness function. Figure 4 shows the convergence rate of the simulated annealing algorithm in determining the optimal weight values. The result shows that the simulated annealing algorithm reaches the optimal weight values according to the fitness function within the initial iterations (at approximately the 38th iteration in the graph).
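Assuming `actual` and `pred` are numpy arrays and reading $C_{Peak}$ as the peak actual cost (one plausible interpretation of the normalizer), Eqs. (4)–(6) translate into a few lines:

```python
import numpy as np

def mape(actual, pred):   # Eq. (4)
    return 100.0 / len(actual) * np.sum(np.abs((actual - pred) / actual))

def nmae(actual, pred):   # Eq. (5), normalized by the peak actual cost
    return np.sum(np.abs(actual - pred)) / (len(actual) * actual.max())

def rmse(actual, pred):   # Eq. (6)
    return float(np.sqrt(np.mean((actual - pred) ** 2)))

actual = np.array([120.0, 150.0, 90.0])   # illustrative cost values
pred = np.array([110.0, 160.0, 95.0])
print(mape(actual, pred), nmae(actual, pred), rmse(actual, pred))
```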
5.1 Discussion
From the simulation results, we have observed that the ANN algorithm's performance is enhanced when the optimal weight values are assigned to the network instead of random values. Besides that, simulated annealing efficiently determines the optimal weight values based on the objective function and gives the minimum error between the actual and predicted cost values.
6 Conclusion
Cost prediction in the early phase of construction projects reduces overhead costs and increases revenue. Thus, to achieve this goal, in this paper we have hybridized the artificial neural network (ANN) and the simulated annealing (SA) algorithm to design a cost prediction model. In the proposed model, the simulated annealing algorithm determines the optimal weight values of the ANN based on the objective function. We have taken RMSE as the objective function. Further, a standard dataset has been considered, and parameters such as MAPE, NMAE, RMSE, and convergence rate have been measured and compared with the ANN algorithm. The proposed model provides lower values of the MAPE, NMAE, and RMSE parameters and a better convergence rate. In the future, we will explore enhancing the proposed model using the following approaches.
• In the proposed model, a single objective function is considered. In the future, we will work with multi-objective functions to enhance the prediction model.
• In the proposed model, the nature-inspired simulated annealing algorithm is considered. In the future, we will explore recent nature-inspired algorithms that provide better performance and lower complexity than the existing algorithms.
References
1. Sanni-Anibire, M.O., Mohamad Zin, R., Olatunji, S.O.: Developing a preliminary cost esti-
mation model for tall buildings based on machine learning. Int. J. Manag. Sci. Eng. Manag.
16(2), 134–142 (2021)
2. Gunduz, M., Nielsen, Y., Ozdemir, M.: Fuzzy assessment model to estimate the probability
of delay in Turkish construction projects. J. Manag. Eng. 31(4), 04014055 (2015)
3. Ghosh, M., Kabir, G., Hasin, M.A.A.: Project time–cost trade-off: a Bayesian approach to
update project time and cost estimates. Int. J. Manag. Sci. Eng. Manag. 12(3), 206–215 (2017)
4. Sacks, R.: Modern Construction: Lean Project Delivery and Integrated Practices (2013)
5. Ahiaga-Dagbui, D.D., Smith, S.D.: Dealing with construction cost overruns using data mining.
Constr. Manag. Econ. 32(7–8), 682–694 (2014)
6. Alzara, M., Kashiwagi, J., Kashiwagi, D., Al-Tassan, A.: Using PIPS to minimize causes
of delay in Saudi Arabian construction projects: university case study. Procedia Eng. 145,
932–939 (2016)
7. Wang, Y.R., Yu, C.Y., Chan, H.H.: Predicting construction cost and schedule success using
artificial neural networks ensemble and support vector machines classification models. Int. J.
Project Manage. 30(4), 470–478 (2012)
8. Doğan, S.Z., Arditi, D., Murat Günaydin, H.: Using decision trees for determining attribute
weights in a case-based model of early cost prediction. J. Constr. Eng. Manag. 134(2), 146–152
(2008)
9. Meharie, M.G., Shaik, N.: Predicting highway construction costs: comparison of the perfor-
mance of random forest, neural network and support vector machine models. J. Soft Comput.
Civ. Eng. 4(2), 103–112 (2020)
10. Kumar, A., Singla, S., Kumar, A., Bansal, A., Kaur, A.: Efficient prediction of bridge con-
ditions using modified convolutional neural network. Wirel. Pers. Commun. 125(1), 29–43
(2022)
11. Magdum, S.K., Adamuthe, A.C.: Construction cost prediction using neural networks. ICTACT
J. Soft Comput. 8(1) (2017)
12. Chandanshive, V., Kambekar, A.R.: Estimation of building construction cost using artificial
neural networks. J. Soft Comput. Civ. Eng. 3(1), 91–107 (2019)
13. Soni, V., Sharma, A., Singh, V.: A critical review on nature inspired optimization algorithms.
IOP Conf. Ser. Mater. Sci. Eng. 1099(1), 012055 (2021)
14. Feng, G.L., Li, L.: Application of genetic algorithm and neural network in construction cost
estimate. In: Advanced Materials Research, vol. 756, pp. 3194–3198 (2013)
15. Alsarraf, J., Moayedi, H., Rashid, A.S.A., Muazu, M.A., Shahsavar, A.: Application of PSO–
ANN modelling for predicting the exergetic performance of a building integrated photo-
voltaic/thermal system. Eng. Comput. 36(2), 633–646 (2019). https://doi.org/10.1007/s00366-019-00721-4
16. Yuan, Z., Wang, W., Wang, H., Mizzi, S.: Combination of cuckoo search and wavelet neural
network for midterm building energy forecast. Energy 202, 117728 (2020)
17. Fan, M., Sharma, A.: Design and implementation of construction cost prediction model based
on SVM and LSSVM in industries 4.0. Int. J. Intell. Comput. Cybern. (2021)
18. Anand, V., Gupta, S., Koundal, D., Nayak, S.R., Barsocchi, P., Bhoi, A.K.: Modified U-net
architecture for segmentation of skin lesion. Sensors 22(3), 867 (2022)
19. Anand, V., Gupta, S., Koundal, D., Singh, K.: Fusion of U-Net and CNN model for seg-
mentation and classification of skin lesion from dermoscopy images. Expert Syst. Appl. 213,
119230 (2023)
20. Aarts, E., Korst, J., Michiels, W.: Simulated annealing. In: Search Methodologies: Introduc-
tory Tutorials in Optimization and Decision Support Techniques, pp. 187–210 (2005)
21. Chicco, D., Warrens, M.J., Jurman, G.: The coefficient of determination R-squared is more
informative than SMAPE, MAE, MAPE, MSE and RMSE in regression analysis evaluation.
PeerJ Comput. Sci. 7, e623 (2021)
Optimum Design of Reinforced Concrete
Footings Using Jaya Algorithm
Abstract. Reinforced concrete (RC) footings, which are most suitable where there is no risk of non-uniform settlement of the ground, have many factors to be considered in the design phase. In RC footing design, considering the soil and the structural properties, it is necessary to produce a design in which safety, cost and resources are handled optimally. In this study, the cost optimization of an RC footing is performed with a metaheuristic, the Jaya algorithm. An objective function was obtained
by synthesizing the formulas used in the RC footing calculations. It is seen that the
Jaya algorithm is successful in the RC footing design and gives consistent results.
1 Introduction
Creating an efficient, safe and cost-effective design without compromising the integrity
of the system is a challenge for engineers [1]. Considered on the basis of civil engineering,
once the safety criterion has been met, it is important to carry out an optimum design that
ensures the efficient use of cost, time and resources. The optimum design of reinforced
concrete (RC) footings is discussed within the scope of this study. Reinforced concrete is the most widely used building material in the world [2], and its carbon emission is higher than that of steel and wood [3]. Therefore, the optimum use of resources, cost and safety is investigated here, given the need for a design model that considers them together.
At the design stage, design loads, geometric constraints, soil properties and cost should be considered together. After the additional effects that may occur in the structure are minimized by meeting the required soil strength and thus preventing differential settlements, the internal section forces of the foundation element should be determined and the reinforced concrete section and reinforcement arrangement should be designed [4]. When different parameters such as bending moment, shear force, normal force, soil properties, required geometry, required reinforcement ratio and cost are considered for the optimum design of RC footings, iterative solutions are required, and this can cause a loss of time [5]. With metaheuristic optimization algorithms, the time lost in searching for an optimum engineering design can be avoided by pre-sizing the problem before starting the design and by gradually repeating the process.
In the optimum design of RC structural members, metaheuristic algorithms play an important role in finding the best design, which is not possible via purely mathematical methods because a decision-making process on element dimensions is involved. Also, these element dimensions need to be checked against several design constraints for safety. To make these safety checks automatically and find the best optimum solu-
tion with several iterative trials, metaheuristic algorithms have been used in the optimum
design of RC members such as beams [6–11], columns [12–18], retaining walls [19–23]
and frames [24–29].
In the present study, RC footings under bending moment and axial force were opti-
mized via the Jaya algorithm. The numerical investigation was done for several cases of
loading conditions.
2 Jaya Algorithm
This algorithm, which aims to move away from the worst solution in order to reach the
best solution, is a new method developed by Rao in 2016 [30]. The algorithm evaluates
fewer functions than other types of metaheuristic algorithms and requires relatively less
processing in the solution phase [31]. The formula of the Jaya algorithm, in which the
optimum solution is reached by moving away from the worst solution with each iteration,
is as follows:
$$X_{i,new} = X_{i,j} + rand()\left(X_{i,g(best)} - |X_{i,j}|\right) - rand()\left(X_{i,g(worst)} - |X_{i,j}|\right) \qquad (1)$$
The expression $X_{i,new}$ denotes the new value of the project parameter to be optimized, while $rand()$ denotes a random number between 0 and 1. The expression $X_{i,g(best)}$ represents the best value in the solution matrix, while $X_{i,g(worst)}$ is the most undesirable result in the solution matrix. Therefore, in the current solution matrix, steps are taken toward the optimum value by using the best result, the worst result and the value itself.
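The update rule in Eq. 1 translates almost line for line into code. Below is a minimal sketch on a toy objective, using the run settings reported in the Discussion (population 15, 500 iterations); everything else is illustrative.

```python
import numpy as np

def jaya(cost, lb, ub, pop=15, n_iter=500, seed=0):
    """Minimal Jaya: move toward the best and away from the worst solution (Eq. 1)."""
    rng = np.random.default_rng(seed)
    X = rng.uniform(lb, ub, (pop, len(lb)))
    f = np.apply_along_axis(cost, 1, X)
    for _ in range(n_iter):
        best, worst = X[f.argmin()], X[f.argmax()]
        r1, r2 = rng.random(X.shape), rng.random(X.shape)
        cand = np.clip(X + r1 * (best - np.abs(X)) - r2 * (worst - np.abs(X)), lb, ub)
        fc = np.apply_along_axis(cost, 1, cand)
        better = fc < f                        # greedy replacement of improved members
        X[better], f[better] = cand[better], fc[better]
    return X[f.argmin()], float(f.min())

# toy usage: minimize a sphere function
x_best, f_best = jaya(lambda x: float(np.sum(x**2)),
                      lb=np.array([-10.0, -10.0]), ub=np.array([10.0, 10.0]))
print(x_best, f_best)
```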
3 Design of RC Footings
Reinforced concrete elements that are designed to spread the loads coming from the superstructure through the columns over wider bases in the ground are called RC footings [32]. In this study, RC footings carrying a uniaxial moment and a normal force are taken into account (Fig. 1).
The following design constraints used with the Jaya algorithm have been obtained in relation to the soil-bearing capacity control conditions.
$$g_1 = N_d \cdot B_x + 6M_d - \left(q_t - \gamma_{Rv} \cdot \gamma_{avg} \cdot h_t\right) \cdot B_x^2 \cdot B_y \leq 0 \qquad (2)$$

$$g_2 = N_d \cdot B_x - 6M_d - \left(q_t - \gamma_{Rv} \cdot \gamma_{avg} \cdot h_t\right) \cdot B_x^2 \cdot B_y \leq 0 \qquad (3)$$

$$g_3 = 6M_d - N_d \cdot B_x \leq 0 \qquad (4)$$
In these formulas, $N_d$ denotes the normal force to be considered in the design and $M_d$ denotes the design moment. $B_x$ is the length of the foundation in the x direction, $q_t$ is the soil bearing capacity, $h_t$ is the foundation height, $\gamma_{Rv}$ is the foundation bearing strength reduction coefficient, $\gamma_{avg}$ is the average unit weight of the foundation and the soil above it, and $B_y$ is the length of the foundation in the y direction.
The design constraint derived from the shear control is given in Eq. 5. Here $q_{of}$ refers to the base pressure at the column face and $q_{max}$ refers to the maximum value of the base pressure; $h$ is the width of the column, $B_x$ is the length of the foundation in the x direction, $B_y$ is the length of the foundation in the y direction, $d$ is the effective depth of the foundation, and $f_{ctd}$ is the design tensile strength.
$$g_4 = \frac{q_{max} + q_{of}}{2} \cdot \frac{B_x - h}{2} \cdot B_y - 0.65 \cdot f_{ctd} \cdot B_y \cdot d < 0 \qquad (5)$$
The final design constraint is established due to the punching verification and is
expressed by Eq. 6.
$$g_5 = N_d - A_p \cdot q_{00} - \gamma \cdot f_{ctd} \cdot U_p \cdot d \cdot 1000 \leq 0 \qquad (6)$$
$$C_F = C_C + C_s \qquad (7)$$
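In a metaheuristic run, constraints such as $g_1$–$g_5$ are typically folded into the objective through a penalty term. The generic sketch below illustrates one common scheme under stated assumptions; the penalty weight and the constraint callables are placeholders, not the paper's exact formulation.

```python
def penalized_cost(design, material_cost, constraints, penalty=1e6):
    """Objective: CF from Eq. 7 plus a large penalty for every violated g_i <= 0."""
    cf = material_cost(design)  # concrete cost + steel cost
    violation = sum(max(0.0, g(design)) for g in constraints)
    return cf + penalty * violation

# usage sketch: the constraints list would hold callables implementing g1..g5,
# e.g. total = penalized_cost(x, material_cost=cf_func, constraints=[g1, g2, g3, g4, g5])
```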
For several normal force values, the optimization results are given in Table 3.
In Table 4, the optimum design results of an RC footing bearing a 150 kNm moment and a 750 kN normal force under different soil bearing capacities are included.
Table 5 gives the optimum results of the RC footing designed with 9 different concrete classes from C16 to C50, showing the effects of the concrete grade on the optimum design. The design optimization was made under a 250 kNm moment and a 1500 kN normal force, and the column size on the RC footing is 55 × 55 cm. The soil bearing capacity is chosen as 240 kN/m².
Table 3. Normal force – optimum values relation

 1    100   1500   1.26   240   0.55   0.55   3.00   2.46   0.37   3965
 2    200   1500   1.26   240   0.55   0.55   3.22   2.53   0.38   4514
 3    300   1500   1.26   240   0.55   0.55   3.42   2.58   0.39   5034
 4    400   1500   1.26   240   0.55   0.55   3.61   2.62   0.40   5532
 5    500   1500   1.26   240   0.55   0.55   3.78   2.66   0.41   6015
 6    600   1500   1.26   240   0.55   0.55   3.94   2.68   0.42   6485
 7    700   1500   1.26   240   0.55   0.55   4.09   2.71   0.43   6945
 8    800   1500   1.26   240   0.55   0.55   4.24   2.73   0.44   7397
 9    900   1500   1.26   240   0.55   0.55   4.38   2.74   0.45   7813
10   1000   1500   1.26   240   0.55   0.55   4.52   2.76   0.46   8279
11   1100   1500   1.26   240   0.55   0.55   4.65   2.77   0.47   8711
12   1200   1500   1.26   240   0.55   0.55   4.80   2.76   0.48   9178
4 Discussion
In the optimum design of an RC footing under the influence of uniaxial external forces,
the Jaya algorithm yielded consistent results with 500 iterations and a population size of 15.
As expected, since a case with a moment about one axis (the x-axis) is considered within the scope of the research, the increase in moment increases the foundation dimension along the x-axis more than the other foundation dimensions. The increase in soil-bearing capacity reduces the dimensions of the foundation and reduces the cost. Because the single footing considered in this study carries an x-directed moment, an increase in soil strength allows the length of the footing in the x direction, and hence the foundation cost, to be reduced. Therefore, using the Jaya algorithm to design RC footings gives consistent results.
5 Conclusion
As expected, the increase in moment increases the optimum dimensions of the single foundation and the optimum cost, and an increase in the normal force has the same effect. When the tables showing the effects of the moment and normal force increases on the cost are examined, for the same rate of increase in external load, the rate of cost increase was higher for the normal force than for the moment. This is another confirmation that the optimization gives a valid result consistent with the design constraints, which are written on the condition that the soil-bearing capacity is not exceeded.
The effect of the moment increase on the individual foundation dimensions is less than the effect of the normal force increase, because in the maximum stress formula the normal force is divided by the product of the foundation lengths in the two directions, whereas the moment is divided by the square of the x-direction length times the y-direction length. Therefore, the expected results have been achieved.
In short, the increases in normal force and moment increased the optimum foundation dimensions. On the rate of increase of the foundation height, the normal force is more effective than the moment. Since there is a moment in one direction only, as the moment increases, the length of the foundation in the x direction increases, while the length in the y direction increases relatively less. As the soil-bearing capacity and the concrete class increase, the foundation sections get smaller and the optimum cost decreases. When different scenarios with the same external loads, soil properties and concrete class were investigated, it was determined that an increase in column size reduced both the cost and the size of the foundations. As a result, it is seen that the Jaya algorithm is successful in the optimum design of single footings and gives consistent results.
References
1. Arora, J.: Introduction to Optimum Design. Elsevier (2004)
2. European Ready Mixed Concrete Organization (ERMCO). ERMCO Statistics 2015. Available
online: [Link]/document/ermco-statistics-2015-final-pdf/. Accessed on 24 Apr 2022
3. Maas, G.P.: Comparison of quay wall designs in concrete, steel, wood and composites with
regard to the CO2 -emission and the life cycle analysis (2011)
4. Celep, Z.: Betonarme Yapılar. Beta Basım Yayım Dağıtım, İstanbul (2017)
5. Nigdeli, S.M., Bekdaş, G., Yang, X.-S.: Metaheuristic optimization of reinforced concrete
footings. KSCE J. Civ. Eng. 22(11), 4555–4563 (2018). https://doi.org/10.1007/s12205-017-2010-6
6. Nigdeli, S.M., Bekdaş, G.: Optimum design of RC continuous beams considering
unfavourable live-load distributions. KSCE J. Civ. Eng. 21(4), 1410–1416 (2017). https://doi.org/10.1007/s12205-016-2045-5
7. Cakiroglu, C., Islam, K., Bekdas, G., Apak, S.: Cost and CO2 emission-based optimisation
of reinforced concrete deep beams using Jaya algorithm. J. Environ. Prot. Ecol. 23(6), 1–10
(2022)
8. Kayabekir, A.E., Bekdaş, G., Nigdeli, S.M.: Evaluation of metaheuristic algorithm on opti-
mum design of T-beams. In: Proceedings of 6th International Conference on Harmony Search,
Soft Computing and Applications: ICHSA 2020, Istanbul, pp. 155–169. Springer, Singapore
(2021)
9. Kayabekir, A.E., Bekdaş, G., Nigdeli, S.M.: Optimum design of reinforced concrete T-beam
considering environmental factors via flower pollination algorithm. Int. J. Eng. Appl. Sci.
13(4), 166–178 (2021)
10. Yücel, M., Nigdeli, S.M., Bekdaş, G.: Minimization of the CO2 emission for optimum design
of T-shape reinforced concrete (RC) beam. In: Proceedings of 7th International Conference
on Harmony Search, Soft Computing and Applications: ICHSA 2022, pp. 127–138. Springer
Nature, Singapore (2022)
11. Çoşut, M., Bekdaş, G., Niğdeli, S.M.: Cost optimization and comparison of rectangular cross-
section reinforced concrete beams using TS500, Eurocode 2, and ACI 318 code. In: Proceed-
ings of 7th International Conference on Harmony Search, Soft Computing and Applications:
ICHSA 2022, pp. 83–91. Springer Nature, Singapore (2022)
12. Bekdaş, G., Cakiroglu, C., Kim, S., Geem, Z.W.: Optimization and predictive modeling of
reinforced concrete circular columns. Materials 15(19), 6624 (2022)
13. Nigdeli, S.M., Yücel, M., Bekdaş, G.: A hybrid artificial intelligence model for design of
reinforced concrete columns. Neural Comput. Appl. 1–9 (2022)
14. Kayabekir, A., Bekdaş, G., Nigdeli, S., Apak, S.: Cost and environmental friendly multi-
objective optimum design of reinforced concrete columns. J. Environ. Prot. Ecol. 23(2), 1–10
(2022)
15. Kayabekir, A.E., Nigdeli, S.M., Bekdaş, G.: Adaptive harmony search for cost optimization of
reinforced concrete columns. In: Intelligent Computing & Optimization: Proceedings of the
4th International Conference on Intelligent Computing and Optimization 2021 (ICO2021),
vol. 3, pp. 35–44. Springer International Publishing, Berlin (2022)
16. Cakiroglu, C., Islam, K., Bekdaş, G., Kim, S., Geem, Z.W.: Interpretable machine learning
algorithms to predict the axial capacity of FRP-reinforced concrete columns. Materials 15(8),
2742 (2022)
17. Bekdaş, G., Nigdeli, S.M.: Optimum design of reinforced concrete columns employing
teaching learning based optimization. Challenge J. Struct. Mech. 2(4), 216–219 (2016)
18. Nigdeli, S.M., Bekdas, G., Kim, S., Geem, Z.W.: A novel harmony search based optimization
of reinforced concrete biaxially loaded columns. Struct. Eng. Mech. Int. J. 54(6), 1097–1109
(2015)
19. Bekdaş, G., Cakiroglu, C., Kim, S., Geem, Z.W.: Optimal dimensioning of retaining walls
using explainable ensemble learning algorithms. Materials 15(14), 4993 (2022)
20. Yücel, M., Bekdaş, G., Nigdeli, S.M., Kayabekir, A.E.: An artificial intelligence-based pre-
diction model for optimum design variables of reinforced concrete retaining walls. Int. J.
Geomech. 21(12), 04021244 (2021)
21. Yücel, M., Kayabekir, A.E., Bekdaş, G., Nigdeli, S.M., Kim, S., Geem, Z.W.: Adaptive-hybrid
harmony search algorithm for multi-constrained optimum eco-design of reinforced concrete
retaining walls. Sustainability 13(4), 1639 (2021)
22. Kayabekir, A.E., Yücel, M., Bekdaş, G., Nigdeli, S.M.: Comparative study of optimum cost
design of reinforced concrete retaining wall via metaheuristics. Chall. J. Concr. Res. Lett 11,
75–81 (2020)
23. Kayabekir, A.E., Arama, Z.A., Bekdas, G.: Effect of application factors on optimum design
of reinforced concrete retaining systems. Struct. Eng. Mech. 80(2), 113–127 (2021)
24. Bekdaş, G., Nigdeli, S.M., Kim, S., Geem, Z.W.: Modified harmony search algorithm-based
optimization for eco-friendly reinforced concrete frames. Sustainability 14(6), 3361 (2022)
25. Rakıcı, E., Bekdaş, G., Nigdeli, S.M.: Optimal cost design of single-story reinforced concrete
frames using Jaya algorithm. In: Proceedings of 6th International Conference on Harmony
Search, Soft Computing and Applications: ICHSA 2020, Istanbul, pp. 179–186. Springer,
Singapore (2021)
26. Ulusoy, S., Kayabekir, A.E., Bekdaş, G., Nigdeli, S.M.: Optimum design of reinforced con-
crete multi-story multi-span frame structures under static loads. Int. J. Eng. Technol. 10(5),
403–407 (2018)
27. Bekdaş, G., Nigdeli, S.M.: Modified harmony search for optimization of reinforced concrete
frames. In: Harmony Search Algorithm: Proceedings of the 3rd International Conference on
Harmony Search Algorithm (ICHSA 2017), vol. 3, pp. 213–221. Springer, Singapore (2017)
28. Bekdaş, G., Nigdeli, S.M.: Optimization of RC frame structures subjected to static loading.
In: 11th World Congress on Computational Mechanics, pp. 20–25 (2014)
29. Kayabekir, A.E., Bekdaş, G., Nigdeli, S.M.: Control of reinforced concrete frame struc-
tures via active tuned mass dampers. In: Proceedings of 7th International Conference on
Harmony Search, Soft Computing and Applications: ICHSA 2022, pp. 271–277. Springer
Nature, Singapore (2022)
30. Rao, R.V.: Jaya: a simple and new optimization algorithm for solving constrained and
unconstrained optimization problems. Int. J. Ind. Eng. Comput. 7, 19–34 (2016)
31. Bekdaş, G., Nigdeli, S.M., Yücel, M., Kayabekir, A.E.: Yapay Zeka Optimizasyon Algorit-
maları ve Mühendislik Uygulamaları. Seçkin Yayıncılık, Ankara (2021)
32. Doğangün, A.: Betonarme Yapıların Hesap ve Tasarımı. Birsen Yayın Dağıtım, İstanbul
(2021)
AI Models for Spot Electricity Price
Forecasting—A Review
1 Introduction
dependability are vital for the functioning of modern economies. Finally, the regulation
of electricity markets is necessary to guarantee fair prices and consumer protection, and
it can vary depending on the country or region, which makes it a regulated commodity.
These aspects make electricity a unique and exceptional commodity that needs special
attention concerning pricing, trading, and regulation [1, 2, 5].
Power exchanges and spot electricity markets play a pivotal role in the electric-
ity industry by facilitating the bilateral trading of electrical power between multiple
stakeholders. Power exchanges serve as sophisticated electronic trading platforms that
effectively match the supply and demand of electricity among electricity producers,
suppliers, and consumers. These platforms operate seamlessly and offer a transparent,
open, and highly competitive marketplace that ensures equitable pricing of electricity.
Spot electricity markets are a specialized subset of power exchanges where electricity
is traded for immediate delivery, typically on a day-ahead or hour-ahead basis. These
markets empower electricity producers and suppliers to adjust their output based on
real-time demand and supply conditions, thus optimizing grid balance and ensuring a
reliable and stable supply of electricity. By providing an effective platform for electric-
ity trading, power exchanges and spot markets foster a highly competitive environment,
thereby promoting transparency and reducing the market power of dominant players.
Additionally, they provide renewable energy sources with an opportunity to participate
in the energy market, thus encouraging investment in new generation capacity. Holis-
tically power exchanges and spot markets are instrumental in creating an efficient and
dependable electricity market, thereby benefiting consumers, producers, and suppliers
alike [3, 4, 6].
The trading of electricity is facilitated by various power exchanges globally,
including those that offer spot electricity markets. Some notable examples of major
power exchanges that provide such markets are: (a) The European Power Exchange
(EPEX) operates across multiple European countries and ranks among the largest power
exchanges worldwide. It operates both day-ahead and intra-day markets, catering to
power product trades in several currencies. (b) Nord Pool, the world’s biggest power
exchange, operates across several Northern European countries, and is known for its
trading activities in the Nordic and Baltic regions. It provides both day-ahead and intra-
day markets, offering products specifically for the Nordic, Baltic, and German markets.
(c) The Australian Energy Market Operator (AEMO) runs a spot market in Australia,
catering to the eastern and southern states. The exchange operates on a 24-h basis, pro-
viding both day-ahead and intra-day markets. (d) The New York Mercantile Exchange
(NYMEX) manages a spot market for electricity, natural gas, and other energy products
in the United States. It offers multiple trading contracts and operates on a 24-h basis. (e)
The Indian Energy Exchange (IEX) is a leading power exchange in India and operates
a day-ahead market specifically for electricity trading. The exchange offers multiple
contracts for trading and operates on a 24-h basis [7, 8, 9].
Artificial Intelligence (AI) is increasingly playing a crucial role in the spot electricity
price forecasting process, owing to its capability to process massive amounts of data and
generate insights that facilitate more precise price forecasting. In this study we present and review state-of-the-art AI-inspired models in the spot electricity price forecasting literature. The insights provided by the review will be useful for all
stakeholders, policy makers, power industry participants and researchers. The remainder
of the paper is structured as follows. In Sect. 2 we review spot electricity price forecasting
literature with emphasis on AI inspired models. In Sect. 3 we focus on stylized facts of
spot electricity prices, how AI models can contribute in optimizing spot electricity price
forecasting and the role of regulators. In Sect. 4 we summarize and conclude our study
highlighting the way forward.
2 Literature Review
Short-term electricity price contracts are typically characterized as agreements for the
exchange of electricity for a duration ranging from one day up to one year. These con-
tracts are utilized to regulate the prompt requirements of electricity or exploit transient
price oscillations within the spot market. Compared to their longer-term counterparts,
short-term contracts offer greater flexibility, providing parties with the ability to more
frequently adjust their electricity procurement or selling strategies. Medium-term elec-
tricity price contracts, which span between one to three years, are employed to manage
the supply and demand of electricity over an extended period, while also offering a
greater level of price stability in comparison to short-term contracts. They may be uti-
lized to mitigate against price volatility or to secure a consistent supply of electricity
for a specified duration. Long-term electricity price contracts, on the other hand, typi-
cally refer to agreements that involve the purchase or sale of electricity for a period of
three years or more. These contracts are frequently employed in large-scale electricity
projects, such as the development of new power plants or renewable energy facilities.
Long-term contracts offer greater certainty over future electricity prices and supply, thus
enabling effective long-term planning and investment [1, 4, 6, 9].
AI is utilized in several ways to enhance spot electricity price forecasting: (a) Data
processing: AI can analyze voluminous data from various sources, such as historical
price data, weather forecasts, and electricity market data, to recognize patterns and rela-
tionships that can inform price forecasting. (b) Machine learning algorithms: Historical
price data can train machine learning algorithms to predict future prices, with contin-
uous improvements in accuracy over time as new data becomes available. (c) Neural
networks: By modeling intricate connections between different variables that can affect
electricity prices, neural networks can learn to predict patterns that humans may find
too complex to detect. Large datasets can train these networks. (d) Predictive analyt-
ics: Predictive analytics techniques can identify trends and patterns in electricity market
data, facilitating price forecasting. These techniques can also be used to forecast future
trends and identify drivers of price changes, such as variations in supply or demand.
Integrating AI in spot electricity price forecasting augments the accuracy of forecasts
and facilitates more informed decision-making by market participants. This, in turn,
ensures the efficiency of the electricity market, with prices that reflect the actual value
of electricity at any given time [2, 5, 8, 9].
There exists a multitude of AI models for spot electricity price forecasting, such
as Artificial Neural Networks, Support Vector Machines, Random Forests, Long Short-
Term Memory Networks and Convolutional Neural Networks. ANNs simulate the bio-
logical neural network structure to predict the intricate relationships between diverse
variables that impact electricity prices. SVMs, a supervised learning algorithm, learn
from historical price data to make accurate future price predictions, which can be refined
over time. Random Forests, an ensemble learning method, generate a plethora of decision
trees that predict the mode or mean prediction of the individual trees for classification
or regression analysis, respectively. LSTMs, a type of recurrent neural network, are
well-suited for analyzing time series data and can identify patterns in historical data to
forecast spot electricity prices. CNNs, typically used for image recognition, can be uti-
lized in time series forecasting, such as spot electricity price forecasting, by interpreting
historical price data as an image and recognizing patterns in the data. Overall, these AI
models utilize advanced techniques to analyze vast datasets and improve the accuracy
of spot electricity price forecasting [6, 8, 9].
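To make this modelling pipeline concrete, the following minimal Python sketch (our own illustration, not taken from the reviewed studies; the synthetic data and all parameter choices are assumptions) trains a Random Forest, one of the model families discussed above, on lagged hourly prices and evaluates it on a held-out chronological window:

```python
import numpy as np
from sklearn.ensemble import RandomForestRegressor

rng = np.random.default_rng(0)

# Synthetic hourly spot prices: a daily cycle plus noise (illustrative only).
hours = np.arange(24 * 365)
prices = 50 + 10 * np.sin(2 * np.pi * hours / 24) + rng.normal(0, 3, hours.size)

# Supervised dataset: predict the next hour from the previous 24 lagged prices.
lags = 24
X = np.stack([prices[i:i + lags] for i in range(prices.size - lags)])
y = prices[lags:]

# Chronological split: train on the first eleven months, test on the last.
split = -24 * 30
model = RandomForestRegressor(n_estimators=200, random_state=0)
model.fit(X[:split], y[:split])

mae = np.mean(np.abs(model.predict(X[split:]) - y[split:]))
print(f"Test MAE: {mae:.2f}")
```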
Artificial neural network models can be categorized based on their architecture and
learning algorithm. The architecture, also known as topology, refers to the neural con-
nections, while the learning algorithm describes how the ANN adjusts its weights for
each training vector. In the context of electricity price forecasting, ANN models can also
be classified based on the number of output nodes they have. The first group includes
ANN models with a single output node used to forecast various electricity prices, such
as the next hour’s price, the price h hours ahead, the next day’s peak price, the next
day’s average on-peak price, or the next day’s average baseload price. Several studies
have been conducted using these models, such as [10–18]. The second, less common
group includes ANN models with multiple output nodes that forecast a vector of prices,
typically 24 or 48 nodes, to predict the complete price profile of the following day [19].
Feed-forward networks are commonly favored for forecasting, while recurrent networks
are particularly skilled in pattern classification and categorization, as highlighted by
studies conducted by [20, 21]. The Levenberg-Marquardt algorithm is the second most
commonly used training method, as demonstrated by its application in electricity price
forecasting studies conducted by [22, 23]. [24] posits that this algorithm can train a net-
work 10–100 times more quickly than back-propagation. The Multi-Layer Perceptron
architecture has been employed in various studies such as those conducted by [25, 26].
On the other hand, the Radial Basis Function architecture, which is less popular, has
been utilized in studies such as those conducted by [27].
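A minimal sketch of the second, multi-output group of models (a single network with 24 output nodes forecasting the complete next-day price profile) follows. The data and hyperparameters are illustrative assumptions, and scikit-learn's MLPRegressor trains with Adam rather than the Levenberg-Marquardt algorithm, which that library does not provide:

```python
import numpy as np
from sklearn.neural_network import MLPRegressor

rng = np.random.default_rng(1)

# Illustrative day-ahead data: each sample maps yesterday's 24 hourly prices
# to today's 24 hourly prices.
days = 365
daily = 50 + 10 * np.sin(2 * np.pi * np.arange(24) / 24)
prices = daily + rng.normal(0, 3, size=(days, 24))

X, Y = prices[:-1], prices[1:]          # predictors: day d; targets: day d + 1

# One hidden layer and 24 output nodes -> the complete next-day price profile.
net = MLPRegressor(hidden_layer_sizes=(64,), max_iter=2000, random_state=0)
net.fit(X[:300], Y[:300])

profile = net.predict(X[300:301])       # forecast one full 24-hour profile
print(profile.round(1))
```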
A further challenge for forecasting is the possibility of price spikes, which are
sudden extreme changes in electricity prices caused by unexpected changes in demand
or supply conditions such as plant outages, weather events, or transmission line failures,
which can have significant economic impacts on electricity consumers and producers
[1–9].
Regulators are instrumental in guaranteeing efficient, equitable, and transparent oper-
ation of spot electricity markets. They perform various crucial roles in these markets
such as market design and structure, oversight and enforcement, market monitoring and
analysis, and consumer protection. Regulators collaborate in designing and structur-
ing spot electricity markets, creating rules and procedures for trading, and formulating
pricing mechanisms that reflect the genuine value of electricity to facilitate equitable
and efficient operation of the markets. They also monitor and enforce the rules and
procedures of these markets to forestall potential abuses like insider trading or mar-
ket manipulation. Through market monitoring and analysis, regulators identify market
inefficiencies, instabilities, or trends, which inform their decision-making. Regulators
ensure consumer protection by verifying that market participants operate transparently,
ethically, and fairly, and that consumers are shielded from anti-competitive or fraudu-
lent behavior. By promoting competition, transparency, and consumer protection in spot
electricity markets, regulators ensure their efficient and fair operation for all stakeholders
involved [1–9].
Artificial intelligence inspired models hold the potential to play a consequential role
in the future of spot electricity price forecasting. These models can scrutinize large vol-
umes of data, discovering patterns and trends that might be elusive to human analysts.
Some of the ways in which AI models can enhance spot electricity price forecasting
include: (a) Increased precision: AI models can scrutinize extensive data and detect pat-
terns and trends that may not be perceivable to human analysts. This can lead to more
precise forecasts and more informed decision-making. (b) Rapid forecasting: AI models
can analyze data expeditiously and make predictions in real-time, which is essential in a
swiftly changing market such as spot electricity prices. (c) Improved risk management:
AI models can assist in identifying and managing risk in the electricity market. For
instance, AI models can be used to anticipate extreme price spikes or recognize pos-
sible supply disruptions, enabling electricity companies to take measures to minimize
their vulnerability to risk. (d) Streamlined efficiency: AI models can automate many of the activities associated with spot electricity price forecasting, freeing analysts to concentrate on more strategic, value-added work.
AI models for electricity price forecasting are relevant to stakeholders, policy makers,
power industry participants, and researchers in several ways. Firstly, accurate electric-
ity price forecasting is crucial for stakeholders, such as electricity producers, suppliers,
and consumers, as it helps them make informed decisions about production, consump-
tion, and investment. Accurate forecasting can help them optimize their operations,
manage risk, and reduce costs. Secondly, policymakers can use accurate forecasting to
develop effective energy policies that promote sustainability, efficiency, and affordabil-
ity. Thirdly, power industry participants can use accurate forecasting to improve the
efficiency of electricity trading, reduce price volatility, and ensure a reliable and stable
supply of electricity. Fourthly, researchers can use AI models to advance the knowl-
edge and understanding of electricity markets, identify patterns and trends in electricity
prices, and develop new forecasting methodologies.
References
1. Girish, G.P., Badri, N.R., Vaseem, A.: Spot electricity price discovery in Indian electricity
market. Renew. Sust. Energ. Rev. 82, 73–79 (2018)
2. Weron, R., Misiorek, A.: Forecasting spot electricity prices: a comparison of parametric and
semiparametric time series models. Int. J. Forecast. 24, 744–763 (2008)
3. Girish, G.P.: Spot electricity price forecasting in Indian electricity market using
autoregressive-GARCH models. Energy Strategy Rev. 11–12, 52–57 (2016)
4. Aggarwal, S.K., Saini, L.M., Kumar, A.: Electricity price forecasting in deregulated markets:
a review and evaluation. Electr. Power Energy Syst. 31, 13–22 (2009)
5. Amjady, N., Daraeepour, A.: Design of input vector for day-ahead price forecasting of
electricity markets. Expert Syst. Appl. 36, 12281–12294 (2009)
6. Bowden, N., Payne, J.E.: Short term forecasting of electricity prices for MISO hubs: evidence
from ARIMA-EGARCH models. Energy Econ. 30, 3186–3197 (2008)
7. Girish, G.P., Vijayalakshmi, S.: Spot electricity price dynamics of Indian electricity market.
In: Lecture Notes in Electrical Engineering, vol. 279, pp. 1129–1135 (2014)
8. Girish, G.P., Vijayalakshmi, S., Panda, A.K., Rath, B.N.: Forecasting electricity prices in
deregulated wholesale spot electricity market—a review. Int. J. Energy Econ. Policy. 4, 32–42
(2014)
9. Weron, R.: Electricity price forecasting: a review of the state-of-the-art with a look into the
future. Int. J. Forecast. 30, 1030–1081 (2014)
10. Gonzalez, V., Contreras, J., Bunn, D.W.: Forecasting power prices using a hybrid fundamental-
econometric model. IEEE Trans. Power Syst. 27, 363–372 (2012)
11. Mandal, P., Senjyu, T., Funabashi, T.: Neural networks approach to forecast several hour ahead
electricity prices and loads in deregulated market. Energy Convers. Manag. 47, 2128–2142
(2006)
12. Amjady, N.: Day-ahead price forecasting of electricity markets by a new fuzzy neural network.
IEEE Trans. Power Syst. 21, 887–896 (2006)
13. Hu, Z., Yang, L., Wang, Z., Gan, D., Sun, W., Wang, K.: A game-theoretic model for electricity
markets with tight capacity constraints. Int. J. Electr. Power Energy Syst. 30, 207–215 (2008)
14. Rodriguez, C.P., Anders, G.J.: Energy price forecasting in the Ontario competitive power
system market. IEEE Trans. Power Syst. 19, 366–374 (2004)
15. Areekul, P., Senju, T., Toyama, H., Chakraborty, S., Yona, A., Urasaki, N.: A new method for
next-day price forecasting for PJM electricity market. Int. J. Emerg. Electr. Power Syst. 11
(2010)
16. Guo, J.-J., Luh, P.B.: Improving market clearing price prediction by using a committee
machine of neural networks. IEEE Trans. Power Syst. 19, 1867–1876 (2004)
17. Zhang, G., Patuwo, B.E., Hu, M.Y.: Forecasting with artificial neural networks: the state of
the art. Int. J. Forecast. 14, 35–62 (1998)
18. Pao, H.-T.: A neural network approach to m-daily-ahead electricity price prediction. In:
Lecture Notes in Computer Science, vol. 3972, pp. 1284–1289 (2006)
19. Yamin, H.Y., Shahidehpour, S.M., Li, Z.: Adaptive short-term electricity price forecasting
using artificial neural networks in the restructured power markets. Int. J. Electr. Power Energy
Syst. 26, 571–581 (2004)
20. Jain, A.K., Mao, J., Mohiuddin, K.M.: Artificial neural networks: a tutorial. Computer 29,
31–44 (1996)
21. Rutkowski, L.: Computational Intelligence: Methods and Techniques. Springer (2008)
22. Catalão, J.P.S., Pousinho, H.M.I., Mendes, V.M.F.: Hybrid wavelet-PSO-ANFIS approach for
short-term electricity prices forecasting. IEEE Trans. Power Syst. 26, 137–144 (2011)
23. Pindoriya, N.M., Singh, S.N., Singh, S.K.: An adaptive wavelet neural network-based energy
price forecasting in electricity markets. IEEE Trans. Power Syst. 23, 1423–1432 (2008)
24. Amjady, N.: Short-term electricity price forecasting. In: Catalão J.P.S. (ed.) Electric Power
Systems: Advanced Forecasting Techniques and Optimal Generation Scheduling. CRC Press
(2012)
25. Dong, Y., Wang, J., Jiang, H., Wu, J.: Short-term electricity price forecast based on the
improved hybrid model. Energy Conv. Manag. 52, 2987–2995 (2011)
26. Garcia-Martos, C., Rodriguez, J., Sanchez, M.J.: Forecasting electricity prices by extracting
dynamic common factors: application to the Iberian market. IET Gen. Transm. 6, 11–20
(2012)
27. Lin, W.-M., Gow, H.-J., Tsai, M.-T.: An enhanced radial basis function network for short-term
electricity price forecasting. Appl. Energy 87, 3226–3234 (2010)
Comparison of Various Weight Allocation
Methods for the Optimization of EDM Process
Parameters Using TOPSIS
Sunil Mintri, Gaurav Sapkota, Nameer Khan, Soham Das, Ishwer Shivakoti,
and Ranjan Kumar Ghadai(B)
Sikkim Manipal Institute of Technology, Sikkim Manipal University, Manipal, Sikkim, India
ranjankumarbls@[Link]
1 Introduction
Electro Discharge Machining (EDM) is a non-traditional material removal process that
is widely used to machine good-conducting materials, regardless of their hardness [1]. It
finds extensive applications in various industries such as nuclear energy, aircraft, mold-
ing, surgical instruments, sports, jewelry, automobile, and improvement areas [2]. This
approach can machine any electrically conductive material, irrespective of its mechanical characteristics [3]. EDM is preferred over other non-traditional
methods as it is a contactless machining method based primarily on the erosive effect
of electrical discharge. Due to its electro-thermal nature, a wide range of conductive
materials can be machined regardless of their hardness and toughness. Its contactless
nature has resulted in a satisfactory level of accuracy and surface texture [4].
In the literature, various statistical and non-statistical decision-making techniques
have been proposed to model complicated engineering processes. Multi-Criteria Deci-
sion Making (MCDM) strategies are one of the techniques that have been gaining incred-
ible popularity and wide application in recent years [5]. MCDM is a method for aiding decision-makers in selecting the best alternative when several conflicting criteria must be considered simultaneously.
The present study aims to optimize the performance of EDM process parame-
ters using TOPSIS, a Multi-Criteria Decision-Making (MCDM) technique. The study
addresses a research gap concerning the effects of weight allocation on the optimization of EDM process parameters and provides valuable insights for choosing the best weight
allocation strategy for better results. The research investigates the impact of weight
allocation on the TOPSIS method to study the sensitivity of the MCDM method to dif-
ferent weight allocations. Additionally, correlation analysis is conducted to evaluate the
similarities in the obtained ranks.
2 Experimental Details
The experimental data in the present research work have been taken from Nguyen et al. [11]. In their work the specimen used was SKD11 high-chromium tool steel; complex die shapes can be made from this material, but its hardness makes it difficult to machine by traditional processes, and EDM was therefore used as the machining process. Gap Voltage (U), Peak Current (I), Pulse off time (Toff) and Pulse on time (Ton) were chosen as the input
parameters. Microhardness (HV), White Layer thickness (WLT), Surface roughness
(SR) and Material removal rate (MRR) were considered as the performance responses
for the EDM process. MRR is expressed in terms of the reduction in the workpiece weight over the course of the machining procedure:
MRR = (W_1 − W_2) / W_2   (1)

where W_1 is the initial weight and W_2 is the final weight of the workpiece.
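As an illustrative numerical check (the values are assumed, not taken from [11]): a workpiece weighing W_1 = 52 g before machining and W_2 = 50 g after it gives MRR = (52 − 50)/50 = 0.04, i.e. a relative weight loss of 4%.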
The criteria weights are computed using five different methods: the standard deviation (SDV) method, the CRITIC method, the entropy method, the SWARA method and the FUCOM method. The WLT, owing to the varying thickness of the recast layer over the
machined surface was calculated using this equation.
WLT = Area of recast layer / Length of recast layer   (2)
The white layer is traced with a polyline to determine the WLT region on the EDM-machined surface. The experiments were organized in an L25 orthogonal array (OA), since the study involves four input process parameters at five distinct levels.
3 Methodology
3.1 Standard Deviation (SDV) Method
The SDV method produces an unbiased estimate of the criteria weights, reducing the influence of subjective weighting on the MCDM outcome [20]. In SDV, the following relation first transforms the differing norms and scales of the criteria into a common, quantifiable measure, after which their weights are determined.
B_ij = (x_ij − min_i x_ij) / (max_i x_ij − min_i x_ij)   (3)

SDV_j = √( Σ_{i=1}^{m} (B_ij − B̄_j)² / m )   (4)
where B_ij is the normalized value of the jth criterion for the ith alternative, j = 1, 2, 3, 4, and B̄_j is the mean of the normalized values of the jth criterion.
The weight can be calculated by Eq. (5) as

W_j = SDV_j / Σ_{j=1}^{n} SDV_j   (5)
The weights for the responses, i.e. MRR, SR, HV and WLT, are W = (0.276, 0.253, 0.236, 0.235).
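A minimal numpy sketch of Eqs. (3)–(5) follows; the decision matrix is an assumed illustration, not the experimental data of [11]:

```python
import numpy as np

# Illustrative decision matrix: rows = alternatives; columns = MRR, SR, HV, WLT.
x = np.array([[0.040, 3.2, 610.0, 18.0],
              [0.055, 4.1, 655.0, 22.0],
              [0.048, 2.9, 620.0, 15.0]])

# Eq. (3): min-max normalization of each criterion.
B = (x - x.min(axis=0)) / (x.max(axis=0) - x.min(axis=0))

# Eq. (4): population standard deviation of each normalized column.
sdv = np.sqrt(((B - B.mean(axis=0)) ** 2).mean(axis=0))

# Eq. (5): normalize the standard deviations into weights.
w = sdv / sdv.sum()
print(w.round(3))
```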
3.2 Entropy Method

In the entropy method, the entropy e_j of each criterion is computed from the normalized decision matrix n_ij as

e_j = −(1 / ln m) Σ_{i=1}^{m} n_ij ln(n_ij)   (6)

d_j = 1 − e_j,   j = 1, 2, 3, 4   (7)
The weights for the responses, i.e. MRR, SR, HV and WLT, are W = (0.632, 0.14, 0.017, 0.211).
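The entropy weights of Eqs. (6)–(7) can be sketched analogously (same assumed matrix as above; the final normalization w_j = d_j / Σ_j d_j follows the usual entropy-method convention):

```python
import numpy as np

x = np.array([[0.040, 3.2, 610.0, 18.0],
              [0.055, 4.1, 655.0, 22.0],
              [0.048, 2.9, 620.0, 15.0]])

n = x / x.sum(axis=0)                         # column-wise normalization n_ij
m = x.shape[0]
e = -(n * np.log(n)).sum(axis=0) / np.log(m)  # Eq. (6): entropy of each criterion
d = 1 - e                                     # Eq. (7): degree of divergence
w = d / d.sum()                               # entropy weights
print(w.round(3))
```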
3.3 CRITIC Method

In the CRITIC method [22], the amount of information c_j conveyed by the jth criterion is computed as c_j = σ_j Σ_{k=1}^{m} (1 − r_jk), where r_jk is the correlation between the jth and kth indicators and σ_j is the standard deviation of the jth indicator.
Step 3: The objective weights are then calculated using Eq. (10):

W_j = c_j / Σ_{k=1}^{m} c_k   (10)
For the current problem, the CRITIC weight vector for MRR, SR, HV and WLT is W = (0.358, 0.222, 0.212, 0.208).
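A corresponding sketch of the CRITIC computation, following the standard formulation of [22] and reusing the same assumed matrix:

```python
import numpy as np

x = np.array([[0.040, 3.2, 610.0, 18.0],
              [0.055, 4.1, 655.0, 22.0],
              [0.048, 2.9, 620.0, 15.0]])

B = (x - x.min(axis=0)) / (x.max(axis=0) - x.min(axis=0))  # min-max normalization
sigma = B.std(axis=0)                 # contrast intensity of each criterion
r = np.corrcoef(B, rowvar=False)      # correlations between criteria
c = sigma * (1 - r).sum(axis=0)       # information content c_j
w = c / c.sum()                       # Eq. (10)
print(w.round(3))
```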
3.4 SWARA
In the field of multiple attribute decision making (MADM), the SWARA approach was
proposed in 2010 with a new paradigm [23]. It was created for use in decision-making
procedures where policymaking is more prominent than in traditional decision-making
processes. The criteria should be prioritized first in the classic SWARA algorithm. The
importance of this stage appears to be unavoidable because the accuracy rate in this
prioritizing method looks to be near perfect. SWARA, in comparison to other weighting
techniques, has the following advantages. (1) It manages to capture the experts’ opinions
about the importance of the criteria in the process, (2) it aids in coordinating and getting
data from experts, (3) it is simple, user-friendly, and straightforward, and experts can
easily collaborate. Pairwise comparison of the ranked criteria is done by the decision-makers to calculate s_j, the relative importance of the jth criterion with respect to the (j − 1)th criterion. The coefficient values k_j are then calculated as
k_j = 1 for j = 1, and k_j = s_j + 1 for j > 1
For the current problem, the SWARA weight vector for MRR, SR, HV and WLT is W = (0.3396, 0.1887, 0.2830, 0.1887).
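A short sketch of the SWARA recurrence; the expert scores s_j are assumed for illustration, and the recomputed weights q_j = q_{j−1}/k_j with their final normalization follow the standard SWARA procedure of [23]:

```python
import numpy as np

# s_1 is unused; s_j is the expert-assessed importance of criterion j
# relative to criterion j - 1 (assumed values).
s = np.array([0.0, 0.30, 0.20, 0.15])

k = np.where(np.arange(s.size) == 0, 1.0, s + 1.0)  # k_j = 1 (j=1), s_j + 1 (j>1)
q = np.cumprod(1.0 / k)                             # q_j = q_{j-1} / k_j, q_1 = 1
w = q / q.sum()                                     # normalized SWARA weights
print(w.round(4))
```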
3.5 FUCOM
The FUCOM algorithm is based on paired criteria comparisons, with the model requiring only n − 1 comparisons [24]. The model employs a simple technique for validating
the model by calculating the comparison’s deviation from full consistency (DFC). The
consistency of the model is determined by the fulfilment of mathematical transitivity
conditions. One feature of the newly formed approach is that it lowers decision-makers’
subjectivity, resulting in symmetry in the weight values of the criterion. It is a method that
requires (1) a modest number of paired criteria comparisons, (2) the capacity to specify
the DFC of the comparison, and (3) pairwise comparison is required to be transitive.
The weight is calculated using the formula:
| w_{j(k)} / w_{j(k+1)} − ϕ_{k/(k+1)} | ≤ ξ   (12)

| w_{j(k)} / w_{j(k+2)} − ϕ_{k/(k+1)} ⊗ ϕ_{(k+1)/(k+2)} | ≤ ξ   (13)

Σ_{j=1}^{n} w_j = 1   (14)
with w_j ≥ 0 for all j. Here ξ signifies the consistency (deviation from full consistency) of the model, and the ϕ values signify the priorities among criteria as assessed by the decision-maker.
For the current problem, the FUCOM weight vector for MRR, SR, HV and WLT is W = (0.4569, 0.1714, 0.2741, 0.0977).
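When the pairwise comparisons are fully consistent (ξ = 0), Eqs. (12)–(14) admit a closed-form solution in which the weight ratios equal the stated priorities. The sketch below uses that simplification with assumed priority ratios ϕ, rather than solving the general FUCOM optimization problem:

```python
import numpy as np

# Assumed priorities phi_{k/(k+1)} between consecutively ranked criteria
# (n - 1 = 3 comparisons for n = 4 criteria).
phi = np.array([1.5, 1.2, 2.0])

# With full consistency, w_k / w_{k+1} = phi_k, so each weight is proportional
# to the reciprocal of the cumulative product of the preceding ratios.
ratios = np.concatenate(([1.0], np.cumprod(phi)))   # w_1/w_1, w_1/w_2, ...
w = 1.0 / ratios
w /= w.sum()                                        # Eq. (14): weights sum to one
print(w.round(4))
```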
3.6 TOPSIS
Figure 1 depicts the Euclidean distances of different weights using TOPSIS from the
positive and negative ideal solutions (PIS and NIS). The zero line in the picture represents
the ideal solution, and Si− on the bottom side of the zero line represents the distance of
the individual alternative from the negative ideal solution. The corresponding S_i+ shown on the top side of the zero line represents the distance of each individual alternative from the positive ideal solution. If S_i+ lies close to the zero line, the alternative has a good closeness coefficient; similarly, a larger distance of S_i− from the zero line indicates a strong closeness coefficient.
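For reference, a compact sketch of the TOPSIS steps follows, implemented per the standard procedure of Hwang and Yoon [7] (vector normalization, weighting, ideal solutions, the distances S+ and S−, and the closeness coefficient); the decision matrix is assumed, and the CRITIC weights computed earlier are reused for illustration:

```python
import numpy as np

x = np.array([[0.040, 3.2, 610.0, 18.0],
              [0.055, 4.1, 655.0, 22.0],
              [0.048, 2.9, 620.0, 15.0]])
w = np.array([0.358, 0.222, 0.212, 0.208])      # e.g. the CRITIC weights
benefit = np.array([True, False, True, False])  # MRR, SR beneficial? HV, WLT?

v = w * x / np.linalg.norm(x, axis=0)           # weighted normalized matrix

# Positive/negative ideal solutions depend on the criterion type.
pis = np.where(benefit, v.max(axis=0), v.min(axis=0))
nis = np.where(benefit, v.min(axis=0), v.max(axis=0))

s_plus = np.linalg.norm(v - pis, axis=1)        # Euclidean distance to PIS
s_minus = np.linalg.norm(v - nis, axis=1)       # Euclidean distance to NIS

closeness = s_minus / (s_plus + s_minus)
print(np.argsort(-closeness) + 1)               # alternatives ranked best-first
```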
Fig. 1. Euclidean distances of each alternative from PIS and NIS for various employed weight
determination methods.
4 Results and Discussion
In this study, the input parameters are I, U, Ton, and Toff, whereas the output param-
eters are MRR, SR, HV, and WLT. The recorded experimental data and the input param-
eters are shown in Table 1. The TOPSIS technique determines the best solution by taking
into account both non-beneficial and beneficial aims. The beneficial objectives in our
situation are MRR and HV, while the non-beneficial ones are SR and WLT. This method
uses a basic mathematical formula and does not necessitate the use of any compli-
cated software to evaluate. Figure 2 shows that the weight calculation utilizing CRITIC
and standard deviation techniques ranks experiment no. 25 as the best choice. Entropy, SWARA, and FUCOM, on the other hand, rank experiment 22 as the best alternative. Experiment 9 is regarded as the worst alternative by TOPSIS based on the standard deviation and CRITIC weights, whereas with entropy, SWARA and FUCOM, experiment 5 is the worst. The better ranks towards the end of Fig. 2 show that increasing the current has a positive effect on the quality of the machining. Looking at the rank trends, it can
also be seen that high gap voltage at low current ratings results in decreased quality of
machining. TOPSIS does not seem to be highly sensitive to the weights as a significant
level of correlation can be established between the ranks obtained using the various
weight determination methods.
The correlation matrix for TOPSIS is shown in Tables 2 and 5. The Pearson correlation coefficient is used in the current work to assess the similarity between two different weight determination methods. The correlation matrix for TOPSIS shows that the method is not very sensitive to the weights, and the ranks assigned to the different experiments are very similar to one another. SWARA and CRITIC, a subjective and an objective weight determination technique respectively, have the highest correlation between them, as suggested by the correlation coefficient.
5 Conclusion
In the current work an attempt has been made to study the effect of weight allocation
methods on the working of an MCDM technique namely TOPSIS. The experimental data
considered is an L-25 Taguchi Orthogonal array for machining of hard high chromium
tool material using EDM process. MRR, HV, SR and WLT were the response parameters
which are the criteria based on which ranking of alternatives were done. The following
conclusions can be drawn from the current work.
• The TOPSIS method suggests that experimental runs no. 22 and 25 are the best alternatives among all those considered. The rank plot also suggests that an increase in peak current improves the overall performance of the EDM process.
• From the current work, it can also be deduced that the TOPSIS method is not very sensitive to the criteria weights for the current dataset. This can also be seen from the correlation between the ranks obtained using any two weight determination methods.
References
1. Jung, J.H., Kwon, W.T.: Optimization of EDM process for multiple performance characteris-
tics using Taguchi method and Grey relational analysis. J. Mech. Sci. Technol. 24, 1083–1090
(2010). [Link]
2. Kumar, P., Gupta, M., Kumar, V.: Surface integrity analysis of WEDMed specimen of Inconel
825 superalloy. Int. J. Data Network Sci. 2, 79–88 (2018). [Link]
8.001
3. Świercz, R., Oniszczuk-Świercz, D., Chmielewski, T.: Multi-response optimization of electri-
cal discharge machining using the desirability function. Micromachines 10, 72 (2019). https://
[Link]/10.3390/mi10010072
4. Joshi, A.Y., Joshi, A.Y.: A systematic review on powder mixed electrical discharge machining.
Heliyon 5, e02963 (2019). [Link]
5. Mardani, A., Jusoh, A., MD Nor, K., Khalifah, Z., Zakwan, N., Valipour, A.: Multiple criteria
decision-making techniques and their applications—a review of the literature from 2000 to
2014. Econ. Res. [Ekonomska Istraživanja] 28, 516–571 (2015). [Link]
1677X.2015.1075139
6. Dooley, A.E., Smeaton, D.C., Sheath, G.W., Ledgard, S.F.: Application of multiple criteria
decision analysis in the New Zealand agricultural industry. J. Multi-Criteria Decis. Anal. 16,
39–53 (2009). [Link]
7. Hwang, C.-L., Yoon, K.: Multiple Attribute Decision Making. Springer, Berlin (1981). https://
[Link]/10.1007/978-3-642-48318-9
8. Wang, Y., Sun, Z.: Development of the comprehensive evaluation methods in medicine.
[Zhong nan da xue xue bao. Yi xue ban] J. Cent. South Univ. Med. Sci. 30, 228–232 (2005)
9. Triantaphyllou, E.: Multi-criteria Decision Making Methods: A Comparative Study. Springer,
USA (2000). [Link]
10. Singh, A., Ghadai, R., Kalita, K., Chatterjee, P., Pamucar, D.: EDM process parameter opti-
mization for efficient machining of INCONEL-718. Facta Univ. Ser. Mech. Eng. 18, 473
(2020). [Link]
11. Nguyen, P.H., et al.: Application of TGRA-based optimisation for machinability of high-
chromium tool steel in the EDM process. Arab. J. Sci. Eng. 45(7), 5555–5562 (2020). https://
[Link]/10.1007/s13369-020-04456-z
12. Ghosh, A., Mallik, A.: Manufacturing Science (2015)
13. Goldberg, D.E.: Genetic Algorithms. Pearson Education, India (2013)
14. Ghadai, R.K., Kalita, K., Gao, X.-Z.: Symbolic regression metamodel based multi-response
optimization of EDM process. FME Trans. 48, 404–410 (2020). [Link]
2002404G
15. Ragavendran, U., Ghadai, R.K., Bhoi, A.K., Ramachandran, M., Kalita, K.: Sensitivity anal-
ysis and optimization of EDM process parameters. Trans. Can. Soc. Mech. Eng. 43, 13–25
(2019). [Link]
16. Ch M.R., Kambagowni, V.: Optimization of wire EDM process parameters in machining
SS316 using DEAR method 5 (2021)
17. Verma, V., Sahu, R.: Process parameter optimization of die-sinking EDM on titanium grade—
V alloy (Ti6Al4V) using full factorial design approach. Mater. Today Proc. 4, 1893–1899
(2017). [Link]
18. Rezaei, J.: Best-worst multi-criteria decision-making method. Omega 53, 49–57 (2015).
[Link]
19. Manivannan, R., Kumar, M.P.: Multi-response optimization of micro-EDM process parame-
ters on AISI304 steel using TOPSIS. J. Mech. Sci. Technol. 30(1), 137–144 (2016). https://
[Link]/10.1007/s12206-015-1217-4
20. Mukhametzyanov, I.: Specific character of objective methods for determining weights of
criteria in MCDM problems: entropy, CRITIC and SD. Decis. Making Appl. Manag. Eng. 4,
76–105 (2021). [Link]
21. Chodha, V., Dubey, R., Kumar, R., Singh, S., Kaur, S.: Selection of industrial arc welding
robot with TOPSIS and Entropy MCDM techniques. Mater. Today Proc. 50, 709–715 (2022).
[Link]
22. Diakoulaki, D., Mavrotas, G., Papayannakis, L.: Determining objective weights in multiple
criteria problems: the critic method. Comput. Oper. Res. 22, 763–770 (1995). [Link]
10.1016/0305-0548(94)00059-H
23. Keršuliene, V., Zavadskas, E.K., Turskis, Z.: Selection of rational dispute resolution method
by applying new step-wise weight assessment ratio analysis (SWARA). J. Bus. Econ. Manag.
11, 243–258 (2010). [Link]
24. Pamučar, D., Stević, Ž, Sremac, S.: A new model for determining weight coefficients of
criteria in MCDM models: full consistency method (FUCOM). Symmetry 10, 393 (2018).
[Link]
25. Diyaley, S., Shilal, P., Shivakoti, I., Ghadai, R.K., Kalita, K.: PSI and TOPSIS based selection
of process parameters in WEDM. Periodica Polytech. Mech. Eng. 61, 255–260 (2017). https://
[Link]/10.3311/PPme.10431
Assessment of the Outriggers and Their Stiffness
in a Tall Building Using Multiple Response
Spectrum
Abstract. The increasing population and its accumulation in the cities raise the
need of tall buildings. The tall buildings can accommodate more families in a
comparatively smaller area. The construction of tall buildings needs to be safe and
serviceable. The primary concerns in a tall building are the lateral displacement
due to horizontal loads arising from earthquakes and wind. The present study
focuses on the model-based assessment of the outrigger’s location in a 50-story
tall building. The overall height of the building is 175 m and is of square cross-
section with each side of 28 m. The building has 3 bays of 9 m, 10 m & 9 m
respectively. The slenderness ratio of the building is 6.75. The base model of
beam-column framed structure is used for comparison with the models having a
shear wall at the core and outriggers and different locations. The study analyzed the
lateral displacement of each story in varying models. The inter-story displacement
of each story is compared with all models having varying outrigger locations
and stiffness. It was established that the optimum location of the outrigger is at
zone 3 (85 m height from the top) with the least lateral displacement of the top
story. The outrigger stiffness is compared at the optimum location of the outrigger
by increasing the cross-sectional area of the outriggers for lateral displacement
and inter-story displacement. The increase in the outrigger stiffness has a small
impact on the lateral displacement of the top story of the building. The inter-
story displacement variation along with the height of the building is considerably
stabilized with the increase of the stiffness of the outriggers. The placement of the
outriggers accompanying at two locations results in a decrease in top-story lateral
displacement, but no change is observed in inter-story displacement up to zone-4.
1 Introduction
The construction of buildings has changed face in recent two-three decades. The growing
population in the countries and the desire to live in the cities have continuously increased
the density in the cities. The resettlement of the increasing population in cities needs
more accommodation in the cities. The horizontal expansion of the cities will result
in diminishing agricultural land and land for natural habitats. The construction of tall
buildings to accommodate large populations as well as leaving free space is an effective
solution. The use of tall buildings helps in greater accommodation in a smaller land
area. Along with high rise and tall buildings, a greater risk of higher lateral displacement
and stresses is a matter of concern [1]. Tall building phenomenon will continue on a
greater scale to meet the needs of the growing population in future large cities [2]. The
advancement of concrete technology with high-strength concretes has made it feasible
to construct tall buildings [3]. As the slenderness ratio of the buildings increases the
overturning moment due to lateral loads also increases. The tall buildings have a more
significant impact on the lateral loads due to seismic and wind than superimposed and
dead loads. One of the solutions to resist the lateral loads in tall buildings is the use
of outriggers. Outriggers are placed as members connecting the core to the peripheral
columns which increase the moment resistance of the structure. Outrigger structural
system has been popular in construction since the 1980s due to their unique combination
of architectural flexibility and structural efficiency [4, 5]. The structure without using
the outriggers will be as a cantilever leg. The outriggers provided in opposite directions
act as a couple which lengthens the arm for moment resistance [6].
The literature consists of studies on the location of the outriggers and their impacts
on the effectiveness of the outriggers measured in terms of the lateral displacement of
the top story of the building [7]. Many studies have suggested the various locations
of the outriggers along the height of the building in different types and at locations.
Lee et al., [3], studied the nonlinear behavior due to nonlinearity of the geometry to
derive an equation for the structure. Lee and Tovar [8], proposed a different method that
concluded as highly accurate for the simulation of tall buildings using finite element
analysis. The study used topology-based assessment for the position of the outriggers
for a 201 m high-rise building. The inter-story drift of the tall building structures can
be efficiently controlled using outriggers. The author utilised the theoretical method to
identify the inter-story displacement and overall lateral displacement of tall buildings
using MATLAB for a 240 m high-rise building [9, 10]. The outriggers are easy to
modify and are used with dampers reducing the wind effect in the building [11]. The axial
shortening of the length between the outer columns and the core of the building cannot be
restricted [12, 13]. The optimum number of outriggers were also worked out by placing
the number of outriggers at various stories of the building [14–17]. The comparison of
the tall buildings with conventional outriggers and energy-dissipation outriggers using
different methods of assessment have been studied [18–21]. In outriggers, the maximum
lateral displacement and inter-story displacement are important aspects along with the
differential of the columns [22].
The assessment of the outriggers has been studied for many decades in the past.
The assessment remains a tedious task with changing geometries, aspect ratios, and
slenderness ratios. Along with the optimum location of the outriggers, the stiffness of
the outriggers also impacts the structural behaviour. In the present research, efforts are
done using a finite element analysis for the optimum location of the outriggers and their
effects on the lateral displacement of the top of the buildings and inter-story displacement
of all storeys. The same effects are also observed for the building having outriggers at
optimum location and with increasing stiffness of the outrigger. In order to resist lateral
loading in tall buildings, lateral displacement needs to be restricted; for economy, an appropriate structural system should be utilised.
2 Research Program
2.1 Geometry Used
In the present research, a 50-story building is used for the analytical study. The aspect
ratio of the building used is one having 3 bays in each perpendicular direction. From
the 3-bays central bay is of 10 m and both outer bays are 9 m each as shown in Fig. 1.
The height of the building used for the study is 175 m. Each story of the tall building
is assumed to be 3.5 m high. The size of the column utilised is 1000 mm × 1000 mm,
considering the practicality of the tall building. The cross-section of the column is kept
constant throughout the height of the building to observe the impact of outriggers only.
The cross-section of the beams is assumed to be 500 mm × 850 mm for the model as
mentioned in Table 1. The outriggers of the double story are used for the analysis at
various locations as per the designed study.
The present study is conducted using analysis software E-Tabs based on finite element
analysis. The 3-dimensional modelling is initially done in the software and analysis is
run using multiple response spectrum analysis. The tall building is modelled firstly as a
framed structure only. The analysis of the impact on lateral displacement of the building
due to wind as well as earthquake along with superimposed and dead load is computed.
The framed model is used as a base model for comparison with various other models.
The further models were designed with the inclusion of the shear wall at the core of the
building having a cross-sectional area of 12 m2 on four sides of the core. The variation
of models is mentioned in Table 1.
3.1 Modelling
The model is designed for a 50-story concrete building with a height of 175 m. The
method of response spectrum analysis is used for the results. The concrete is modelled
as per IS 456:2000 of the characteristic strength of 40 N/mm2 . The foundation soil is
assumed to be a hard soil. The tall building is located in a plain area, and the place is
assumed to lie in Zone-IV as per IS 1893. The loading is applied as per IS 875 and
assumed to act on all floors of the tall building. The impact of earthquake and wind load
is applied separately. The wind load is applied along all four sides with angles 0, 90, 180
and 270 degrees.
The tall building's stability is assessed based on the lateral displacement at the top of the building. The impacts of both earthquake and wind are considered for comparison in this study; the impact of wind was found to be greater on the lateral displacement of the building, so the results compared in the study are based on the lateral displacement due to wind. The lateral displacement for the base model of the framed building (BC) is shown in Fig. 2. The displacement can be seen to increase with the height of the building, following a smooth curve over the stories. The displacement
of the top story is maximum due to maximum wind effect with the increase in height.
The lateral displacement of all the models studied are compared for the lateral dis-
placement of the base model. As shown in Fig. 2, the lateral displacement of the building
is large for base model. The inclusion of the shear wall and outriggers resulted in consid-
erable decrease of 60% in the lateral displacement of the top story of the building. The
comparison of the lateral displacement of the models with shear walls and outriggers at
various locations is shown in Fig. 3. The nomenclature used is as mentioned in Table 1
for the reference.
The thorough relation of the change in lateral displacement of a particular story with
the variation of models is shown in Fig. 4. The x-axis of the graph shows the serial number
of model type. The serial numbers are marked as 1 = BC, 2 = BC-S, 3 = BC-S-OZ1, 4
= BC-S-OZ2, 5 = BC-S-OZ3, 6 = BC-S-OZ4, 7 = BC-S-OZ5, 8 = BC-S-OZ3-6-6, 9
= BC-S-OZ3-8-8, 10 = BC-S-OZ3-10-10 and 11 = BC-S-OZ3&5-10-10. The y-axis of
the graph shows the lateral displacement. The S0, S5, S15, S20, S25, S30, S35, S40, S45
and S50 are the graph line for lateral displacement of different models at that particular
story. It is observed that when comparing any particular story, the change is significant
when outriggers are used in the tall building. In all cases the displacement is minimum
with the use of outriggers and least in model BC-S-OZ3 & 5-10-10. It can be concluded
that the use of outriggers significantly affects the lateral displacement of the building and
the increased stiffness further resists the displacement, though only by a small amount. Using nominal outriggers at a greater number of locations improves the lateral displacement more than using high-stiffness outriggers at a single location.
Considering the models with basic cross-sectional outriggers, the least fluctuations in
inter-story displacement is observed when the outrigger was placed at zone-III. Figure 6
compares the inter-story displacement of the models BC, BC-S-OZ3 and BC-S-OZ3-
6×6 having no outriggers, outriggers with 400 × 400 mm and outriggers with 600 ×
600 mm, respectively. It is observed that when the stiffness is increased with the increase
in cross-section of the outrigger, the displacement between adjacent stories decreased.
Figure 7 compares the models BC, BC-S-OZ3 and BC-S-OZ3-10-10, having no outriggers, outriggers of 400 × 400 mm and outriggers of 1000 × 1000 mm, respectively. The inter-story displacement curve of the former model showed higher fluctuations, while that of the latter was smoother. It can be concluded that the increase of the stiffness
of the outriggers reduces the inter-story displacements and improves the stability of the
structure.
The placement of the outriggers can be at more than one place. In Fig. 8, the outriggers
of the same stiffness placed at more than one location are compared with the model having an outrigger at only one location.
Fig. 6. Inter-storey displacement of the zone 3 outriggers of BC, BC-S-OZ3 & BC-S-OZ3-6-6
5 Conclusion
The present study focuses on assessing the outriggers using multiple response spectrum.
The study utilised finite element analysis-based E-tabs software for making models of a
50-story tall building. The tall building is modelled firstly as a framed structure only. The
analysis of the impact on lateral displacement of the building due to wind and earthquake
along with superimposed and dead load is computed. The framed model is used as a
base model for comparison with other models with shear walls and outriggers at different
building zones. The following conclusions can be drawn from the present study.
The framed structure with shear walls at the core significantly decreases the lateral
displacement of the building. The lateral displacement of the tall building for the location
of the outriggers at each story was computed to assess the location of the outriggers.
The optimum location of the outriggers is found to be in zone 3, near 90 m of the building height, i.e. at about half the height of the building. The increase of the stiffness of the outriggers decreases the lateral displacement of the building by a small extent. The inter-story displacement of the building is very large in the base model and is reduced with the use of the outriggers. The inter-story displacement at each story was analyzed in
all the models and it was observed that at the location of the outrigger, the inter-story
displacement between adjacent stories was small and it increased with the increase in
the height of the building from the location of outriggers.
The increased stiffness of the outriggers helps in better stability of the building
by reducing the range of inter-story displacement at each zone. The comparison of
the model having outrigger at an optimum location and the model having outrigger at
optimum location as well as top story of the tall building concludes that the increased
number of outrigger locations impacts only on the top-story lateral displacement and
small improvement in inter-story displacement.
Future Research
The future research recommendation is to study the effect of outriggers with dampers.
The outriggers with dampers can be studied in direct placement and as truss belt. This
will increase the overall efficiency of the research.
Acknowledgement. The authors acknowledge the support provided by the Department of Civil Engineering, School of Engineering, RIMT University, Punjab, India.
References
1. Pullmann, T., et al.: Structural design of reinforced concrete tall buildings: evolutionary com-
putation approach using fuzzy sets. In: Proceedings of the 10th European Group for Intelligent
Computing in Engineering EG-ICE, Delft, The Netherlands (2003)
2. Fawzia, S., Nasir, A., Fatima, T.: Study of the effectiveness of outrigger system for high-rise
composite buildings for cyclonic region. World Acad. Sci. Eng. Technol. 60(1), 937–945
(2011)
3. Lee, J., Bang, M., Kim, J.-Y.: An analytical model for high-rise wall-frame structures with
outriggers. Struct. Design Tall Spec. Build. 17(4), 839–851 (2008)
4. Tan, P., et al.: Dynamic characteristics of novel energy dissipation systems with damped
outriggers. Eng. Struct. 98, 128–140 (2015)
5. Amoussou, C.P.D., et al.: Performance-based seismic design methodology for tall buildings
with outrigger and ladder systems. Structures 34 (2021)
6. Choi, H.S., Joseph, L.: Outrigger system design considerations. Int. J. High-Rise Build. 1(3),
237–246 (2012)
7. Ding, J.M.: Optimum belt truss location for high-rise structures and top level drift coefficient.
J. Build. Struct. 4, 10–13 (1991)
8. Lee, S., Tovar, A.: Outrigger placement in tall buildings using topology optimization. Eng.
Struct. 74, 122–129 (2014)
9. Zhou, Y., Zhang, C., Lu, X.: An inter-story drift-based parameter analysis of the optimal
location of outriggers in tall buildings. Struct. Design Tall Spec. Build. 25(5), 215–231 (2016)
10. Soltani, O., Layeb, S.B.: Evolutionary reinforcement learning for solving a transportation
problem. In: Intelligent Computing & Optimization: Proceedings of the 5th International Con-
ference on Intelligent Computing and Optimization 2022 (ICO2022), pp. 429–438. Springer
International Publishing, Cham (2022)
11. Ho, G.W.: The evolution of outrigger system in tall buildings. Int. J. High-Rise Build. 5(1),
21–30 (2016)
12. Kim, H.S.: Effect of outriggers on differential column shortening in tall buildings. Int. J.
High-Rise Build. 6(1), 91–99 (2017)
13. Kim, H.S.: Optimum locations of outriggers in a concrete tall building to reduce differential
axial shortening. Int. J. Concr. Struct. Mater. 12(1), 77 (2018)
14. Yang, Q., Lu, X., Yu, C., Gu, D.: Experimental study and finite element analysis of energy
dissipating outriggers. Adv. Struct. Eng. 20(8), 1196–1209 (2017)
15. Baygi, S., Khazaee, A.: The optimal number of outriggers in a structure under different lateral
loadings. J. Inst. Eng. (India) Ser. A 100(4), 753–761 (2019)
16. Jiang, H.J., et al.: Performance-based seismic design principles and structural analysis of
Shanghai Tower. Adv. Struct. Eng. 17(4), 513–527 (2014)
17. Nigdeli, S.M., Bekdaş, G., Yücel, M., Kayabekir, A.E., Toklu, Y.C.: Analysis of non-linear
structural systems via hybrid algorithms. In: Intelligent Computing & Optimization: Pro-
ceedings of the 4th International Conference on Intelligent Computing and Optimization
2021 (ICO2021), vol. 3, pp. 536–545. Springer International Publishing, Berlin (2022)
18. Jiang, H., Li, S., Zhu, Y.: Seismic performance of high-rise buildings with energy-dissipation
outriggers. J. Constr. Steel Res. 134, 80–91 (2017)
19. Kamgar, R., Rahgozar, P.: Reducing static roof displacement and axial forces of columns in
tall buildings based on obtaining the best locations for multi-rigid belt truss outrigger systems.
Asian J. Civ. Eng. 20(6), 759–768 (2019). [Link]
20. Kim, H.S., Lim, Y.J., Lee, H.L.: Optimum location of outrigger in tall buildings using finite
element analysis and gradient-based optimization method. J. Build. Eng. 31, 101379 (2020)
21. Kim, H.S., Lee, H.L., Lim, Y.J.: Multi-objective optimization of dual-purpose outriggers in
tall buildings to reduce lateral displacement and differential axial shortening. Eng. Struct.
189, 296–308 (2019)
22. Lee, J., Park, D., Lee, K., Ahn, N.: Geometric nonlinear analysis of tall building structures
with outriggers. Struct. Design Tall Spec. Build. 22(5), 454–470 (2013)
The Role of Artificial Intelligence in Art:
A Comprehensive Review of a Generative
Adversarial Network Portrait Painting
1 Introduction
Artificial Intelligence is blurring the definition of an artist and art. As AI becomes more
present in our daily lives, it is only natural for artists to explore and experiment with
it. Technology is becoming increasingly advanced and sophisticated with each passing
year, with a multitude of devices being designed to make life easier for people. The
arts have also been impacted by such developments in technology; advancements in AI,
for instance, have had a notable influence on the range of artistic expressions. It has
increased the number of techniques available to artists and presented them with a wide
range of opportunities. Artists can now create works of art using a paintbrush and an
iPad just as well as they can with traditional art supplies. Art has been created since long
before the invention of basic art supplies or technology; cave art, for example, dates
back to around 17,000 years ago, when people in France’s Lascaux caverns created
lifelike drawings of bulls, bison, stags, horses, and other creatures on the walls. They
also created stencils of their hands, illustrating the significant impact technology has
had on art when compared to the methods and technologies available in ancient times
[1]. We now have access to a plethora of knowledge and innovative practices thanks
to technology. A few examples include graphic design, computer-generated artwork,
Photoshop, digitally created music, e-books, and 3D printing, which show just how much
art in all its forms has been impacted by technology [2]. A brand-new genre of digital art is
pushing the boundaries of inventiveness and upending traditional methods of producing
art. Artists create autonomous robots for collaboration, feed data through algorithms,
and program machines to produce original visual creations. They use computer systems
that incorporate artificial intelligence (AI) to simulate the human mind and create an
endless stream of original works of art. The use of AI as a creative partner has gained
popularity, revolutionizing creative disciplines such as music, architecture, fine arts, and
science. Computers are playing a significant role in transforming the way people think
and create. Additionally, AI-generated works of art have led to an evolution in what is considered acceptable under the umbrella of art. As artificial intelligence technology
continues to develop, its impact on the arts will continue to become increasingly present,
creating an exciting new landscape for the arts [3]. The fact is that the computer already
functions as a canvas, a brush, a musical instrument, etc. However, we think that the
connections between creativity and computers need to be more ambitious. We might
consider the computer to be a creative force in and of itself rather than just a tool to
assist human creators. Computational creativity is a new branch of artificial intelligence
(AI) that has been inspired by this viewpoint [4]. This manuscript addresses the question:
what does AI art mean for artists, and how does artificial intelligence (AI) impact art? It utilizes a practice-led technique, with descriptive qualitative data sourced from
secondary sources such as publications, online news sources, and pertinent references.
Through documentation techniques such as synthesizing data from written, visual, and
other records of historical events, this study aims to analyze the characteristics of digital
technology use, particularly artificial intelligence, in creative and cultural processes. Its
scientific uniqueness lies in its initial analysis of the impact of AI on art and artists.
2 Literature Review
2.1 A History of Artistic Expression Through Technology
The history of AI art dates back to the early 1950s, when scientists started to explore the
potential of computers to create artwork. Early experiments focused on rendering images
with simple programming. The first AI-generated art was created by John Whitney in
1955. His program, called “Orchestration,” used a random number generator to produce
abstract patterns and colors. In the 1960s, computer art became more sophisticated, with
researchers such as Harold Cohen and Jean-Paul Bailly exploring what could be done
with interactive programs. By the late 1970s, the first AI-generated paintings were being
created. In the 1980s, Hal Lasko developed the earliest AI painting program, Painterbot
[5]. In the mid-2000s, AI art became increasingly sophisticated with the introduction of
deep learning, and today, AI art is becoming increasingly popular, with many galleries
and museums hosting exhibitions of AI-generated artwork. AARON (Artificial Artists
Robotic Operative Network), created by Harold Cohen in 1980, is credited as the first true
AI art program and continues to inspire many modern AI-generated artworks. The rise of
deep learning in the mid-2000s enabled computers to create even more complex, realistic
images. In 2013, Google released Deep Dream, an algorithm that could generate abstract
images by interpreting existing photos. Since then, AI art has gained popularity, with
artists like Anna Ridler and Mario Klingemann leading the way [6]. The experimental
project “Artificial Natural History” (2020) examines speculative, artificial life through
the prism of a “natural history book that never was.” This intriguing example of modern
AI being utilized to create art raises a number of philosophical questions about the nature
of AI art, which are historically-based rather than unique to the current moment in art
[7]. In recent years, AI art has become increasingly popular, with many galleries and
museums hosting exhibitions of AI-generated artwork. As the technology continues to
evolve and improve, AI art is sure to become increasingly sophisticated and integrated
into our lives.
Generative Adversarial Networks (GANs) can produce realistic artwork that can mimic or surpass the quality of art created by humans. GANs
can be used to create works of art in a variety of mediums, such as images, videos, or
music. In this way, GANs have the potential to revolutionize art and lead to the creation
of entirely new forms of art. For example, GANs can generate new types of artwork,
such as digital sculptures or interactive virtual reality art. Additionally, GANs can be
used to create art with specific styles or aesthetics, such as abstract art or minimalist art.
As such, GANs are an exciting tool for creating innovative and unique artwork. Indeed,
these three types of Artificial Intelligence (AI) are all powerful tools for artists. By using
these AI tools, artists can expand their creative repertoire and explore new possibilities
in their art.
Artificial Intelligence is advancing the field of image generation and manipulation. Deep
Dream is an AI-generated dream-like imagery, generated by a deep learning neural net-
work that uses image style transfer. WOMBO Dream is another deep learning neural
network for image style transfer. GauGAN is an AI-based generative technology that creates photorealistic landscapes from simple drawings. Developed by NVIDIA, it applies
generative adversarial networks to create photorealistic landscape images. DALL-E is
an artificial intelligence image synthesis model developed by OpenAI. It enables users
to generate novel images based on textual descriptions of desired objects or scenes.
Finally, Fotor is an online photo editor powered by advanced AI technology. It offers
a variety of editing tools, such as auto photo enhancing, image retouching, and cre-
ative effects. All of these technologies involve using artificial intelligence to generate or
manipulate images. It has enabled users to create amazing works of art and even change
the way photos are looked at. In addition, AI-based image editing techniques can be
used beyond personal enjoyment and creativity - they can also be used for a variety of
practical applications such as medical imaging, forensic research, remote sensing, and
more. By harnessing the power of AI-generated imagery, scientists and professionals
in a variety of fields are able to make great leaps forward. In short, AI-based image
generation and manipulation technologies such as Deep Dream, WOMBO Dream, GauGAN, DALL-E, and Fotor are revolutionizing the way images are created, viewed, and
utilized. AI-based image generation and manipulation technologies are paving the way
for more efficient, effective ways to transfer visual information. For example, medical
imaging applications such as dental X-rays or MRI scans can be further analyzed with
AI-assisted algorithms. Similarly, remote sensing and mapping tasks can be completed in
a fraction of the time it would take without AI assistance. Additionally, AI-based image
synthesis can add another layer of authenticity to film production, providing producers
with precise computer-generated visual effects that look incredibly realistic [11]. These
technologies have opened up a world of possibilities for both amateur photographers and
professionals alike. By utilizing these AI-based tools, users can create amazing works
of art in a fraction of the time it would take to do so without them. It is without a doubt
that AI-based image generation and editing technologies will continue to be refined and
improved upon as AI technology advances.
3 Methodology
This manuscript utilizes a practice-based methodology predominantly utilizing descrip-
tive and qualitative approaches to investigate the responses to its queries. AI has advanced
significantly over the past few decades and its powers have been employed to produce
artwork that is more complex and realistic than ever before. One example of AI-created
art is the painting “Portrait of Edmond de Belamy” by the French collective Obvious.
A neural network trained on a dataset of 15,000 portraits produced the artwork, which
brought in an incredible $432,000 at auction. This groundbreaking work has raised many
questions on creativity that need to be answered: How can AI create art? What serves as
its source of inspiration? How does AI gain access to muses’ power? AI is emotionless
and has no feelings; it is purely scientific and is supported by big data, machine learning,
and algorithms. AI-based tools are rapidly changing the art industry by enabling artists
to create more complex works and experiment with new styles and techniques. To inves-
tigate the responses to these queries, this manuscript employs a practice-based approach
with descriptive qualitative methods to discuss the theoretical and applied elements of
artificial intelligence art, as well as how AI influences the creative process.
4 Result
4.1 Edmond De Belamy- AI Created Portrait
Edmond De Belamy is an AI-generated portrait created through a process called Gen-
erative Adversarial Networks (GANs). The portrait was created by the Paris-based art
collective Obvious and was put up for auction in October 2018 for an estimated $10,000–
$12,000 [12]. The portrait features a mustachioed gentleman wearing a black jacket and a
white collar, with the title “Portrait of Edmond De Belamy, from La Famille de Belamy”
written underneath it. The artwork was created using a machine learning algorithm which
was given a data set which included 15,000 portraits from the 14th to 20th centuries. The
machine learning algorithm was then trained to create a new piece based on the data set
it was given. Through the algorithm, Obvious was able to create a unique yet original
artistic work which was unlike anything it had seen before. One of the most interesting
aspects of this artwork is its combination of art and technology. It’s a perfect example of
how AI can be used to produce something which has never been seen before, showing
that art and technology can truly go hand in hand. For example, the artist did not have
to manually create each element in the painting as the machine learning algorithm did
that for them. The portrait of Edmond De Belamy has sparked a debate over the creative
process and whether or not machines can truly be creative. While some people argue
that AI can never replicate the human creative process, others argue that machines can
produce unique works of art which are as valid as those created by humans. Ultimately,
it is up to the individual to decide whether or not they consider Edmond De Belamy a
work of art. Regardless of one’s opinion on the validity of AI-created artwork, Edmond
De Belamy has certainly sparked an interesting debate and has served as a reminder that
technology and art can coexist. As AI develops and more AI-generated artwork is seen,
the debate will only become more intense as people attempt to define what constitutes a
work of art. Obvious has signed the painting as a nod to traditional artworks and their sig-
natures. In this artwork, the artists used the objective function of the algorithm that generated the portrait, $\min_G \max_D \mathbb{E}_x[\log D(x)] + \mathbb{E}_z[\log(1 - D(G(z)))]$, as the signature [13]. This is because AI-generated artwork typically wouldn't have a signature, due to it
being created by an algorithm. By signing the painting, Obvious wanted to acknowledge
the fact that although a machine created it, the artwork is still the product of their hard
work and creativity. As Edmond De Belamy has been sold at auction and is gradually
becoming more widely appreciated, it shows that AI-generated artwork is increasingly
being accepted and appreciated as a form of art. The artwork serves as an example that
there are no rules or restrictions when it comes to creativity, and that anything is possible
with AI (Fig. 2).
Fig. 2. Portrait of Edmond Belamy, 2018, created by GAN (Generative Adversarial Network).
Image © Obvious
A GAN sets two algorithms against each other to improve their outcomes. One algorithm produces data, while the other tries to distinguish between true
and false data. Figure 3 illustrates how the Generative Adversarial Network functions.
A generator and discriminator are both present in GANs. The Generator produces
phony samples of data (such as images, audio, etc.) to try to fool the Discriminator. The
Discriminator is then tasked with distinguishing genuine from fraudulent samples. Both
the Generator and Discriminator are neural networks, which compete with one another
during the training phase. By repeating the procedures multiple times, both the Generator
and Discriminator become better at their jobs. During this process, the two networks play the minimax game for which GANs are designed on the value function V(D, G): the Discriminator tries to maximize its reward V(D, G), while the Generator tries to minimize it.
A GAN consists of two networks: the generator and the discriminator. During training, the generator runs once and the discriminator runs twice, once for real and once for fake inputs. The losses calculated from these runs are then used to independently calculate gradients that are propagated through each network. This process is shown in Fig. 4,
provided by Goodfellow et al. in their 2014 paper on GANs. Hugo Caselles-Dupré, Pierre
Fautrel, and Gauthier Vernier used a generative adversarial network, a type of AI system
employing machine learning, to generate a painting based on 15,000 historical portraits
painted between the years 1300 and 2000. The Generator generated an image while
the Discriminator tried to differentiate human-created images from those created by
the Generator. As the Discriminator had difficulty distinguishing between human-made
images and computer-generated ones, the output had a distorted appearance. According
to Caselles-Dupré, this was due to the Discriminator being more susceptible to deception
than a human eye as it looks for specific features in the image, such as facial features
and shoulder shape [12]. As a result, the artwork created had a unique look that blended
elements of both human and machine-generated art. It is an interesting example of how
AI technology can be applied to create something completely new from existing works.
This could have many applications in the world of art, from creating paintings to creating
digital works of art.
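The adversarial training loop described above can be made concrete with a short sketch. The following is a minimal, illustrative PyTorch example; the toy two-dimensional data (standing in for portrait images), network sizes, and learning rates are hypothetical assumptions and not the configuration used by Obvious.

```python
import torch
import torch.nn as nn

latent_dim, data_dim, batch = 8, 2, 64

# Generator: maps random noise z to a fake data sample.
G = nn.Sequential(nn.Linear(latent_dim, 32), nn.ReLU(), nn.Linear(32, data_dim))
# Discriminator: outputs a logit, high for "real", low for "fake".
D = nn.Sequential(nn.Linear(data_dim, 32), nn.ReLU(), nn.Linear(32, 1))

opt_g = torch.optim.Adam(G.parameters(), lr=1e-3)
opt_d = torch.optim.Adam(D.parameters(), lr=1e-3)
bce = nn.BCEWithLogitsLoss()
ones, zeros = torch.ones(batch, 1), torch.zeros(batch, 1)

for step in range(2000):
    real = torch.randn(batch, data_dim) * 0.5 + 2.0  # toy stand-in for real images
    z = torch.randn(batch, latent_dim)

    # Discriminator step: learn to label real samples 1 and generated samples 0.
    d_loss = bce(D(real), ones) + bce(D(G(z).detach()), zeros)
    opt_d.zero_grad(); d_loss.backward(); opt_d.step()

    # Generator step: try to make the Discriminator label the fakes as real.
    g_loss = bce(D(G(z)), ones)
    opt_g.zero_grad(); g_loss.backward(); opt_g.step()
```

Repeating these two steps is exactly the alternating competition described above: as the Discriminator improves, the Generator is forced to produce samples that look increasingly like the training data.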
Fig. 4. Summary of the generative adversarial network training algorithm. Taken from: Generative
Adversarial Networks.
5 Discussion
AI art can never fully replace human creativity, since a human artist’s involvement is
always necessary to select the subject and its context to achieve the desired outcome.
Generative AI art may be aesthetically pleasing, but it still requires the skills of a human
artist to complete it. Since AI art is still a relatively new phenomenon, it is difficult to
determine how it will impact the traditional art industry. However, it appears that AI art
presents conventional artists with more of an opportunity than a threat, given the situation
of the market right now. A new kind of art is being produced with the development of
artificial intelligence, and it fetches high prices. There is no doubt that AI-generated art
is growing in popularity and value, despite the skepticism of some who object to the idea
of purchasing works created by machines. For instance, “Edmond de Belamy,” a piece
of art by the art collective “Obvious,” was sold for US$432,000 in October 2018. The
artwork presents itself with a visually pleasing aesthetic, featuring warm and neutral
colors. The portrait features a face composed of brush strokes and geometric shapes,
subtly blending shades of brown and beige, and dark hair drawn with a single curved
line. The composition is framed with a white background, allowing the focus to be
solely on the portrait. Overall, the painting is executed in a pastel palette that reflects a
classical aesthetic with a modern touch. The portrait exudes a sense of familiarity and
resembles a classic European renaissance painting which evokes a sense of nostalgia yet
still stands out as something unique and contemporary. The portrait has been painted
with oil on canvas as a nod to traditional portraiture painting and as a signifier of its
relationship to the past. The combination of the medium, the colors, and the form creates
a memorable work of art that stands out in its own right. AI is a technological revolution
that has taken the world by storm. It has made it possible to complete complex tasks
with less cost and effort, and has enabled multitasking, reducing the burden on resources
available. AI is always active, never experiences downtime, and can be paused or stopped
whenever necessary. Furthermore, it offers people with disabilities improved capabilities, and it accelerates and improves decision-making processes. Being extremely versatile,
AI may be used across industries and has a large market potential. AI has been an
invaluable help for artists in various aspects. For example, AI-powered algorithms can
be used to generate new artwork based on existing visual data. AI can also help in video
editing and image manipulation, allowing for more creative functionality. AI can even
be used for 3D modeling, creating models with higher levels of detail in a fraction of the time. AI-powered tools can also be used to offer suggestions to a composer while creating
new music, thus providing them with a larger range of creative possibilities. Finally, AI
can be used to enhance digital marketing campaigns and create engaging content. All
of these capabilities unlock new possibilities for artists, helping them create new and
unique works of art.
6 Conclusion
The recent advances in AI have opened up new opportunities for artists to explore
production and introspection through the use of high-performance technology and newly
developed algorithms. Generative models, which are able to generate unique data by
being fed with abundant training data, are often employed in these applications. AI
acts as a tool, similar to a brush or a piano, in the creation of art. Its creative potential
depends on how artists use it. The impact of AI on the art world is significant. It enables
artists to explore more complex and intricate works of art and supports experimentation
with new techniques and tools. AI is being used to create new forms of art, such
as by constructing algorithms that “learn” an aesthetic by studying a large number of
images and then generating images with the same aesthetic. This reduces the workload
involved in creating art, allowing artists more time to focus on their creative abilities.
While some fear that AI may eventually replace artists, it is not likely to happen soon.
AI is capable of producing technically proficient works of art, but it lacks the ability to
create truly innovative and unique designs. The potential of AI art is both exhilarating
and unsettling. Although it cannot yet produce truly original and unique works, it has
the power to generate texts, movies, and images that can be misconstrued by humans.
As digital technology continues to evolve, artists are using it to expand their creative
options and explore new territory.
References
1. Joshua Thomas, J., Pillai, N.: A deep learning framework on generation of image descrip-
tions with bidirectional recurrent neural networks. In: Advances in Intelligent Systems and
Computing, vol. 866, pp. 219–230 (2019)
2. Karagianni, A., Geropanta, V.: Smart homes: methodology of IoT integration in the architec-
tural and interior design process—a case study in the historical center of Athens. In: Advances
in Intelligent Systems and Computing, vol. 1072, pp. 222–230 (2020)
3. Hossain, M.A., Hasan, M.A.F.M.R.: Activity identification from natural images using deep
CNN, pp. 693–707 (2021)
4. Jahan, T., Hasan, S.B., Nafisa, N., Chowdhury, A.A., Uddin, R., Arefin, M.S.: Big data for
smart cities and smart villages: a review. In: Lecture Notes in Networks and Systems, vol.
371, pp. 427–439 (2022)
5. Nake, F.: Paragraphs on Computer Art, Past and Present, Feb 2010
6. Seshia, S.A., Sadigh, D., Sastry, S.S.: Toward verified artificial intelligence. Commun. ACM
65(7), 46–55 (2022)
7. Hossain, S.M.M., Sumon, J.A., Alam, M.I., Kamal, K.M.A., Sen, A., Sarker, I.H.: Classifying
Sentiments from movie reviews using deep neural networks. In: Lecture Notes in Networks
and Systems, vol. 569 LNNS, pp. 399–409 (2023)
8. Gatys, L.A., Ecker, A.S., Bethge, M.: Texture and art with deep neural networks. Curr. Opin.
Neurobiol. 46, 178–186 (2017)
9. Jing, Y., Yang, Y., Feng, Z., Ye, J., Yu, Y., Song, M.: Neural style transfer: a review. IEEE
Trans. Vis. Comput. Graph. 26(11), 3365–3385 (2020)
10. Hertzmann, A.: Visual indeterminacy in GAN art. Leonardo 53(4), 424–428 (2020)
11. Nti, I.K., Adekoya, A.F., Weyori, B.A., Nyarko-Boateng, O.: Applications of artificial intel-
ligence in engineering and manufacturing: a systematic review. J. Intell. Manuf. 33(6),
1581–1601 (2022)
12. Goenaga, M.A.: A critique of contemporary artificial intelligence art: who is ‘Edmond de
Belamy’? AusArt (2020). [Link]
13. Zhang, M., Kreiman, G.: Beauty is in the eye of the machine. Nat. Human Behav. (2021).
[Link]
Introducing Set-Based Regret for Online
Multiobjective Optimization
School of Mathematical and Statistical Sciences, Clemson University, Clemson, SC 29634, USA
{ksavary,wmalgor}@[Link]
1 Introduction
Multiobjective optimization (MO) considers problems with at least two objective func-
tions conflicting with each other. Finding an optimal solution to these conflicting objec-
tives requires the decision maker (DM) to balance tradeoffs, as an improvement in one
objective results in a degradation of other objectives. MO has been studied extensively,
with many methodologies and algorithms for finding a partial, or full, solution set, called
the efficient or Pareto set [3, 12].
Online optimization (OO) addresses iterative and dynamic decision processes under
uncertainty and can be viewed as an extension of stochastic optimization. While the
latter assumes a priori knowledge of probability distributions of the uncertain variables
modeling uncertainty, the fundamental assumption in OO is that the decision’s outcome
is unknown when the decision is being made. For example, in agriculture, farmers are
unaware of the market price of their crops at the time of planting, so it is difficult to
maximize revenue with unknown sale prices. This decision situation illustrates an online
single-objective optimization (OSO) problem. However, in addition to future market
price, they may also consider whether a specific crop replenishes nutrients in the soil or
requires higher maintenance costs for water or fertilizer. This now becomes an online
multiobjective optimization (OMO) problem as there are multiple conflicting, unknown
objectives when the decision to plant crops needs to be made.
OO algorithms and other learning algorithms are important tools for machine learn-
ing methodologies, but they have mainly been studied in a variety of single-objective
settings (e.g., [7, 11, 14]). There are limited studies in MO settings for OO. An OMO
problem is first introduced in [13], along with concepts from competitive analysis, such
as c-competitiveness, that are extended to an MO setting. OMO problems also arise in
game theory with vector-valued payoff objective functions [10]. Algorithms for online
stochastic MO are proposed in [8, 9]. While the concept of regret is a fundamental
notion in OSO, its multiobjective counterpart has not been defined with the exception
of a multiobjective robust regret that is introduced in MO under uncertainty rather than
OMO [5].
In this paper, we introduce a general concept of regret for OMO problems to recognize
that the solutions to MO problems come in the form of efficient sets rather than a single
optimal vector, and the solution values come in the form of Pareto sets rather than a
single number. Thus, this new concept utilizes sets rather than numbers to account for
MO algorithms that output a set [1], as opposed to the algorithms proposed in [13] that
output single solutions. Additionally, we propose an approach to measuring this set-
based regret and observing its behavior. The theoretical results rely only on convexity
with no assumption that objective functions follow known distributions or scenarios.
The OMO problem is formulated in Sect. 2. The useful results from OSO, which are
reviewed in Sect. 3, are extended in Sect. 4 to a multiobjective setting. In Sect. 5, the
set-based regret is generally defined and measured using the concept of hypervolume.
Section 6 contains numerical results while Sect. 7 concludes the paper.
2 Problem Formulation
Let Rn and Rp denote Euclidean vector spaces as the decision and objective space,
respectively. Let $u, v \in \mathbb{R}^p$. We write $u < v$ if $u_i < v_i$ for each $i = 1, \ldots, p$; $u \leq v$ if $u_i \leq v_i$ for each $i = 1, \ldots, p$ with at least one $i$ such that $u_i < v_i$; and $u \leqq v$ if $u_i \leq v_i$ for each $i = 1, \ldots, p$. Furthermore, let $\mathbb{R}^p_{\geq/(>)} = \{u \in \mathbb{R}^p : u \geq (>)\, 0\}$.
Let {1, 2, . . .} denote an infinite sequence of iterations. The general online convex
multiobjective optimization problem (MOP) solved at t ∈ {1, 2, . . .} is
$$\min_{x \in X} f^t(x) = \big(f_1^t(x), \ldots, f_p^t(x)\big). \qquad (OMOP(t))$$
Let $E^t_{w/\cdot/} \subseteq X$ and $P^t_{w/\cdot/} \subseteq Y^t$ denote the set of all (weakly) efficient solutions and (weak) Pareto outcomes to (OMOP($t$)). The weighted-sum scalarization of (OMOP($t$)) is
$$\min_{x \in X} w^t f^t(x), \qquad (OSOP(w^t))$$
where $w^t \in \mathbb{R}^p_{\geq}$ is a vector of weights at iteration $t$. In (OSOP($w^t$)) and throughout this paper, the transpose notation is intentionally left off to avoid confusion when additional indices are introduced.
The following well-known theorem establishes the relationship between weakly
efficient solutions to (OMOP($t$)) and optimal solutions to (OSOP($w^t$)). If the objective functions in (OMOP($t$)) are strictly convex, then $E^t = E^t_w$.

Theorem 1 [4] Let $w^t \in \mathbb{R}^p_{\geq}$. If $x^t_w = x(w^t)$ is an optimal solution to (OSOP($w^t$)), then $x^t_w$ is a weakly efficient solution to (OMOP($t$)).
After T iterations, all data becomes known for the following offline MOP,
$$\min_{x \in X} \sum_{t=1}^{T} f^t(x). \qquad (MOP(T))$$
Let $E^T_{w/\cdot/} \subseteq X$ and $P^T_{w/\cdot/} \subseteq \mathbb{R}^p$ denote the set of all (weakly) efficient solutions and (weak) Pareto outcomes to (MOP($T$)). For the offline weighted-sum problem
$$\min_{x \in X} \sum_{t=1}^{T} w^t f^t(x), \qquad (SOP(w^t, T))$$
let $E^T_{w,w/\cdot/} \subseteq X$ and $P^T_{w,w/\cdot/} \subseteq \mathbb{R}^p$ denote the set of all (weakly) efficient solutions and (weak) Pareto outcomes to (MOP($T$)), computed by solving (SOP($w^t$, $T$)) for a sequence of weights $\{w^t\}_{t=1}^T$.
In the next section, we apply a single-objective perspective to (OSOP($w^t$, $T$)) in order to later relate it to (OMOP($T$)) by utilizing Theorem 1.

3 Regret in Online Single-Objective Optimization
Once (OMOP(T )) is scalarized into (OSOP(wt , T )), the latter can be considered as
an online single objective optimization problem (OSOP(T )) with a convex objective
function f t : Rn → R, t = 1, . . . , T . Since the objective function is unknown at the
time the solution is computed, it is expected that an online algorithm, AO , will incur
loss. The desired property of AO is to minimize the loss as this algorithm progresses
and learns from consecutive iterations.
Let xt denote an online solution to (OSOP(T )) computed at each iteration t, t =
1, . . . , T , by AO . In [14], the success of AO is computed as the regret (or loss) acquired
by AO after T iterations have been completed and is defined as
$$regret(T) = \sum_{t=1}^{T} f^t(x^t) - \min_{x \in X} \sum_{t=1}^{T} f^t(x),$$
where the first term in the difference is the actual cost incurred by AO and the second
term is the true optimal objective value that is calculated in hindsight. The goal is to
show that the average regret,
$$avgregret(T) = \frac{regret(T)}{T},$$
tends to 0 as T → ∞, which illustrates how AO can learn from the consecutive objective
functions to improve future online solution choices.
The Online Gradient Descent (OGD) algorithm is also proposed in [14] to solve
(OSOP(T )), and the conditions for average regret to approach zero are derived. Given
an initial feasible solution $x^1 \in X$, solution $x^{t+1}$ is computed using the known previous solution $x^t$ and previous objective function $f^t$ as
$$x^{t+1} = \mathrm{Proj}\big(x^t - \eta_t \nabla f^t(x^t)\big),$$
where the scalars $\eta_t > 0$ are learning rates, $\nabla f^t(x^t)$ denotes the gradient of $f^t$ at $x^t$, and $\mathrm{Proj}(x)$ is a projection of the solution onto the feasible set $X$. Let $\|X\| := \max_{x_1, x_2 \in X} \|x_1 - x_2\|_2$ and $\|\nabla f\| := \max_{x \in X} \|\nabla f^t(x)\|_2$ for $t = 1, 2, \ldots, T$. The following results hold.
Let $x^T$ be an optimal solution to (OSOP($T$)) computed in hindsight. Then, for $t = 1, \ldots, T$,
$$\nabla f^t(x^t)\big(x^t - x^T\big) \leq \frac{\|x^t - x^T\|_2^2 - \|x^{t+1} - x^T\|_2^2}{2\eta_t} + \frac{\eta_t}{2}\,\|\nabla f\|_2^2.$$
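As a concrete illustration, the following is a minimal Python sketch of the OGD update under stated assumptions: simple quadratic losses $f^t(x) = \frac{1}{2}\|x - a^t\|^2$ revealed only after $x^t$ is chosen, a Euclidean-ball feasible set, and learning rates $\eta_t = t^{-1/2}$. None of these specifics come from the paper; they are chosen only to make the sketch self-contained.

```python
import numpy as np

def proj_ball(x, radius=1.0):
    """Project x onto the Euclidean ball of the given radius (the feasible set X)."""
    norm = np.linalg.norm(x)
    return x if norm <= radius else x * (radius / norm)

rng = np.random.default_rng(0)
n, T = 5, 500
x = proj_ball(rng.standard_normal(n))  # initial feasible solution x^1
cum_loss = 0.0
for t in range(1, T + 1):
    a = rng.standard_normal(n)              # f^t(x) = 0.5*||x - a||^2, revealed after x^t is chosen
    cum_loss += 0.5 * np.sum((x - a) ** 2)  # loss incurred by the online choice
    grad = x - a                            # gradient of f^t at x^t
    x = proj_ball(x - t ** -0.5 * grad)     # OGD step with eta_t = t^{-1/2}
print("average loss:", cum_loss / T)
```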
4 Weighted-Sum Regret
The OGD algorithm is applied to (OSOP(wt , T )), and xtw denotes the online solution
computed at each t, t = 1, . . . , T . We define the weighted-sum regret as
$$regret_{WS}\big(T, \{w^t\}_{t=1}^T\big) := \sum_{t=1}^{T} w^t f^t(x^t_w) - \min_{x \in X} \sum_{t=1}^{T} w^t f^t(x),$$
where $\{w^t\}_{t=1}^T$ denotes the sequence of weight vectors $w^t \in \mathbb{R}^p_{\geq}$, $t = 1, \ldots, T$. Let $\|\nabla F\| := \max_{t} \max_{i} \max_{x \in X} \|\nabla f_i^t(x)\|_2$ and let $x^T_w$ be an optimal solution to (SOP($w^t$, $T$)).
Then, for $t = 1, \ldots, T$,
$$w^t \nabla f^t(x^t_w)\big(x^t_w - x^T_w\big) \leq \frac{\|x^t_w - x^T_w\|_2^2 - \|x^{t+1}_w - x^T_w\|_2^2}{2\eta_t} + \frac{\eta_t}{2}\, p\, w_{\max}^2\, \|\nabla F\|_2^2.$$
The remainder of this proof follows directly from the proof of Theorem 2 by utilizing
this modified bound instead of the bound in Corollary 1.
We call this optimal weight adaptive and present its effects on regret computation in
Sect. 6. Based on Theorem 3, we obtain the following result.
Corollary 2 If the OGD algorithm is applied to (OSOP(wt , T )) with learning rates
$\eta_t = t^{-1/2}$, then
$$\sum_{t=1}^{T} \big(f_i^t(x^t_w) - f_i^t(x^T_w)\big) \leq O(\sqrt{T}), \quad \text{for each } i = 1, \ldots, p,$$
and the modified weighted-sum regret is bounded,
$$\widetilde{regret}\big(T, \{w^t\}_{t=1}^T\big) = \sum_{i=1}^{p} \sum_{t=1}^{T} \big(f_i^t(x^t_w) - f_i^t(x^T_w)\big) \leq O(\sqrt{T}).$$
Proof Recall that by assumption, solutions of (OSOP($w^t$, $T$)) are finite with $f^t(x) \geq 0$, so that $\sum_{t=1}^{T} w^t f^t(x^T_w) \leq O(C)$. Then, by Theorem 3, $regret_{WS}(T, \{w^t\}_{t=1}^T) \leq O(\sqrt{T})$ implies
$$\sum_{t=1}^{T} w^t f^t(x^t_w) \leq O(\sqrt{T}) + \sum_{t=1}^{T} w^t f^t(x^T_w) \leq O(\sqrt{T}) + O(C) = O(\sqrt{T}).$$
Thus,
$$\sum_{t=1}^{T} w^t f^t(x^t_w) = \sum_{t=1}^{T} \sum_{i=1}^{p} w_i^t f_i^t(x^t_w) = \sum_{i=1}^{p} \sum_{t=1}^{T} w_i^t f_i^t(x^t_w) \leq O(\sqrt{T}),$$
and therefore, for each $i = 1, \ldots, p$,
$$\sum_{t=1}^{T} w_i^t f_i^t(x^t_w) \leq O(\sqrt{T}).$$
Let $w_i^{\min} := \min_t w_i^t > 0$. Then
$$w_i^{\min} \sum_{t=1}^{T} f_i^t(x^t_w) \leq \sum_{t=1}^{T} w_i^t f_i^t(x^t_w) \leq O(\sqrt{T}).$$
Thus, $w_i^{\min} \sum_{t=1}^{T} f_i^t(x^t_w) \leq O(\sqrt{T})$ implies
$$\sum_{t=1}^{T} f_i^t(x^t_w) = \frac{1}{w_i^{\min}}\, w_i^{\min} \sum_{t=1}^{T} f_i^t(x^t_w) \leq \frac{1}{w_i^{\min}}\, O(\sqrt{T}) = O(\sqrt{T}).$$
Summing over the objectives then yields
$$\sum_{i=1}^{p} \sum_{t=1}^{T} \big(f_i^t(x^t_w) - f_i^t(x^T_w)\big) \leq \sum_{i=1}^{p} O(\sqrt{T}) = p\, O(\sqrt{T}) = O(\sqrt{T}).$$
Corollary 2 indicates that the weighted-sum regret bound remains the same if the
weights are not included in the computation. Thus, when we recompute the regret as
the sum of the contributions of each individual objective function only using the online
weighted-sum solutions, we obtain the same sublinear bound.
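To make the two regret computations in Corollary 2 concrete, the following sketch computes the weighted-sum regret and the modified per-objective regret from stored objective values. The arrays of online and hindsight-optimal losses are illustrative stand-ins; in practice they would come from running OGD on the scalarized problem and from solving (SOP($w^t$, $T$)).

```python
import numpy as np

rng = np.random.default_rng(1)
T, p = 100, 2
# f_online[t, i] = f_i^t(x_w^t): loss of the online solution (hypothetical values).
f_online = rng.random((T, p)) + 0.1
# f_offline[t, i] = f_i^t(x_w^T): loss of the hindsight-optimal solution (hypothetical values).
f_offline = rng.random((T, p)) * 0.1
# w[t, i] = w_i^t: weight sequence (here fixed at 1/2 for both objectives).
w = np.full((T, p), 0.5)

regret_ws = np.sum(w * f_online) - np.sum(w * f_offline)   # weighted-sum regret
regret_mod = np.sum(f_online - f_offline)                   # modified regret: weights dropped
print("weighted-sum regret:", regret_ws, "modified regret:", regret_mod)
```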
We now turn attention to a regret computation when sets of online and Pareto solutions
are considered.
5 Set-Based Regret
We recognize that the weighted-sum regret does not consider the multiobjective nature
of (OMOP(T )), where every iteration yields a solution set of vectors rather than a single
vector. Before we propose a definition of the multiobjective, or set-based regret, that
relies on the notion of the Pareto set, we introduce the concept of nondominated points
for an arbitrary set.
Given the set $Y^T \subset \mathbb{R}^p$ at iteration $T$, we can extract the most significant information from this set by applying the nondominance operator $N$: we have $Y^T_N = N(Y^T)$.
Fig. 1 depicts the offline Pareto set, the accumulated online outcome set, and the
regret region. Since various mathematical tools can be used to measure this region,
below we present such a tool.
Fig. 1. a. Pareto set and accumulated online outcome set. b. Region for set-based regret (shaded
area).
The hypervolume indicator has been widely used in evolutionary multiobjective opti-
mization algorithms to assess the quality of an approximation to the Pareto set by its
closeness to the true Pareto set.
In this derivation, we relax Definition 3 and use the complete accumulated online
outcome set as a set of reference points. Given p objectives, the hypervolume between
a single accumulated outcome point $y \in Y^T$ and a Pareto point $p \in P^T_{w/\cdot/}$, such that $p \leq y$, is defined as
$$HV(y, p) := \prod_{j=1}^{p} \big(y_j - p_j\big).$$
We note that this definition may account for some regions between the sets $Y^T_w$ and $P^T_{w,w/\cdot/}$ more than once while other regions may be left out. For $p$ objective functions, the average set-based regret via hypervolume is
$$avgregret_{HV}(T) = \frac{1}{T^p} \prod_{j=1}^{p} \sum_{t=1}^{T} \big(f_j^t(x^t_w) - f_j^t(x^T_w)\big).$$
Thus, the bound in Theorem 4 results in an average set-based regret via hypervolume
approaching 0 as T tends to infinity regardless of the precision carried in the definition
of this regret. Because $Y^T_{w,N} \subseteq Y^T_w$, this bound holds when using only the nondominated accumulated online points as in Definition 3.
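A minimal sketch of this relaxed hypervolume measurement is given below, summing $HV(y, p)$ over all pairs with $p \leq y$ componentwise; consistent with the remark above, some regions may be counted more than once. The two point sets are illustrative assumptions rather than outputs of the experiments in Sect. 6.

```python
import numpy as np

def hv_pair(y, p):
    """Hypervolume of the box between a dominated point y and a Pareto point p <= y."""
    return float(np.prod(y - p))

def set_based_regret_hv(online_pts, pareto_pts):
    """Sum HV(y, p) over all pairs with p <= y componentwise (relaxed definition)."""
    total = 0.0
    for y in online_pts:
        for p in pareto_pts:
            if np.all(p <= y):
                total += hv_pair(y, p)
    return total

online = np.array([[2.0, 3.0], [2.5, 2.5]])   # accumulated online outcome points
pareto = np.array([[1.0, 2.0], [1.5, 1.5]])   # offline Pareto points
print(set_based_regret_hv(online, pareto))
```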
6 Numerical Results
We demonstrate the online multiobjective optimization process on the strictly convex
biobjective problem of the form
$$\min_{x \in \mathbb{R}^n} \left( \tfrac{1}{2}\, x Q_1^t x + p_1^t x,\ \ \tfrac{1}{2}\, x Q_2^t x + p_2^t x \right) \qquad (OBOP(t))$$
$$\text{s.t. } \|x\| = 1,$$
and consider two cases for the matrices $Q_1^t$ and $Q_2^t$: they are (i) diagonal with elements in $[-5, 5]$, and (ii) dense with elements in $[0, 1]$. We scalarize (OBOP($t$)) into
(OSOP(wt )) and run T iterations of the OGD algorithm with different choices of weights
satisfying $w_1^t, w_2^t \in [0, 1]$ and $w_1^t + w_2^t = 1$. At every iteration, we use either a collection of fixed weights or the adaptive weight, an optimal solution to (2). Table 1 shows
the average regret for (OBOP(T )) in cases (i) and (ii) with these weight choices.
Table 1. Average regret for (OBOP(T ))(i) and (OBOP(T ))(ii) with minimum average regret
shaded
Case | Iter. | (0, 1) | (1/10, 9/10) | (1/2, 1/2) | (9/10, 1/10) | (1, 0) | Adaptive
(i) | 100 | 1.154 | 1.069 | 0.8566 | 1.032 | 1.106 | 0.9912
(i) | 1000 | 0.6102 | 0.5253 | 0.3315 | 0.458 | 0.5239 | 0.3919
(i) | 5000 | 0.289 | 0.2422 | 0.1527 | 0.2353 | 0.2797 | 0.1818
(ii) | 100 | 0.3114 | 0.3043 | 0.2878 | 0.2459 | 0.2312 | 0.2479
(ii) | 1000 | 0.03693 | 0.03516 | 0.03192 | 0.02995 | 0.02964 | 0.02785
(ii) | 5000 | 0.009553 | 0.008823 | 0.007512 | 0.007791 | 0.008108 | 0.006742
For case (i), the minimum average regret occurs for $w^t = (\tfrac{1}{2}, \tfrac{1}{2})$. For case (ii), the
minimum average regret occurs with the adaptive weight as the algorithm progresses. In
general, it is difficult to determine which weight choice results in the smallest (average)
regret. However, the results in Table 1 agree with Theorem 3 and show the average regret
does tend to 0 as the iterations increase regardless of the weight choice. Figure 2 plots
the adaptive weight computed for each iteration. The thicker band in the middle of the plot corresponds to $w^t = (\tfrac{1}{2}, \tfrac{1}{2})$, implying the computed adaptive weight is frequently close to this vector. Note that this $w^t$ corresponds to the smallest average regret in case
(i).
7 Future Work
We have shown that the regret for OSO computed with the OGD algorithm can naturally
be extended to the multiobjective case as the weighted-sum regret when this algorithm
is applied to the online weighted-sum problem scalarizing the OMO problem. We also
proposed a general notion of set-based regret and computed it with the hypervolume by
solving appropriate weighted-sum problems.
Future research directions include alternate ways to compute the set-based regret,
addressing unconstrained OMO, or designing other OMO algorithms.
References
1. Deb, K.: Multi-Objective Optimization Using Evolutionary Algorithms. Wiley, USA (2001).
[Link]
2. Désidéri, J.A.: Multiple-gradient descent algorithm for multiobjective optimization. C. R.
Math. 350(5–6), 313–318 (2012). [Link]
3. Ehrgott, M.: Multicriteria Optimization. Springer, Berlin (2005). [Link]
4. Geoffrion, A.: Proper efficiency and the theory of vector maximization. J. Math. Anal. Appl.
22, 618–630 (1968). [Link]
5. Groetzner, P., Werner, R.: Multiobjective optimization under uncertainty: A multiobjective
robust (relative) regret approach. Eur. J. Oper. Res. 296(1), 101–115 (2022). [Link]
6. Guerreiro, A., Fonseca, C., Paquete, L.: The hypervolume indicator: Computational problems
and algorithms. ACM Comput. Surv. 54(6-119) (2022). [Link]
7. Hazan, E.: Introduction to online convex optimization. Found. Trends Optim. 2(3–4), 157–325
(2016). [Link]
8. Liu, S., Vicente, L. N.: The stochastic multi-gradient algorithm for multi-objective optimiza-
tion and its application to supervised machine learning. Ann. Oper. Res. 1–30 (2021). [Link]
9. Mahdavi, M., Yang, T., Jin, R.: Stochastic convex optimization with multiple objectives. In:
Proceedings of the 26th International Conference on NIPS‘13, Vol. 1, pp. 1115–1123. Curran
Associates Inc., USA (2013). [Link]
10. Mannor, S., Perchet, V., Stoltz, G.: Approachability in unknown games: Online learning meets
multi-objective optimization. In: Conference on Learning Theory, pp. 339–355. PMLR (2014)
11. Shalev-Shwartz, S.: Online learning and online convex optimization. Found. Trends Mach.
Learn. vol. 4(2), pp. 107–194 (2012). [Link]
12. Sumpunsri, S., Thammarat, Ch., Puangdownreong, D.: Multiobjective Lévy-flight firefly algo-
rithm for multiobjective optimization. In: Vasant, P., Zelinka, I., Weber, G.-W. (eds) Intelligent
Computing and Optimization, ICO 2020. Advances in Intelligent Systems and Computing.
Springer, Berlin, vol 1324, pp. 145–153 (2021). [Link]
13. Tiedemann, M., Ide, J., Schöbel, A.: Competitive analysis for multi-objective online algo-
rithms. In: Rahman, M.S., Tomita, E. (eds) WALCOM: Algorithms and Computation, WAL-
COM 2015, Lecture Notes in Computer Science, vol. 8973. Springer, Cham (2015). [Link]
14. Zinkevich, M.: Online convex programming and generalized infinitesimal gradient ascent.
In: Proceedings of the Twentieth International Conference on ICML ‘03, pp. 928–935. AAAI
Press (2003). [Link]
The Concept of Optimizing the Operation
of a Multimodel Real-Time Federated Learning
System
Elizaveta Tarasova(B)
Abstract. The paper considers the optimization of federated learning for a real-
time system with independent devices and centralized aggregation. A multimodel
setting is considered, that is, various machine learning models participate in the
system, while each device can work with any number of them. A new method of managing this system, together with operating algorithms for its elements, is proposed in order to improve the quality of global models and minimize delays between updates on
local devices. The proposed approach is based on the idea of introducing a control
center to control the correctness of updates, as well as considering the aggregation
station as a single-processor scheduling theory model with due dates. In this case,
the due dates are the predicted points in time for the next local update. The proposed
approach is tested on synthetic data and compared in various combinations, as
well as with the basic method, in which the aggregation starts on a first-come-
first-served basis for models that have received updates from all devices. Testing
has shown that the new approach in the maximum configuration and with the SF
algorithm for more than 90% of examples reduces the delay on local devices by
an average of more than 19% compared to the rest.
1 Introduction
The quality of machine learning (ML) modeling results directly depends on the set of
incoming data. With a small volume or low representativeness, the results of the models
may be inaccurate or even incorrect. The problem of lack or imbalance of data is faced
by many researchers, as well as companies that use ML models in their work. Federated
Learning (FL) offers a solution to this problem while maintaining data privacy.
This approach is based on two principles: data confidentiality and collaborative
learning. That is, on the one hand, organizations can enrich their models with data
from other similar sources, and on the other hand, they will save their customers’ data.
FL is not applicable to all ML models. For example, models based on trees cannot be
considered in the framework of FL, since these trees cannot be retrained. Also, k-nearest
neighbors (KNN) methods or similar methods may not benefit from using the FL concept, as they store the training data itself. FL is mainly suited to parameterized models, such as neural networks.
When building a model based on federated learning, many management and planning
questions arise. It is required to determine the frequency of model recalculation on local
devices, which determines the frequency of updates to the station, taking into account the
transmission time. At the same time, it is necessary to control the frequency of updates
aggregation at the Station. Also, local devices can have different computing power and
different intensity of incoming data, which affects the completion time of calculations.
On the one hand, all updates, and recalculations on devices can be done with the arrival
of new data (for example, a fixed size) and aggregation can be carried out at the Station
when new data arrives. However, with this update approach, incoming updates to the
Station for aggregation may not be correct when applied alone. It is also possible to
introduce additional parameters that affect the efficiency of the system. For example,
the readiness of devices to wait for the aggregation to complete and receive an updated
model. FL planning and optimization are used to resolve such issues.
In this work, we consider a multimodel federated learning system with centralized aggregation and independent devices on which local updates of various machine learning models occur. Several unrelated models can be used on each device. The system works in real time with ML models that require constant updating with new incoming data. The paper proposes a new control method for this FL system and an online FL scheduling algorithm that optimize its operation in order to improve the quality of global models and minimize delays on local devices. Improving the quality of global models is achieved through control of the start of local updates on the devices, the start of aggregation, and the aggregation order for different models. The choice of the aggregation method itself remains open.
2 Related Work
As mentioned earlier, FL is based on two main ideas: privacy protection and collabora-
tive learning, which requires the aggregation of model updates received from different
devices. Consider approaches for these two ideas. There are various aggregation meth-
ods. The basic approach, calculating the arithmetic average of all updates coming from devices, is not robust to corruption: even one incorrect update in a cycle can greatly
degrade the global model for all devices. An alternative approach is the median due to
its robustness to outliers. The paper [4] considers the application of the classical mul-
tidimensional generalization of the median (geometric, spatial, or L1-median [2]) to
FL. Other approaches include the weighted average, where a significance weight is determined for each local device; the Bayesian non-parametric structure for FL neural networks, in which the global model is formed by matching neurons in local models [9]; and various variations of methods for summing and averaging the local and global models [3].
There are various studies aimed at managing FL in order to optimize various criteria.
For example, in [8], planning of the FL model without information about the state of the
wireless channel and the statistical characteristics of clients based on multi-armed bandits
is considered for the minimization of the time spent on FL training in general, including
transfer time and local computation time. The paper [8] proposes a client scheduling (CS) algorithm based on the upper confidence bound (CS-UCB) policy. Another goal of FL planning might be
to minimize the number of training cycles. The learning cycle for FL includes updating
on local devices, pushing updates, and aggregation on the Station [4]. In the [4] an
approach was proposed to reduce the number of training cycles along with minimizing
the time interval per communication round by jointly considering the significance of
local updates and the latency problems of each client. The proposed scheme is based on
the ε-greedy algorithm.
Federated learning can be considered in a multi-job setting (Multi-Job Federated Learning—MJ-FL): several models can be updated on local devices and aggregated
at the Station. There are two possible cases: with a parallel learning process on local
devices and with a sequential one. The direct approach is to train each job separately
using the mechanism described in [3] while using existing single job FL scheduling (for
example, FedAvg [3]). With such simple parallelism, however, the devices are not fully utilized and the system efficiency is low. In addition, direct adaptation of existing
scheduling methods to multi-job FL cannot provide efficiency and accuracy at the same
time. For parallel learning, the research [10] proposed two approaches to improve the
efficiency of the learning process and the accuracy of the global model. The effectiveness
of training is determined by a cost model based on training time and data validity of
different devices in the learning process of different jobs. The first approach proposed
in [10] is based on reinforcement learning and is more suitable for complex jobs, the
second one uses Bayesian optimization and is suited for simple jobs.
Thus, when optimizing in order to reduce the time of the learning process, one should
also take into account not only the computing and communication capabilities, but also
the balance of the data.
3 Problems
We consider a system based on federated learning for independent devices on which machine learning models of the same types are run. The general model has devices with unrelated customer databases and K groups of machine learning models. Each device can host a different number of machine learning models from different groups: $\{M_{nk}\}$, where $n$ is the device number and $k$ is the group number. A Station performs centralized aggregation of global updates. The federated approach preserves confidentiality: the data remain on local devices, training occurs locally, and only the parameters of the model are sent for aggregation. The system is considered over time. It is required to define a system control algorithm that optimizes the selected objective functions.
The general structure of the model under consideration is shown in Fig. 1. At the input, the system accepts data (for example, customer transactions) under the assumption that the data arrive with different frequencies and that different devices have different customers. The input data fall into the Unconnected devices area, onto their respective devices, where local updates occur. After that, the parameters of the models are transferred to an optimized federated learning system (OFL System) for a global update (aggregation). The results produced by the system are updated parameters (call them global) that are returned to the devices.
The problems that must be solved to optimize the system under consideration are:
1. Control of local devices: at what point in time a local update should occur, and how to regulate model updates based on the frequency of data arrival at the device.
2. Control of update correctness: how to determine whether aggregation is possible on the basis of the local updates received, given that updates may not have arrived from all devices; at what point in time to send updates to aggregation; and from how many devices an update must arrive for correct aggregation.
3. Control of the update sequence: in what order aggregation should proceed when updates have been received for different models.
Solving the problems described above will improve the quality of the global model (problems 1 and 2) and increase the efficiency of the system in terms of reducing update delays on local devices (problems 1 and 3). To solve the posed problem, we propose a concept for optimizing the system's operation that is based on predicting the frequency of data arrival, scheduling algorithms, and game theory.
This section presents the new system control method, the general operating algorithm, and the operating algorithm of each level of the system. The section also presents several scheduling-theory algorithms for the operation of one of the system levels and their adaptation to it.
From the point of view of FL, the system contains a set of devices (see Fig. 2: Device 1, Device 2, Device 3), each of which has an isolated set of customers and data about them, and the update center, the "Aggregation Station" (AS), on which aggregation is carried out. The AS can sequentially aggregate different models. To solve the problems described in the previous section, the model was divided into three levels (see Fig. 2): local devices—Level 1—problem 1; the Hub (control center)—Level 2—problem 2; the Aggregation Station (AS)—Level 3—problem 3.
General algorithm for the operation of the system:
1. The principle of operation of local devices is set, which determines under what conditions a local update begins.
2. After the local update is completed, the device transfers the obtained model parameters.
3. An abstract device, the Hub, is introduced, which collects models after local updates and decides, based on its given operating principle, at what point to send an update for each group of models to the aggregation station as a job. The Hub is also the link between the federated learning concept and the operation of the aggregation station as a single-machine scheduling model, as it forms, from the many locally updated parameters, jobs with the set of parameters required by the station.
4. When the aggregation station becomes free, the selected scheduling algorithm is used to decide which of the available jobs to start aggregating. After the aggregation, the updated model is sent back to all the devices that participated in the aggregation.
The described sequence of actions (local update and its sending, collection of updates at the Hub, transmission to the AS, and broadcast back to the devices) constitutes one iteration of the system's operation. Some devices may not participate in a particular iteration: if the update from device $n$ was not received before the job was sent to the AS, this device does not participate in the aggregation and does not receive the subsequent global update. Next, the operating principles of each level are considered.
The following designations are introduced. The parameter $c_{nk}$ is the number of data units related to model $M_{nk}$ received since the beginning of the last local update of this model on device $n$, that is, data that have not yet participated in an update. $F_{ink}$ is the set of parameters obtained after the local update of $M_{nk}$ in iteration $i$, and $\bar{F}_{ink}$ is the set of parameters received after the global update of $M_{nk}$ in iteration $i$. $C_{nk}$ is a parameter, calculated empirically, that determines what data volume is optimal for starting the next update; it is constant during the system's operation. $D_{ink}$ is the due date of the current iteration of the global update of model $k$ on device $n$: the predicted point in time at which new data of volume $C_{nk}$ will have accumulated (the blue point in Fig. 3). $\alpha_{ink}$ is the "weight" parameter of the device, reflecting the quality of the resulting update.
Local device operation algorithm:
1. If the device is free, then for model $M_{nk}$ of device $n$ from group $k$, after the start of iteration $i$:
1.1. If $c_{nk} > C_{nk}$ and $\bar{F}_{ink}$ has arrived at the device, the $i + 1$ local update iteration starts. Otherwise: wait.
1.2. At the start of a local update, the parameter $D_{ink}$ is predicted for each model.
2. If several models are ready for updating (by condition 1), then the model with the minimum $D_{ink}$ is given priority (see the sketch after this algorithm).
3. After the local update is completed, $F_{ink}$, $D_{ink}$, and $\alpha_{ink}$ are sent to the OFL System.
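As a sketch, the trigger and priority rule above can be expressed as follows; all field names and values are hypothetical stand-ins for the quantities $c_{nk}$, $C_{nk}$, $\bar{F}_{ink}$, and $D_{ink}$:

```python
def pick_model_to_update(models):
    """Return the ready model with the smallest predicted due date D, or None.

    A model is ready when more than C new data units have arrived (c > C)
    and the latest global parameters have been received (got_global).
    """
    ready = [m for m in models if m["c"] > m["C"] and m["got_global"]]
    if not ready:
        return None  # condition 1 not met for any model: keep waiting
    return min(ready, key=lambda m: m["D"])  # rule 2: smallest D_ink first

models = [
    {"name": "M_n1", "c": 120, "C": 100, "got_global": True, "D": 14.0},
    {"name": "M_n2", "c": 300, "C": 250, "got_global": True, "D": 9.5},
]
print(pick_model_to_update(models)["name"])  # -> "M_n2"
```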
where time is the current time on the processor. It is required to set a processor operation rule that minimizes the maximum latency for the final schedule $S$:
$$L(S) = \max_{v_i \in V} \max(\tau_i + p_i - D_i,\, 0) \rightarrow \min, \qquad (1)$$
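A minimal sketch of one such rule is given below: sequencing the waiting jobs by earliest due date (EDD), which is known to minimize the maximum lateness on a single machine. The job data are illustrative, and this sketch does not reproduce the SF, MINDL, or WSPT algorithms studied in the paper.

```python
from dataclasses import dataclass

@dataclass
class Job:
    name: str
    p: float  # aggregation (processing) time on the station
    D: float  # due date: predicted time of the next local update

def edd_schedule(jobs):
    """Sequence jobs by earliest due date and report the objective (1)."""
    time, max_late = 0.0, 0.0
    for job in sorted(jobs, key=lambda j: j.D):
        time += job.p                                  # job completes at this time
        max_late = max(max_late, time - job.D, 0.0)    # max(tau_i + p_i - D_i, 0)
    return max_late

jobs = [Job("model A", p=2.0, D=3.0), Job("model B", p=1.0, D=2.0), Job("model C", p=4.0, D=10.0)]
print("max lateness:", edd_schedule(jobs))
```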
Since at this stage of the research the proposed concept has not yet been applied to specific machine learning models, the quality of the system is assessed by theoretical considerations. The quality of the entire system directly depends on the quality of the local models on the local devices, which in turn depends on the quality and volume of data, as well as on the quality of previous local and global updates. At the same time, adding a Hub to the system makes it possible to control the impact of local updates on the global model, which leads to better global updates.
For comparison, the system was tested on synthetic data in the following variations:
1. The system does not use suggested ideas. Model aggregation occurs when all devices
are ready, that is, at the moment when all local updates are completed. The sequence
of aggregation on the AS is determined by the principle “first come, first served”. The
objective function is also defined, as in other cases.
2. The system works without the second level: aggregation occurs when all devices for a
specific model are ready, the aggregation order is determined by the MINDL, WSPT,
SF algorithms.
3. The system works on the basis of the entire proposed concept with different algorithms
for the third level: MINDL, WSPT, SF.
The use of option 3 with the SF algorithm gives the best performance in terms of the objective function: an improvement was observed in 92% of the examples in each group, by an average of 25% compared to option 1, and in 90% of the examples, by an average of 19%, compared to option 2 with any of the algorithms. For the remaining examples, the objective functions attained close values when comparing option 3 and option 2. At the same time, when compared with the option without the Hub, in some cases option 3 yields objective function values that are lower (better, since minimization was considered) by more than 38%. This is because the Hub reduces the delay in waiting for updates from different devices, as it allows updates that have not been received from all devices to be put up for aggregation. Analysis of the system operation using different algorithms for the aggregation station confirms the results obtained in [6]: for all groups of examples, the new algorithms in more than half of the cases generated schedules with objective function values better than those of the MINDL and WSPT algorithms. The improvement averaged from 4 to 43%.
6 Conclusion
This research was devoted to the optimization of multimodel federated learning. The
paper proposes a new approach to managing a system with independent devices and
centralized aggregation. In the proposed method, the system was divided into three lev-
els of control, for each of which an operation algorithm was developed. The new method
solves the three posed problems: update sequencing on local devices, global update correctness, and aggregation sequencing when updates for multiple models are ready.
Along with improving the quality of global models, the new method minimizes delays
between updates on local devices. The proposed new approach was tested on synthetic
data. Various combinations of the proposed method were tested with each other, as well
as with the basic method, in which aggregation starts according to the first-come-first-
served rule for models that have received updates from all devices. Testing has shown
that the new approach in the maximum configuration and with the SF algorithm for more
than 90% of examples reduces the delay on local devices by more than 19% on average.
This article is devoted to the description of the new concept, as well as its theoretical
justification and confirmation of the effectiveness of testing on synthetic data. Further
research will be devoted to applying this approach to systems with specific machine
learning models, developing rules for calculating various parameters, such as device
weights or the send rate for the Hub.
References
1. Carlier, J.: The one machine sequencing problem. Eur. J. Oper. Res. 11, 42–47 (1982)
2. Maronna, R., Martin, D., Yohai, V.: Robust Statistics: Theory and Methods (2006). [Link]
3. McMahan, H.B., Moore, E., Ramage, D., Hampson, S., Arcas, B.A.Y.: Communication-
Efficient Learning of Deep Networks from Decentralized Data (2016). [Link]
4. Pillutla, K., Kakade, S.M., Harchaoui, Z.: Robust aggregation for federated learning. IEEE
Trans. Signal Process. 70, 1142–1154 (2022). [Link]
5. Smith, W.: Various optimizers for single-stage production. Naval Res. Logist. Q. 3, 59–66
(1956)
6. Tarasova, E.: Online algorithm for single machine scheduling problem with deadlines. Actual
Sci. Res. Mod. World 7(63)(2), 177–181 (2020). [Link]
7. Tarasova, E., Grigoreva, N.: Accounting for large jobs for a single-processor online model.
In: 2022 8th International Conference on Optimization and Applications (ICOA), pp. 1–5
(2022). [Link]
8. Xia, W., Quek, T.Q.S., Guo, K., Wen, W., Yang, H.H., Zhu, H.: Multi-Armed bandit-based
client scheduling for federated learning. IEEE Trans. Wirel. Commun. 19(11), 7108–7123
(2020). [Link]
9. Yurochkin, M., Agarwal, M., Ghosh, S., Greenewald, K., Hoang, T.N., Khazaeni, Y.:
Bayesian nonparametric federated learning of neural networks (2019). [Link]
10. Zhou, C., Liu, J., Jia, J., Zhou, J., Zhou, Y., Dai, H., Dou, D.: Efficient Device Scheduling
with Multi-Job Federated Learning. [Link]
Ambulance Priority Dispatch Under Multi-Tiered Response by Simulation
Abstract. Emergency medical service (EMS) systems are health care systems that provide medical care and transportation of patients to hospitals when needed, thus potentially saving lives. We determine an optimal policy for multiple-unit dispatch with call priorities to increase the overall patient survival probability, with emphasis on the policy for priority2 calls. In addition, we present some extensions to the model by considering real on-scene conditions, such as the fact that dispatch decisions for priority2 calls can be changed. We study models of dispatching emergency vehicles under a multi-tiered response by considering a specific alternative policy for priority2 calls. Simulation models for multiple-unit dispatch with multiple call priorities are used to investigate the performance of possible ambulance dispatching policies, and simulation is used to obtain the optimal dispatching policy for priority2 calls based on a real-world problem. The results show that the better policy provides an improvement over the closest policy of 42 lives saved per 10,000 calls.
1 Introduction
Medical priority dispatching is used to improve efficiency of EMS systems. The strategy
of medical priority dispatching is to consider a faster response time to life-threatening
patients. The study of pre-hospital mortality in EMS systems by Kuisma et al. [1] showed
that dispatching a far ambulance to low priority patients does not negatively impact pre-
hospital mortality rates. Therefore, the decisions regarding how to dispatch ambulances
do not adversely affect low priority patients in terms of survival rates, since these patients
are non-critical. Medical priority dispatching may make the closest ambulances unavail-
able to non-serious patients. Emergency calls are classified into three priority levels upon dispatch, and their classification may be updated once a responder reaches the scene and makes a further assessment, based on the research of Nicholl et al. [2]. For example, BRAVO calls (priority2) are potentially life-threatening calls that could be upgraded to life-threatening (priority1). In this case, priority2 calls need a paramedic unit and rapid transport.
2 Literature Review
Since the late 1960s, rapid US population growth has generated an increasing demand for ambulance services. In 1967, the study of EMS systems began with work to determine the distribution and workload of the existing systems. King and Sox [5] were the first to
conduct a study to evaluate the workload of EMS systems to improve performance. In
1972, the EMS systems were analyzed in a study of dispatching models to minimize
average response time, as seen in Carter et al. [6]. This study considered two ambulance
units which were dispatched to each call, given the different locations of the units.
The study then determined the district boundary for each unit to respond to calls. The
EMS planners then studied the number and type of ambulances to deploy to certain
locations, as seen in Eaton et al. [7]. This study researched how to design the EMS
systems to reduce cost. The idea to study dispatching policies was proposed by Repede
and Bernarde [8]. They evaluated two alternative dispatch policies. The first policy
always sends the closest ambulance to the call. Lim et al. [9] studied the impact of the
dispatch policies on the performance of EMS systems. The effect of dispatch strategies
on the performance of EMS systems is based on a code determining the urgency of calls. Recent studies considering fairness among demand zones were presented by Chanta et al. [10]. Several previous works relevant to fairness analyzed models without taking the real on-scene conditions of accidents into account. Zarkeshzadeh et al. [11] considered the improvement of ambulance dispatching using a novel hybrid method combining a network centrality measure with the nearest neighbor method. Nicoletta et al. [12] studied ambulance location and dispatching models; they formulated a model and validated its robustness. Enayati et al. [13] formulated the model of the
integrated redeployment and dispatching model. They studied the model under personnel
workload limitation. In this work we extend the multiple-unit dispatch with multiple call
priorities proposed in Sudtachat et al. [3] and [4]. However, our work differs in that we consider the realistic on-scene condition that potentially life-threatening calls might need the paramedic unit.
3 Model Description
In this section, we discuss the EMS systems which are extended from the original model
in Sudtachat et al. [3]. This paper proposes the multiple unit dispatch of EMS systems
while considering on-scene conditions. The systems have three call priorities and two
types of ambulances (the ALS unit and BLS units). The response area is partitioned
into demand zones, each with a distinct dispatch preference list. When a call arrives at
the dispatch center, the dispatch planners make the decision about which ambulances
to assign in response to the call according to the preference lists. In the case when all
ambulances in the preference list are busy, the call will transfer to another dispatch center.
The classification of call priorities is also considered in this paper. The dispatching of
different types of ambulances depends on call priorities. The characteristics of the EMS
systems, described in Sudtachat et al. [3], showed that priority1 calls require a double
dispatch of the ALS unit and the BLS unit. Single dispatching of the BLS unit is when
the BLS unit is assigned to respond the priority2 or 3 calls. In this paper, we consider the
dispatching policy for priority2 calls. The configuration of the EMS system process with
BLS-upgrade of priority2 calls is described in Fig. 1. The main assumptions for priority1 and 3 calls are the same as in the original study of Sudtachat et al. [3], except for the situation of on-scene upgrades/downgrades for priority2 calls. The adapted models of possible on-scene situations for priority2 calls are:
• Vehicle dispatch decisions: Priority2 calls require a single dispatch (BLS unit). We
dispatch the available BLS unit for priority2 calls according to two possible policies;
priority1 (closest policy) or 3 (heuristic policy) calls in which the inputs for the
dispatching policies of priority1 or 3 calls are based on results from Sudtachat et al. [3].
To obtain high efficiency of EMS systems, we compare the two alternative policies.
We decide to dispatch the BLS unit by choosing the policy that provides the better
overall expected survival probability of life-threatening patients. If the first ambulance
in the rank of the ordered preference list is busy, the next one will be dispatched. If all BLS units are busy, the call is transferred to another dispatch center (see the sketch following Fig. 1).
• On-Scene: If patients require BLS care at on-scene priority2 calls, BLS serves and
then returns to the home station base. However, if patients require ALS care, judged
by the BLS personnel, the BLS unit will provide the initial care, wait for the ALS unit
to determine if patients need transportation to hospitals, and then head back to their
original station. The dispatch of the ALS unit is assigned according to the available
ALS units in rank of the ordered preference list for prirority2 calls. We refer to this
as BLS-upgrade.
Fig. 1. The EMS system process with BLS upgrade for priority2 calls
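The preference-list dispatch rule for priority2 calls described above can be sketched as follows; the zone names, unit names, and status data are illustrative assumptions:

```python
def dispatch_bls(call_zone, preference_lists, busy):
    """Return the first idle BLS unit in the zone's ordered preference list.

    Returns None when all listed units are busy, in which case the call
    is transferred to another dispatch center.
    """
    for unit in preference_lists[call_zone]:
        if not busy[unit]:
            return unit
    return None  # all listed units busy: transfer the call

preference_lists = {"zone1": ["BLS2", "BLS1", "BLS3"]}
busy = {"BLS1": False, "BLS2": True, "BLS3": False}
print(dispatch_bls("zone1", preference_lists, busy))  # -> "BLS1"
```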
Table 1. Types of calls, types of ambulances and their corresponding dispatching policies
The status of each ambulance can be: "idle" (at station base), "busy" (on the way to respond to a call), or "busy" (serving and providing transportation for a call). Table 2 shows the state space of the EMS systems.
We use integer numbers to represent the status of the ambulances. We generate different modules to dispatch ambulances according to the attributes of the calls (priority and location). When a call arrives, it is assigned a call priority and a demand zone. In addition, the dispatch centers decide which units to dispatch depending on call priorities. We then assign a status to the dispatched ambulances according to the states shown in Table 2. Double dispatch assigns a pair of ambulances. When the first ambulance arrives on the scene, we calculate the survival rate using the response time of the first ambulance to priority1 calls. The survival probability is then calculated using Eq. (1) in Sudtachat et al. [3]. Considering a single dispatch for priority2 calls with status 3, when the BLS unit arrives on the scene of the accident, the state changes to "waiting" for another unit (status 4) if patients need ALS care. When the ALS unit arrives on scene, the status of both ambulances changes to "busy" (offering service to patients). After the ambulances provide service to the patients and head back to their original stations, the state becomes "idle" again. We investigate the better dispatch policy when the EMS systems reach the steady state.
The simulation models analyze different dispatching policies and evaluate patient
survival probability. When a priority2 call arrives in the system, we dispatch the BLS
unit according to the dispatching policy for priority1 or priority3 calls. When the BLS
unit arrives at the scene of the accident, dispatchers decide whether to upgrade. If there
is no upgrade, the BLS unit provides care to the patient and then returns to its home
station. If a BLS-upgrade occurs, we dispatch the ALS unit according to the policy in
which the closest ALS unit is always sent. If all ALS units are busy, the BLS unit provides
initial care and waits for the next available ALS unit. The simulation models assume the
EMS system operates 24 h per day. In this study, we investigate the better policy for
dispatching the BLS unit for priority2 calls, where we treat the dispatching policy for
priority2 calls like the policy for priority1 or priority3 calls. The Process Analyzer in
Arena Version 14 is used to obtain the better policy. The simulators run 1800 replications
per simulation, with a half width of 0.0001 for the 95% confidence interval around the
survival probability. Each replication runs for 10 weeks to reach steady-state results.
The performance of the two alternative policies is compared to obtain the better policy.
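The half-width criterion quoted above can be verified offline from the per-replication outputs. A minimal sketch, assuming the survival probability of each replication is collected in a list:

```python
import statistics
from scipy import stats

def ci_half_width(samples, confidence=0.95):
    """Half width of the two-sided t confidence interval for the mean."""
    n = len(samples)
    t_crit = stats.t.ppf(0.5 + confidence / 2.0, df=n - 1)
    return t_crit * statistics.stdev(samples) / n ** 0.5

# replications = [survival probability from each of the 1800 replications]
# print(ci_half_width(replications))  # stopping target: <= 0.0001
```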
5 Conclusion
We analyzed a simulation model for multiple-unit dispatch in EMS systems. We consider
classifications of call priorities and two types of ambulances. The simulation model is
formulated for a given dispatching policy and can be implemented for an elaborately
realistic EMS system.
Table 3. Response times (Lognormal distribution) and proportion of calls
Table 4. Service times (Exponential distribution) and proportion of priority1, 2, and 3 calls
We consider the dispatching policy of BLS units based on possible situations that can
change at the scene of an accident for priority2 calls. We compare two alternative
dispatching policies of BLS units for
priority2 calls. Numerical results showed that the dispatching policy of the BLS unit for
priority2 calls treated like priority3 calls (heuristic policy) provided improvement over
the closest dispatching policy. When the average busy probability of servers was over 80
percent, there was no difference between the heuristic and the closest dispatching policy
for priority2 calls. This offers a managerial insight for EMS administrators: they could
implement the better policy for priority2 calls to dispatch the proper ambulance in each
situation. The policies can be pre-installed for dispatching, and the ambulances can be
monitored through integration with a GPS system. In future work, we will expand the
dispatching model to consider the location of ambulances, which can further increase
the expected survival probability of EMS systems.
Table 5. Comparison of the two alternative policies and the closest policy for priority2 calls with 30% upgrades

ID | Arrival rate (calls/hr) | Policy (treat like) | Resp. time P1 (mins) | Resp. time P2 (mins) | Resp. time P3 (mins) | % covered P1 < 9 mins | % covered P2 < 15 mins | % covered P3 < 22 mins | % of total coverage | Sur. Prob. | % Imp. | # of imp. lives saved/10,000 calls
1 | 0.25 | Closest | 7.28  | 13.31 | 13.25 | 0.74 | 0.72 | 0.84 | 0.78 | 0.2545 | 1.53 | 39
  |      | P1      | 7.19  | 12.85 | 17.52 | 0.74 | 0.73 | 0.75 | 0.75 | 0.2576 |      |
  |      | P3      | 7.17  | 17.55 | 17.49 | 0.74 | 0.59 | 0.75 | 0.73 | 0.2584 |      |
2 | 0.50 | Closest | 8.22  | 14.27 | 14.19 | 0.70 | 0.69 | 0.83 | 0.75 | 0.2354 | 2.42 | 57
  |      | P1      | 8.08  | 13.63 | 17.66 | 0.71 | 0.71 | 0.75 | 0.72 | 0.2398 |      |
  |      | P3      | 8.03  | 17.67 | 17.61 | 0.72 | 0.58 | 0.75 | 0.71 | 0.2411 |      |
3 | 0.75 | Closest | 9.72  | 14.79 | 14.72 | 0.66 | 0.67 | 0.82 | 0.70 | 0.2112 | 1.75 | 37
  |      | P1      | 9.56  | 14.28 | 16.88 | 0.67 | 0.70 | 0.76 | 0.69 | 0.2149 |      |
  |      | P3      | 9.59  | 16.80 | 16.82 | 0.67 | 0.60 | 0.76 | 0.68 | 0.2149 |      |
4 | 1.00 | Closest | 11.06 | 15.08 | 15.01 | 0.63 | 0.67 | 0.81 | 0.66 | 0.1919 | 2.50 | 48
  |      | P1      | 10.89 | 14.62 | 16.88 | 0.64 | 0.69 | 0.76 | 0.66 | 0.1958 |      |
  |      | P3      | 10.82 | 16.98 | 16.87 | 0.64 | 0.60 | 0.76 | 0.65 | 0.1967 |      |
5 | 1.25 | Closest | 11.83 | 15.25 | 15.16 | 0.60 | 0.66 | 0.81 | 0.64 | 0.1811 | 1.71 | 31
  |      | P1      | 11.70 | 14.90 | 16.56 | 0.61 | 0.67 | 0.77 | 0.63 | 0.1842 |      |
Fig. 2. Comparison of the expected survival probability for two alternative policies and the closest
policy
References
1. Kuisma, M., Holmström, P., Repo, J., Määttä, T., Nousila-Wiik, M., Boyd, J.: Prehospital
mortality in an EMS system using medical priority dispatching: a community-based cohort
study. Resuscitation 61(3), 297–302 (2004)
2. Nicholl, J., Coleman, P., Parry, G., Turner, J., Dixon, S.: Emergency priority dispatch sys-
tems—a new era in the provision of ambulance services in the UK. Pre-hosp. Immediate Care
3, 71–75 (1999)
3. Sudtachat, K., Mayorga, M.E., McLay, L.A.: Recommendations for dispatching emergency
vehicles under multitiered response via simulation. Int. Trans. Oper. Res. 21, 581–617 (2014)
4. Sudtachat, K.: Strategies to improve the efficiency of emergency medical service (EMS)
systems under more realistic conditions (Doctoral dissertation, Clemson University) (2014)
5. King, B.G., Sox, E.D.: An emergency medical service system: analysis of workload: San
Francisco Area. Assoc. Sch. Public Health 82(11), 995–1008 (1967)
6. Carter, G.M., Chaiken, J.M., Ignall, E.: Response areas for two emergency units. Oper. Res.
20(3), 571–594 (1972)
7. Eaton, D.J., Daskin, M.S., Simmons, D., Bulloch, B., Jansma, G.: Determining emergency
service vehicle deployment in Austin, Texas. Interfaces 15(1), 96 (1985). CPMS/TIMS Prize
Papers
8. Repede, J.F., Bernardo, J.J.: Developing and validating a decision support system for locating
emergency medical vehicles in Louisville, Kentucky. Eur. J. Oper. Res. 75(3), 567–581 (1994)
9. Lim, C.S., Mamat, R., Braunl, T.: Impact of ambulance dispatch policies on performance of
emergency medical services. Intell. Transp. Syst. IEEE Trans. 12(2), 624–632 (2011)
10. Chanta, S., Mayorga, M.E., Kurz, M.E., McLay, L.A.: The minimum p-envy location problem:
a new model for equitable distribution of emergency resources. IIE Trans. Healthc. Syst. Eng.
1(2), 101–115 (2011)
11. Zarkeshzadeh, M., Zare, H., Heshmati, Z., Teimouri, M.: A novel hybrid method for improving
ambulance dispatching response time through a simulation study. Simul. Model. Pract. Theory
60, 170–184 (2016)
12. Nicoletta, V., Lanzarone, E., Bélanger, V., Ruiz, A.: A cardinality-constrained robust approach
for the ambulance location and dispatching problem. In: Health Care Systems Engineering:
HCSE, Florence, Italy, May 2017 3, pp. 99–109. Springer International Publishing (2017)
13. Enayati, S., Özaltın, O.Y., Mayorga, M.E., Saydam, C.: Ambulance redeployment and dis-
patching under uncertainty with personnel workload limitations. IISE Trans. 50(9), 777–788
(2018)
Seeds Classification Using Deep Neural
Network: A Review
Abstract. In the past few years, agricultural production studies have gained popularity
and shown signs of rapid development. Recent advances that use different kinds of
computer technology make it easier for farmers to do their work. Many factors affect
agricultural output, but the efficiency of the seeds is the most important. Seed
classification can provide more information about quality work, controlling seed quality,
and finding impurities. Automated classification of seeds has generally been done based
on factors such as colour, texture, and size. Most of the time, specialists do this by
visually inspecting samples, which is very time-consuming. Adapting technologies for
automated classification of seeds can be helpful in this regard. Fortunately, a good
number of studies using Deep Neural Networks (DNN) have already been carried out
around the globe. In this paper, we provide a review of seed classification techniques
with a strong focus on DNN. The goal of this research is to create a system for
categorizing seeds based on visual and morphological traits.
1 Introduction
A dramatic increase in the agriculture industry has coincided with an increase in
population around the world. As a result, it is projected that a wider variety of products
and better-quality goods will emerge. This presumption makes the classification of
products a most pressing subject. One may classify the expanding range of objects using
attributes such as product class, size, clarity, brightness, texture, product images, and
product colors. In this respect, goods have been categorized based on the detection of
diseased items, detection of product freshness, detection of weeds, counting of products,
edge characteristics, and textures. Seed classification using possible deep neural
networks is presented in Fig. 1.
Applications for deep artificial neural networks have become increasingly common
in recent years. Many different tasks, such as categorization, analysis, image processing,
picture commentary, sound processing, question-answering, and language translation,
are frequently handled using deep learning. DNN further makes it possible to analyze
more samples using previously gathered data. In contrast with old-school neural network
methods, machine learning methods, including supervised and unsupervised learning,
combined with deep neural network topologies, can give accurate categorization
results.
In this study, we use the wheat seeds dataset, which contains three different varieties of
seeds: Kama, Rosa, and Canadian. While there are already many articles based on this
kind of dataset, we believe there is more room for improvement in the current setup, and
hence we have chosen this dataset. To construct the data, seven geometric parameters
of wheat kernels were measured:
1. Area A
2. Perimeter P
3. Compactness C = 4πA/P²
4. Length of kernel
5. Width of kernel
6. Asymmetry coefficient
7. Length of kernel groove
All of these parameters were real-valued continuous. Therefore, we will use this
dataset for classification tasks using various types of algorithms and try to determine
which method will perform better on the gathered dataset.
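Compactness is the only derived feature in the list above; a minimal sketch of its computation follows (the sample values match the first record of the public UCI seeds file and are purely illustrative):

```python
import math

def compactness(area: float, perimeter: float) -> float:
    """C = 4*pi*A / P**2; equals 1.0 for a perfect circle, less otherwise."""
    return 4.0 * math.pi * area / perimeter ** 2

# A typical Kama-variety kernel: A = 15.26, P = 14.84.
print(round(compactness(15.26, 14.84), 3))  # -> 0.871
```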
Forty articles on seed classification using deep neural networks are reviewed in this
study. We first go over the methodology for writing the review, then the specifics of the
articles we found, and finally a summary of the research these papers have produced.
2 Methodology
This section describes how the relevant papers were selected: which databases, which
search terms, and which inclusion-exclusion criteria were used. The literature considered
was sourced from prestigious publishers such as Springer, ACM, MDPI, Elsevier, IEEE
Access, Wiley, and Taylor & Francis. The following search terms were included in this
audit: “Seed classification”, “Seed prediction”, “Seed Classification using Deep
Learning”, “Seed prediction using Deep Learning”, and “Seed and Deep Learning”.
This section also explains how the papers were screened: the papers were strictly
assessed for reliability and validity before being taken as the final sample of papers to
review.
3 Paper Collection
A significant stage was identifying the primary data sources from which the original
articles were compiled. For the selection of primary studies, Google Scholar served
as our main search engine. To locate pertinent publications, we also took into account
several notable academic publishers and databases, including Scopus, IEEE, MDPI,
Hindawi, ScienceDirect, ACM Digital Library, and ResearchGate.
(Machine learning OR Deep learning OR Neural network) AND (Image processing)
The process of choosing studies is done in two stages: the first is primary selection,
and the second is the final selection.
The primary sources were first selected on the basis of their titles, keywords, and
abstracts; when these three criteria proved insufficient, the evaluation was expanded to
include the conclusions section. This phase yielded 222 publications, including
conference papers, journals, summaries, books, symposiums, and other pieces of
literature.
The potential of a research paper was evaluated using several criteria, such as the
breadth, significance, and future research influence of the research. Table 1 displays
the inclusion/exclusion criteria that were applied to select papers for inclusion/rejection.
A crucial step in preparing a survey report is choosing reliable research articles. Not
every research article that has been published in a certain field is of a high caliber. In
order for our survey to encompass both the most recent research and earlier research
efforts in the fields of seed categorization, DNN, CNN, and image processing, we chose
five noteworthy research pieces from reputable journals published in five different
time frames. Table 2 provides a chronological overview of the projects carried out across
various time periods for the readers’ perusal.
We chose six publications on seed classification, three papers on DNN/CNN, and
two papers on image processing systems from a variety of publishers, including ACM,
ResearchGate, MDPI, Hindawi, ScienceDirect, IEEE, and others. Figure 2 depicts the
distribution of the 11 final selected papers by data source.
were available (KNN, SVM, CNN). KNN, a widely used pattern recognition technique,
was employed first; it calculates the distance between the unknown sample and the
predetermined training set of samples. SVM was then used for comparison with KNN
and CNN. Support vector machines are a popular technique for pattern recognition in
which the data are mapped into a feature space and a separating boundary is constructed
to maximize the margin between the nearest examples of distinct classes. Convolutional
Neural Networks were employed for pattern recognition on the provided datasets in
addition to KNN and SVM. Applying all three approaches to this dataset demonstrates
that CNN outperformed the other two: the CNN model properly categorized all four
varieties.
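As a hedged illustration of this kind of comparison (not the reviewed paper's code), the KNN-versus-SVM baseline takes only a few lines with scikit-learn; a CNN would be evaluated the same way on the image inputs. The Iris data here is only a stand-in for a seed feature table:

```python
from sklearn.datasets import load_iris  # stand-in for a seed feature table
from sklearn.model_selection import train_test_split
from sklearn.neighbors import KNeighborsClassifier
from sklearn.svm import SVC

X, y = load_iris(return_X_y=True)  # replace with seed features and labels
X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.25, random_state=0)

for name, model in [("KNN", KNeighborsClassifier(n_neighbors=5)),
                    ("SVM", SVC(kernel="rbf", C=1.0))]:
    model.fit(X_tr, y_tr)
    print(name, "accuracy:", round(model.score(X_te, y_te), 3))
```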
Machine learning techniques were used for seed classification in [8], where the
classification of 14 well-known seed types was performed using powerful deep-learning
methods. The techniques applied comprise model checkpointing, a decayed learning
rate, and hybrid weight modification. Seed classification can offer more information for
impurity detection, seed quality control, and quality production. To determine which
method performs better on this dataset of 14 common seeds, the authors deployed
deep-learning technology, making use of classical algorithms as well as the CNN
method. The categorization accuracy on the training set is approximately 99%. Using
the proposed model, it was discovered that a given image was in fact a mustard seed
and that the model had selected it as such, establishing a true-positive foundation for
the prototype. In the second stage, there are likewise examples of positive results,
negative results, and correct negative findings. Working with the model involves
ongoing monitoring in some sense. After all 14 seed types have been classified using
the suggested approach, it is evident that the model correctly classifies all seeds with an
accuracy of 99%; the proposed model performed at 99% accuracy, and the training
accuracy in the early phase was also 99%.
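A minimal Keras sketch of the training setup named above (model checkpointing plus a decayed learning rate). The architecture, input size, and file name are assumptions for illustration, not the configuration used in [8]; only the 14-class output follows the text:

```python
import tensorflow as tf

model = tf.keras.Sequential([
    tf.keras.layers.Input(shape=(128, 128, 3)),         # assumed image size
    tf.keras.layers.Conv2D(32, 3, activation="relu"),
    tf.keras.layers.MaxPooling2D(),
    tf.keras.layers.Conv2D(64, 3, activation="relu"),
    tf.keras.layers.MaxPooling2D(),
    tf.keras.layers.Flatten(),
    tf.keras.layers.Dense(14, activation="softmax"),     # 14 seed classes
])
model.compile(optimizer=tf.keras.optimizers.Adam(1e-3),
              loss="sparse_categorical_crossentropy", metrics=["accuracy"])

callbacks = [
    tf.keras.callbacks.ModelCheckpoint("best.keras", save_best_only=True),
    tf.keras.callbacks.ReduceLROnPlateau(factor=0.5, patience=3),  # LR decay
]
# model.fit(train_ds, validation_data=val_ds, epochs=30, callbacks=callbacks)
```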
A novel technique is presented in [11], in which a deep convolutional neural network
(CNN) is employed as a general feature extractor and the extracted features are
classified with algorithms such as balanced k-nearest neighbors, cubic SVM, nonlinear
SVM, and artificial neural networks (ANN), the latter used to categorize data not initially
included. Multiple other models, including SVMs with quadratic terms and balanced
KNN, were also used. Through numerous tests, this research shows that the CNN-ANN
combination is the most effective: the categorization reaches 98.1% accuracy, with
similar figures for speed, memory, and false-positive rate. The results of the investigation
show that the CNN-ANN classifier can be effectively used to sort the many types of
maize seeds available. According to [2],
a classification method for differentiating pepper seeds has been put out on the basis of
neural networks and computer vision. Images with a 1200 dpi resolution were recorded
using an office scanner. Images of the color, shape, and texture of pepper seeds were
gathered in order to categorize them. The characteristics were calculated from different
color components and stored in a feature database. For this case’s classification, a neural
network approach was applied. The number of characteristics was reduced from 257
to 10 so that the procedures could accurately classify this particular dataset of pepper
seeds. Cross-validation rules were also utilized. The number of nodes varies
designs for classifying. With an accuracy of 87.3%, ResNet-50 outperforms the other
transfer-learning architectures. It is therefore safe to say that X-ray scanning has promise
as an imaging method for identifying and categorizing seeds based on morphological
characteristics.
In [14], multispectral imaging techniques were used to perform discriminant analysis of
15 different cultivars of eggplant seeds. From multispectral images, 78 features of
individual eggplant seeds were obtained. The overall approach in this paper is to classify
eggplant seeds using SVM and a 1D CNN. An accuracy of 90.12–100% is achieved using
a support vector machine model, while a one-dimensional CNN achieves 94.80%
accuracy. A 2D CNN was also used to distinguish seed varieties and achieves 90.67%
accuracy. All results suggest that genetic and environmental factors may cause the seed
coat to differ from that of the female parent. According to [6], deep learning has recently
achieved great success in the field of picture recognition. The purpose of that study is to
evaluate the efficacy of CNN in classifying seeds by quality in comparison with more
conventional ML techniques. The quality categorization of seeds was established with a
convolutional neural network and compared against conventional approaches. The
experiment demonstrates that, in the specific area of picture identification, the deep
learning approach outperforms the machine learning techniques: on the dataset
connected to this paper, deep learning (CNN) performs with an accuracy of 95%
immediately after the model is run, while SVM with SURF features performs with an
accuracy of 79.2%. Visualizations were used to inspect the extracted features for each
of the CNN's layers, and a scatterplot was used to display the probabilities of the
decision findings. From here, CNNs can be used to
automate the production of seeds. In [13], ANN, DNN, and CNN classification results
are presented. The study's data set includes 75,000 rice grain photos of five varieties:
Ipsala, Karacadag, Arborio, Basmati, and Jasmine. Extracted feature data were fed to
the ANN and DNN, while the CNN was introduced using the data set pictures directly.
Sensitivity, specificity, prediction, F1 score, accuracy, false positive rate, and false
negative rate were supplied in tables; the time the network designs need to compute
these metrics depends on the equipment used to run the methods. ANN, DNN, and CNN
all achieved 99.87% classification accuracy, so according to the results, the study's
algorithms for classifying rice varieties may be successfully utilized. Image processing
and computer vision are non-destructive and cheaper alternatives in agriculture, and
image processing-based computer vision applications beat manual methods (Barbedo
2016). Manually classifying grains is time-consuming and costly: manual procedures
rely on professionals' experience, and large-scale manual evaluations can be slow. Rice
quality is rated using digital picture attributes, for example measurements of length,
breadth, brightness, and the frequency with which rice grains break. Grain
characteristics may be extracted from images, and classification uses ANN, SVM, DNN,
and CNN. This effort intends to enhance rice classification non-destructively: ANN and
DNN classified 106 rice pictures based on morphological and color features, while CNN
classified 75,000 rice pictures from 5 classes without preprocessing, and the
classification success of ANN, DNN, and CNN was compared. In [15], the five varieties
of rice that are most prevalent in Turkey
were used to perform this work. To achieve the greatest results throughout the
experiment, the authors used a CNN model. Five different types of photographs of rice
seeds were included in the collection. Residual Network (ResNet) and EfficientNet
architectures were employed, together with the Visual Geometry Group (VGG) network
in the pre-stage, for additional comparison. The CNN model was created, and when it
was compared to the other three models, it became clear that the VGG model performed
the best in the CNN example, with 97% accuracy. In [12], classification of wheat at the
varietal level is done using a standard
deep-learning method. CNN was utilized to categorize grain pictures of wheat seeds
into four different groups: Hd, Arz, Vitron, and Simeto. The CNN was trained to a new
degree of excellence because this boosts the classification performance. For their model,
the authors used a dataset that contains 31,606 images of single grains, collected from
different regions of Algeria and captured with different scanners. Once the
pre-processing runs perfectly, the model starts giving its performance. The results reveal
that DenseNet201 can achieve an efficiency of 95.68 percent at its finest, with a span of
85.36 to 95.68 percent, and the proposed model gives a reliable result. In [7], after seven
days of fertilization,
convolutional neural networks were used in the classification stage to separate aberrant
from normal jasmine rice seedlings. To evaluate the entire suggested model, 1562
sample photos of jasmine rice seeds were used as the dataset, containing about 76
mixtures of regular and atypical seeds. Of the samples, 25% were kept aside for testing,
while 75% were used for the training set. Six CNN hidden layers were constructed after
the pre-phase, where 0 denotes normalcy and 1 denotes abnormality. Following all the
procedures, the CNN achieves a very respectable accuracy of 99.57%. In [3], after wheat
is harvested, the seeds must be sorted by quality, size, variety, and so on. Measuring
and analyzing wheat seeds by hand is time-consuming and error-prone. The system in
[3] leverages the well-known K-means clustering technique to assess and categorize
wheat seeds more quickly and with better confidence. K-means is based on minimizing
the sum of squared errors: after receiving the data points, it divides them into k groups
according to their proximity to the cluster centers. To classify each data point, we look
at the center of each group and place the point in the group whose center is nearest.
K-means clustering is applied to the wheat sample from the UCI Machine Learning
Repository. Smart systems sense, act, and control: they can evaluate a situation, make
decisions based on facts, and perform suitable tasks, and they include sensors, central
control units, information transmitters, and actuators. Agriculture uses smart systems,
for example to categorize wheat seeds by quality and other characteristics. As each item
is described by certain attributes, attribute differences may be used to categorize
objects. Each attribute is counted as a dimension; thus, an object is a multi-dimensional
attribute vector, and the aim is to place an object in the group with the most comparable
attribute values. K-means is the most popular, recognized, and commonly used
technique for clustering a dataset of objects into a specified number of groups. It
distributes data into k clusters in an unsupervised manner; each cluster has a centroid,
and each data point is assigned to the cluster whose centroid is closest to it. Results of
testing the system on the UCI Machine Learning Repository seed dataset are reported:
the dataset is a 210×7 matrix with 3 clusters. K-means takes the dataset and the cluster
number as inputs, and optional parameters exist. As there are three clusters, three
centroids are randomly chosen (a minimal sketch of this pipeline is given below). A
technique for grouping wheat seeds is thus proposed, and experiments with the UCI
machine learning repository wheat dataset indicate great accuracy and success: the
system clusters seeds nearly perfectly, and K-means is quick, efficient, and cheap.
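A minimal sketch of the pipeline just described, assuming a local copy of the UCI seeds file (210 rows, seven features plus a variety label); scikit-learn's KMeans assigns each point to its nearest centroid exactly as outlined above:

```python
import numpy as np
from sklearn.cluster import KMeans
from sklearn.preprocessing import StandardScaler

# seeds_dataset.txt: 210 rows x (7 features + 1 variety label), UCI copy.
data = np.loadtxt("seeds_dataset.txt")
X, y = data[:, :7], data[:, 7].astype(int)

# Scale features so no single attribute dominates the squared-error objective.
X_scaled = StandardScaler().fit_transform(X)

km = KMeans(n_clusters=3, n_init=10, random_state=0)
labels = km.fit_predict(X_scaled)  # each point joins its nearest centroid

# Compare cluster assignments against the three known varieties (1, 2, 3).
for k in range(3):
    print(f"cluster {k}:", np.bincount(y[labels == k], minlength=4)[1:])
```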
In [4], a network was trained and evaluated on 10,413 pictures of early-stage weeds and
crops. Photographs from six separate data sets vary in illumination, resolution, and soil
type, including images shot with handheld smartphones in areas with varying material
and lighting conditions. The network classifies these 22 species with 86.2% accuracy.
Mechanical fertilizer application requires the precise position of agricultural plants,
whereas herbicide-optimized approaches require knowledge of weed species. By
employing accurate information regarding weed species, development phases, and plant
densities, pesticide use may be decreased by 40% on aggregate. Image processing was
utilized to identify weeds and crops, and this study demonstrates how to train a DNN to
distinguish between various plant species. The CNN builds a hierarchy of discriminative
features from less abstract components in the earlier layers; due to these learned
characteristics, the CNN is less sensitive to environmental conditions such as
illumination, shadows, crooked leaves, and occluded plants, and segmenting the soil
and plants is not necessary for the categorization strategy. CNNs quickly learn different
species of plants since they can discover visual properties by themselves. In [16], a
system was developed
for examining different rice kinds using convolutional neural networks. Without any
data preparation, a deep CNN extracted spatio-spectral features in this instance.
Whereas the existing common classification approaches, which are based on SVM, give
lower accuracy, the proposed ResNetB model gives greater accuracy: the suggested
technique reaches 91.09%, while SVM's accuracy is 79.23%.
5 Discussions
The global population increase has coincided with a strong increase in the agriculture
industry. Therefore, it is projected that more variety of product offers and higher-quality
things will be produced. As a result of this supposition, the classification of the objects
is also the most pressing issue. Using features such as product class, size, clarity, bright-
ness, texture, product images, and product colors, it is possible to classify the expanding
number of products. In this regard, the goods were categorized based on the detection
of diseased products, the detection of freshness, the detection of weeds, the counting of
products, edge characteristics, and textures. In the past few years, there have been a lot
more applications that use deep artificial neural networks. Deep learning is frequently
utilized for numerous tasks, such as categorization, analysis, image processing, picture
commentary, sound processing, question answering, and language translation. In addi-
tion, DNN permits the interpretation of additional samples using previously gathered
data. In contrast with conventional neural network techniques, deep neural network
topologies and supervised and unsupervised machine learning algorithms can generate
accurate categorization outcomes.
6 Conclusion
The goals of seed classification include preserving seed quality, providing excellent
seeds to the general population, and quickening growth. Classification also guarantees
the quality and purity of every component found inside seeds and provides great
assistance for seed growth. In this study, 40 papers published from before 2018 through
2022 are referenced. Based on each author's contribution to the field of seed
categorization, we chose 17 of those 40 publications as our top picks.
References
1. Qiu, Z., et al.: Variety identification of single rice seed using hyperspectral imaging combined
with convolutional neural network. Appl. Sci. 8(2), MDPI AG, Jan 2018, p 212. [Link]
org/10.3390/app8020212
2. Kurtulmuş, F., et al.: Classification of pepper seeds using machine vision based on neural
network | Kurtulmuş. Int. J. Agric. Biol. Eng. 31 Jan 2016. [Link]
v9i1.1790
3. Parnian, R., Ahmad, Javidan, R.: Autonomous wheat seed type classifier system. Int. J. Com-
put. Appl. 96(12), 14–17 June 2014. Foundation of Computer Science. Crossref, [Link]
org/10.5120/16845-6702
4. Dyrmann, M., et al.: Plant species classification using deep convolutional neural network.
Biosyst. Eng. 151, 72–80 Nov. 2016, Elsevier BV. [Link]
2016.08.024
5. Eldem, A.: An application of deep neural network for classification of wheat seeds. Avrupa
Bilim ve Teknoloji Dergisi 19 (2020). [Link]
6. Huang, S., et al.: Research on classification method of maize seed defect based on machine
vision. J. Sens. 2019, 1–9, Nov. 2019, Hindawi Limited. [Link]
6975
7. Nindam, S., et al.: Collection and classification of jasmine rice germination using con-
volutional neural networks. Proc. Int. Symp. Inf. Technol. Convergence (ISITC 2019)
(2019)
8. Gulzar, Y., et al.: A convolution neural network-based seed classification system. Symmetry
12(12), 2018, MDPI AG, Dec 2020. [Link]
9. de Medeiros, A.D., et al.: Interactive machine learning for soybean seed and seedling quality
classification. Sci. Rep. 10(1), Springer Science and Business Media LLC, July 2020. https://
[Link]/10.1038/s41598-020-68273-y
10. Ahmed, M.R., et al.: Classification of watermelon seeds using morphological patterns of
X-ray imaging: A comparison of conventional machine learning and deep learning. Sensors
20(23), 6753, MDPI AG, Nov 2020. [Link]
11. Javanmardi, S., et al.: Computer-vision classification of corn seed varieties using deep con-
volutional neural network. J. Stored Prod. Res. 92, 101800, Elsevier BV, May 2021. https://
[Link]/10.1016/[Link].2021.101800
12. Ebrahimi, E., Mollazade, K., Babaei, S.: Toward an automatic wheat purity measuring device:
a machine vision-based neural networks-assisted imperialist competitive algorithm approach.
Measurement 55, 196–205 (2014)
13. Koklu, M., et al.: Classification of rice varieties with deep learning methods. Comput. Electron.
Agric. 187, 106285, Aug. 2021, Elsevier BV. [Link]
14. Sun, L., et al.: Research on classification method of eggplant seeds based on machine learning
and multispectral imaging classification eggplant seeds. J. Sens. edited by Eduard Llobet,
Hindawi Limited, 2021, 1–9, Sept. 2021. [Link]
15. Tuğrul, B.: Classification of five different rice seeds grown in Turkey with deep learning
methods. Communications Faculty of Sciences University of Ankara Series A2-A3 Phys.
Sci. Eng. 64(1), 40–50 (2022). Laabassi, K., et al.: Wheat varieties identification based on
a deep learning approach. J. Saudi Soc. Agric. Sci. 20(5), 281–289, Elsevier BV, July 2021.
[Link]
16. Onmankhong, J., et al.: Cognitive spectroscopy for the classification of rice varieties: a com-
parison of machine learning and deep learning approaches in analysing long-wave near-
infrared hyperspectral images of brown and milled samples. Infrared Phys. Technol. 123,
104100, June 2022, Elsevier BV. [Link]
17. Bakhshipour, A., Jafari, A.: Evaluation of support vector machine and artificial neural net-
works in weed detection using shape features. Comput. Electron. Agric. 145, 153–160 Feb.
2018, Elsevier BV. [Link]
18. Bishop, C.M., Nasrabadi, N.M.: Pattern recognition and machine learning, Vol. 4. No. 4.
Springer, New York (2006)
19. Schmidhuber, J.: Deep learning in neural networks: an overview. Neural Netw. 61, 85–117
(2015). [Link]
20. Dharani, M.K., et al.: Review on crop prediction using deep learning techniques. J. Phys.
Conf. Ser. 1767(1), 012026, Feb. 2021, IOP Publishing. [Link]
1767/1/012026
21. Yu, Z., et al.: Hyperspectral imaging technology combined with deep learning for hybrid
Okra seed identification. Biosyst. Eng. 212, 46–61, Dec. 2021, Elsevier BV. [Link]
10.1016/[Link].2021.09.010
22. Sabanci, K., et al.: A convolutional neural network -based comparative study for pepper seed
classification: analysis of selected deep features with support vector machine. J. Food Process
Eng. 45(6), Dec. 2021, Wiley. [Link]
23. Zhao, L., et al.: Automated seed identification with computer vision: challenges and opportu-
nities. Seed Sci. Technol. 50(2), 75–102, Oct. 2022, International Seed Testing Association.
[Link]
24. Loddo, A., et al.: A novel deep learning based approach for seed image classification and
retrieval. Comput. Electron. Agric. 187, 106269, Aug. 2021, Elsevier BV. [Link]
1016/[Link].2021.106269
25. Xu, P., et al.: Research on maize seed classification and recognition based on machine vision
and deep learning. Agriculture 12(2), 232, Feb. 2022. MDPI AG. [Link]
iculture12020232
26. Cristin, R., Kumar, B.S., Priya, C., Karthick, K.: Deep neural network based Rider-Cuckoo
search algorithm for plant disease detection. Artif. Intell. Rev. 53(7), 4993–5018 (2020).
[Link]
27. Huang, Z., et al.: Deep learning based soybean seed classification. Comput. Electron. Agricul.
202, 107393 Nov. 2022, Elsevier BV. [Link]
28. Dietrich, F.: Track seed classification with deep neural networks. arXiv preprint arXiv:1910.
06779 (2019)
29. Bakumenko, A., et al.: Crop seed classification based on a real-time convolutional neural
network. SPIE Future Sens. Technol. 11525. SPIE, 2020. [Link]
30. Liu, J., et al.: EEG-based emotion classification using a deep neural network and sparse
autoencoder. Front. Syst. Neurosci. 14, Frontiers Media SA, Sept 2020. [Link]
3389/fnsys.2020.00043
31. Rakhmatulin, I., et al.: Deep neural networks to detect weeds from crops in agricultural
environments in real-time: a review. Remote Sens. 13(21), MDPI AG, Nov 2021, 4486.
[Link]
32. Vlasov, A.V., Fadeev, A.S.: A machine learning approach for grain crop’s seed classification
in purifying separation. J. Phys. Conf. Ser. 803, 012177. IOP Publishing, Jan. 2017. https://
[Link]/10.1088/1742-6596/803/1/012177
33. Wei, Y., et al.: Nondestructive classification of soybean seed varieties by hyperspectral imag-
ing and ensemble machine learning algorithms. Sensors 20(23), 6980. MDPI AG, Dec. 2020.
[Link]
34. Khatri, A., et al.: Wheat seed classification: utilizing ensemble machine learning approach.
Sci. Program. 2022, 1–9 Feb. 2022 edited by Punit Gupta, Hindawi Limited. [Link]
10.1155/2022/2626868
35. Kundu, N., et al.: Seeds classification and quality testing using deep learning and YOLO
V5. In: Proceedings of the International Conference on Data Science, Machine Learning and
Artificial Intelligence, USA, ACM, Aug. 2021. [Link]
36. Gao, H., Zhen, T., Li, Z.: Detection of wheat unsound kernels based on improved ResNet.
IEEE Access 10, 20092–20101 (2022). [Link]
37. Taheri-Garavand, A., et al.: Automated in Situ seed variety identification via deep learning:
a case study in Chickpea. Plants 10(7), 1406. MDPI AG, July 2021. [Link]
plants10071406
38. Ebrahimi, E., Mollazade, K., Babaei, S.: Toward an automatic wheat purity measuring device:
a machine vision-based neural networks assisted imperialist competitive algorithm approach.
Measurement 55, 196–205 (2014). 10.1016/[Link].2014.05.003
Agrophytocenosis Development Analysis
and Computer Monitoring Software Complex
Based on Microprocessor Hardware Platforms
1 Introduction
Fig. 1. General functional scheme of contamination detection using computer vision: 1 – phyto-
camera, 2 – digital video camera, 3 – tripod, 4 – test tube with micro-growth
The object under observation is monitored by three digital video cameras: two are
located inside the phytocamera at the sides, at the level of the object of observation,
and one is below, directly under the object. In addition, the tripod with the test tube
periodically makes an incomplete rotation around its axis so that all sides of the object
can be monitored.
The image of the object is transmitted through an optical device to a light-signal
converter; the electrical signal is then amplified and stored in the primary image
processing device.
The image analysis device (secondary processing) is used to highlight and recognize
an object and to determine its coordinates and position. If necessary, the processed
information about the object is displayed on the visual control device and can also be
duplicated by an audio signal. In addition, the proposed device can record the results of
image analysis on data carriers. The functions of the control unit include controlling the
parameters of the processing units and synchronizing the processes running in the
system.
The algorithm of automated control of contamination in the nutrient medium/plant
object, characteristics of the nutrient medium (deviations from the regulated indicators)
is shown in Fig. 2.
Fig. 2. Algorithm for automated control of contamination in the nutrient medium/plant object,
characteristics of the nutrient medium (deviations from the regulated indicators)
Fig. 3. Combined electro-mechanical circuits of the device: 1 – independent sections of the cli-
mate chamber, 2 – air ozonator, 3 – air filter, 4 – carbon filter, 5 – section control panel, 6 – LED
lighting, 7 – tripod-platform for controlled samples in vitro, 8 – rotary table, 9 – servo, 10 – rotary
sleeve, 11 – in vitro controlled samples, 12 – tripod, 13 – tripod positioning ball bearing, 14 – color
sensor positioning ball bearing, 15 – intelligent color analyzer, 16 – plant samples in the zone of
adaptation to real growing conditions, 17 – control and automation unit, 18 – computer
communication interface
The intelligent color analyzer determines the color of an object within the adaptive
RGB spectrum (8 bits, a numerical value in the range from 0 to 255 for each component
of the spectrum). Sensor 3 is mounted on a tripod 12 (Fig. 3) with two ball bearings 13
and 14 to increase the degrees of freedom of position variation. The controlled samples
11 (Fig. 3) are mounted on a tripod platform 7, fixed by a rotary sleeve 10 on the servo
shaft 9.
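The color check implied here reduces to comparing the analyzer's 8-bit reading per channel against ranges calibrated for an uncontaminated medium. A minimal sketch; the threshold values below are placeholders, not calibrated data:

```python
# Calibrated (min, max) per-channel ranges for an uncontaminated medium.
# These numbers are placeholders; real values come from sensor calibration.
CLEAN_RANGE = {"R": (180, 230), "G": (160, 210), "B": (90, 140)}

def is_contaminated(rgb):
    """Flag the sample if any channel leaves its calibrated clean range."""
    return any(not (lo <= rgb[ch] <= hi)
               for ch, (lo, hi) in CLEAN_RANGE.items())

print(is_contaminated({"R": 200, "G": 185, "B": 120}))  # False: looks clean
print(is_contaminated({"R": 120, "G": 190, "B": 200}))  # True: off-color
```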
Fig. 4. Electronic circuit of the device (built using CAD “Fritzing”): 1 – ATmega 2560 pro-
grammable microcontroller, 2 – SG-90 servo, 3 – TCS-230 color recognition sensor, 4 – layout
board, 5 – feedback indication of automatic positioning mode, 6 – feedback indication of manual
positioning mode, 7 – feedback indication of precise positioning mode, 8 – feedback indication of
the relay of the biological purity maintenance device, 9 – clock buttons for controlling the oper-
ating modes of the device, 10 – potentiometer for controlling the rotation angles of the servo,
11 – LCD1602 display, 12 – electromechanical relay of the biological purity maintenance device,
13 – DS1302 real-time clock, 14 – RS-485 to TTL converter, 15 – USB to RS-485 converter
The software part of the development provides the ability to calibrate the TCS-230
sensor for specific color deviations and for the light mode of the working chamber. In
addition, the positioning of controlled samples is possible both automatically (with a
choice of the number of sectors of the tripod-platform) and manually, including
fine-tuning of the platform rotation angle (down to 1 degree). The creation and
maintenance of biological purity conditions is realized through control via an
electromechanical relay, either by forced switching or in automated mode according to
a set response time kept by the DS1302 real-time clock. Serial communication 18
(Fig. 4) between the device 17 (Fig. 4) and the computer is carried out by means of the
RS-485 data transmission standard. The software and hardware complex is built on the
modern hardware platform of the ATmega-2560 microprocessor [9] and the
high-performance ESP-32 microprocessor with the Tensilica Xtensa LX6 core in
single-core and dual-core versions.
Figure 5 shows the interface (mnemonic diagram) of the developed SCADA system and
the distribution of Modbus tags (converted into OPC server channels) among its display
and control elements.
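On the computer side, the Modbus tags can be polled over the RS-485 link with a library such as pymodbus. The sketch below is an assumption-laden illustration: the port name, register addresses, and slave id are placeholders, and the exact call signatures differ between pymodbus versions:

```python
from pymodbus.client import ModbusSerialClient  # pymodbus 3.x module layout

# USB-to-RS-485 converter enumerated as a serial port (name is a placeholder).
client = ModbusSerialClient(port="/dev/ttyUSB0", baudrate=9600, timeout=1)

if client.connect():
    # Assume the R, G, B intensity tags occupy three consecutive holding
    # registers starting at address 0 on slave 1 (addresses are placeholders).
    rr = client.read_holding_registers(address=0, count=3, slave=1)
    if not rr.isError():
        r, g, b = rr.registers
        print(f"R={r} G={g} B={b}")
    client.close()
```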
Fig. 5. Interface of the developed SCADA system and distribution of Modbus tags on its display
and control elements: 1 – automatic positioning mode, 2 – manual positioning mode operation
resolution, 3 – manual positioning mode feedback indication, 4 – precise positioning mode
operation resolution, 5 – controlled sample position number selection, 6 – tripod platform position
angle selection for controlled samples in vitro, 7 – input of the number of tripod-platform sectors
used, 8 – manual adjustment of the “Red” spectrum intensity range of the color analyzer, 9 –
received actual “Red” spectrum intensity data, 10 – manual adjustment of the “Green” spectrum
intensity range of the color analyzer, 11 – received actual “Green” spectrum intensity data, 12 –
manual adjustment of the “Blue” spectrum intensity range of the color analyzer, 13 – received
actual “Blue” spectrum intensity data, 14 – contamination detection indicator (according to the
object colors defined by the control system: red, blue, green, azure, pink, etc.), 15 – IP camera,
16 – automatic positioning mode operation resolution
4 Conclusion
The authors propose a software and hardware complex for the analysis, visualization,
and computer monitoring of agrophytocenoses in biotechnological reproduction
methods. It provides detection of contamination of nutrient media/plant explants
through technical vision and digital sensors, with analysis of digital images using
machine learning libraries for subsequent processing by artificial neural networks
[20–25].
The proposed hardware and software complex includes an intelligent digital sensor
as the analyzing device, which makes it possible to determine the color of an object with
a high degree of accuracy within the adaptive RGB color model of the spectrum (8 bits,
a numerical value in the range from 0 to 255 for each component of the spectrum).
The development software is designed with the ability to calibrate the sensor for
specific color deviations and for the light mode of the working chamber. In addition, the
positioning of the controlled samples is possible both automatically and manually,
including fine-tuning of the platform rotation angle (down to 1 degree).
Ensuring the creation and maintenance of biological purity conditions is imple-
mented by means of forced activation or in automated mode according to the set response
time. Serial communication between the device and the computer is provided by means
of the RS-485 data transmission standard. The software and hardware complex is built on
the modern hardware platform of the ATmega-328/2560 microprocessor and the high-
performance ESP-32 microprocessor with the Tensilica Xtensa LX6 core in single-core
and dual-core versions.
Software (a SCADA system) has been developed for indicating, configuring, and
controlling all parameters when monitoring research objects. When a deviation is
detected, an alert is shown on the operator's screen, and an entry is made in the system's
accident table as well as in the Telegram messenger chatbot created for this purpose.
A promising direction for further research into increasing the productivity of
agrophytocenoses through automated analysis, visualization, and computer monitoring
on microprocessor hardware platforms is the development of intelligent algorithms, and
software implementing them, to control the growth and development of plants, with the
formation of image datasets from video series for subsequent expert markup and
processing by deep-learning artificial neural networks.
Acknowledgements. The article is prepared with the financial support of the Russian Science
Foundation, project № 22-21-20041 and Volgograd region.
References
1. Atkinson, P.M., Tatnall, A.R.L.: Neural networks in remote sensing. Int. J. Remote Sens.
18(4), 699–709 (1997)
2. Walker, W.R.: Integrating strategies for improving irrigation system design and management.
Water Management Synthesis Project, WMS Report 70 (1990)
3. Ceballos, J.C., Bottino, M.J.: Technical note: The discrimination of scenes by principal
components analysis of multi-spectral imagery. Int. J. Remote Sens. 18(11), 2437–2449
(1997)
4. Huete, A., Justice, C., Van Leeuwen, W.: Modis vegetation index (MOD13): Algorithm
theoretical basis document. USGS Land Process Distrib. Active Archive Center. 129 (1999)
5. Garge, N.R., Bobashev, G., Eggleston, B.: Random forest methodology for model-based
recursive partitioning: the mobForest package for R. BMC Bioinform. 14, 125 (2013)
6. Chang, D.-H., Islam, S.: Estimation of soil physical properties using remote sensing and
artificial neural network. Remote Sens. Environ. 74(3), 534–544 (2000)
7. Mair, C., et al.: An investigation of machine learning based prediction systems. J. Syst. Softw.
53(1), 23–29 (2000)
8. Osborne, S.L., Schepers, J.S., Francis, D.D., Schlernrner, M.R.: Use of spectral radiance to
estimate in-season biomass and grain yield in nitrogen- and water-stressed corn. Crop Sci.
42, 165–171 (2002)
9. Tokarev, K.E.: Agricultural crops programmed cultivation using intelligent system of irrigated
agrocoenoses productivity analyzing. J. Phys. Conf. Ser. 1801. 012030 (2021)
10. Plant, R.E., et al.: Relationship between remotely sensed reflectance data and cotton growth
and yield. Trans. ASAE 43(3), 535–546 (2000)
11. Tokarev, K., et al.: Monitoring and intelligent management of agrophytocenosis productivity
based on deep neural network algorithms. Lect. Notes Netw. Syst. 569, 686–694 (2023)
12. Tokarev, K.E.: Raising bio-productivity of agroecosystems using intelligent decision-making
procedures for optimization their state management. J. Phys.: Conf. Ser. 1801, 012031 (2021)
13. Petrukhin, V., et al.: Modeling of the device operating principle for electrical stimulation of
grafting establishment of woody plants. Lect. Notes Netw. Syst. 569, 667–673 (2023)
14. Isaev, R.A., Podvesovskii, A.G.: Application of time series analysis for structural and para-
metric identification of fuzzy cognitive models. CEUR Workshop Proc. 2212, 119–125
(2021)
15. Churchland, P.S.: Neurophilosophy: Toward a Unified Science of the Mind/Brain. MIT Press,
Cambridge (1986)
16. Aleksander, I., Morton, H.: An Introduction to Neural Computing. Chapman & Hall, London
(1990)
17. McCulloch, W.S., Pitts, W.A.: Logical calculus of the ideas immanent in nervous activity.
Bull. Math. Biophys. 5, 115–133 (1943)
18. Ivushkin, D., et al.: Modeling the influence of quasi-monochrome phytoirradiators on the
development of woody plants in order to optimize the parameters of small-sized LED
irradiation chamber. Lect. Notes Netw. Syst. 569, 632–641 (2023)
19. Yudaev, I., Eviev, V., Sumyanova, E., Romanyuk, N., Daus, Y., Panchenko, V.: Methodology
and modeling of the application of electrophysical methods for locust pest control. Lect. Notes
Netw. Syst. 569, 781–788 (2023)
20. Rosenblatt, F.: The perceptron: a probabilistic model for information storage and organization
in the brain. Psychol. Rev. 65, 386–408 (1958)
21. Cheng, G., Li, Z., Yao, X., Guo, L., Wei, V.: Remote sensing image scene classification using
bag of convolutional features. IEEE Geosci. Remote Sens. Lett. 14(10), 1735–1739 (2017)
22. Bian, X., Chen, C., Tian, L., Du, Q.: Fusing local and global features for high-resolution
scene classification IEEE J. Sel. Topics Appl. Earth Observ. Remote Sens. 10(6), 2889–2900
(2017)
23. Mohammed, A.K., Mohammed, H.A.: Convolutional neural network for satellite image
classification. Stud. Comput. Intell., 165–178 (2020)
24. Tokarev, K.E.: Overview of intelligent technologies for ecosystem bioproductivity manage-
ment based on neural network algorithms. IOP Conf. Ser. Earth Environ. Sci. 1069, 012002
(2022)
25. Lebed, N.I., Makarov, A.M., Volkov, I.V., Kukhtik, M.P., Lebed, M.B.: Mathematical model-
ing of the process of sterilizing potato explants and obtaining viable potato microclones. IOP
Conf. Ser. Earth Environ. Sci. 786, 012035 (2021)
Reducing Fish Ball’s Setting Process Time
by Considering the Quality of the Product
Abstract. The demand for fish ball products in a frozen food company has
increased drastically. Companies need to increase production output in line with
the increase in product demand. However, the company wants to avoid paying for
additional equipment or employees and wants to minimize overtime hours. The
suggestion is to reduce the setting time of the fish balls without reducing product
quality, which is assessed by the gel strength test, organoleptic test, and micro-
biological test. Some trials have been carried out to reduce the fish ball’s setting
time, where several proposed setting times were tried at temperatures of 41–45 ˚C
and 46–50 ˚C for 10 g and 15 g fish balls. Each trial was carried out with three
replications, with 20 sample pieces taken for each replication. The quality test results
of the product samples are compared to obtain products whose quality follows company
standards. Based on the trials that were carried out, the most appropriate solution is to
decrease the setting time to 20 min for producing 10 g fish balls and to 25 min for
producing 15 g fish balls. The product made with the reduced setting time meets the
company's quality standard, and the company can meet the increasing demand.
1 Introduction
Surimi is the primary raw material of fish balls. Surimi is a fish paste from deboned
fish used to make simulated crab legs and other seafood. The paste is blended with cry-
oprotectants, such as sucrose, sorbitol, and phosphates, and frozen for preservation. For
making the final product, the frozen paste is thawed, blended with starch, and extruded
[1]. The quality of surimi can be seen from its color, taste, and strong gel ability. The
ability of surimi to form a gel will affect the elasticity of advanced products that use
surimi as a raw material. The mechanism of surimi gel formation is divided into three
stages: suwari, modori, and ashi. Suwari gel is a gel that is formed when heated at tem-
peratures reaching 50 °C. This gel will slowly form into an elastic gel paste. However,
if heating is increased above 50 °C, the gel structure will be destroyed [2]. This event is
referred to as a modori. Meanwhile, gel ashi is a gel that is formed after passing through
the two temperature zones. A strong and elastic gel will form if surimi is held in the
suwari phase for a long time and passes quickly through the modori stage [3].
The demand for fish ball products in a frozen food company in Surabaya, Indonesia,
has increased drastically. Demand for 10 g of fish balls increased by 85%, while demand
for 15 g increased by 78%. Companies need to increase production output in line with
the increase in product demand. However, the company wants to avoid additional equip-
ment or employees and minimize overtime hours. One way that can be done to increase
production output is to reduce the production time of fish balls without lowering the
quality of the product itself, which is assessed by gel strength test, organoleptic test, and
microbiological test. A gel strength test is used to measure the gel strength of a prod-
uct. Previous studies state that the individual gel-forming ability of minced products
varied greatly due to their compositional differences [4]. Organoleptic tests measure a
product’s quality (texture, taste, smell, and appearance) so that it complies with com-
pany standards and is acceptable to customers. The fish ball's texture is critical for the
quality and acceptability of seafood substitute products, but most companies prioritize
imitating seafood's appearance, smell, and flavour rather than its texture attributes [5].
A microbiological test is an examination performed to detect the presence of microor-
ganisms in a food product. This study aims to reduce fish ball production time so that
the output can increase without lowering the quality of the product.
2 Research Method
This research begins by identifying the longest process on the production floor and
finding each process’s standard time. Standard time is the time required to carry out an
activity by a reasonable workforce in normal situations and conditions [6]. Performance
rating and allowance are essential in determining the standard time. Determination of
performance rating is an activity to assess the speed of operators in completing their
work. Since the actual time required to perform each work element depends to a high
degree on the skill and effort of the operator, it is necessary to adjust the good operator's
time upward and the poor operator's time downward to a standard level. Giving a
performance rating is the most important step and the one most criticized, because it is
based entirely on the experience, training, and judgment of the work measurement analyst
[7]. Allowance is the amount of time added to meet personal needs, unavoidable waiting
times, and fatigue. The maximum production output can be seen from the production
capacity. Production capacity can be defined as the maximum amount of output produced
from a production process in a certain period. Production capacity planning aims to
optimize the resources owned to get maximum output.
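A minimal sketch of the standard-time arithmetic just described (observed time adjusted by the performance rating, then inflated by the allowance); the numbers are illustrative, not the company's figures:

```python
def standard_time(observed_min, rating, allowance):
    """Standard time = observed time * performance rating * (1 + allowance)."""
    normal_time = observed_min * rating       # rating 1.0 = normal pace
    return normal_time * (1.0 + allowance)    # allowance as a fraction

# Illustrative values: 0.50 min observed, operator rated 10% above normal,
# 15% allowance for personal needs, unavoidable delays, and fatigue.
st = standard_time(0.50, 1.10, 0.15)
print(round(st, 3), "min per piece")          # -> 0.633

# Resulting capacity for a 7-hour productive shift:
print(int(7 * 60 / st), "pieces per shift")   # -> 664
```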
Some trials at the longest process were carried out to reduce the fish ball’s production
time. Several proposed times were tried at 41–45 ˚C and 46–50 ˚C for 10 g and 15 g of
fish balls. Previous studies state that maximal production of the round-shaped fish ball
could be made when the paste is extruded into a setting tank filled with 10% salt solution
and held for 20 min [8]. In this study, for 10 g of fish ball products, the setting time was
reduced to 20, 15, and 10 min, while for 15 g of fish balls, the setting time was reduced
to 25, 20, 15, and 10 min. Each trial was carried out with three replications, for which
20 pieces of samples were taken for each replication.
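With three replications of 20 samples per trial, the gel-strength readings of a proposed setting time can be compared against the current setting with a simple two-sample test. A sketch with made-up readings (the values and units are hypothetical, not the study's data):

```python
from scipy import stats

# Hypothetical gel-strength readings for two setting times (units assumed).
current  = [412, 405, 398, 420, 415, 409]   # current (longer) setting time
proposed = [408, 401, 399, 411, 405, 403]   # reduced setting time

t_stat, p_value = stats.ttest_ind(current, proposed, equal_var=False)
print(f"t = {t_stat:.2f}, p = {p_value:.3f}")
# A large p-value suggests no detected quality loss at the shorter setting.
```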
The next stage is quality testing so that the quality of fish ball products that have
undergone the trial process remains by company standards. Testing the quality of fish
balls was carried out using three tests: gel strength, organoleptic, and microbiological.
The gel strength test was conducted using a Rheo Tex-type SD-700II machine. The
product to be tested is placed onto the device so that the pusher is in the center of the
product. The machine will start to detect the gel strength of a product and stop when it
reaches the result. The strength of the gel will be recorded on the paper connected to the
thermal printer.
The organoleptic test uses the human senses as the primary tool. The senses used in
the organoleptic test are sight, touch, smell, and taste. The sense of sight is used to judge
the product’s color, size, and shape. The sense of touch is used to assess the texture and
consistency of the product. The sense of smell is used to evaluate the appearance of signs
of damage to the product, such as a foul odor indicating that the product is damaged.
Finally, the sense of taste is used to assess the taste of a product by predetermined
standards. According to previous studies, several factors affect the fish balls’ texture.
Two-step cooking obtained better texture and mouth feel and reduced cook loss of fish
balls compared to other cooking processes [4]. Texture force increased in fried and boiled
canned fish balls when the processing temperature increased. Fish muscle’s toughness
significantly increases as the heating temperature increases [9]. Frozen storage decreased
the quality (texture, flavor, and color) of the fish flesh affecting the quality of the final
fish ball product. Washing of fish mince caused decreases in the color and taste of the
fish ball products [10].
The microbiological test is divided into two types, namely the qualitative test and the quantitative test. The qualitative test is used to determine the type of microorganism, while the quantitative test is used to determine how many microorganisms are present in the
product. There are 5 test parameters used, namely, Total Plate Count (TPC), Coliform,
Escherichia Coli, Salmonella, and Vibrio cholerae. Each parameter has the maximum
number of microorganisms that grow in a product. TPC is a method used to calculate
the number of microorganisms in a sample in a medium. The TPC method was tested
using Plate Count Agar (PCA) media as a medium for bacterial growth. Testing with
the TPC method aims to show the total number of microorganisms in the tested sample.
The TPC method is the most frequently used because it does not require the aid of a
microscope to count the number of growing microorganisms. The maximum limit for
the number of microorganisms growing in a precooked product (fish balls, shrimp balls,
squid balls, fish rolls, etc.) is 100,000. Coliform bacteria are microorganisms used to
assess the quality of contaminated water sources. The nature of Coliform bacteria can
be divided into two: fecal coliform, bacteria originating from human feces, and non-fecal coliform, bacteria originating from dead animals or plants. The company tests coliform
bacteria to determine the presence of coliform microorganisms that grow in fish ball
products. The maximum limit for the number of coliform bacteria that can grow in a
product is 100. Escherichia Coli bacteria (E. Coli bacteria) belong to the fecal coliform
group. This bacterium is one of the most polluting bacteria in food. The growth of E.
Coli bacteria in the product can cause diarrhea for people who consume it. E. Coli
bacteria are not allowed to grow in the product, so a separate test is carried out to see
if this bacterium grows in the fish ball product. Salmonella bacterial test results must
be negative because the growth of Salmonella bacteria in products can interfere with the digestive tract, causing diarrhea, abdominal pain, and stomach cramps.
Vibrio cholerae bacteria can usually grow in food through poor food processing. Vibrio
cholerae test results must be negative because these bacteria can cause disturbances in
human digestion, such as diarrhea.
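The acceptance limits listed above can be summarized in a short sketch; the function name and data layout below are illustrative assumptions rather than part of the company's actual procedure.

# Encodes the limits stated above: TPC at most 100,000, Coliform at most
# 100, and E. coli, Salmonella, and Vibrio cholerae all negative.
LIMITS = {"TPC": 100_000, "Coliform": 100}
MUST_BE_NEGATIVE = {"E. coli", "Salmonella", "Vibrio cholerae"}

def passes_micro_test(counts: dict, detections: dict) -> bool:
    """counts: parameter -> colony count; detections: parameter -> detected?"""
    within_limits = all(counts.get(p, 0) <= lim for p, lim in LIMITS.items())
    none_detected = not any(detections.get(p, False) for p in MUST_BE_NEGATIVE)
    return within_limits and none_detected

# Hypothetical sample: low counts and no pathogens detected -> passes.
print(passes_micro_test({"TPC": 42_000, "Coliform": 15},
                        {"E. coli": False, "Salmonella": False,
                         "Vibrio cholerae": False}))  # True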
freezing capacity can accommodate the request. Therefore, the amount of product that
enters the ABF machine will depend on the second longest process, namely the setting
process. The setting process aims to form the texture, giving the product a supple texture.
In the setting process, boiling is carried out for 30 min for 10 g of fish ball products and
35 min for 15 g. The water temperature in the setting tub is 41–45 °C. Some trials were
carried out to reduce the setting process time. For 10 g of fish ball products, the setting
time was reduced to 20, 15, and 10 min, while for 15 g of fish balls, the setting time was
reduced to 25, 20, 15, and 10 min. These trials were carried out with three replications
on three different production batches. Samples taken were 20 pieces for each replication.
The samples were then tested to determine whether the trial products met the company’s standard quality.
Gel Strength Test. This test was carried out by the same operator on five replicates for
each piece of meatball product to obtain optimal results. Gel strength is considered good
if it has a value above 150 [Link]. The gel strength test was carried out on 30 pieces of
10-g fish balls for three replications in each trial. The results of the gel strength test can
be seen in Table 2. For the 10-g fish meatball product, the trial considered to have passed the test was the one with a 20-min setting time. For the 15-g fish ball product, the trials considered to have passed were those with 20- and 25-min setting times.
Table 2. The results of the gel strength test and organoleptic test at a temperature of 41–45 °C.

Product   Setting time (minutes)   Gel strength test ([Link])   Organoleptic test
10-g      10                       92.93                        12
10-g      15                       143.83                       14.67
10-g      20                       230.15                       16
15-g      10                       95.85                        11
15-g      15                       129.59                       14.67
15-g      20                       207.4                        16
15-g      25                       248.94                       16
Organoleptic Test. Organoleptic tests were carried out on a sample of 5 pieces of fish
balls for each replication. Organoleptic results are considered reasonable if they have
a minimum total score of 15 from the four existing categories. When carrying out a
product taste test, neutralization is done by drinking water after tasting the product in
each trial to make the value more accurate. The organoleptic test results can be seen in Table 2. The trial that passed the test for the 10-g fish ball product was the one with a 20-min setting time; for the 15-g fish ball product, the 20- and 25-min trials passed.
Microbiological Test. This test was carried out on five pieces of fish balls for each
replication, with a total of 15 pieces of fish balls for three replications. This test was
carried out by cutting a sample of 15 pieces of fish balls from three different replications.
The cut samples are then weighed and given different treatments according to the test method for each of the five parameters. After that, the treated samples are left to stand and tested according to each parameter’s provisions.
The results of the microbiological tests can be seen in Table 3. It is known that the trials
that passed the test were the 20-min setting time for producing 10 g of fish balls and the
25-min setting time for producing 15 g of fish balls.
Trials to reduce the setting process time at a temperature of 41–45 °C yielded 20 min for 10-g fish balls and 25 min for 15-g fish
balls. After finding these results, further trials were conducted to increase the water
temperature in the setting process to determine whether the setting process time could
be shortened by increasing the water temperature and whether the fish balls still meet
company quality standards. As a result, the temperature was increased to 46–50 °C. The
gel strength test, organoleptic test, and microbiological test results showed that for 10 g
of fish balls, the products still met the quality standard for the 15 and 20-min setting
time; meanwhile, for 15 g of fish balls, the product still met the quality standard for 20
and 25-min setting process time.
Compared with the trials at 41–45 °C, increasing the temperature to 46–50 °C shortened the setting time from 20 to 15 min for 10-g fish balls and from 25 to 20 min for 15-g fish balls. Besides shortening the processing time, the quality of the fish balls also improved: the gel strength values at 46–50 °C were better than those at 41–45 °C, and the microbiological tests showed that fewer bacteria grew in the 46–50 °C trials than in the 41–45 °C trials.
One essential thing to note is maintaining the water temperature in the setting process.
In the current process, the water temperature is checked manually and periodically during
the setting process using a thermometer. If the water temperature exceeds the standard,
water is added to keep the temperature stable below the standard of 45 °C or 50 °C. Water
temperature plays a vital role in maintaining the quality of fish balls. As previously
mentioned, if the temperature exceeds 50 °C, the gel structure will be destroyed [2].
Improvements can be made by adding a tool/sensor for detecting water temperature and
increasing water flow to automatically reduce heat in the water when the temperature
exceeds the standard. With this sensor, manual periodic temperature checks can be eliminated, avoiding human error. The working method of the proposed tool can be
adapted from the research Design of Pond Water Temperature Monitoring Built Using
NodeMCU ESP8266 [13]. That research built a prototype that detects pond water temperature and increases pond water discharge, which helps cool the pond water and reduces the risk of fish death. The prototype comprises a NodeMCU ESP8266 microcontroller, an in-water temperature sensor, a relay acting as a switch, and a mini pump.
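As an illustration of this proposal, the following is a minimal MicroPython sketch of such a monitor as it could run on a NodeMCU ESP8266, assuming a DS18B20 temperature probe and a relay-driven pump; the pin assignments, the 50 °C threshold, and the polling interval are illustrative assumptions, not values taken from the prototype in [13].

import machine
import onewire
import ds18x20
import time

TEMP_LIMIT_C = 50.0                            # standard for the setting bath
bus = onewire.OneWire(machine.Pin(4))          # DS18B20 data line on GPIO4
sensor = ds18x20.DS18X20(bus)
roms = sensor.scan()                           # assumes one probe is attached
pump_relay = machine.Pin(5, machine.Pin.OUT)   # relay switching the mini pump

while True:
    sensor.convert_temp()
    time.sleep_ms(750)                         # DS18B20 conversion time
    temp_c = sensor.read_temp(roms[0])
    # Increase water flow only while the bath exceeds the standard.
    pump_relay.value(1 if temp_c > TEMP_LIMIT_C else 0)
    time.sleep(5)                              # poll every 5 s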
The setting time for the 10-g fish ball product was reduced from 30 to 20 min at 41–45 °C and 15 min at 46–50 °C. Meanwhile, the setting time for the 15-g fish ball product was reduced from 35 to 25 min at 41–45 °C and 20 min at 46–50 °C. Decreasing
the setting process time in fish ball production has several impacts, including increasing
production output and yield.
Increased Production Capacity. The decrease in setting process time impacts produc-
tion capacity (Table 4). With the initial setting time, the company must add employee
overtime hours to fulfill product demand. If the setting time is reduced to 15 min or
20 min for 10-g fish balls and 20 min or 25 min for 15-g fish balls, then the company
does not need to add additional overtime hours because product demand can be fulfilled.
Improved Yields. The second impact of reducing the setting process time is that it can
increase production yields. Yield is measured based on a process’s input and output
weight. Yield is calculated using the following formula.
yield = (output weight / input weight) × 100%    (1)
Some data are needed to calculate yield, such as raw material weight, after-mixing dough weight, after-cooking product weight, after-aging product weight, and
finished good weight. Table 5 shows that for the 10-g fish meatball product, the average
increase in yield in the 20-min reduction trial at 41–45 °C was 1.17%. In the 15-min experiment at a temperature of 46–50 °C, the production yield increased by 1.50%. For 15-g fish balls, the trial of decreasing the setting time to 25 min at a temperature of 41–45 °C increased the production yield by an average of 1.31%. In the 20-min experiment at a temperature of 46–50 °C, the production yield increased by an average of 1.21%.
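As a small worked example of Eq. (1), the sketch below applies the yield formula at each stage named above; the stage weights are hypothetical placeholders, not measured data.

def yield_pct(output_weight: float, input_weight: float) -> float:
    # Eq. (1): yield = output weight / input weight x 100%
    return output_weight / input_weight * 100.0

stages = [  # (stage, input weight kg, output weight kg) - illustrative only
    ("mixing",   100.0, 99.0),
    ("cooking",   99.0, 97.5),
    ("aging",     97.5, 97.0),
    ("freezing",  97.0, 96.5),
]
for name, w_in, w_out in stages:
    print(f"{name}: {yield_pct(w_out, w_in):.2f}%")
print(f"overall: {yield_pct(stages[-1][2], stages[0][1]):.2f}%")  # 96.50%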
3.4 Conclusion
The increasing demand for fish ball products has resulted in companies trying to reduce
the production time for fish balls without reducing product quality. Therefore, three tests
were carried out for product quality assessment: gel strength, organoleptic, and micro-
biological. Product quality is considered good if the three tests have values according to
company standards.
In producing fish balls measuring 10 g, trials were conducted to reduce the setting
time to 10, 15, and 20 min at temperatures of 41–45 °C and 46–50 °C. In producing fish balls measuring 15 g, trials were conducted to reduce the setting time to 10, 15, 20, and 25 min at temperatures of 41–45 °C and 46–50 °C. Each trial was carried out with three
replications. For 10-g fish balls, products that comply with company quality standards were obtained with a setting time of 20 min at 41–45 °C and 15 min at 46–50 °C. For 15-g fish balls, the setting time could be reduced to 25 min at 41–45 °C and 20 min at 46–50 °C.
A decrease in setting time has an impact on increasing production capacity and
increasing production yields. Based on the calculations that have been done, the produc-
tion capacity of 10 g of fish balls with an experiment of decreasing the time to 20 min is
40,717.14 kg per month, and the yield increase is 1.17%. In the 15-min trial, the produc-
tion capacity of 10 g of fish balls was 46,886.40 kg per month, with a yield increase of
1.50%. In the production of 15 g of fish balls, it was found that the production capacity
was 24,045.12 kg per month, and the yield increase was 1.31% for the trial of reducing
the time to 25 min. Meanwhile, in the trial of reducing the time to 20 min, it was found
that the production capacity increased to 27,480.14 kg per month, and the yield increased
by 1.21%. Therefore, the best solution for the company is to reduce the setting time to 20 min for 10-g fish balls and to 25 min for 15-g fish balls.
A tool/sensor for detecting water temperature in the fish balls setting process can be
developed for future research. This tool can be modified to automatically increase water
flow to reduce heat when the temperature exceeds the standard.
References
1. Mason, W.R.: Starch: chemistry and technology, chapter 20. 3rd edn. Elsevier Inc. (2009)
2. Moniharapon, A.: Surimi technology and its processing product. BIAM Mag. 10(1), 16–30
(2014)
3. Iqbal, M., Ma’aruf, W.F., Sumardianto: The impact of Microalgae spirulina platensis and
Microalgae skeletonema costatum addition on the quality of milkfish sausages (Chanos chanos
Forsk). J. Pengolahan dan Bioteknologi Hasil Perikanan 5(1), 56–63 (2016)
4. Hoque, M.S., Nowsad, A.A.K.M., Hossain, M.I., Shikha, F.H.: Improved methods for the
preparation of fish ball from the unwashed mixed minces of low-cost marine fish. Progressive
Agric. 18(2), 189–197 (2007)
5. Ran, X., Lou, X., Zheng, H., Gu, Q., Yang, H.: Improving the texture and rheological qualities
of a plant-based fishball analogue by using konjac glucomannan to enhance crosslinks with
soy protein. Innovative Food Sci. Emerg. Technol. 75 (2022)
6. Sutalaksana, I.Z., Anggawisastra, R., Tjakraatmadja, J.H.: Teknik perancangan sistem kerja.
2nd edn. ITB, Bandung (2006)
7. Freivalds, A., Niebel, B.W.: Niebel’s methods, standards, and work design, 13th edn. McGraw-
Hill Higher Education, New York (2013)
8. Kok, T.N., Park, J.W.: Elucidating factors affecting floatation of fish ball. J. Food Sci. 71(6),
E297–E302 (2006)
9. Lazos, E.S.: Utilization of freshwater bream for canned fish ball manufacture. J. Aquat. Food
Prod. Technol. 5(2), 47–64 (1996)
10. Akter, M., Islami, S.N., Reza, M.S., Shikha, F.H., Kamal, M.: Quality evaluation of fish
ball prepared from frozen stored striped catfish (Pangasianodon hypophthalmus). J. Agrofor.
Environ. 7(1), 7–10 (2013)
11. Cold Storage Indonesia: [Link] Last
accessed 23 Feb 2023
12. Kustyawati, M.E.: Mikrobiologi hasil pertanian. Pusaka Media, Bandarlampung (2020)
13. Muhammad, S.A., Haryono: Design of pond water temperature monitoring built using
NodeMCU ESP8266. Sinkron: J. Penelitian Teknik Informatika. 7(2), 579–585 (2022)
Optimal Fire Stations for Industrial Plants
King Mongkut’s University of Technology North Bangkok, 129 M.21 Noenhom Muang
Prachinburi, Bangkok 25230, Thailand
{ornurai.s,sunarin.c}@[Link]
Abstract. The ability to solve fire incidents on time is very important for safety
and reliability in the industry. This research proposes the establishment of disaster
management centers for industrial plants in Chachoengsao Province, one of the
provinces in the Eastern Economic Corridor (EEC). The research considers the
risk level of industrial plants causing disasters by applying the Maximal Covering
Location Problem (MCLP) with the workload capacity of fire stations to determine
the optimal location of fire stations. According to the study, the current main fire stations cover 74.35% of the plants’ risk score; the model then determines the optimal locations of fire stations when each station can accommodate up to 50 or 100 factories.
1 Introduction
When a fire breaks out, especially in industry, it can cause millions of dollars in damage. Failure to reach the scene in time results in a long struggle to control the fire and hazardous chemicals; some incidents have taken more than 20 h to put out. The chemicals involved also affect the people living nearby and harm their long-term health.
In Thailand, the cabinet approved a national development project, the “Eastern Economic Corridor” (EEC). The EEC focuses on three eastern provinces, namely Chachoengsao, Chonburi, and Rayong, which are to become the country’s main industrial production base, strengthen industry, lead in manufacturing, and serve as export centers to Asian countries. Supporting the upcoming economic expansion requires developing infrastructure, transport, and logistics systems connected by road, air, and water; training effective personnel and labor for the industrial sector; and developing communication systems, standardizing international rules, and establishing systems of work and control.
All aspects of development must be integrated and implemented in tandem. To build
confidence for foreigners to invest, the Emergency and Public Disaster Prevention and
Mitigation System is of great importance for the development of the country. The indus-
trial sector and urban communities with a high population density are particularly affected: when an incident occurs there, the severity and damage are high compared to areas outside the city. The Eastern Economic Corridor is expected to house
several establishments, including five large oil refineries, three petrochemical industries,
20 power plants, and 29 industrial estates, which are production bases for 3786 indus-
trial plants. Chachoengsao province, one of the three provinces under the EEC project,
is located in the east of Thailand. Chachoengsao is the province that has invested in
medium- and large-sized industrial plants for a long time. Laem Chabang Port was constructed to handle exports and imports under the Eastern Seaboard development project.
There is also a new Bangkok-Chonburi highway (motorway) and the location of Cha-
choengsao province is close to Suvarnabhumi Airport, making it convenient to transport
and import goods. When there is an emergency or public disaster, the most important
thing for the authorities to do is help the victims and control the situation as quickly as
possible. Therefore, it is important to determine the location of service stations in the
event of an emergency or public disaster. By determining the appropriate location of the
station, the authorities will be able to assist the victims in a timely manner.
This research proposes determining the optimal locations of fire stations for industry, locating fire service stations based on relevant factors such as the risk level of each industrial type and the spatial distribution of industry.
2 Literature Review
Mathematical models have been developed to determine the location of service units in various ways, along with solution methods for these problems. Research on the optimal location of fire stations has used mathematical models such as the covering location problem (Badri et al. [1], Huang [2], Naji-Azimi et al. [3]) and integer programming (Shitai et al. [4], Jian et al. [5]). Several studies have applied metaheuristics, including ant colony optimization (Huang [2]), scatter search (Naji-Azimi et al. [3]), GA (Shitai et al. [4]), variable neighborhood search (Davari et al. [6]), particle swarm optimization (Elkady and Abdelsalam [7], Drakulić et al. [8]), differential evolution, the artificial bee colony algorithm, and the firefly algorithm (Soumen et al. [9]), and the whale optimization algorithm (Toloueiashtian et al. [10]).
Several of these studies address large problem instances. Badri et al. [1] proposed multi-
objective optimization to determine the location of 31 fire stations in Dubai by consid-
ering several objectives such as travel time, distance, and cost to find the best solution.
Huang [2] introduced a solution to the problem of fire station locations in Singapore. The
author applied a multi-objective optimization model, combining a linear feature covering problem (LFCP) with an ant algorithm and a two-phase local search over a defined cell space, and proposed reducing the time to arrive at the scene from 8 to 5 min.
Yin and Mu [11] developed the Modular Capacitated Maximal Covering Location
Problem (MCMCLP), which expands on the MCLP model to determine the optimal
location of ambulances in Athens, Georgia, with the aim of covering as many service areas as possible while, at the same time, reducing the total distance from areas outside coverage to the nearest ambulance location.
Naji-Azimi et al. [3] proposed a model for setting up satellite distribution centers to
provide assistance to victims in disaster areas by using the MCLP and the Scatter search
algorithm. The objective is to keep the total distance as low as possible; the scatter search method solved large-scale problems well and was more effective than solving the mathematical model directly.
Shitai et al. [4] proposed a set covering model using integer programming and multi-
objective GA to select the location of a watchtower. The purpose of the set covering
model in this study was to reduce construction costs to the lowest value and cover the
area most in need of wildfire surveillance (cost and coverage) using GIS data analysis.
The results showed that, for the bi-objective case, the GA found the most appropriate solutions along the Pareto front.
Jian et al. [5] offered a mixed integer nonlinear programming (MINLP) model to select fire station locations to cover the needs of the local population in Harbin, China. The objective is to reach the population quickly with a sufficient number of vehicles to meet demand. The authors applied the maximal covering location model and tested it using GAM algorithms.
In past research, mathematical models have been used to solve numerous service-unit location problems. Most consider only the amount of demand for services and often address case studies in residential communities. However, the locations of industrial plants and their disaster risk, which depends on the type of plant in the industrial zone, are rarely considered.
To collect data, face-to-face interviews with stakeholders are used, as well as question-
naires to gather evidence and store additional data over the phone.
2. Inquiries.
Inquiries are collected from the work safety officers of the Chachoengsao Provincial Industrial Estate Office to determine the risk points that may cause disasters and to ask about the types of industrial plants where disasters occur frequently.
There are 3 phases of fire duration: the initial, intermediate, and severe stages.
(1) The initial stage from the time of seeing the flame until 4 min can be extinguished
by the initial fire extinguisher.
(2) The intermediate stage, from 4 to 8 min after the fire has started, when the temperature reaches about 400 degrees Celsius. This is the critical window for rescue, because if help arrives too late, the fire becomes difficult to control.
(3) The severe stage is reached when the fire has burned for more than 8 min and there is still a lot of fuel; the temperature rises above 800 degrees Celsius and the fire spreads violently and quickly in all directions, making it difficult to fight. The time allowed for assistance to travel from a disaster assistance center to the scene is within 5 min [12]. At a standard vehicle speed of 60 km/h, the disaster management center must therefore cover the accident site within a radius of 5 km, leaving three minutes to control the fire before it reaches the severe stage.
In the objective of this research, factory types with different levels of disaster risk are multiplied by different weight values. The factories in the industrial estates are divided into 3 types according to risk level, as follows:
• High-risk factories (red) are hazardous spots, such as those that use flammable mate-
rials or fuel or explode easily, in accordance with the plant types specified in the notification of the Ministry of Industry on fire prevention and suppression in factories, such as energy, petrochemicals, electronic components, etc.
• Medium-risk factories (yellow) are areas to be monitored or a factory that operates
businesses other than those specified in the Ministry of Industry’s notification on fire
prevention and suppression in factories, such as the food industry, steel, etc.
• Low-risk factories (green) are low-risk areas for related types of businesses, such as
the service industry, building materials, etc.
Maximize Σ_{i=1}^{n} Σ_{j=1}^{m} r_i z_ij    (1)

Subject to:

Σ_{j=1}^{m} x_j = P    (2)

Σ_{j=1}^{m} k_j a_ij x_j ≥ Σ_{j=1}^{m} w_i z_ij,  ∀i    (3)

Σ_{j=1}^{m} z_ij ≤ 1,  ∀i    (4)

x_j ∈ {0, 1},  ∀j    (5)

z_ij ∈ {0, 1},  ∀i, j    (6)
Indices:
i = set of demand nodes = 1,…, n
j = set of fire station locations = 1, …, m
n = number of demand nodes.
m = number of fire station locations.
r_i = risk score of demand node i.
w_i = number of factories (workload) at demand node i.
k_j = workload capacity of a fire station at location j.
a_ij = 1 if a fire station at location j can serve demand node i within distance S.
= 0 otherwise.
P = maximum number of fire stations to be opened.
S = distance that a call can be served in a standard response time.
Decision variables:
x_j = 1 if node j is selected to be a fire station.
= 0 otherwise.
z_ij = 1 if industrial plants at demand node i are covered by a fire station at location j.
= 0 otherwise.
The objective in Eq. (1) is to maximize the total risk scores. The risk score is cal-
culated by multiplying the risk level of the area i by the number of industrial plants in
the area. Note that in this study, there are three levels of severity, which are high risk,
medium risk, and low risk. The constraint in Eq. (2) sets the number of fire stations to be opened to P. The constraint in Eq. (3) forces that a demand node i can be covered only if there exists a fire station at location j that can serve demand node i within the standard response time; note that the workload of the fire station at location j cannot exceed k_j. The constraint in Eq. (4) prevents multiple coverage counting, so each demand node i can be covered by at most one fire station. The constraints in Eqs. (5)–(6) assign the domains of the decision variables.
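For illustration, the model in Eqs. (1)–(6) can be written with the open-source PuLP library as in the sketch below; the tiny instance (distances, risk scores r_i, workloads w_i, and capacities k_j) is invented for the example and is not the Chachoengsao data.

from pulp import LpProblem, LpMaximize, LpVariable, lpSum, LpBinary

n, m, P, S = 4, 3, 2, 8.0                     # nodes, sites, stations, km
dist = [[3, 9, 12], [6, 4, 10], [11, 5, 7], [14, 9, 2]]  # d(i, j) in km
r = [3, 2, 1, 3]                              # risk score of each demand node
w = [30, 20, 10, 40]                          # factories at each demand node
k = [50, 50, 50]                              # workload capacity of each site
a = [[1 if dist[i][j] <= S else 0 for j in range(m)] for i in range(n)]

prob = LpProblem("capacitated_MCLP", LpMaximize)
x = LpVariable.dicts("x", range(m), cat=LpBinary)              # open site j
z = LpVariable.dicts("z", (range(n), range(m)), cat=LpBinary)  # i covered by j

prob += lpSum(r[i] * z[i][j] for i in range(n) for j in range(m))   # Eq. (1)
prob += lpSum(x[j] for j in range(m)) == P                          # Eq. (2)
for i in range(n):
    prob += (lpSum(k[j] * a[i][j] * x[j] for j in range(m))          # Eq. (3)
             >= lpSum(w[i] * z[i][j] for j in range(m)))
    prob += lpSum(z[i][j] for j in range(m)) <= 1                    # Eq. (4)

prob.solve()
print("open sites:", [j for j in range(m) if x[j].value() == 1])
print("covered risk:", sum(r[i] * z[i][j].value()
                           for i in range(n) for j in range(m)))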
Data were collected on the locations of industrial plants in the 3 main industrial estates: Gateway City, Wellgrow, and TFD. Table 1 shows the 1699 factories in Chachoengsao Province classified by the 3 fire risk levels.
To determine the location of the fire stations, we analyzed the service coverage of
the current fire stations in Chachoengsao Province. According to the survey, there are
currently 20 fire stations in the province. The importance, or fire risk, of each plant is weighted according to its risk level. To determine the proper locations of the fire stations, MCLP is applied to the maximal demand coverage problem, with the coverage of assistance designated as 8 min, or 8 km. According to the analysis in Table 2, the current 20 fire station locations in Chachoengsao province cover a risk score of 74.35%. In Table 3, each fire station is designated to serve up to 50 factories, and the number of fire stations P is varied over 10, 15, 20, 21, and 22. The results show that as P increases, the coverage percentage also increases; the optimal number of fire stations is 22. In Table 4, each fire station is defined to serve up to 100 factories, with P again varied over 10, 15, 20, 21, and 22; the optimal number of fire stations is again 22.
Table 2. Coverage of the current fire stations according to the risk score.

P    Stations                                                            % Coverage
20   5 11 16 18 19 57 70 77 79 86 92 93 100 124 126 150 152 185 209 210  74.35
Table 3. Results of the coverage percentage according to the risk score with capacity of 50 factories.

P    Stations                                                                        % Coverage
10   8 16 43 66 70 122 151 154 175 210                                               85.69
15   8 16 43 57 66 69 92 121 137 144 150 188 194 200 216                             98.23
20   1 8 16 36 50 59 60 65 107 110 134 137 140 144 156 183 187 200 205 216           99.88
21   1 8 16 24 51 56 59 60 65 106 110 116 137 140 144 156 183 187 200 205 216        99.94
22   1 8 16 24 36 51 59 60 65 110 116 128 137 140 144 156 182 186 200 205 211 216    100.00
Figure 1 compares the cases where the workload capacity of a fire station is limited to 50 and to 100 factories. When the number of fire stations P is 10, the percentage of coverage of the workload
Table 4. Results of the coverage percentage according to the risk score with capacity of 100 factories.

P    Stations                                                                        % Coverage
10   9 42 47 70 84 122 151 154 175 210                                               90.34
15   9 17 50 55 63 73 95 104 141 148 156 169 192 205 216                             98.70
20   1 9 17 36 50 59 62 65 107 110 134 137 144 156 162 183 187 200 205 216           99.88
21   1 9 17 24 36 50 59 63 95 107 110 116 123 144 156 159 183 187 200 205 216        99.94
22   1 9 17 24 36 44 59 66 69 89 116 119 140 144 156 165 170 182 200 205 211 216     100.00
capacity of 100 factories is higher than that of 50 factories. When the number of fire
stations P is increased, the difference in maximum coverage percentage decreases. When
P = 22, the maximum coverage is reached for both workload capacities, 50 and 100 factories.
fire station to be able to accommodate up to 50 and 100 factories. The optimal number
of fire stations is 22 in both cases.
For future research, the location of fire stations will be determined using metaheuristics such as multi-objective whale optimization, and extensions will examine the impact of the spread of hazardous chemicals on public health and environmental conditions.
References
1. Badri, M.A., Mortagy, A.K., Alsayed, C.A.: A multi-objective model for locating fire stations.
Eur. J. Oper. Res. 110(2), 243–260 (1998)
2. Huang, B., Yao, L.: A GIS supported ant algorithm for the linear feature covering problem
with distance constraints. Decis. Support Syst. 42(2), 1063–1075 (2006)
3. Naji-Azimi, Z., Renaud, J., Ruiz, A., Salari, M.: A covering tour approach to the location of
satellite distribution centers to supply humanitarian aid. Eur. J. Oper. Res. 51, 365–378 (2021)
4. Shitai, B., Ningchuan, X., Zehui, L., Heyuan, Z.: Optimizing watchtower locations for forest
fire monitoring using location model. Fire Saf. J. 71, 100–109 (2015)
5. Jian, W., Han, L., Shi, A., Na, C.: A new partial coverage locating model for cooperative fire
services. Inf. Sci. 373, 527–538 (2016)
6. Davari, S., Zarandi, M.H.F., Turksen, I.B.: A greedy variable neighborhood search heuristic
for the maximal covering location problem with fuzzy coverage radii. Knowl. -Based Syst.
41, 68–76 (2013)
7. Elkady, S.K., Abdelsalam, H.M.: A modified particle swarm optimization algorithm for solv-
ing capacitated maximal covering location problem in healthcare systems. Appl. Intel. Optim.
Biol. Med. 117–133 (2016)
8. Drakulić, D., Takaci, A., Marić, M.: New model of maximal covering location problem with
fuzzy conditions. Comput. Inform. 35, 635–652 (2016)
9. Soumen, A., Priya, R.S.M., Anirban, M.: Solving a new variant of the capacitated maximal
covering location problem with fuzzy coverage area using metaheuristic approaches. Comput.
Indust. Eng. 170 (2022). [Link]
10. Toloueiashtian, M., Golsorkhtabaramiri, M., Rad, S.Y.B.: An improved whale optimization
algorithm solving the point coverage problem in wireless sensor networks. Telecommun. Syst.
79, 417–436 (2022)
11. Yin, P., Mu, L.: Modular capacitated maximal covering location problem for the optimal siting
of emergency vehicles. Appl. Geogr. 34, 247–254 (2012)
12. Murray, A.T.: Maximal coverage location problem: impacts, significance, and evolution. Int.
Reg. Sci. Rev. 39(1), 5–27 (2016)
13. Church, R., ReVelle, C.: The maximal covering location problem. Pap. Reg. Sci. Assoc. 32,
101–118 (1974)
Optimisation of a Sustainable Biogas Production
from Oleochemical Industrial Wastewater
Mohd Faizan Jamaluddin1(B) , Kenneth Tiong Kim Yeoh1 , Chee Ming Choo1 ,
Marie Laurina Emmanuelle Laurel-Angel Guillaume1 , Lik Yin Ng2 ,
and Vui Soon Chok3
1 Centre for Water Research, Faculty of Engineering, Built Environment and Information
Technology, SEGi University, Petaling Jaya, Malaysia
faizanjamaluddin@[Link]
2 Department of Chemical and Petroleum Engineering, Faculty of Engineering, Technology and
Built Environment, UCSI University, Kuala Lumpur, Malaysia
3 KL-Kepong (KLK) Oleomas Sdn. Bhd, 42920, Pulau Indah, Malaysia
1 Introduction
The global energy demand is rapidly increasing, with fossil fuels accounting for about
88% of this demand [1]. Fossil fuel-derived carbon dioxide emissions are the main
contributor to the rapidly increasing concentrations of greenhouse gases [2]. In order
to minimise the environmental impacts of the energy production industry, the quest for
sustainable and environmentally friendly sources of energy has become an urgency in
recent years. Consequently, an interest in the production and use of fuels from organic
waste can be observed in recent years.
In this context, biogas from wastes, residues, and energy crops can play an important
role as energy source in the future. Generally, biogas consists of 50–75% of methane
2 Methodology
2.1 Optimisation Process Flow Diagram
The flow diagram depicts the individual steps of the process in sequential order. This study
requires the collection of data for the optimisation model construction. The generic
structure of the biogas production is also developed before formulating the mathematical
optimisation equations. The simulation is then run using all the data collected for different
scenarios, Pathway A and Pathway B, which aim to maximise biogas yield and maximise
economic performance respectively (Fig. 1).
Pathway A can show the digester responsible for the highest biogas yield, while
Pathway B provides the economic upper and lower limits for use in the multi-objective
optimisation simulation. By integrating fuzzy logic, the optimisation method can provide
the production plant with the highest achievable economic performance and biogas yield,
as well as the optimal type of pre-treatment and digester. Consequently, an optimised
and sustainable biogas production plant using oleochemical wastewater as feedstock can
be obtained.
Fig. 2. General superstructure for the optimisation of biogas from oleochemical wastewater
F_a = Σ_{p=1}^{P} f_ap,  ∀a    (1)

F_p = Σ_{a=1}^{A} f_ap COD_p X_pg,  ∀p    (2)

F_c = Σ_{p=1}^{P} f_pc COD_c X_cg,  ∀c    (3)

F_g = Σ_{j=1}^{J} Σ_{c=1}^{C} f_cj COD_j X_jg,  ∀g    (4)
The generic equations for the volumetric flowrate balance of oleochemical wastewater are shown in Eqs. (1)–(4). The flowrate of oleochemical wastewater a, F_a (m3/day), is fed into physical pre-treatment technology p with a flowrate of f_ap, as shown in Eq. (1). The flowrate of pre-treated stream p, F_p (m3/day), from the physical pre-treatment technology p can be determined by Eq. (2), given the conversion rate (X_pg) of biogas g from wastewater, which is based on the COD removal efficiency. The COD removed by physical pre-treatment technology p is denoted by COD_p. Similarly, the flowrate of the stream from chemical pre-treatment c, F_c (m3/day), given the conversion rate (X_cg), where the COD removal from chemical pre-treatment c is denoted by COD_c, can be obtained from Eq. (3), and the biogas g with flowrate F_g from the anaerobic digester j can be obtained from Eq. (4), given the conversion rate (X_jg).
AF = [r(1 + r)^y] / [(1 + r)^y − 1]    (7)
The annual capital cost is determined by multiplying the capital cost, CAP_TOT, by the annualised factor, AF, as shown in Eq. (6), where AF is obtained using Eq. (7). The capital cost, CAP_TOT, can be obtained using the following equation.
CAP_TOT = Σ_{p=1}^{P} Σ_{a=1}^{A} (f_ap × CC_p) + Σ_{c=1}^{C} Σ_{p=1}^{P} (f_pc × CC_c) + Σ_{j=1}^{J} Σ_{c=1}^{C} (f_cj × CC_j)    (8)
where CC_p is the capital cost of physical pre-treatment technologies p, CC_c is the capital cost of chemical pre-treatment technologies c, and CC_j is the capital cost of digesters j.
The total annual operating cost, OC_TOT, is estimated based on the flowrate and operating cost of each piece of equipment and technology. The operating costs of the physical pre-treatment technologies, chemical pre-treatment technologies, and digesters are given by OC_p, OC_c, and OC_j, respectively:
OC_TOT = Σ_{p=1}^{P} Σ_{a=1}^{A} (f_ap × OC_p) + Σ_{c=1}^{C} Σ_{p=1}^{P} (f_pc × OC_c) + Σ_{j=1}^{J} Σ_{c=1}^{C} (f_cj × OC_j)    (9)
The annual total revenue, REV, generated by the production of biogas is associated with the biogas selling price, P_g:
REV = Σ_{g=1}^{G} F_g × P_g    (10)
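As a small numerical illustration of Eqs. (7)–(10), the sketch below computes the annualised factor and an economic performance figure. Since the exact definition of EP (Eqs. (5)–(6)) is not reproduced here, EP is assumed to be revenue minus the annualised capital and operating costs, and all input numbers are placeholders.

def annualised_factor(r: float, y: int) -> float:
    # Eq. (7): AF for discount rate r and plant lifetime of y years
    return (r * (1 + r) ** y) / ((1 + r) ** y - 1)

def economic_performance(cap_tot: float, oc_tot: float, rev: float,
                         r: float = 0.10, y: int = 15) -> float:
    # Assumed: EP = REV - (CAP_TOT x AF + OC_TOT); see the caveat above.
    return rev - (cap_tot * annualised_factor(r, y) + oc_tot)

# Hypothetical plant: 2.0 MUSD capital, 0.3 MUSD/y operating cost,
# 0.8 MUSD/y biogas revenue at a 10% rate over 15 years.
print(round(economic_performance(2.0e6, 0.3e6, 0.8e6)))  # ~237,000 USD/year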
3 Fuzzy Optimisation
The objective of a general multi-objective optimisation is to minimise or maximise a set of objective functions subject to constraints. Fuzzy optimisation can determine the ideal alternatives
in decision making problems by solving an objective function on a set of alternatives
given by the constraints. In this study, fuzzy optimisation is used due to its flexibility
and reliability. A degree of satisfaction, λ is required for the fuzzy model. The value of
λ ranges from 0 to 1, whereby 0 indicates that the economic performance approaches its
upper limit while 1 indicates that the optimisation objective is approaching its lower limit.
These objective functions are assumed to be linear functions bounded by upper and lower limits. The upper limit is represented as UL and the lower limit as LL. The economic performance EP (USD/year), which needs to be maximised, is determined by the following equation.
(EP_UL − EP) / (EP_UL − EP_LL) ≥ λ    (11)
4 Case Study
4.1 Maximising Biogas Production
In this scenario, the optimisation objective is set to maximise the production of biogas.
Maximise Σ_{g=1}^{G} F_g    (12)

4.2 Maximising Economic Performance

In this scenario, the optimisation objective is set to maximise the economic performance.

Maximise EP    (13)
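A minimal sketch of the resulting max-λ fuzzy formulation is shown below, again using PuLP and assuming the usual max-type membership in which the degree of satisfaction rises toward each objective's upper limit. The limits and the toy trade-off constraint are invented placeholders standing in for the full pathway-selection model of Eqs. (1)–(10).

from pulp import LpProblem, LpMaximize, LpVariable

# Upper/lower limits as would be obtained from Pathways A and B (hypothetical).
EP_LL, EP_UL = 1.0e5, 2.4e5        # economic performance, USD/year
FG_LL, FG_UL = 800.0, 1500.0       # biogas flowrate, m3/day

lam = LpVariable("lambda", lowBound=0, upBound=1)   # degree of satisfaction
ep = LpVariable("EP", lowBound=EP_LL, upBound=EP_UL)
fg = LpVariable("Fg", lowBound=FG_LL, upBound=FG_UL)

prob = LpProblem("fuzzy_biogas", LpMaximize)
prob += lam                                          # maximise lambda
prob += (ep - EP_LL) >= lam * (EP_UL - EP_LL)        # EP membership
prob += (fg - FG_LL) >= lam * (FG_UL - FG_LL)        # biogas-yield membership
# In the full model, EP and Fg are tied to the flowrate and cost equations
# and the pathway selection; here a toy constraint makes them conflict.
prob += ep + 100.0 * fg <= 3.2e5

prob.solve()
print("lambda =", lam.value(), "EP =", ep.value(), "Fg =", fg.value())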
5 Conclusion
The results of this optimisation can therefore provide the most appropriate pathway for
an economically feasible biogas production with the highest achievable biogas yield. The
pathway obtained specifies the pre-treatment and bioreactor technologies that provide the maximum biogas yield and, at the same time, the maximum economic performance.
The use of fuzzy logic optimisation allows the representation and manipulation of uncer-
tain data to provide an accurate model to improve the biogas production system using
oleochemical wastewater. The main advantage of this model is that both objectives, economic performance and biogas yield, are taken into consideration to provide a sustainable biogas production process. The model can be further extended by also taking into consideration environmental factors such as carbon footprint, greenhouse gas emissions, and energy consumption. For future research, a larger selection of pre-treatment technologies can be included in the model, such as preliminary treatment and upgrading technologies to produce bio-methane. The biogas would be upgraded to bio-methane with 95% methane content, which has a higher energy content. The environmental impact in the form of greenhouse gas emissions will also be included in the new model.
Acknowledgement. This study was supported in part by the RMIC SEGi University (Award No:
SEGiIRF/2022-Q1/FoEBEIT/005). KLK Oleomas Sdn Bhd deserves special thanks for providing
access to their archives and technical advice throughout the project.
References
1. Weiland, P.: Biogas production: current state and perspectives. Appl. Microbiol. Biotechnol.
85(4), 849–860 (2009). [Link]
2. Naik, S., Goud, V.V., Rout, P.K., Dalai, A.K.: Production of first and second generation biofuels:
a comprehensive review. Renew. Sustain. Energy Rev. 14(2), 578–597 (2010). [Link]
10.1016/[Link].2009.10.003
3. Isa, M.H., et al.: Improved anaerobic digestion of palm oil mill effluent and biogas production
by Ultrasonication Pretreatment. Sci. Total Environ. 722, 137833 (2020). [Link]
1016/[Link].2020.137833
4. Schnurer, A., Jarvis, A.: Microbiological handbook for biogas plants. Swed. Waste Manage
1–74 (2010)
5. Ismail, Z., Mahmood, N.A., Ghafar, U.S., Umor, N.A., Muhammad, S.A.: Preliminary studies
on oleochemical wastewater treatment using submerged bed biofilm reactor (SBBR). IOP Conf.
Ser. Mater. Sci. Eng. 206, 012087 (2017). [Link]
6. Ohimain, E.I., Izah, S.C.: A review of biogas production from palm oil mill effluents using dif-
ferent configurations of bioreactors. Renew. Sustain. Energy Rev. 70, 242–253 (2017). https://
[Link]/10.1016/[Link].2016.11.221
7. Makisha, N., Semenova, D.: Production of biogas at wastewater treatment plants and its further
application. MATEC Web Conf. 144, 04016 (2018). [Link]
404016
8. Bennich, T., Belyazid, S.: The route to sustainability-prospects and challengesof the bio-based
economy. Sustainability (Switzerland) 9(6), 887 (2017). [Link]
Ensemble Approach for Optimizing Variable
Rigidity Joints in Robotic Manipulators Using
MOALO-MODA
1 Introduction
Lately, there has been a surge in academic attention towards sophisticated robots
that necessitate direct collaboration and engagement with users. Traditional industrial
robotics technology faces considerable hurdles in meeting these demands due to inher-
ent system constraints. Classic robots function in segregated settings, utilizing highly
stiff joints to attain exceptional positional accuracy. As a solution, researchers have
proposed and designed compliant joints, which integrate mechanical flexibility into the
robotic joint structure. Nonetheless, ensuring human safety in proximity to robotic sys-
tems remains crucial. The incorporation of compliance in joints has been extensively
researched as a means to enhance safety measures. One method to achieve this is through
active compliance, which emulates mechanical flexibility by employing sensors and
actuators. However, active compliance can become unreliable when these sensors fail. Li et al. [1] used variable stiffness mechanisms to design a novel
cable-driven joint in robotic components. Memar et al. [2] designed a robot gripper with
variable stiffness actuation to reduce the damage due to collision. Bilancia et al. [3]
presented a virtual and physical prototype of a beam-based variable stiffness actuator.
Their aim is to enhance safety in human-machine interaction. Nelson et al. [4] used a variable stiffness mechanism to develop a redundant rehabilitation robot. Similarly, Hu et al.
[5] designed a novel antagonistic variable stiffness dexterous finger. Yun et al. [6] uti-
lized permanent rotary magnets to accomplish variable rigidity. The joints with variable
rigidity must have adequate stiffness during manipulator movement so that the robotic
system can bear the load. Tonietti et al. [7] demonstrated a robotic arm with variable
rigidity facilitated by a linear actuator. Within a standard human living space, Yoo et al.
[8] suggest that variable rigidity joints must produce over 10 Nm of torque to support a
1 kg payload on a 1 m robotic arm. Nevertheless, employing traditional electric motors
could result in excessively heavy variable stiffness joints. To address this issue, Yoo et al.
[8] introduced neodymium-iron-boron ring-shaped permanent magnets in conjunction
with variable stiffness joints. Magnets are sometimes utilized alongside direct current
actuators in robotic systems.
Optimizing such variable stiffness joints is very important to maximize the performance and minimize the risk of any robot manipulator. Researchers have proposed many single- and multi-objective optimization techniques, and many have combined two algorithms into a hybrid. Because some optimization techniques are good at exploration and others at exploitation, merging a strong explorer with a strong exploiter yields a hybrid technique that performs well at both.
The above literature study reveals that the optimization of variable stiffness joints in
robot manipulators is essential. However, most research on this topic considers only straightforward single-objective and multi-objective optimization techniques.
Hybrid optimization techniques are not used to design the variable stiffness joint. So, in
this paper, a novel MOALO-MODA ensemble-based optimization technique is proposed
for designing the robot manipulator variable stiffness joint. The following contributions
are made in this paper:
• A novel MOALO-MODA ensemble is proposed to design variable stiffness joints in
robot manipulators.
• The advantage of the MOALO-MODA ensemble is established by comparing it with
its individual counterparts.
2 Methodology
2.1 Multi-objective Antlion Optimization (MOALO)
Multi-objective Antlion Optimization (MOALO) is a nature-inspired, population-based
optimization algorithm inspired by antlions’ hunting mechanism [9]. The algorithm sim-
ulates the behaviour of antlions and ants, where antlions are considered as the candidate
solutions, and ants represent the search agents. MOALO is particularly effective for
solving multi-response optimization problems, owing to its efficient exploration and
exploitation capabilities. The pseudocode for MOALO is detailed below:
Algorithm 1: MOALO
    Initialize the population of antlions and ants
    Evaluate the fitness of each antlion
    Initialize the archive of non-dominated solutions
    While stopping criteria are not met:
        For each ant:
            Select two antlions using binary tournament selection
            Generate a new solution by random walk and update the ant's position
            Update the archive with the new solution if it is non-dominated
        Update the antlion population with the solutions from the archive
        Evaluate the fitness of each antlion
        Update the archive using new NDS solutions
    Return the final archive of NDS solutions
Algorithm 2: MODA
    Initialize the population of dragonflies
    Evaluate the fitness of each dragonfly
    Initialize the archive of non-dominated solutions
    Calculate the initial velocities of the dragonflies
    While stopping criteria are not met:
        Update the position and velocity of each dragonfly
        Evaluate the fitness of each dragonfly
        Update the archive with the new NDS solutions
        Update the neighbourhood of each dragonfly
        Update the global best solution
    Return the final archive of NDS solutions
The MOALO-MODA ensemble is a novel approach that combines the strengths of both
MOALO and MODA to solve multi-objective optimization problems [11]. Inspired by
ensemble machine learning models, this approach aims to build a robust optimization
technique by integrating the Pareto fronts generated from multiple independent trials of
both MOALO and MODA algorithms.
The MOALO-MODA ensemble comprises three stages: primary non-dominated
sorting (NDS) within each trial, secondary NDS for MOALO and MODA ensembles
separately, and tertiary NDS for the combined MOALO-MODA ensemble. This method
allows the ensemble to exploit potentially better NDS solutions at different segments of
the Pareto front, enhancing the overall performance of the optimization process.
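A short sketch of the non-dominated filtering applied at each of these stages is given below; points are (torque, weight) pairs with torque maximized and weight minimized, and the sample trial fronts are invented for illustration.

def dominates(a, b):
    """True if point a dominates b: torque no worse, weight no worse,
    and strictly better in at least one objective."""
    return (a[0] >= b[0] and a[1] <= b[1]) and (a[0] > b[0] or a[1] < b[1])

def non_dominated(points):
    return [p for p in points
            if not any(dominates(q, p) for q in points if q != p)]

trial_fronts = [                       # e.g. three MOALO trials (toy data)
    [(12.0, 0.90), (10.5, 0.80), (9.0, 0.75)],
    [(12.5, 0.95), (10.0, 0.78)],
    [(11.0, 0.82), (9.5, 0.74)],
]
pooled = [p for front in trial_fronts for p in front]
ensemble_front = non_dominated(pooled)   # secondary NDS over pooled fronts
print(sorted(ensemble_front))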
3 Problem Description
The objective is to design a robot manipulator with variable stiffness joints. Three design parameters, namely inner stator width (x1), outer stator width (x2), and magnet height (x3), are used. The study’s objective is to simultaneously minimize the variable stiffness joint’s overall weight while maximizing the torque. This study uses a MOALO-MODA ensemble technique to perform the multi-objective optimization: MOALO-MODA is a nature-inspired hybrid of multi-objective ant lion optimization (MOALO) and the multi-objective dragonfly algorithm (MODA), combined through an ensemble technique borrowed from machine learning. The case study
is from the works of Yoo et al. [8]. The design of a variable rigidity joint for a robotic
manipulator aims to accommodate a robotic arm with a payload capacity of 1 kg. Fur-
thermore, it is presumed that the rotational stiffness of such a joint should be at least 10 Nm. The problem is therefore treated as a constrained optimization problem with a constraint function that rejects solutions whose rotational stiffness is less than 10 Nm. This ensures that the non-dominated solution archive contains only those
solutions that satisfy the rotational rigidity constraint criteria. The upper limits and the
lower limits of the design parameters in mm are given as follows:
6 ≤ x1 ≤ 10; 5 ≤ x2 ≤ 15; 6 ≤ x3 ≤ 10 (1)
A central composite design (CCD) is performed using the upper and lower limits of
the design variables to get the design points and to perform the experiment. Based on
the experimental studies, Yoo et al. [8] modelled second-order polynomial equations for torque and weight, shown in Eqs. (2) and (3), and reported very good coefficients of determination (R2) of 0.97 and 1 for torque and weight, respectively. The design variables’ upper and lower limits are further coded into the range ± 1.
Fig. 1. Pareto front generated in three different trials by (a) MOALO (b) MODA.
The box plot of three trials for torque and weight utilizing the MOALO optimizer is
depicted in Fig. 2, and it can be seen that for trial two, there are outliers for both cases.
Figure 2 shows that the first trial obtained the largest number of high torque values, whereas the second trial obtained the largest number of low weight values.
Fig. 2. Box plot of Pareto fronts generated by MOALO in three different trials.
Figure 3 displays the box diagram of all functional evaluations for torque and weight
using the MODA optimizer. The ensemble method is then used to further condense the Pareto fronts obtained from the MOALO and MODA optimizers by re-evaluating the non-dominated solutions generated by the two optimization strategies.
Figure 4a depicts the Pareto front derived from the MOALO ensemble method,
whereas Fig. 4b depicts the Pareto front derived from the MODA ensemble method. In
addition, the MOALO-MODA ensemble technique is used to condense the Pareto front
derived from the MOALO and MODA ensemble techniques.
Figure 5 depicts the Pareto front derived using MOALO-MODA ensemble tech-
niques. Consequently, the Pareto front derived from the MOALO-MODA ensemble can
be used to design robot manipulator joints with variable stiffness according to the design
requirements.
It is important to state here that during the initial independent n runs of the MOALO
and MODA algorithm, the archive size is limited to 500. This ensures that sufficient
Pareto solutions are recorded on the Pareto fronts. In this study, all three runs of MOALO obtained 500 Pareto solutions, whereas MODA obtained 500 Pareto solutions in 2 out of 3 trials; the third trial recorded 499. In contrast, the ensemble MOALO and ensemble MODA contain only 114 and 263 Pareto solutions, reductions of approximately 77% and 47%, respectively, compared with a typical Pareto front of MOALO or MODA. The final MOALO-MODA ensemble contains only 340 Pareto solutions, a reduction of about 32% compared with a typical Pareto front of MOALO or MODA.
Fig. 3. Box plot of Pareto fronts generated by MODA in three different trials.
Out of the 340 Pareto solutions in the MOALO-MODA ensemble, 87 are from MOALO
and 253 are from MODA. This corresponds to 25.6% of the MOALO-MODA ensemble being made up of MOALO solutions, whereas the remaining 74.4% were originally generated by MODA. This highlights the importance of the heteroge-
neous mixing of Pareto solutions from multiple multi-objective algorithms in forming
an ensemble Pareto front.
In terms of computational time, the MOALO and MODA algorithms are found to
be comparable, with averages of 99 s (± 21 s) and 91 s (± 33 s), respectively.
5 Conclusions
This study presents a novel MOALO-MODA ensemble technique for optimizing a robot
manipulator’s variable stiffness joint. The objective of the study is to maximize the
generated torque and minimize the weight. The design parameters for this investigation
are the inner stator width (x1 ), outer stator width (x2 ), and magnet height (x3 ). Ini-
tially, MOALO and MODA optimization techniques are used to generate Pareto fronts.
The obtained Pareto solutions are then collated and condensed using NDS to form the
MOALO ensemble and MODA ensemble. Subsequently, the Pareto fronts derived from
both the MOALO and MODA ensembles are combined, followed by the application of
the NDS. The emerging Pareto front is referred to as the MOALO-MODA ensemble.
This study demonstrates that the Pareto front generated by the MODA optimizer contributes a larger proportion of the non-dominated outcomes.
References
1. Li, Z., et al.: A novel cable-driven antagonistic joint designed with variable stiffness
mechanisms. Mech. Mach. Theory 171, 104716 (2022)
2. Memar, A.H., Esfahani, E.T.: A robot gripper with variable stiffness actuation for enhancing
collision safety. IEEE Trans. Industr. Electron. 67, 6607–6616 (2019)
3. Bilancia, P., Berselli, G., Palli, G.: Virtual and physical prototyping of a beam-based vari-
able stiffness actuator for safe human-machine interaction. Rob. Comput.-Integr. Manuf. 65,
101886 (2020)
4. Nelson, C.A., Nouaille, L., Poisson, G.: A redundant rehabilitation robot with a variable
stiffness mechanism. Mech. Mach. Theory 150, 103862 (2020)
5. Hu, H., Liu, Y., Xie, Z., Yao, J., Liu, H.: Mechanical design, modeling, and identification for
a novel antagonistic variable stiffness dexterous finger. Front. Mech. Eng. 17, 35 (2022)
6. Yun, S., Kang, S., Kim, M., Hyun, M., Yoo, J., Kim, S.: A novel design of high responsive
variable stiffness joints for dependable manipulator. Proc. ACMD (2006)
7. Tonietti, G., Schiavi, R., Bicchi, A.: Design and control of a variable stiffness actuator for safe
and fast physical human/robot interaction. In: Proceedings of the 2005 IEEE International
Conference on Robotics and Automation (2005)
8. Yoo, J., Hyun, M.W., Choi, J.H., Kang, S., Kim, S.-J.: Optimal design of a variable stiffness
joint in a robot manipulator using the response surface method. J. Mech. Sci. Technol. 23,
2236–2243 (2009)
9. Mirjalili, S., Jangir, P., Saremi, S.: Multi-objective ant lion optimizer: a multi-objective
optimization algorithm for solving engineering problems. Appl. Intell. 46, 79–95 (2017)
10. Mirjalili, S.: Dragonfly algorithm: a new meta-heuristic optimization technique for solving
single-objective, discrete, and multi-objective problems. Neural Comput. Appl. 27, 1053–
1073 (2016)
11. Kalita, K., Kumar, V., Chakraborty, S.: A novel MOALO-MODA ensemble approach
for multi-objective optimization of machining parameters for metal matrix composites.
Multiscale Multidisciplinary Model Exp. Des. 1–19 (2023)
Future Design: An Analysis of the Impact of AI
on Designers’ Workflow and Skill Sets
Abstract. This research examines the current state of artificial intelligence (AI) in
design, its potential impact, and the skills that will be required for future designers
to work effectively with AI. The future of design with AI is investigated. This
article explores the ethical and responsible application of AI in design, analyzing
case studies of AI-driven design projects from a variety of industries, including
architecture and fashion. Although artificial intelligence has the ability to improve
and optimize the design process, it is not the end of human invention, and designers
need to be aware of the potential biases and limitations in AI algorithms. In its conclusion,
the study offers several thoughts on the trajectory of design in conjunction with
AI as well as some directions for future research.
1 Introduction
Since the beginning of human civilization, design has played an important role in the
development of many aspects of society. Design is an essential component in the pro-
duction of anything from tools and structures to works of art and clothing, and it plays
an important part in the formation of our environment [1]. Recent years have seen
tremendous breakthroughs in the field of design, brought about by the rise of artificial
intelligence (AI). These advancements have had an impact on a variety of industries and
changed the way that we approach design. It is impossible to overestimate the signifi-
cance of design because of the countless ways in which it affects our day-to-day lives.
Design plays an essential role in the process of defining the physical world around us,
from the structures in which we live and work to the objects that we use on a daily basis
[2]. It is also an essential component in the process of producing unforgettable experi-
ences, like in the entertainment and hotel industries. The way things are designed has an
effect on everything from our mental and emotional health to our level of productivity
and efficiency [3]. The function of design is undergoing a period of rapid transition as a
result of the development of AI technologies. AI is already being utilized in a variety of
facets of design, including product design, graphic design, and architectural design, to
name just a few of these specializations. AI is revolutionizing the process that designers
follow, from the time they first conceive of a concept until they ship the finished product
[4–6].
This study will investigate the relationship between AI and designers, focusing on
the effects of AI technology on the design industry as well as the potential of design in
conjunction with AI. Additionally, it will investigate the necessary skill set for future
designers to have in order to work effectively with AI, as well as the ethical implications
that surround AI-driven design. The purpose of this paper is to provide insight into the
ways in which AI is influencing the area of design as well as the potential and problems
that lie ahead through the exploration of the aforementioned themes [7].
The field of computer science known as AI focuses on the development of intelligent
machines that are capable of doing activities that would normally require the intelligence
of a human being. The beginning of AI can be dated back to the 1950s, and ever since
that time, it has been utilized in many different areas, including design [4, 7]. Since its
earliest uses in computer-aided design (CAD), AI has seen significant development in
the design industry, giving rise to more sophisticated tools and methods that make use
of machine learning and natural language processing [8].
Generative design is one application of AI in the design industry. The use of algo-
rithms and artificial intelligence in generative design allows for the creation and explo-
ration of design choices depending on specified parameters and limitations. When adopt-
ing this methodology, designers are able to explore more design choices than they would
be able to using traditional methods, which ultimately results in solutions that are more
innovative and efficient [9].
Another illustration of this is image recognition, which is powered by AI and can
examine pictures in order to recognize objects, colors, and patterns. This technology
can be used to automate repetitive chores for designers, such as the choosing of colors
and the categorization of images. As a result, designers are freed up to focus on more
complex and creative jobs. In spite of the numerous benefits that artificial intelligence
brings to the design process, there are also some constraints to take into consideration.
For instance, it is possible that AI will not be able to mimic the creative and intuitive
parts of human design thinking. Additionally, there’s a chance that AI-driven designs
won’t have the same emotional resonance and human touch that humans bring to the
design process. In addition, the quality of the data that is used to train the AI models
and the potential biases that may be present in that data may both be factors that prevent
AI-driven design from reaching its full potential [10].
AI is playing an increasingly crucial role in the design process, bringing new
prospects for creativity and efficiency. However, it is essential to take into account both
the benefits and drawbacks of employing AI in design and to look for ways to strike a
balance between the advantages offered by AI and the distinct advantages offered by
human designers.
2 Research Methodology
The research for this article combines several approaches. First, the relevant literature
is examined to acquire a complete picture of the current state of the field as well as its
emerging tendencies and problems. Second, case studies of real-world AI-driven
design projects in architecture and product design are investigated in order to
understand the influence that AI has on the design process and its results. Third,
to gain insight into the future of design with AI as well as the essential skills and
training for future designers, interviews are conducted with subject matter experts
in AI research, design, and industry [9]. Fourth, a data analysis of design projects
developed with AI-driven design tools is carried out to assess the efficacy and
efficiency of these tools, along with any limitations or biases they may have.
Finally, a comparative study of AI-driven design tools and traditional design
processes is performed to understand their respective advantages and limitations.
Overall, the methodology entails a mix of qualitative and quantitative research
approaches to obtain a comprehensive understanding of the role AI plays in the
design process.
AI-powered design tools are software programs that designers can use to assist
them with their work. These tools can automate a great deal of routine design work,
produce new design possibilities, and provide real-time feedback and ideas. In this
section, we investigate some of the AI-driven design tools that are now available,
their applications, and the effectiveness and efficiency of using them. An Overview.
Image recognition and generative design software, as well as virtual assistants and
design collaboration platforms, are among the numerous AI-powered design tools
presently available on the market. These tools are applicable to a variety of design
fields, including graphic design, product design, architectural design, and fashion
design [3].
AI-Driven Design Tool Examples and Their Applications. Adobe Sensei, which uses
machine learning to automate duties such as object removal, color matching, and image
cropping, is an example of an AI-driven design tool [5, 9]. Autodesk Dreamcatcher
is another example of a program that employs generative design to explore numerous
design options based on particular parameters and constraints. H&M uses an AI tool
called Color on Demand to analyze consumer preferences and generate custom color
palettes for its clothing lines. In the field of architectural design, the AI-powered plat-
form [Link] enables architects to generate and examine building designs based on
particular requirements and regulations [12].
the training and education necessary to prepare future designers, and the balance between
creativity and automation in design. Competencies Required for Designers Working with
AI. Designers utilizing AI-powered design tools will need to possess both technical and
creative skills [7, 14]. Technical abilities include proficiency with software applications
associated with AI development, data analysis, and programming languages. Creative
abilities include imagination, critical reasoning, and problem-solving [8, 10]. In addi-
tion, designers must be able to effectively collaborate and communicate with clients,
engineers, and data scientists, among others. To integrate these tools into the design
process effectively, it will be necessary to understand the capabilities and limitations of
AI technology. Required Education and Training for the Preparation of Future Designers.
To prepare future designers to work with AI-powered design tools, design colleges
and programs must incorporate AI education into their curricula [15]. This education
should include both technical and creative skills, in addition to a comprehensive under-
standing of the ethical and social implications of AI in design. Continuing education
and professional development will also be necessary for designers to stay apprised of
the most recent advancements in AI technology and best practices for employing these
tools. Design with a Balance of Creativity and Automation. The impact that the increasing
use of AI-driven design tools may have on creativity in the design process is a potential
cause for concern [5, 12, 14]. AI may lack the creativity and intuition of human designers,
despite its ability to automate commonplace
tasks and generate numerous design options. To achieve a balance between creativity and
automation in design, designers should view AI-powered design tools as a supplement
to their own creative process, as opposed to a replacement. By combining their own
distinct perspectives and creative abilities with AI’s capabilities, designers can produce
innovative and effective designs [8]. The incorporation of AI-driven design tools into
the design process will necessitate the development of new skills and knowledge among
designers. Design schools and programs will be required to integrate AI education into
their curricula, and designers will be required to remain abreast of the most recent AI
technological advancements. By balancing creativity and automation in design, design-
ers can capitalize on the assets of both human and AI design thinking to produce effective,
innovative designs.
6 Case Study
We examine several case studies of AI-driven design projects in a variety of industries,
analyze their successes and limitations, and address the impact of AI on the design process
and its outcomes. The case studies illustrate the potential and limitations of AI-driven
design in various industries. AI has shown promise in improving design efficiency and
outcomes, but it is essential to consider the potential impact on creativity and the need
for human designers to add their own unique perspective and touch to the final product.
By harmonizing automation and originality, designers can utilize AI-powered design
tools to create innovative and effective designs.
In the development of autonomous vehicles, the automotive industry has also incor-
porated AI-driven design. Improving safety and efficiency, AI algorithms can analyze
sensor data in real-time to make decisions regarding steering, halting, and acceleration.
However, limitations include the requirement for extensive testing and validation, as
well as ethical concerns regarding the potential loss of employment for human chauf-
feurs [5, 12]. In the graphic design industry, AI-driven design tools have been
developed for the construction of logos, business cards, and websites. These tools can
generate multiple design options rapidly, sparing designers time and increasing their
productivity.
In 2018, the Japanese architectural firm Nikken Sekkei collaborated with the technol-
ogy company NVIDIA to create an AI-generated design for a Tokyo office structure
[6]. The team trained a neural network on a dataset of buildings and then used the net-
work to generate 10,000 building design options. The ultimate design was then chosen
based on a variety of factors, including structural efficiency and aesthetic appeal [13,
14]. The architectural firm Gensler designed the new terminal at Los Angeles Interna-
tional Airport using AI. They analyzed data on passenger flows, luggage handling, and
other variables using machine learning algorithms to optimize the terminal’s layout for
optimum efficiency and passenger comfort. Beijing-based MAD Architects used an AI
program to generate the design for the new China Philharmonic Hall in 2019 [16]. The
program analyzed data on the neighboring area, including topography, wind patterns,
and sunlight levels, in order to generate a design optimized for the site’s particular con-
ditions [5]. The architectural firm COOKFOX utilized AI to optimize the design of a
residential superstructure in New York City in 2020 [17]. The team generated thou-
sands of design options using a generative design algorithm and then identified the most
efficient and structurally sound options using machine learning. This enabled them to
design a structure that maximized natural light and ventilation while minimizing energy
consumption.
These examples illustrate how AI can assist architects and designers in creating
designs that are more efficient, optimized, and aesthetically pleasing. AI cannot yet
completely replace human creativity and intuition, but it can improve the design process
and assist architects and designers in making more informed and data-driven decisions.
Designers’ Obligation to Ensure the Ethical Use of AI. It is the responsibility of designers
to use AI in an ethical, transparent, and inclusive manner [10]. This includes designing
AI algorithms that are impartial and equitable, and being transparent about the data
sources used to train the algorithms. In addition, designers must prioritize user privacy
and security and ensure that their work does not perpetuate detrimental stereotypes or
discrimination [11].
AI Design Taking Into Account Potential Biases and Limitations. AI algorithms may
be subject to biases and limitations, which can have a substantial impact on design
outcomes [6]. For example, due to biases in the data sets used to train the algorithms, facial recognition
technology has been criticized for being less accurate when identifying individuals with
darker skin tones. Designers must be aware of such constraints and biases and take
measures to mitigate them in their work. The incorporation of AI into design has the
potential to revolutionize the industry, but designers are responsible for ensuring AI is
used ethically and inclusively. By considering the potential biases and limitations of AI
algorithms, designers can create designs that are transparent, equitable, and inclusive
[10]. As AI technology continues to advance, it is crucial for designers to remain current
on ethical considerations and prioritize the wellbeing of their users.
9 Conclusion
This article examined the intersection of design and AI, analyzing the history and advancements of
AI in design, the current landscape of AI-driven design tools, the skill set necessary for
future designers to work with AI, case studies of AI-driven design projects, and the ethics
and responsibilities of designers when using AI. The most important topics covered in this
article were the potential benefits and limitations of using AI in design, the importance of
designers maintaining a healthy balance between creativity and automation in design, the
need for designers to be aware of the potential biases and limitations of AI algorithms, and
the responsibility of designers to ensure that the use of AI is both ethical and inclusive.
With its potential to streamline repetitive tasks and foster greater innovation, AI holds
great promise for the future of the design industry. However, designers must be cautious
not to perpetuate prejudices and discrimination, and they should prioritize the well-
being of their consumers in their work. Future designers will need to possess a deeper
understanding of AI and how it influences design. To investigate the effects that AI will
have on the design industry and to develop new methods, tools, and strategies that will
facilitate the incorporation of AI into design practices, additional research is required.
As AI technology evolves, designers must remain current with ethical considerations
and continue to prioritize user needs and the welfare of their communities. In general,
AI holds great promise for the future of design, and if designers collaborate with AI
technology, they will be able to create designs that are forward-thinking, inclusive, and
efficient, all of which will benefit society as a whole.
References
1. Alfrink, K., Keller, I., Kortuem, G., Doorn, N.: Contestable AI by design: towards a
framework. Minds Mach 1–27, Aug (2022)
2. Dhanorkar, S., Wolf, C.T., Qian, K., Xu, A., Popa, L., Li, Y.: Who needs to know what,
when?: Broadening the Explainable AI (XAI) design space by looking at explanations across
the AI lifecycle. In: DIS 2021—Proceeding of the 2021 ACM designing interactive systems
conferences nowhere everywhere, vol. 12, pp. 1591–1602, Jun 2021
3. Zhao, D., et al.: Overview on artificial intelligence in design of organic Rankine cycle. Energy
AI 1, 100011 (2020)
4. Zhu, J., Liapis, A., Risi, S., Bidarra, R., Youngblood, G.M.: Explainable AI for designers: a
human-centered perspective on mixed-initiative co-creation. [Link]
5. E.K.-O.: Usage of artificial intelligence in today's graphic design. J. of A. and Design,
vol. 6, no. 4 (2018)
6. Endo, J., Ikaga, T.: Nikken Sekkei building: Tokyo, Japan. Sustain. Build. Pract. What Users
Think, 183–191, Jan (2010)
7. Castro Pena, M.L., Carballal, A., Rodríguez-Fernández, N., Santos, I., Romero, J.: Artificial
intelligence applied to conceptual design. A review of its use in architecture. Autom. Constr.
124, 103550, Apr (2021)
8. Chaillou, S., Chaillou, S.: ArchiGAN: artificial intelligence x architecture. Archit. Intell.
117–127 (2020)
9. Olsson, T., Väänänen, K.: How does AI challenge design practice? Interactions 28(4), 62–64
(2021)
10. Hagendorff, T.: The ethics of AI ethics: an evaluation of guidelines. Mind. Mach. 30(1),
99–120 (2020). [Link]
11. Smithers, T., Conkie, A., Doheny, J., Logan, B., Millington, K., Tang, M.X.: Design as
intelligent behaviour: an AI in design research programme. Artif. Intell. Eng. 5(2), 78–109
(1990)
12. Candeloro, D.: Towards sustainable fashion: the role of artificial intelligence—H&M, Stella
McCartney, Farfetch, Moosejaw: a multiple case study. Zo. J. 10(2), 91–105 (2020)
13. Rauterberg, M., Fjeld, M., Krueger, H., Bichsel, M., Leonhardt, U., Meier, M.: BUILD-IT: a
planning tool for construction and design. 13–028
14. Irbite, A., Irbite, A., Strode, A.: Artificial intelligence vs designer: the impact of artificial
intelligence on design practice. Soc. Integr. Educ. Proc. Int. Sci. Conf. 4, 539–549 (2021)
15. Gero, J.S.: IJCAI-91 Workshop on Ai in design position paper ten problems for Ai in design
16. Yansong, M.: Designing the realizable Utopia (2011)
17. Orhon, A.V., Altin, M.: Utilization of alternative building materials for sustainable construc-
tion. Green Energy Technol. 727–750 (2020)
Clean Energy, Agro-Farming,
and Smart Transportation
Application of Remote Sensing to Assess
the Ability to Absorb Carbon Dioxide of Green
Areas in Thu Dau Mot City, Binh Duong
Province, Vietnam
Nguyen Huynh Anh Tuyet(B) , Nguyen Hien Than, and Dang Trung Thanh
Thu Dau Mot University, Thu Dầu Một, Binh Duong Province, Vietnam
tuyetnha@[Link]
Abstract. Climate change has been causing certain impacts on the environment
and public health, especially in large urban areas, where the population density is
dense and the green area is not enough. Trees play an important role in climate
regulation because of their ability to absorb CO2 generated from city activities.
The study was conducted to evaluate the CO2 absorption of green areas in Thu
Dau Mot City, Binh Duong province, which has many large urban and residential
areas, by using remote sensing technology and field investigation. The total load of
CO2 absorbed by the green space of Thu Dau Mot city is about 874 thousand tons,
including 698 thousand tons from high vegetation areas (accounting for 80%) and
176 thousand tons from low vegetation areas (making up 20%). Hoa Phu is the
suburban ward with the highest load of CO2 absorbed by green space, while the
central ward of Phu Cuong has the lowest. The economic value of the CO2 stored
in the urban green area of Thu Dau Mot city is about 15 million USD, equivalent to
346 billion VND. This value will increase every year due to the growth of tree
biomass. This research proves that trees have a great role in absorbing the CO2 arising
from human activities, and it is therefore necessary for the authorities of Thu Dau Mot
city as well as other places to promote the planting, caring for, and protection of green
spaces as much as possible to protect the environment and the community's health.
1 Introduction
Climate change is now one of the top environmental challenges for many countries around
the world because of its immeasurable direct dangers to the ecological environment,
biodiversity, and human life, especially in big cities where green areas are scarce and
populations are dense. Therefore, urban green space plays an important role in the
sustainable development of urban ecosystems and people’s quality of life. They have a
direct impact on microclimate conditions, air quality, recreational and aesthetic value of
urban areas [1]. In addition, urban green space plays an important role in managing the
physical, psychological and public health of urban residents [2]. Therefore, the study
The process of performing this research and research methods are presented in Fig. 1.
Field survey method: this method was used for the two following purposes:
Field survey for image classification: This survey was used to evaluate the accuracy
of the classification result. This survey was carried out in December 2021 with a total of
70 samples divided into 4 different types of objects: water surface, non-vegetation area,
low vegetation area and high vegetation area.
Field Survey for calculation of carbon stocks: Eight standard plots of 10 m × 10 m
were chosen to represent the various kinds of high vegetation areas in TDMC, and the
following information was recorded: plant name and trunk diameter at a height of 1.3 m
(D1.3) of all trees. This information was used to calculate the tree biomass.
Method of image classification: Remote sensing images are classified into 4 objects:
water surface, non-vegetation area (bare land, house, rock, sand, road, yards…), low
vegetation area (shrub and grassland, annual plants with plant height < 2m) and high
vegetation area (perennial plants, forest… with plant height > 2m) based on the results
of vegetation index calculation.
Vegetation index is an indicator to determine the plant greenness captured by satellite
image. There are several vegetation indices, but the most frequently used index is the
Normalized Difference Vegetation Index (NDVI) [14]. The formula of NDVI calculation
can be expressed as:
NDVI = (NIR − RED)/(NIR + RED)
where: RED stands for the spectral reflectance measurements acquired in the red (vis-
ible) region, is band 4 of Landsat 8 image. NIR stands for the spectral reflectance
measurements acquired in the near-infrared regions, is band 5 of Landsat 8 image.
The NDVI varies between −1 and +1. Negative values of NDVI correspond to
water. Values close to zero (−0.1 to 0.1) generally correspond to barren areas of rock,
sand, or snow. Lastly, low, positive values represent shrubs and grassland (approximately
0.2 to 0.3), while high values indicate temperate and tropical rainforests (0.6–0.8).
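The NDVI computation and threshold-based classification can be sketched as follows. This is a minimal illustration: the cut-off values between classes are assumptions chosen from the indicative ranges above, not the authors' calibrated thresholds, and the reflectance arrays are synthetic.

```python
import numpy as np

def ndvi(red: np.ndarray, nir: np.ndarray) -> np.ndarray:
    """NDVI = (NIR - RED) / (NIR + RED), computed per pixel.

    For Landsat 8, RED is band 4 and NIR is band 5 reflectance.
    """
    denom = nir + red
    return np.where(denom == 0, 0.0, (nir - red) / denom)

def classify(ndvi_img: np.ndarray) -> np.ndarray:
    """Map NDVI to the four object classes used in the study.

    Illustrative thresholds only; the authors' exact cut-offs may differ.
    0 = water, 1 = non-vegetation, 2 = low vegetation, 3 = high vegetation.
    """
    classes = np.full(ndvi_img.shape, 1, dtype=np.uint8)  # default: non-vegetation
    classes[ndvi_img < 0.0] = 0                           # water
    classes[(ndvi_img >= 0.2) & (ndvi_img < 0.5)] = 2     # shrubs, grassland
    classes[ndvi_img >= 0.5] = 3                          # perennial plants, forest
    return classes

# Tiny synthetic reflectance example
red = np.array([[0.30, 0.05], [0.10, 0.08]])
nir = np.array([[0.20, 0.40], [0.15, 0.60]])
print(classify(ndvi(red, nir)))
```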
Method of assessing the accuracy of image classification: This study used the error
matrix to evaluate the accuracy of the classification results. Confusion matrix is estab-
lished based on field survey data combined with classified images. In which, the columns
of the matrix represent the reference data, the rows represent the classification applied
to this data to calculate the error. Kappa index is used to evaluate the accuracy level of
the classification results. A Kappa coefficient close to zero means that the accuracy is
poor whereas a value equal to 1 means the accuracy is perfect. The formula is as follows
[15]:
κ = (T − E)/(1 − E)
where T = observed accuracy and E = chance agreement, which incorporates the
off-diagonal elements.
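A minimal sketch of this accuracy assessment follows; the 4-class error matrix below is hypothetical (its 70 samples merely mirror the survey size mentioned earlier) and is an assumption for illustration.

```python
import numpy as np

def kappa(confusion: np.ndarray) -> float:
    """Kappa = (T - E) / (1 - E) for a confusion (error) matrix.

    T is the observed accuracy (diagonal sum over total) and E is the
    chance agreement computed from the row and column marginals.
    """
    n = confusion.sum()
    t = np.trace(confusion) / n                              # observed accuracy T
    e = (confusion.sum(0) * confusion.sum(1)).sum() / n**2   # chance agreement E
    return (t - e) / (1 - e)

# Hypothetical error matrix (rows: classified, cols: reference samples)
m = np.array([
    [15, 1, 0, 0],   # water
    [1, 16, 2, 0],   # non-vegetation
    [0, 2, 14, 1],   # low vegetation
    [0, 0, 1, 17],   # high vegetation
])
print(f"kappa = {kappa(m):.3f}")
```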
Method of calculating the biomass of green space area: The amount of carbon stored
in the TDMC green space is estimated from two components, including C accumulated
in the high vegetation and the low vegetation cover. Therefore, the study applied a
combination of the following rapid assessment methods to relatively quantify the amount
of C accumulated in the green area at TDMC:
The biomass and C absorption of high vegetation area: The biomass of a tree in the
surveyed plots is calculated according to the formula Y = 0.11 × r × D^(2+c) [16], where:
Y = biomass per tree (kg/plant), D = diameter at a height of 1.3 m (cm), r = wood density
= 0.6 mg/cm3 (the standard average density of wood), and c = 0.62. Root biomass is taken
as one quarter of the tree biomass.
The biomass of the whole plot is calculated according to the formula W = Σ(i=1..n) Yi,
where: W = total biomass of the whole plot (kg/plot area), n = the number of trees in a
standard plot, and i = the order of the tree, from 1 to n.
The carbon storage calculation formula is MC = Y × C (kg/plant), where: MC =
amount of stored carbon (kg/plant) and C = percentage of carbon in the wood = 46% [17].
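The per-plot bookkeeping can be sketched as below. Two assumptions should be flagged: the allometric formula is read with the exponent 2 + c (consistent with c = 0.62 being dimensionless), and carbon is converted to CO2 with the molecular-weight ratio 44/12, which matches the carbon and CO2 columns of Table 4. The diameters in the example are hypothetical.

```python
# Minimal sketch of the per-plot carbon bookkeeping described above.
R_WOOD = 0.6       # standard average wood density (mg/cm3)
C_EXP = 0.62       # allometric exponent offset c (assumed reading: D^(2+c))
C_FRACTION = 0.46  # share of carbon in dry wood
CO2_PER_C = 44.0 / 12.0  # molecular-weight ratio, consistent with Table 4

def tree_biomass_kg(d13_cm: float) -> float:
    """Above-ground biomass of one tree from its diameter at 1.3 m."""
    return 0.11 * R_WOOD * d13_cm ** (2 + C_EXP)

def plot_carbon(diameters_cm: list[float]) -> tuple[float, float]:
    """Total carbon storage and CO2 absorption (kg) of one surveyed plot."""
    above = sum(tree_biomass_kg(d) for d in diameters_cm)
    total = above * 1.25              # add root biomass (1/4 of tree biomass)
    carbon = total * C_FRACTION       # MC = Y x C
    return carbon, carbon * CO2_PER_C

# Hypothetical plot with a few measured trunk diameters (cm)
carbon_kg, co2_kg = plot_carbon([17.8, 18.7, 25.0, 29.5])
print(f"C stored: {carbon_kg:.1f} kg, CO2 absorbed: {co2_kg:.1f} kg")
```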
The biomass of low vegetation area: The calculation for the low vegetation area in TDMC
is based on published results on the biomass of shrubs and green grass, as follows (Table 1):
Table 1. Load of biomass and C storage in various kinds of shrubs and green grass [18].
3.2 Establishing Map and Calculating the Area of Green Spaces in TDMC
The map of green space distribution and the area statistics of land use types in Thu Dau
Mot City are presented in Figs. 2 and 3. The total area of green space in TDMC is about
63.05 km2, accounting for about 61.68% of the total area. Of this, the area of low vegetation
in TDMC is about 46.96 km2, accounting for 39.65% of the total natural area, and that of
high vegetation is about 26.09 km2, making up 22.03%. This result is consistent with the
statistical data of TDMC on land use types in 2020, according to which the area of perennial
agricultural land is 26.98 km2, accounting for 22.7% of the natural land area; this is
equivalent to the area of high vegetation found by the study. The area of non-vegetation
found by this study is about 43.52 km2, accounting for 36.75%, much lower than the
non-agricultural land area of TDMC, which is 91.92 km2, accounting for 77.3%. This can
be explained by the fact that the non-agricultural land area is counted from the type of land
use recorded on the Land Use Right Certificate. In reality, however, the actual land use may
differ: the land can be used for agricultural purposes or lie unused with shrubs and
grasslands growing on it. Therefore, the area of low vegetation is much larger than the
total area of annual agricultural land and unused land.
Among the 14 wards of TDMC, Dinh Hoa, Phu Tan, Phu My, and Tuong Binh Hiep are
the units with the highest rate of low vegetation area, at over 45% of the natural land area
of each ward. Tan An and Chanh My have the highest rate of high vegetation area, with
more than 44%. Wards in the center of TDMC have the lowest rate of vegetation,
including Phu Cuong, Hiep Thanh, Chanh Nghia and Phu Hoa.
Table 4. Parameters of trees in surveyed plots.
No | D1.3 (cm) | Above-ground biomass (tons/ha) | Below-ground biomass (tons/ha) | Total biomass (tons/ha) | Carbon storage (tons/ha) | CO2 absorption (tons/ha)
Plot 1 | 17.8 | 95.9 | 24.0 | 119.9 | 55.2 | 202.5
Plot 2 | 18.7 | 80.3 | 20.1 | 100.4 | 46.2 | 169.5
Plot 3 | 19.0 | 88.6 | 22.2 | 110.8 | 51.0 | 187.1
Plot 4 | 25.0 | 90.1 | 22.5 | 112.6 | 51.8 | 190.1
Plot 5 | 29.7 | 139.6 | 34.9 | 174.4 | 80.2 | 294.5
Plot 6 | 37.0 | 158.7 | 39.7 | 198.4 | 91.2 | 334.9
Plot 7 | 29.5 | 183.1 | 45.8 | 228.9 | 105.3 | 386.5
Plot 8 | 29.5 | 177.2 | 44.3 | 221.5 | 101.9 | 374.0
Average | 25.8 | 126.7 | 31.7 | 158.4 | 72.9 | 267.4
[Figure: area proportions of land cover types in TDMC — water 1.58%, high vegetation 22.02%, non-vegetation 36.75%, low vegetation 39.65%]
gas emissions is about 267.4 tons (Table 4). This result is consistent with the similar
research of Nguyen Hai Hoa in 2016, in which the average load of CO2 absorption of an
Acacia hybrid plantation forest in Yen Lap District, Phu Tho Province was 296.64 tons/ha,
slightly higher than the load of CO2 absorption found by this study, owing to the greater
diversity of forest ecosystems compared with perennial tree ecosystems [21].
Carbon storage in low vegetation areas: According to the field survey, the low
vegetation in TDMC is mainly shrub less than 2 m high; therefore, the load of dry biomass
is about 20.48 tons/ha and the load of C storage is about 10.24 tons/ha (Table 1). Hence,
the load of CO2 absorption of the low vegetation area is about 37.6 tons/ha.
Total carbon storage of TDMC and its wards:
The total load of CO2 absorption of green areas in each administrative unit of TDMC and
in the whole of TDMC is presented in Figs. 4 and 5. The total load of CO2 absorption in
TDMC is about 874 thousand tons, including 698 thousand tons from high vegetation areas
(accounting for 80%) and 176 thousand tons from low vegetation areas (making up 20%).
Hoa Phu has the highest load of CO2 absorption with over 210 thousand tons, accounting
for 24%, while Phu Cuong has the lowest load of CO2 absorption with less than 1%. This
result shows that the larger the area of green space, the more CO2 can be absorbed.
The total economic value of the CO2 stored in the green area of TDMC is about 15 million
USD, equivalent to 346 billion VND. This value is considerable and continues to increase
every year due to the growing biomass of trees. This result only partly reflects the great
benefits of trees as a CO2 absorption sink, contributing to reducing the greenhouse
[Figure: load of CO2 absorption by ward of TDMC, from Hoa Phu (highest) down to Phu Cuong (lowest)]
effect. In addition to this role, trees also have many other social, economic and envi-
ronmental benefits that the study has not mentioned. Therefore, it is necessary for local
authorities to plant and protect trees not only to contribute to environmental protection
but also to create regional landscapes and protect the community's health.
4 Conclusion
The urban green tree system plays an important role in storing CO2,
participating in climate regulation, creating landscape and mitigating climate change.
The study was carried out to evaluate the CO2 absorption capacity of green spaces in
TDMC by applying GIS, remote sensing and field investigation. The research results
show that the total load of CO2 absorbed by green spaces is about 874 thousand tons,
including 698 thousand tons from high vegetation areas (80%) and 176 thousand tons
from low vegetation areas (20%). Hoa Phu, a suburban ward with the largest green spaces,
has the highest load of CO2 absorption, and Phu Cuong has the lowest load of CO2
absorption because of its high population density and smallest green spaces. The economic
value of the CO2 stored in the urban green area of Thu Dau Mot city is about 15 million
USD, equivalent to 346 billion VND. This result proves that trees have a great role in
reducing CO2, one of the main greenhouse gases arising from human activities and causing
climate change. Therefore, the authorities of TDMC as well as other places should pay
attention to planting, caring for and protecting more and more green spaces for their
communities.
References
1. Vatseva, R., et al.: Mapping urban green spaces based on remote sensing data: Case studies
in Bulgaria and Slovakia. In 6th International Conference on Cartography and GIS. Albena,
Bulgaria (2016)
2. Atasoy, M.: Monitoring the urban green spaces and landscape fragmentation using remote
sensing: a case study in Osmaniye, Turkey. Environ. Monit. Assess. 190(12), 1–8 (2018).
[Link]
3. Shahtahmassebi, A.R., et al.: Remote sensing of urban green spaces: a review. Urban Forestry
Urban Greening 2020
4. Senanayake, I.P., Welivitiya, W.D.D.P., Nadeeka, P.M.: Urban green spaces analysis for devel-
opment planning in Colombo, Sri Lanka, utilizing THEOS satellite imagery—A remote
sensing and GIS approach. Urban For. Urban Greening 12(3), 307–314 (2013)
5. Malika, O., et al.: Quantitative and qualitative assessment of urban green spaces in Boussaada
City, Algeria using remote sensing techniques. Geogr. Reg Planning 14(3), 123–133 (2021)
6. Garbulsky, M.F., et al.: Remote estimation of carbon dioxide uptake by a Mediterranean forest.
Glob. Change Biol. 14(12), 2860–2867 (2008)
7. Hazarin, A.Q., Rokhmatuloh, Shidiq, I.P.A.: Carbon dioxide sequestration capability of green
spaces in bogor city. In: IOP conference series: earth and environmental science (2019)
8. Pham, T.D., et al.: Improvement of mangrove soil carbon stocks estimation in north Vietnam
using sentinel-2 data and machine learning approach. GIS Sci. Remote Sens. 58(1) (2020)
9. Nguyen, H.H., et al.: Biomass and carbon stock estimation of mangrove forests using remote
sensing and field investigation- based data on Hai Phong coast. Vietnam J. Sci. Technol. 59(5),
560–579 (2021)
10. Nguyen, H.K.L., Nguyen, B.N.: Mapping biomass and carbon stock of forest by remote sens-
ing and GIS technology at Bach Ma National Park, Thua Thien Hue province. J. Vietnamese
Environ. 8(2) (2016)
11. Do, H.T.H., et al.: Forest carbon stocks and evaluating CO2 sequestration in the mixed
broadleaf and coniferous of Bidoup—Nui Ba National Park, Lam Dong province. Sci.
Technol. Dev. J. Sci. Earth Environ. 5(2), 95–105 (2021)
12. Van Luong, C., Nga, N.T.: Initial assessment of carbon storage capacity of seagrasses through
biomass in Thi Nai lagoon, Binh Dinh province. J. Mar. Sci. Technol. 17(1), 63–71 (2017)
13. Bao, T.Q., Son, L.T.: Research and application of high-resolution satellite images to determine
the distribution and carbon absorption capacity of forests. J. Agric. Rural Dev. (2012)
14. Taufik, A., et al.: Classification of Landsat 8 satellite data using NDVI thresholds. J. Commun.
Electron. Comput. Eng. 8(4), 37–40 (2016)
15. Adam, A.H.M., et al.: Accuracy assessment of land use & land cover classification (LU/LC)
“Case study of Shomadi area-Renk County-Upper Nile State, South Sudan. Int. J. Sci. Res.
Publ. 3(5) (2013)
16. Da, T.B.: Estimating the CO2 absorption capacity of the forest recovered from cultivation land
in the Thuong Tien nature reserve, Hoa Binh District. J. Agric. Rural Dev. 10(154), 85–89
(2010)
17. Kiran, S., Kinnary, S.: Carbon sequestration by urban trees on roadsides of Vadodara City.
[Link] Last accessed 22 May 2022
18. Phuong, V.T.: Research on carbon stocks and shrubs—Basis for determining baseline carbon
in afforestation/reforestation projects under the clean development mechanism in Vietnam. J.
Agric. Rural Dev. 2 (2006)
19. Akbar, T.A., Hassan, Q.K., Ishaq, S., Batool, M., Butt, H.J., Jabbar, H.: Investigative spa-
tial distribution and modelling of existing and future urban land changes and its impact on
urbanization and economy. Remote Sens. 11(105) (2019)
20. Rizvi, R.H., Yadav, R.S., Singh, R., Datt, K., Khan, I.A., Dhyani, S.K.: Spectral analysis
of remote sensing image for assessment of agroforestry areas in Yamunanagar district of
Haryana. Precis. Farming Agro-meteorol. (2009)
21. Hoa, N.H., An, N.H.: Application of Landsat 8 remote sensing images and GIS to build a
map of biomass and carbon reserves of acacia hybrid plantations in Yen Lap District, Phu
Tho Province. J. Forest. Sci. Technol. (2016)
Efficient Cooling System of Cloud Data Center
by Reducing Energy Consumption
Abstract. Cloud computing turns computing into a utility and enables scientific, con-
sumer, and corporate applications. This shift raises energy, CO2, and economic concerns,
and cloud computing companies are increasingly concerned about energy use in
cloud data centers. Green Cloud Environments (GCE) have provided formulations,
solutions, and models to reduce environmental impact as well as energy consumption,
considering components for both static and dynamic clouds. Our technique models
cloud computing data centers; doing so requires an understanding of trends in cloud
energy usage. We analyze energy consumption trends and show that, by using
appropriate optimization techniques guided by our energy consumption models, cloud
data centers may save 20% of their energy. Our study integrates into cloud computing
by monitoring energy usage and helping to optimize it at the system level.
1 Introduction
Recent advances in cloud computing enable the sharing of computer resources, includ-
ing web applications, processing power, memory, networking infrastructure, etc. [1].
The prevalent utility computing paradigm followed by the majority of cloud computing
service providers inspires features for clients whose need for virtual resources fluctu-
ates over time. The tremendous scalability potential of e-Commerce, online banking
systems, social media and networking, e-government, and information processing has
resulted in diverse and extensive workloads. In the meantime, the computing and infor-
mation processing capabilities of a variety of commercial corporations and public
organizations, ranging from banking and transportation to manufacturing and housing,
have been rapidly expanding. Such a dramatic rise in computing resources necessitates a
scalable and efficient information technology infrastructure known as IT infrastructure,
consisting of data centers, power plants, storage facilities, network transfer rates, staff,
and enormous capital expenditures and operating costs.
Power consumption is a limiting factor in the architecture of today’s data centers
and infrastructures for cloud computing. The electricity used by computers and their
associated cooling systems accounts for a significant portion of these high energy
expenditures and greenhouse gas emissions. With a predicted annual growth
rate of 12% [2, 3], the global energy consumption of data centers is expected to reach
26 GW or around 1.4% of global electrical energy consumption. In Barcelona, a data
center with a medium-sized supercomputer has an annual energy expenditure of about
11 million dollars since its consumption of 1.2 megawatts [4] is almost equivalent to that
of 1200 homes [5]. The United States Environmental Protection Agency (EPA) reported
to Congress [6] that in 2006, data centers in the United States used 61 billion kWh
of electricity, or 1.5 percent of all electricity consumed in the country, at a cost of
$4.5 billion. Servers, cooling, connectivity, backup storage, and power distribution
units (PDUs) account for between 1.71% and 2.21% of total data center electricity
consumption [3]. US data centers, which host 40% of the world's cloud-based data
center servers, saw their electrical consumption increase by approximately 40% during
the economic crisis, and from 2000 to 2005 their share of total US energy consumption
rose from 0.8 to 1.5% [7]. In 2006, cloud data centers also caused 116.2 million
tons of CO2 to be released into the atmosphere [6]. In 2010, Google's data center used
approximately 2.26 million MWh of electricity, producing a carbon footprint of 1.47
million metric tons of CO2 [8]. The Intergovernmental Panel on Climate Change has
recommended a reduction of 60 to 80% by 2050 to prevent catastrophic environmental
damage.
A cloud computing system consists of hundreds to tens of thousands of server com-
puters providing client services [9, 10]. Existing servers are not energy proportional:
even at 20% utilization, they require 80% of their maximum power [11]. In a cloud
computing context, this lack of energy proportionality is a major source of inefficiency.
Servers are often operated at 10–50% of their maximum load while undergoing frequent
periods of inactivity [12]. This means that servers are, most of the time, not being used
at their optimal and efficient power-performance trade-off points, and even when they
are not actively processing data, they consume a considerable amount of power. The cost
of running air conditioners and refrigerators (CACU), which accounts for approximately
30 percent of the cloud environment's total energy cost [13], is one of the main
contributors to the power inefficiency of cloud computing environments.
The cooling of cloud data centers continues to be a significant contributor to inefficien-
cies in cloud computing due to excessive energy consumption. The majority of the UPS
modules in cloud data centers run at between 10 and 40% of their maximum capacity
[14], and the conversion efficiency of UPS units is unfortunately quite low. The power
usage effectiveness (PUE) metric shows how much energy is lost to power conversion,
cooling, and air distribution [15]. The PUE measure has gradually decreased over the
past decade: a typical cloud-based data center PUE estimate was around 2.6 in 2003
[16], and an analysis conducted by Koomey indicated that the average PUE in 2010
was approximately 1.83 [3]. The PUEs of the latest cloud data centers
owned by Google, Microsoft, and Facebook are reported to be exceptionally low, with
values ranging from below 1.20 to as low as 1.10 [8].
The measured efficiency of cloud-based data center energy, known as CDCEE
(Cloud-based Data Center Energy Efficiency) is as follows:
CDCEE = ITU × ITE / PUE (1)
where IT efficiency (ITE) is defined as the number of productive joules per unit of
energy consumed, and IT utilization (ITU) is the ratio of typical IT use to peak IT
capacity.
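A small numeric illustration of Eq. (1) follows; the utilization, IT-efficiency, and PUE values are assumptions chosen only to contrast a 2003-era facility (PUE around 2.6) with a modern hyperscale one (PUE around 1.1).

```python
def cdcee(itu: float, ite: float, pue: float) -> float:
    """Cloud Data Center Energy Efficiency, Eq. (1): CDCEE = ITU * ITE / PUE."""
    return itu * ite / pue

# Illustrative comparison with assumed utilization and IT efficiency
legacy = cdcee(itu=0.30, ite=1.0, pue=2.6)
modern = cdcee(itu=0.50, ite=1.0, pue=1.1)
print(f"legacy: {legacy:.3f}, modern: {modern:.3f}")
```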
With dynamic workload distribution and server consolidation, energy proportionality
can be achieved at the server pool or cloud data center level; this effectively straightens
the server power-use curve into a line through the origin for the cloud data center [11].
In addition, the coordination of low-power modes for inactive servers has
been shown to enable energy-proportional operation [17]. The authors claim that a 50%
reduction in energy consumption can be achieved by switching to energy proportional
servers, which consume only 10% of peak power during their idle state. The authors
demonstrated that one method of creating energy-efficient servers is to reduce the power
consumption of the hard drive, RAM, network cards, and central processing unit.
Combining air handling units and computer room air conditioning (CRAC) units with
variable frequency drive (VFD) fans in heat exchangers can help to cool a cloud data
center while using less energy, since it allows airflow rates to be varied with the heat
load. Lastly, reducing this energy use may have additional financial benefits: the high
cost of energy is only one downside of using more power, as the increased heat produced
also increases the likelihood that hardware systems will fail [18]. For these reasons,
cutting back on energy consumption is good for the environment.
The works in [19–29] offer useful contributions and guidelines.
To address this problem and ensure that cloud computing and data centers can continue
to grow in an energy-efficient way, energy use must be reduced, especially as cloud
resources must meet the Quality of Service (QoS) requirements set by users through
Service Level Agreements (SLAs). The main goal of this study is to propose a new
energy consumption model that gives a full picture of how much energy is used in
virtualized data centers, with the aim of making cloud computing more eco-friendly
and sustainable in the forthcoming years. The remaining sections are organized as
follows. In Sect. 2, we review related research on cloud data center settings. In Sect. 3,
we detail the formulas and patterns behind energy use and formulate energy consumption
models for cloud computing environments. Section 4 contains the results and a
discussion; our energy consumption architecture, the assessment of our models, and
their implementation are presented in Sect. 5. Section 6 concludes the study with a
review of the main difficulties and ideas for further research.
This study concerns how much energy a cloud data center consumes and how its cooling
can be made efficient and energy-saving. It explores and analyzes the energy consumption
and cooling models of cloud-based data centers and describes how they can be modeled
to yield efficient cloud data centers. In addition, the study uses a representative cloud
data center as a case study of how a cloud-based data center can be made efficient by
reducing energy consumption.
2 Related Work
Table 1 shows a summary of related work.
The authors of [35] proposed an economical approach for computing the overall expenses
of owning and operating cloud-based systems. For this computation, they created
quantifiable measures; their calculation granularity, however, is based on a single piece
of hardware.
Tracking the energy profiles of individual system elements, such as the CPU, RAM,
disk, and cache, can enable such measurements. Anne et al. observed that nodes
continued to use energy even after being turned off, due to the built-in card controllers
that are used to wake up remote nodes [36].
Sarji et al. suggested two energy models for varying a server's operational modes [37].
They examine the server's AC input to determine how much energy is used in the idle,
sleep, and off states, and to switch between these states while taking real power
measurements. However, if the load increases suddenly, switching between power modes
could result in poor performance.
However, other research initiatives have also been conducted, focusing mainly on
virtualization to reduce energy use in cloud systems. Cloud virtualization was suggested
by Yamini et al. [33] as a viable means of reducing energy use and global warming.
Instead of employing many servers to provide services for various devices, their method
uses fewer servers.
For us, the most relevant power modeling methodologies are those put forward by
Pelley et al. [38] for physical infrastructure, namely cooling and power systems. They
developed the first model designed to fully represent a data center.
This research study provides methods for assessing and examining the overall energy
consumption in cloud computing settings. Our strategy targets both analytical models
and cloud data center simulations. We show how putting energy optimization policies
into place, guided by energy consumption models, can yield substantial energy savings
in data centers and the cloud.
This section specifies the methods and procedures used to achieve the research objective,
along with the study design and data sources. We collected data from various online
resources on how cloud-based data centers consume energy and how efficient their
cooling is, based on six cases.
During a given time interval, a server's overall energy consumption, denoted by ETotal,
is the sum of the energy consumed in both the static and dynamic states of the system,
as formulated below:
ETotal = EStatic + EDynamic (2)
The energy used by the server when it is idle, together with the energy used by the
cooling systems and the computation, storage, scheduling, and communication resources,
are the main components considered in this paper:
EIdle = energy consumption of idle server.
ECool = energy consumption of the cooling system.
ECompu = energy consumption of computation resources.
EStore = energy consumption of storage resources.
ECommu = energy consumption of communication resources.
ESched = energy generated by scheduling overhead.
Therefore, the above formula (2) can be transformed into:
ETotal = EIdle + ECool + ECommu + EStore + ECompu + ESched (3)
The well-known relation (4) [39] from Joule's and Ohm's laws can be used to determine
how much energy a server uses when it is idle:
E = I × V (4)
For the i-th core this becomes:
Ei = Ii × Vi (5)
Here Ei, Ii and Vi are the power, current and voltage of the i-th core, respectively. Also,
from the relationship between Ii and Vi, the leakage current can be modeled by the
following second-order polynomial:
Ii = α × Vi² + β × Vi + γ (6)
where α = 0.114 (V·Ω)⁻¹, β = 0.228 Ω⁻¹, and γ = 0.139 V/Ω are the coefficients [40].
With energy-saving techniques in place, a core's (processor's) idle energy usage drops
dramatically. This is achieved through Dynamic Voltage and Frequency Scaling (DVFS),
which lowers a core's voltage and frequency. The reduced idle energy of the i-th core
and the total idle energy of the server are then:
Eri = βi × Ei (7)
EIdle = Σ(i=1..n) Eri (8)
The value of the reduction factor βi differs across Intel processor types.
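A minimal sketch of Eqs. (4)-(8) follows, under the second-order-polynomial reading of the leakage current in Eq. (6); the core voltages and the reduction factor in the example are assumptions for illustration.

```python
# Coefficients of the leakage-current polynomial, Eq. (6)
ALPHA = 0.114   # (V*Ohm)^-1
BETA = 0.228    # Ohm^-1
GAMMA = 0.139   # V/Ohm

def leakage_current(v: float) -> float:
    """Eq. (6): I_i = alpha*V_i^2 + beta*V_i + gamma."""
    return ALPHA * v**2 + BETA * v + GAMMA

def idle_energy(core_voltages: list[float], reduction: float = 0.3) -> float:
    """Eqs. (5), (7), (8): sum the DVFS-reduced idle power over all cores.

    The reduction factor beta_i depends on the processor type and is an
    assumed value here.
    """
    total = 0.0
    for v in core_voltages:
        e_i = leakage_current(v) * v   # Eq. (5): E_i = I_i * V_i
        total += reduction * e_i       # Eq. (7): E_ri = beta_i * E_i
    return total                       # Eq. (8): E_Idle = sum of E_ri

# Hypothetical four-core server with two DVFS voltage levels
print(idle_energy([1.1, 1.1, 0.9, 0.9]))
```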
The material used for the chiller, the speed of air leaving the CRAH unit, and other
factors affect the effectiveness of the cooling process. According to the thermodynamic
principle, heat is generally transmitted between two bodies as follows:
q = ṁ × Chc × (Tha − Tca) (9)
where ṁ = the fluid mass flow, q = the power transmitted from a single device to the
fluid, Chc = the fluid's specific heat capacity, Tha = the hot temperature and Tca = the
cold temperature. The values of ṁ, Tha and Tca depend on the air recirculation and
airflow throughout the data center.
For the server or CRAH unit, ρ denotes the heat transferred, Vair = the total volume of
air passing through the equipment, Taha = the temperature of the air exhausted by the
server and Taca = the temperature of the cold air supplied by the CRAH. We utilize an
adapted effectiveness-NTU approach to simulate Eq. (11) mentioned previously [43]:
the heat to be removed by the CRAH is ρCRAH, E is the transfer efficiency at maximum
mass flow rate (0 to 1), f denotes the volume flow rate, and Twt is the chilled water
temperature.
The Coefficient of Performance (COP), which is the ratio between the heat removed
by the CRAH unit (Q) and the total amount of energy used to chill the air by the CRAH
unit (E), is used to evaluate the efficiency of a CRAH unit [44]:
COP = Q/E (12)
The cold air temperature (TS) that a CRAH unit supplies to the cloud facilities affects
its coefficient of performance. The sum of the energy consumed by the CRAH (ECRAH)
and IT (EIT) devices [45] equals the cloud environment's overall power dissipation. The
CRAH energy usage can be stated as:
ECRAH = EIT / COP(TS) (13)
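To make Eq. (13) concrete, the sketch below pairs it with a quadratic COP(TS) curve. The curve's coefficients are assumptions for illustration (not taken from [44]); they merely encode the usual behavior that COP rises with the supply temperature, so warmer supply air makes cooling cheaper.

```python
def cop(t_supply_c: float) -> float:
    """Illustrative empirical COP curve (quadratic in T_s; coefficients assumed)."""
    return 0.0068 * t_supply_c**2 + 0.0008 * t_supply_c + 0.458

def crah_energy(e_it_kwh: float, t_supply_c: float) -> float:
    """Eq. (13): E_CRAH = E_IT / COP(T_s)."""
    return e_it_kwh / cop(t_supply_c)

# Raising the supply temperature from 18 C to 25 C cuts cooling energy
for t in (18.0, 25.0):
    print(f"T_s = {t} C -> E_CRAH = {crah_energy(1000.0, t):.1f} kWh")
```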
In the CRAH unit, the power required by the fans is the primary contributor to the total
energy budget. The fan power grows with the cube of the mass flow rate (f) and plateaus
at a maximum threshold. The sensors and control systems consume a fixed amount of
energy. Consequently, the energy spent by the CRAH unit equals the sum of its fixed
and dynamic activity. At constant outdoor and chilled-water supply temperatures, the
energy needed by the chillers grows quadratically with the amount of heat they must
remove. The HVAC industry has created several modeling techniques to evaluate the
performance of the chiller; although physics-based models exist, we chose the DOE-2
chiller model [46]. Numerous physical measurements are needed to fit the DOE-2 model
to a specific chiller, and we used a set of regression curves provided by the California
Energy Commission [19].
The infrastructure required for cloud computing systems must be substantial just to
provide uninterrupted and reliable electricity. PDUs constantly lose energy, with a loss
that grows with the square of the load [19]:
EPDU = EPDU idle + ηPDU (ΣServer μSrv)² (15)
where EPDU = the energy consumption of the PDU (Power Distribution Unit), ηPDU
denotes the energy loss coefficient of the PDU, and EPDU idle = the idle PDU's energy
consumption. UPS energy costs are determined by the relationship [19]:
EUPS = EUPS idle + ηUPS ΣPDUs EPDU (16)
where ηUPS = the UPS loss coefficient. About nine percent of the input energy is lost
at full load, and losses account for a total of 12% of server energy use at peak load.
These losses produce additional heat that the cooling system must remove.
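A minimal sketch of the power-conditioning model in Eqs. (15) and (16) follows; all coefficients and loads are assumptions for illustration (the UPS loss coefficient of 0.09 echoes the nine-percent full-load figure above).

```python
def pdu_energy(server_loads_kw: list[float],
               idle_kw: float = 0.2, eta: float = 0.002) -> float:
    """Eq. (15): E_PDU = E_PDU_idle + eta_PDU * (sum of server loads)^2."""
    load = sum(server_loads_kw)
    return idle_kw + eta * load**2

def ups_energy(pdu_draws_kw: list[float],
               idle_kw: float = 1.0, eta: float = 0.09) -> float:
    """Eq. (16): E_UPS = E_UPS_idle + eta_UPS * (sum of PDU draws)."""
    return idle_kw + eta * sum(pdu_draws_kw)

# Hypothetical facility: four PDUs, each feeding ten 5-kW servers
pdus = [pdu_energy([5.0] * 10) for _ in range(4)]
print(f"PDU energy each: {pdus[0]:.2f} kW, UPS energy: {ups_energy(pdus):.2f} kW")
```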
4.1 Case 1, 2
These are typical cloud-based data centers with dated physical infrastructure, as found
in buildings built during the last three to five years. For the temperature of the outside
air, we use the annual average. Furthermore, we consider a typical, inefficient server
whose idle power is 60% of peak power, with the chilled water statically set at 7 °C and
the CRAH air supply set at 18 °C. We scaled the cloud data center so that, at peak usage,
the Case 1 facility uses precisely 10 MW.
4.2 Case 3
In this case, the percentage of operational servers within the data center decreases from
81 to 44%. The aggregate power draw is drastically reduced by the improved
consolidation but, paradoxically, the PUE increases. These findings highlight the flaw
of PUE as a measure of energy efficiency, since it ignores the inefficiency of IT
equipment.
4.3 Case 4
As a result, in Case 4, servers can operate at 5% of their maximum power while quickly
entering a low-power sleep mode, thus reducing the power consumption of the entire
data center by 22% [18]. Both this strategy and the consolidation of virtual machines
target the same source of energy inefficiency: the waste of idle server power.
4.4 Case 5
4.5 Case 6
In this case, the power consumption of the cooling system is significantly reduced, and
power efficiency is instead constrained by the power-conditioning infrastructure. We
have provided comprehensive models of cloud data center foundations that are practical
to apply in a thorough cloud environment simulation architecture, together with abstract
estimation and green energy forecasting as an effective solution.
Figure 3 summarizes all six cases that are discussed above and validates the efficiency
of our proposed model for cases 5 and 6.
[Fig. 3. Energy consumption across the six cases (y-axis 0–3500)]
5 Conclusion
As it offers many advantages to its users, cloud computing is gaining increasing impor-
tance in the IT field, and IT resources are provisioned in the cloud environment to meet
the high demands of users. Because of this, the amount of power and energy used by
the cloud has become a problem for environmental and economic reasons. In this paper,
we gave formulas for determining the total amount of energy used in cloud environments
and showed why less energy should be used. In this regard, we discussed tools for
measuring energy use and methods of empirical analysis, and we presented models for
how much energy a server uses when it is idle and when it is working. These research
results are important for devising energy policies and energy management approaches
that reduce energy use while keeping system performance high in cloud environments.
6 Future Work
As part of our future work, we intend to investigate various cloud environments and to
develop new optimization policies to reduce CO2 emissions from cloud environments.
We also want to incorporate energy cost rates into our future re-designed models to
reduce the total energy cost and ensure that they have a minimal environmental impact.
References
1. Armbrust, M.: Above the clouds: a Berkeley view of cloud computing. Technical Rep.
UCB/EECS-2009–28 (2009)
2. BONE Project: WP 21 topical project green optical networks: report on year 1 and updated plan for activities. No. FP7-ICT-2007-1216863 BONE project, Dec. 2009
3. Koomey, J.: Estimating Total Power Consumption by Servers in the U.S. and the World. Lawrence Berkeley National Laboratory, Stanford University (2007)
4. Torres, J.: Green computing: the next wave in computing. Ed. UPC Technical University of Catalonia (2010)
5. Kogge, P.: The tops in flops. IEEE Spectrum, 49–54 (2011)
6. U.S. Environmental Protection Agency: Report to Congress on Server and Data Center Energy Efficiency, Public Law (2006)
7. Liu, Z., Lin, X., Hu, X.: Energy-efficient management of data center resources for cloud
computing: a review. Front. Comp. Sci. 7(4), 497–517 (2013)
8. Miller, R.: Google’s energy story: high efficiency, huge scale (2011). Available
at: [Link]
efficiency-huge-scale. (Accessed: 15 Oct 2022)
9. Armbrust, M., et al.: A view of cloud computing. Commun. ACM 53(4), 50–58 (2010)
10. Buyya, R.: Market-oriented cloud computing: Vision, hype, and reality of delivering
computing as the 5th utility. in Proc. Int. Symp. Cluster Comput. Grid, p. 1 (2009)
11. Barroso, L.A., Hölzle, U.: The case for energy-proportional computing. IEEE Comput. 40(12),
33–37 (2007)
12. Barroso, L.A., Hölzle, U.: The datacenter as a computer: an introduction to the design of
warehouse- scale machines. Morgan and Claypool, San Rafael, CA (2009)
13. Rasmussen, N.: Calculating total cooling requirements for datacenters. Am. Power Convers.
white paper 25 (2007)
14. U.S. Department of Energy.: Data center energy efficiency training. Electr. Sys. (2011). Avail-
able at: [Link]
ters (Accessed: 15 Oct 2022)
15. Belady, C., Rawson, A., Pfleuger, J., Cader, T.: The green grid datacenter power efficiency
metrics: PUE and DCIE. GreenGrid, White Paper-06 (2007)
16. Ghemawat, S., Gobioff, H., Leung, S.-T.: The Google file system. In: Proceedings of the ACM Symposium on Operating Systems Principles, pp. 29–43 (2003)
17. Meisner, D., Gold, B.T., Wenisch, T.F.: PowerNap: eliminating server idle power. In: Pro-
ceeding of the 14th International Conference on Architectural Support for Programming
Languages and Operating Systems (ASPLOS), USA (2009)
18. Feng, W.C., Feng, X., Rong, C.: Green supercomputing comes of age. IT Prof. 10(1), 17–23 (2008)
19. Uchechukwu, A., Li, K., Shen, Y.: Improving cloud computing energy efficiency. In:
Proceeding of the Asia Pacific Cloud Computing Congress (2012)
20. Yeasmin, S., Afrin, N., Saif, K., Reza, A.W., Arefin, M.S.: Towards building a sustainable
system of data center cooling and power management utilizing renewable energy. In: Vas-
ant, P., Weber, GW., Marmolejo-Saucedo, J.A., Munapo, E., Thomas, J.J. (eds) Intelligent
Computing & Optimization. ICO 2022. Lecture Notes in Networks and Systems, vol 569.
Springer, Cham (2023). [Link]
21. Liza, M.A., Suny, A., Shahjahan, R.M.B., Reza, A.W., Arefin, M.S.: Minimizing E-waste
through improved virtualization. In: Vasant, P., Weber, GW., Marmolejo-Saucedo, J.A.,
Munapo, E., Thomas, J.J. (eds) Intelligent Computing & Optimization. ICO 2022. Lecture
Notes in Networks and Systems, vol 569. Springer, Cham (2023). [Link]
978-3-031-19958-5_97
22. Das, K., Saha, S., Chowdhury, S., Reza, A.W., Paul, S., Arefin, M.S.: A sustainable E-waste management system and recycling trade for Bangladesh in green IT. In: Vasant, P., Weber, GW., Marmolejo-Saucedo, J.A., Munapo, E., Thomas, J.J. (eds) Intelligent Computing & Optimization. ICO 2022. Lecture Notes in Networks and Systems, vol 569. Springer, Cham (2023). [Link]
23. Rahman, M.A., Asif, S., Hossain, M.S., Alam, T., Reza, A.W., Arefin, M.S.: A sustainable
approach to reduce power consumption and harmful effects of cellular base stations. In:
Vasant, P., Weber, GW., Marmolejo-Saucedo, J.A., Munapo, E., Thomas, J.J. (eds) Intelligent
Computing & Optimization. ICO 2022. Lecture Notes in Networks and Systems, vol 569.
Springer, Cham (2023). [Link]
24. Ahsan, M., Yousuf, M., Rahman, M., Proma, F.I., Reza, A.W., Arefin, M.S.: Designing a
sustainable E-waste management framework for Bangladesh. In: Vasant, P., Weber, GW.,
Marmolejo-Saucedo, J.A., Munapo, E., Thomas, J.J. (eds) Intelligent Computing & Opti-
mization. ICO 2022. Lecture Notes in Networks and Systems, vol 569. Springer, Cham.
[Link]
25. Mukto, M.M., Al Mahmud, M.M., Ahmed, M.A., Haque, I., Reza, A.W., Arefin, M.S.: A
sustainable approach between satellite and traditional broadband transmission technologies
based on green IT. In: Vasant, P., Weber, GW., Marmolejo-Saucedo, J.A., Munapo, E., Thomas,
J.J. (eds) Intelligent Computing & Optimization. ICO 2022. Lecture Notes in Networks and
Systems, vol 569. Springer, Cham. [Link]
26. Meharaj-Ul-Mahmmud, Laskar, M.S., Arafin, M., Molla, M.S., Reza, A.W., Arefin, M.S.:
Improved virtualization to reduce e-waste in green computing. In: Vasant, P., Weber, GW.,
Marmolejo-Saucedo, J.A., Munapo, E., Thomas, J.J. (eds) Intelligent Computing & Optimiza-
tion. ICO 2022. Lecture Notes in Networks and Systems, vol 569. Springer, Cham (2023).
[Link]
27. Banik, P., Rahat, M.S.A., Rafe, M.A.H., Reza, A.W., Arefin, M.S. (2023). Developing an
energy cost calculator for solar. In: Vasant, P., Weber, G.W., Marmolejo-Saucedo, J.A.,
Munapo, E., Thomas, J.J. (eds) Intelligent Computing & Optimization. ICO 2022. Lecture
Notes in Networks and Systems, vol 569. Springer, Cham (2023). [Link]
978-3-031-19958-5_75
28. Ahmed, F., Basak, B., Chakraborty, S., Karmokar, T., Reza, A.W., Arefin, M.S.: Sustain-
able and profitable IT infrastructure of Bangladesh using green IT. In: Vasant, P., Weber,
GW., Marmolejo-Saucedo, J.A., Munapo, E., Thomas, J.J. (eds) Intelligent Computing &
Optimization. ICO 2022. Lecture Notes in Networks and Systems, vol 569. Springer, Cham
(2023). [Link]
29. Ananna, S.S., Supty, N.S., Shorna, I.J., Reza, A.W., Arefin, M.S.: A policy framework for
improving E-waste management in Bangladesh. In: Vasant, P., Weber, GW., Marmolejo-
Saucedo, J.A., Munapo, E., Thomas, J.J. (eds) Intelligent Computing & Optimization. ICO
2022. Lecture Notes in Networks and Systems, vol 569. Springer, Cham (2023). [Link]
org/10.1007/978-3-031-19958-5_95
30. Shang, L., Peh, L.S., Jha, N.K.: Dynamic voltage scaling with links for power optimization
of interconnection networks. In: The 9th International Symposium on High-Performance
Computer Architecture (HPCA 2003), pp. 91–102, Anaheim, California, USA (2003)
31. Buyya, R., Beloglazov, A., Jemal, A.: Energy efficient management of data center resources
for cloud computing: a vision architectural elements and open challenges. In: Proceeding of the
International Conference on Parallel and Distributed Processing Techniques and Applications
(2010)
32. Chen, F., Schneider, J., Yang, Y., Grundy, J., He, Q.: An energy consumption model and
analysis tool for cloud computing environments. In: 1st International Workshop on Green
and Sustainable Software (GREENS), pp. 45–50
33. Yamini, B., Selvi, D.V.: Cloud virtualization: a potential way to reduce global warming. In:
Recent Advances in Space Technology Services and Climate Change (RSTSCC), pp.55–57
(2010)
34. Zhang, Z., Fu, S.: Characterizing power and energy usage in cloud computing systems. In:
2011 IEEE Third International Conference on Cloud Computing Technology and Science
(CloudCom), pp. 146–153 (2011)
35. Li, X., Li, Y., Liu, T., Qiu, J., Wang, F.: The method and tool of cost analysis for cloud
computing. In: The IEEE International Conference on Cloud Computing (CLOUD 2009),
pp. 93–100, Bangalore, India (2009)
36. Orgerie, A.C., Lefevre, L., Gelas, J.P.: Demystifying energy consumption in grids and clouds. In: International Conference on Green Computing, pp. 335–342 (2010)
37. Sarji, I., Ghali, C., Chehab, A., Kayssi, A.: CloudESE: energy efficiency model for cloud com-
puting environments. In: International Conference on Energy Aware Computing (ICEAC),
pp. 1–6 (2011)
38. Pelley, S., Meisner, D., Wenisch, T.F., VanGilder, J.W.: Understanding and abstracting total data center power. In: WEED: Workshop on Energy-Efficient Design
39. Meade, R.L., Diffenderfer, R.: Foundations of Electronics: Circuits & Devices. Clifton Park,
New York (2003). ISBN: 0-7668-4026-3
40. Zimmer, P.A.Z., Brodersen, R.W.: Minimizing Power Consumption in CMOS Circuits.
University of California at Berkeley. Technical Report (1995)
41. Tozer, R., Kurkjian, C., Salim, M.: Air management metrics in data centers. In: ASHRAE
(2009)
42. VanGilder, J.W., Shrivastava, S.K.: Capture index: an airflow-based rack cooling performance
metric. ASHRAE Trans. 113(1) (2007)
43. Çengel, Y.A.: Heat transfer: a practical approach, 2nd ed. McGraw Hill Professional (2003)
44. Moore, J., Chase, J., Ranganathan, P., Sharma, R.: Making scheduling “Cool”: temperature-
aware workload placement in data centers. In: Proceeding of the 2005 USENIX Annual
Technical Conference, Anaheim, CA, USA (2005)
45. Ehsan, P., Massoud, P.: Minimizing data center cooling and server power costs. In: Proceed-
ing of the 4th ACM/IEEE International Symposium on Low Power Electronic and Design
(ISLPED), pp. 145–150 (2009)
46. Rasmussen, N.: Electrical efficiency modeling for data centers. APC by Schneider Electric,
Tech. Rep. #113 (2007)
Reducing Energy Usage and Cost
for Environmentally Conscious Cooling
Infrastructure
1 Introduction
Most of the electricity in Bangladesh is generated from coal [2]. Our country's educational institutions, such as schools, colleges, and universities, need a lot of electricity, and much of it is consumed to cool their rooms, which is very expensive. We therefore propose a new model to reduce electricity costs. As energy use grows worldwide and most electricity comes from nonrenewable sources, we need to invest more in efficient systems that require less energy and are more effective than existing ones. Homes and educational institutions are perfect places to start implementing intelligent procedures for lowering our power needs and for a greener future. The work in [16–26] provides good contributions and guidelines.
In our research, we mainly worked on topics relevant to our proposed models and organized our findings accordingly. The essential questions we considered were: How can the total energy output of the solar panel be calculated? How much does the temperature differ using the bucket fan model? How can the total energy output of the wind turbine be calculated? Our paper is organized into sections: related work on existing methodologies is reviewed in Sect. 2, Materials and Methods are defined in Sect. 3, Results are discussed in Sect. 4, followed by the Conclusion in Sect. 5.
2 Literature Review
Our model design must be cheap and easy to implement, as the primary goal of our project is to minimize carbon emissions, lower energy usage, and generate clean energy with a single model. While building our models, we also considered environmentally friendly and more efficient methods.
Following an in-depth examination of the research issues, we divided the primary objectives of our research into several sub-goals: a passive cooling solution model that cools the air using cold water, and a new design combining a solar panel and a wind turbine to generate power. We determined that we would concentrate our whole inquiry on the areas in Table 1.
3 Methodology
In our paper, we have included a new model that combines a passive cooling solution
model that cools the air using cold water, a new design of a solar panel and wind turbine
combination to generate power, and a classroom automation system. This proposed
system aims to minimize the dependency on nonrenewable energy by reducing the
energy usage of the existing system. Our passive cooling system can be used with existing
cooling systems to cool a room. Our automation system lowers classroom energy usage
by controlling electronic appliances, and our energy-producing system can be used on
rooftops and in cities.
We consider parameters such as outside temperature and humidity, cooling requirements of the equipment, the desired temperature inside the space, the efficiency of the cooling system, the heat absorption capacity of the water used in the cooling system, and the energy consumption and waste heat generation of the power generation system (e.g., wind turbine, solar panels, thermoelectric generator). These parameters may vary depending on the specific design and implementation of the cooling and power generation systems, as well as on the environmental conditions and user requirements.
Passive cooling solution. To reduce energy consumption and protect the environment while maintaining appropriate levels of comfort, demanding cooling conditions call for passive natural solutions. Solar control systems, surplus heat dispersion into low-temperature natural heat sinks (air, water, and soil), and the use of a building's thermal mass to absorb excess heat are all natural and passive solutions. Natural cooling systems have a high cooling capacity and can cut cooling-induced energy consumption considerably [6]. Although such facilities can serve as indoor pre-conditioning systems, the weather strongly impacts their effectiveness, and they may be insufficient on their own. We propose a model that employs natural and passive cooling methods to cool semi-enclosed spaces as an option for indoor comfort. Figure 1 shows the visual implementation of the passive cooling system, which we designed in AutoCAD. We made an AutoCAD model comprising a passive cooling system, electricity production with a wind turbine and solar panel combination, and a thermodynamic power generator [7].
Our renewable energy sources model must be small, mobile, and space efficient as we
develop a model for homes and universities, typically in dense areas. Our model com-
bines a small air turbine and solar panels that can produce energy in densely populated
areas and cities. This also includes a thermoelectric generator to generate electricity
from hot air.
Figure 2 shows the new cooling system model of the data center. When the turbine fan pulls air in from outside, the hot incoming air passes over water, which absorbs its heat, so cooler air enters the room. No matter how hot the air is in Dhaka City, the water will absorb the heat and let cooler air in. The cold air entering the data center/classroom keeps the equipment cool, and the heat generated inside exits from the other side. Below the data center, there is also a water-cooling system. When hot air comes out of the other side through a pipe, it goes into a heat converter. The heat converter (thermodynamic power generator) acts like a thermal power plant, converting the heat into electricity. As a result, the electricity generated here can be re-supplied to power the data center.
In this model, we use a new electricity production design: a blade turbine with a solar panel; together they produce electricity from wind and sunshine.
The model shown in Fig. 3 is created from a vertical wind turbine with a solar panel on top of it. It generates solar and wind energy simultaneously [8] and can be used in cities and densely populated areas, as it is small and can work with wind from any direction. We used a compact Vertical Helix Wind Power Turbine Generator and put a solar panel on top of it [9]. We used wind power to generate electricity in our design, around 400 W monthly; the turbine specifications are listed below. The passive cooling model, in contrast, uses the thermal heat-sharing property of air and water. It is very simple and therefore straightforward to implement, requires very little knowledge, and is very cheap [10]. This model does not consume energy.
Rated power: 1 kW
Start wind speed: 6.27 mph
Swept area: 3.71 m²
Energy produced (electricity): 400 W, 12 V
Renewable energy model data calculation: we took daily sun coverage and wind speed data for Dhaka, Bangladesh, to estimate the average sunlight and wind speed near East West University [13].
Average wind speed in Dhaka = (sum of average wind data per month) / 12 (1)
Using Eq. (1), we found the average wind speed in Dhaka to be 6.9 mph.
Average sunlight (sunrise to sunset) = (sum of average sunlight hours per month) / 12 (2)
Using Eq. (2), the average sunlight in Dhaka is 12.15 h.
Per-day solar power generated = solar panel's power × sunlight hours of an average day × 80% (3)
Total solar power generated per day = total number of solar panels × per-day power generated per panel (4)
Using Eq. (3), each panel generates about 400 W per day; with 20 solar panels, Eq. (4) gives the total power generated per day.
Total solar power generated per month = total solar power generated per day × 30 (5)
Using Eq. (5), the total solar energy generated per month is 240,000 W.
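The chain of Eqs. (1)–(5) can be reproduced with a few lines of Python. The sketch below uses uniform placeholder monthly values that match the averages quoted above; the per-panel rating is a hypothetical value, not stated in the paper, chosen only so that the per-day figure of about 400 W is reproduced.

# Sketch of Eqs. (1)-(5); monthly inputs and the panel rating are assumptions.
monthly_wind_mph = [6.9] * 12        # placeholder months matching the 6.9 mph average
avg_wind = sum(monthly_wind_mph) / 12                     # Eq. (1)

monthly_sunlight_h = [12.15] * 12    # placeholder months matching the 12.15 h average
avg_sunlight = sum(monthly_sunlight_h) / 12               # Eq. (2)

panel_power_w = 41                   # hypothetical panel rating (assumption)
per_panel_per_day = panel_power_w * avg_sunlight * 0.8    # Eq. (3), about 400 W per day
total_per_day = 20 * per_panel_per_day                    # Eq. (4), 20 panels
total_per_month = total_per_day * 30                      # Eq. (5), about 240,000 W
print(avg_wind, round(per_panel_per_day), round(total_per_month))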
The 240,000 W of solar power generated per month can be a cost-effective solution for reducing electricity costs in Dhaka city [15]. This amount of solar power can cover a significant portion of the city's energy needs, reducing the dependence on traditional fossil fuels and potentially leading to lower energy bills for consumers. To further reduce electricity costs, the city could implement energy efficiency measures such as using energy-efficient appliances, improving building insulation, and encouraging public transportation. Additionally, the city could explore other renewable energy sources, such as wind and hydropower, to diversify its energy portfolio and further reduce costs.
Given the total solar power generated per month in Dhaka city, we can estimate the
potential cost savings as follows:
1. Assuming 30 days a month, the total energy generated would be 7200000 Wh.
2. For households, for 0.053 U.S. dollars per kWh, the total cost of electricity from
traditional sources would be 38064 USD.
3. For businesses, for 0.084 U.S. dollars per kWh, the total cost of electricity from
traditional sources would be 60320 USD.
So, the 240000 W of solar power generated per month would result in a savings of
38064 USD for households and 60320 USD for businesses, based on these calculations.
To convert these savings into Bangladeshi Taka, we use the exchange rate of 1 USD = 106.40 Taka (February 2023):
1. For households: 4,038,336.64 Taka
2. For businesses: 6,428,064.00 Taka
Please note that these calculations are based on assumptions and rough estimates,
and the actual savings could be different based on several factors, including the actual
cost of implementing and maintaining the solar power system, changes in the price of
electricity over time, and the Efficiency of the solar power system itself.
It is crucial to conduct a feasibility study to determine whether the proposed solutions are practical and economically viable for the institutions, to develop a pilot project, to conduct a cost-benefit analysis, to involve stakeholders, to develop a monitoring and evaluation framework, to research the Bucket Fan Model, to explore other eco-friendly solutions, and to find ways to reduce carbon emissions and increase sustainability in institutions. The feasibility study will help identify potential challenges, costs, and benefits associated with the proposed solutions, while the pilot project will test their feasibility and effectiveness. The cost-benefit analysis will identify the return on investment (ROI) and the payback period. Finally, research on the bucket fan model will help determine its design, efficiency, and cost-effectiveness.
5 Conclusion
In this study, we attempted to find the best solution for making every institution green and eco-friendly, and we proposed a model that will make the institution a better place. Our main goal was to reduce carbon emissions and digitalize our classrooms by making all work paperless and developing an intelligent control system to reduce electricity costs. We also proposed another model, called the Bucket Fan Model. Based on the exchange rate of 1 USD = 106.40 Taka, the estimated cost savings for households in Dhaka city would be 4,052,736 Taka (38064 USD × 106.40 Taka/USD) per month, and for businesses, 6,423,488 Taka (60320 USD × 106.40 Taka/USD) per month. These savings could have a significant impact on the budgets of households and businesses in Dhaka city, leading to increased disposable income and potentially contributing to economic growth in the region.
It is a new cooling system model for data centers and is very eco-friendly, which our country needs. The limitation of our project is that we could not evaluate it in practice because it is a new model. If this project is successfully applied, it will bring good results for any institution.
Acknowledgments. This paper researched the passive cooling solution, electricity cost reduction,
and classroom automation. We were motivated after attending a seminar by Mr. Palash Roy,
Managing Director of Bright-I Systems Ltd. We got ideas from analyzing many research papers
and online materials.
References
1. Climate and Average Weather Year-Round in Dhaka Bangladesh [Online]. Available: https://
[Link]/y/111858/Average-Weather-in-Dhaka-Bangladesh-Year-Round
2. Energy Efficiency and Renewable Energy [Online]. Available: [Link]
3. Ismail, B.I., Ahmed, W.H.: Thermoelectric power generation using waste-heat energy as an
alternative green technology. Received: 1 Aug 2008; Accepted: 20 Nov 2008; Revised: 24
Nov 2008
4. Purohit, D., Singh, G., Mamodiya, U.: A review paper on solar energy systems. Int. J. Eng.
Res. Gen. Sci. 5(5) (n.d.). [Link]
5. Roslizar, A., Alghoul, M.A., Bakhtyar, B., Asim, N., Sopian, K.: Annual energy usage reduc-
tion and cost savings of a school: end-use energy analysis. Sci. World J. (2014). [Link]
org/10.1155/2014/310539
6. Ghaffarianhoseini, A., Ghaffarianhoseini, A., Tookey, J., Omrany, H., Fleury, A., Naismith,
N., Ghaffarianhoseini, M.: The essence of smart homes: application of intelligent technologies
towards a brighter urban future, pp. 79–121 (2016)
7. Brush, A.J.B., Lee, B., Mahajan, R., Agarwal, S., Saroiu, S., Dixon, C.: Home automation in the
wild: challenges and opportunities. In: Proceedings of the SIGCHI Conference on Human
Factors in Computing Systems, pp. 2115–2124 (2011)
8. Global CO2 Emissions from Energy Combustion and Industrial Processes [Online]. Available:
[Link]
9. Kumar, M., Arora, A., Banchhor, R., Chandra, H.: Energy and exergy analysis of hybridization
of solar, intercooled reheated gas turbine trigeneration cycle. World J. Eng. ahead-of-p(ahead-
of-print). [Link]
10. Kudelin, A., Kutcherov, V.: Wind ENERGY in Russia: the current state and development
trends. Energy Strategy Rev. 34 (2021). [Link]
11. [Link]. Dhaka Climate, Weather by Month, Average Temperature (Bangladesh)—
WeatherSpark (n.d.). Retrieved 24 May 2022, from [Link]
rage-Weather-in-Dhaka-Bangladesh-Year-Round#:~:text=The%20windier%20part%20of%
20the,September%206%20to%20March%2029
12. [Link]: LOYAL HEART DIY Wind Turbine Generator, 12 V 400 W Portable Vertical
Helix Wind Power Turbine Generator Kit with Charge Controller—White: Patio, Lawn &
Garden
13. What is Renewable Energy? [Link]
rgy?gclid=Cj0KCQiA0oagBhDHARIsAI-BbgfLlhSUlivz48z1TwWP5xtiB4lHyr3zJXSR-
iK2M3OJfiuazQG7yOUaAqviEALw_wcB
14. Solar Energy Basics. [Link]
15. Electricity Bill per Unit in Bangladesh in 2023. [Link]
unit-in-bangladesh/#:~:text=Electricity%20Bill%20Per%20Unit%20Price%202023,-Now%
20come%20to&text=An%20average%20increase%20of%205,%25%20from%20January%
2013%2C%202023
16. Yeasmin, S., Afrin, N., Saif, K., Reza, A.W., Arefin, M.S.: Towards building a sustainable
system of data center cooling and power management utilizing renewable energy. In: Vas-
ant, P., Weber, G.W., Marmolejo-Saucedo, J.A., Munapo, E., Thomas, J.J. (eds.) Intelligent
Computing & Optimization. ICO 2022. Lecture Notes in Networks and Systems, vol. 569.
Springer, Cham (2023). [Link]
17. Liza, M.A., Suny, A., Shahjahan, R.M.B., Reza, A.W., Arefin, M.S.: Minimizing e-waste
through improved virtualization. In: Vasant, P., Weber, G.W., Marmolejo-Saucedo, J.A.,
Munapo, E., Thomas, J.J. (eds.) Intelligent Computing & Optimization. ICO 2022. Lecture
Notes in Networks and Systems, vol. 569. Springer, Cham (2023). [Link]
978-3-031-19958-5_97
18. Das, K., Saha, S., Chowdhury, S., Reza, A.W., Paul, S., Arefin, M.S.: A sustainable e-waste
management system and recycling trade for Bangladesh in green IT. In: Vasant, P., Weber,
G.W., Marmolejo-Saucedo, J.A., Munapo, E., Thomas, J.J. (eds.) Intelligent Computing &
Optimization. ICO 2022. Lecture Notes in Networks and Systems, vol. 569. Springer, Cham
(2023). [Link]
19. Rahman, M.A., Asif, S., Hossain, M.S., Alam, T., Reza, A.W., Arefin, M.S.: A sustainable
approach to reduce power consumption and harmful effects of cellular base stations. In:
Vasant, P., Weber, G.W., Marmolejo-Saucedo, J.A., Munapo, E., Thomas, J.J. (eds.) Intelligent
Computing & Optimization. ICO 2022. Lecture Notes in Networks and Systems, vol. 569.
Springer, Cham (2023). [Link]
20. Ahsan, M., Yousuf, M., Rahman, M., Proma, F.I., Reza, A.W., Arefin, M.S.: Designing a
sustainable e-waste management framework for Bangladesh. In: Vasant, P., Weber, G.W.,
Marmolejo-Saucedo, J.A., Munapo, E., Thomas, J.J. (eds.) Intelligent Computing & Opti-
mization. ICO 2022. Lecture Notes in Networks and Systems, vol. 569. Springer, Cham
(2023). [Link]
21. Mukto, M.M., Al Mahmud, M.M., Ahmed, M.A., Haque, I., Reza, A.W., Arefin, M.S.: A sus-
tainable approach between satellite and traditional broadband transmission technologies based
on green IT. In: Vasant, P., Weber, G.W., Marmolejo-Saucedo, J.A., Munapo, E., Thomas,
J.J. (eds.) Intelligent Computing & Optimization. ICO 2022. Lecture Notes in Networks and
Systems, vol. 569. Springer, Cham (2023). [Link]
22. Meharaj-Ul-Mahmmud, Laskar, M.S., Arafin, M., Molla, M.S., Reza, A.W., Arefin, M.S.:
Improved virtualization to reduce e-waste in green computing. In: Vasant, P., Weber, G.W.,
Marmolejo-Saucedo, J.A., Munapo, E., Thomas, J.J. (eds.) Intelligent Computing & Opti-
mization. ICO 2022. Lecture Notes in Networks and Systems, vol. 569. Springer, Cham
(2023). [Link]
23. Banik, P., Rahat, M.S.A., Rafe, M.A.H., Reza, A.W., Arefin, M.S.: Developing an energy
cost calculator for solar. In: Vasant, P., Weber, G.W., Marmolejo-Saucedo, J.A., Munapo,
E., Thomas, J.J. (eds.) Intelligent Computing & Optimization. ICO 2022. Lecture Notes in
Networks and Systems, vol. 569. Springer, Cham (2023). [Link]
19958-5_75
24. Ahmed, F., Basak, B., Chakraborty, S., Karmokar, T., Reza, A.W., Arefin, M.S.: Sustain-
able and profitable IT infrastructure of Bangladesh using green IT. In: Vasant, P., Weber,
G.W., Marmolejo-Saucedo, J.A., Munapo, E., Thomas, J.J. (eds.) Intelligent Computing &
Optimization. ICO 2022. Lecture Notes in Networks and Systems, vol. 569. Springer, Cham
(2023). [Link]
25. Ananna, S.S., Supty, N.S., Shorna, I.J., Reza, A.W., Arefin, M.S.: A policy framework for
improving e-waste management in Bangladesh. In: Vasant, P., Weber, G.W., Marmolejo-
Saucedo, J.A., Munapo, E., Thomas, J.J. (eds.) Intelligent Computing & Optimization. ICO
2022. Lecture Notes in Networks and Systems, vol. 569. Springer, Cham (2023). [Link]
org/10.1007/978-3-031-19958-5_95
Investigation of the Stimulating Effect
of the EHF Range Electromagnetic Field
on the Sowing Qualities of Vegetable Seeds
Abstract. The paper presents the results of a study of the stimulating effect of
the electromagnetic field of the EHF (Extremely High Frequency) range on the
sowing qualities of vegetable seeds. Vegetable seeds, a tomato of the “Wonder
of the Market” variety, a pepper of the “Tenderness” variety, and an eggplant of the “Black Beauty” variety, were subjected to an EHF electromagnetic field in three versions: radiation 7 days before sowing dry seeds,
radiation on the day of sowing dry seeds, radiation on the day of sowing wet seeds.
In the course of the study, crop characteristics such as germination, dynamics of germination, and plant height were assessed. The studies showed that the dynamics of tomato seedling emergence demonstrated the best efficiency of the method for the second option, i.e., radiation of dry seeds before sowing. Eggplant responded better to the treatment of soaked seeds. For pepper, treatment of soaked seeds also performed better in germination, while analysis of the dynamics of tomato growth showed the greatest stimulation effect when soaked seeds were treated. Eggplant likewise showed a better effect when soaked seeds were treated, but for pepper, treating soaked seeds had a depressing effect on the growth rate, and this option was significantly inferior to all other options. The investigated stimulating effects of the EHF-range electromagnetic field can be widely used in the accelerated cultivation of vegetable seedlings in a short agricultural period. Accelerating the growth of seedlings reduces their cost and increases profitability.
1 Introduction
With the growth of the world's population, methods of increasing the productivity of agricultural crops are highly relevant [1–3]. Important parameters include seed germination, because some seeds die at the initial stage after sowing, so the same sown area can yield different harvests, and the rate of growth and development of vegetable crops, because a higher rate of growth and development allows a crop to be obtained in the shortest possible time [4].
Starting in the 1970s, the effects of electromagnetic fields (EMF) in the short-millimeter (EHF, 40–60 GHz) and microwave (MW, 1–3 GHz) ranges on different biological systems began to attract the attention of experts [5]. It was found that under certain conditions there are marked stimulating effects, including improved sowing characteristics of seeds of different crops [5]. Exposure of seeds to short-millimeter EMF (EHF) activates biochemical processes and contributes to a more rapid absorption of nutrients in the treated seeds, an effect most clearly manifested in old and uncertified seeds [6].
It is important to note that these effects result from exposure to EMF of very low intensity (less than 10 mW/cm²), exhibit a resonant frequency dependence, and are characterized by intensity thresholds above which the effect appears in a jump-like manner [6].
The aim of the research is to study the effect of processing vegetable seeds with EHF
radiation on the growth and development of vegetable crops.
In connection with this goal, the following tasks were solved [7–9]:
to carry out phenological observations in closed ground;
to take into account biometric indicators (dynamics of seed germination and dynamics
of linear growth of a seedling).
The research was carried out in the nursery of horticultural crops of the Department
of Horticulture, Botany and Plant Physiology, Samara State Agrarian University.
Seeds of the following vegetable crops were selected for research: tomato of the
“Miracle of the Market” variety, pepper of the “Tenderness” variety and an eggplant of
the “Black Beauty” variety (Fig. 1).
The seeds were divided into two groups: control (without radiation) and experimental (radiation with EHF waves (42.25 GHz) for 30 min).
Radiation was carried out on a setup consisting of an EHF oscillator based on a Gunn diode with a horn section of 15 × 15 mm. The setup created an EHF-range electromagnetic field in the irradiated volume of the material, characterized by the following parameters: wavelength 7.1 mm (42.25 GHz); energy flux density 10 mW/cm² [4].
The studies assessed the following characteristics of crops: germination (%), dynamics of germination (days after sowing), and plant height (cm).
Germination was assessed by the percentage of germinated seeds from the number
of sown seeds [10–14].
The dynamics of germination was determined by daily counting of the number
(percentage) of germinated seeds in the period from the appearance of the first shoots to
the termination of the increase in the number of seedlings [7, 8, 15].
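Both germination metrics can be computed directly from the daily counts. The short Python sketch below illustrates this for one variant; the cumulative counts are invented for illustration, and the 80 sown seeds per variant are taken from the experimental design described later in this section.

# Sketch of the germination metrics; the daily counts are invented for illustration.
sown = 80                                            # seeds sown per variant and replication
cumulative = [0, 0, 3, 11, 26, 41, 52, 60, 64, 64]   # germinated seeds, days 1..10 after sowing

germination_pct = 100.0 * cumulative[-1] / sown      # final germination, %
dynamics = {day: n for day, n in enumerate(cumulative, start=1)}  # count per day after sowing
print(f"germination: {germination_pct:.1f}%")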
The seeds were treated with low-intensity electromagnetic waves of the EHF range. The waves were modulated using linear frequency modulation with a depth of 50%. The pre-sowing treatment comprised four variants with three replications:
1. Control without radiation.
2. Radiation 7 days before sowing of dry seeds.
3. Radiation on the day of sowing of dry seeds.
4. Radiation on the day of sowing of wet seeds after a 3-h soak.
Pre-sowing treatment of seeds and sowing of the studied crops (tomato variety "Miracle of the Market", pepper variety "Tenderness", eggplant variety "Black Beauty") with the EHF-range electromagnetic field according to the variants of the experiment was carried out on March 20. For variant 2, where dry seeds were treated 7 days before sowing, the treatment was carried out on March 13.
For each treatment variant and each replication, 80 seeds of each crop were selected and sown. The same distribution was maintained across variants for all studied crops.
Stages of growing plants:
1. Treatment of seeds with a modified EHF irradiator for 30 min.
2. Soaking seeds in Phytosporin.
3. Sowing seeds in boxes (40–50 cm) (Fig. 1).
4. Picking into separate 0.5-l vessels in the phase of 1–2 true leaves.
Fig. 1. Investigation of the effect of EHF radiation on the dynamics of emergence and growth of vegetable crops
The counts and observations were carried out in accordance with the method of field experiment in vegetable growing, compiled by Professor S.S. Litvinov, Doctor of Agricultural Sciences and Academician of the Russian Academy of Agricultural Sciences [15].
1. Phenological observations were carried out on all experimental plots daily, usually in the morning. The beginning of each phase was marked when it was observed in 10% of plants, and mass onset when observed in 75% of plants.
2. Biometric indicators (linear dimensions and biomass accumulation) were determined from the seedling phase every 2 days.
The results of all measurements were recorded in the observation diary, after which the data were processed and, using Microsoft Excel, graphical dependencies of changes in the biometric parameters of the plants were built: the dynamics of seedling emergence and the dynamics of growth.
Fig. 2. Dynamics of tomato seed germination (the number of germinated seeds on the specified
date)
Analysis of the graph of changes in the germination of eggplant seeds (Fig. 3) leads to the conclusion that the best effect was obtained with variant-3, i.e., radiation on the day of sowing of wet seeds. In this variant, the largest number of seeds emerged: 73 of the sown seeds emerged by the 14th day. Variant-2 is slightly inferior, with 68 seeds emerged by the 14th day. For the remaining variants, the final germination did not exceed 49.
Analysis of the graph of changes in the germination of pepper seeds (Fig. 4) leads to the conclusion that the best effect was again obtained with variant-3, i.e., radiation on the day of sowing of wet seeds. In this variant, the largest number of seeds emerged: 44 of the sown seeds emerged by the 18th day. Variant-2 is slightly inferior, with 34 seeds emerged by the 18th day. For the remaining variants, the final germination did not exceed 19.
After the appearance of the first shoots, the linear growth of the seedlings was measured every two days to assess the effect of the treatment method on this indicator. Analysis of the graphical dependencies of the growth dynamics of tomato seedlings (Fig. 5) leads to the conclusion that the plants develop best under variant-3, i.e., radiation on the day of sowing of wet seeds. In this variant, the average plant height exceeded all other variants. It is also worth noting that plants under variant-3 reached the required height of 150 mm a week earlier and were ready for sale as seedlings; the difference is clearly visible in the photograph (Fig. 6). The remaining variants showed almost the same results, reaching the required height of 150 mm almost simultaneously and becoming ready for sale as seedlings.
Thus, from the studies carried out, it can be concluded that tomato and eggplant seeds should be irradiated in soaked form immediately before sowing, while pepper seeds should be irradiated only in dry form, also immediately before sowing.
5 Conclusion
Based on the results of experimental studies, the following conclusions can be drawn:
1. The dynamics of tomato seedling emergence showed the best efficiency of the method for the second variant, i.e., radiation of dry seeds before sowing. Eggplant responded better to the treatment of soaked seeds. For pepper, treatment of soaked seeds also showed better germination.
2. Analysis of the dynamics of tomato growth showed the greatest stimulation effect when soaked seeds were treated. Eggplant also showed a better effect when treated as soaked seeds. But for pepper, the treatment of soaked seeds had a depressing effect on the growth rate, and this variant was significantly inferior to all other variants.
In this regard, it is recommended to irradiate tomato and eggplant seeds soaked immediately before sowing, and pepper seeds only dry, also immediately before sowing.
References
1. Tokarev, K., Lebed, N., Prokofiev, P., Volobuev S., Yudaev, I., Daus, Y., Panchenko, V.:
Monitoring and Intelligent Management of Agrophytocenosis Productivity Based on Deep
Neural Network Algorithms. Lecture Notes in Networks and Systems, 569, 686–694 (2023)
2. Yudaev, I., Eviev, V., Sumyanova, E., Romanyuk N., Daus, Y., Panchenko, V.: Methodology
and Modeling of the Application of Electrophysical Methods for Locust Pest Control. Lecture
Notes in Networks and Systems, 569, 781–788 (2023)
3. Petrukhin, V., Feklistov, A., Yudaev, I., Prokofiev P., Ivushkin D., Daus, Y., Panchenko,
V.: Modeling of the Device Operating Principle for Electrical Stimulation of Grafting
Establishment of Woody Plants. Lecture Notes in Networks and Systems, 569, 667–673
(2023)
4. Klyuev, D.S., Kuzmenko, A.A., Sokolova, Y.V.: Influence of millimeter-wave electromagnetic
waves on seed quality. In: Proceedings of International Conference on Physics and Technical
Applications of Wave Processes. Samara (2020)
5. Morozov, G.A., Blokhin, V.I., Stakhova, N.E., Morozov, O.G., Dorogov, N.V., Bizyakin, A.S.:
Microwave technology for treatment seed. World J. Agric. Res. 1(3), 39–43 (2013). https://
[Link]/10.12691/wjar-1-3
6. Swicord, M., Balzano, Q., Sheppard, A.: A review of physical mechanisms of radiofrequency
interaction with biological systems. In: 2010 Asia-Pacific Symposium on Electromagnetic
Compatibility, APEMC 2010, 21–24 (2010). [Link]
7. Pilyugina, V.V., Regush, A.V.: Electromagnetic stimulation in crop production. All-Union Sci-
entific Research Institute of Information and Technical and Economic Research in Agriculture.
Moscow (1980)
8. Vasilev, S.I., Mashkov, S.V., Syrkin, V.A., Gridneva, T.S., Yudaev, I.V.: Results of studies of
plant stimulation in a magnetic field. Res. J. Pharmaceutical Biol. Chem. Sci. 9(1), 706–710
(2018)
9. Litvinov, S.S.: Methodology of Field Experience in Vegetable Growing. All-Russian Research
Institute of Vegetable Growing. Moscow (2011)
10. Klyuev, D.S., Kuzmenko, A.A., Trifonova, L.N.: Influence on germination of pre-sowing
treatment of seeds by irradiation with waves of different lengths. Phys. Wave Processes Radio
Syst. 23(1), 84–88 (2020)
11. Ivushkin, D., Yudaev, I., Petrukhin, V., Feklistov, A., Aksenov, M., Daus, Y., Panchenko,
V.: Modeling the Influence of Quasi-Monochrome Phytoirradiators on the Development of
Woody Plants in Order to Optimize the Parameters of Small-Sized LED Irradiation Chamber.
Lecture Notes in Networks and Systems, 569, 632–641 (2023)
12. Mashkov, S., Vasilev, S., Fatkhutdinov, M., Gridneva, T.: Using an electric field to stimulate
the vegetable crops growth. Int. Trans. J. Eng. Manage. Appl. Sci. Technol. 11(16), 1–11
(2020)
13. Baev, V.I., Yudaev, I.V., Petrukhin, V.A., Prokofyev, P.V., Armyanov, N.K.: Electrotechnol-
ogy as one of the most advanced branches in the agricultural production development. In:
Handbook of Research on Renewable Energy and Electric Resources for Sustainable Rural
Development. IGI Global, Hershey, PA, USA (2018)
14. Yudaev, I.V., Daus, Y.V., Kokurin, R.G.: Substantiation of criteria and methods for estimating
efficiency of the electric impulse process of plant material. IOP Conf. Ser. Earth Environ. Sci.
488(1), 012055 (2020)
15. Seeds of agricultural crops. Methods for determining germination. GOST 12038-84.
Standartinform. Moscow (1984)
Prototype Development of Electric Vehicle
Database Platform Using Serverless
and Microservice Architecture with Intelligent
Data Analytics
Abstract. To fully realize the benefits of EVs, this paper proposes a prototype
platform that can perform essential functions as an EV data center for storing and
centralizing the data acquired from EVs and chargers directly. Using serverless
and microservice architecture, this platform is purposely designed and built for
scalability and includes intelligent data analytics capabilities with two different
tasks: calculation of CO2 emission, and intraday spatial charging demand analysis.
The results confirm that the communication between EVs, chargers, and the plat-
form using the REST API protocol satisfies the response time of the Open Charge
Point Protocol (OCPP) 2.0.1 standard, and the backend performance is flexible
in terms of scalability and database optimization to accommodate the number of
EVs. Given real-time data, the data analytics functions can display CO2 emissions,
and predict an intraday power demand.
1 Introduction
It is well known that the global development of electric vehicles (EVs) has directly
affected the energy sector and power system management in terms of electric infras-
tructure because EVs are another sort of electric equipment that requires a substantial
amount of demand from the grid for charging. The existing electric infrastructure may
need upgrading, and therefore requires significant investment. To improve the bene-
fits of EV usage while avoiding system reinforcement more than necessary, numerous
approaches have been proposed for controlling and monitoring EV user behaviors. EVs
and their charger data can be utilized by system operators and researchers to enhance
generation planning and power system operation.
In general, there are many participants in the EV/EVSE data collection business. EV
communication begins at the charger and continues to the controller of the charge point
operator. From there, it is connected to other actors involved, such as the distribution
system operator and demand response aggregators. Communication at different layers is
based on the predefined protocol [1]; for instance, Open Charge Alliance (OCA), which
is a global cooperation of public and commercial electric car infrastructure [2].
With the fast-growing development of the Internet of Vehicles (IoV), numerous advantages can be captured by EV technology and effectively integrated into power system operation and planning, including Vehicle to Grid (V2G), Demand Side Management (DSM), and electricity market applications such as Peer to Peer energy trading. The
implementation of IoV utilizes cloud computing infrastructure [3] and there are four
distinct levels that make up the architecture of an IoV platform: sensor layer (or device
layer) for data gathering, communication layer for data transmission from the device
layer to the internet, processing layer for data analysis, and application layer for end-
user interface [4, 5]. The collected data from EVs can be evaluated and mined in the
computational layer to uncover useful information in a variety of areas, as demonstrated
in Fig. 1 [6].
Several studies have proposed wireless charging methods and systems for electric vehicles; for example, [9] introduced a wireless charging system and method for EVs, including charging stations, via a mobile application. However, the development of applications for analyzing EV usage data on a large scale, with analytical systems assessing the impact of EV deployment in different areas, is still lacking.
This research presents a novel design and develops a real prototype of an EV data
platform that uses the most recent data gathering, processing, architecture, and analysis
capabilities. The platform’s core components and functional features are described along
with the opportunities connected with EV data management. We also offer a case study to
demonstrate the effectiveness of our platform in displaying CO2 emissions and predicting
an intraday power demand in an area of interest. The performance of the platform has
been tested with a simulated data transfer in the JSON (JavaScript Object Notation)
format through a REST (Representational State Transfer) API communication from
EVs and chargers.
2 Platform Architecture
To demonstrate the benefits of EV data analysis in a more realistic context, we have
designed a prototype platform for an EV data center, which functions as an administrator
to link and store data between different businesses, relevant agencies, and future software
products. In addition, this platform can analyze collected data from EVs to identify CO2
emissions, geographical energy demand required by EVs being charged, and intelligent
optimization of EV charging load profiles. The designed platform includes 4 different
layers: frontend layer, backend layer, communication layer, and data analytics layer, all
of which require a proper technology to improve the platform’s performance.
The primary objective of the EV data center is to collect and centralize data already
stored by service providers and to utilize that information in various dimensions, such as
charging station development planning at the national level, including real-time inves-
tigation of the availability status of charging stations and the position and status of EVs
at the application level. It also drives standardization and control for data collection and
usage, including a mechanism to protect data privacy.
The software development paradigm is built on the microservices architecture, which breaks the system into smaller sections known as services. Each service can be scaled independently and can be modified or repaired without affecting other services [10], as shown in Fig. 2.
The front-end layer communicates with the back-end layer using an API gateway, which
routes the requests to the appropriate microservices. The application’s data are stored in
Amazon S3, a highly scalable object storage service that can handle large amounts of
data. Amazon S3 provides durability, availability, and scalability and is a popular choice for storing static assets, multimedia files, backups, and log data.
To display the application’s data to users, CloudFront is used as a content delivery
network (CDN) service. CloudFront caches the content at edge locations around the
world, reducing latency and improving the user experience. It also provides security
features such as SSL/TLS encryption, access control, and origin authentication [10].
Fig. 2. Overview of developed prototype platform architecture using serverless and microservice
technologies for EV data management
In this layer, the AWS API Gateway employs the REST format, a method of server-client data communication. It transmits data collected from the use of electric vehicles and EV chargers to a data storage system via the HTTP protocol and displays it on the user interface.
The AWS API Gateway serves to expose API endpoints and accept incoming queries. Endpoints can be separated and routed to distinct services by context path or other techniques, with routing protected by security mechanisms such as AWS Identity and Access Management (IAM) or a bespoke authentication solution [11].
This layer interfaces with numerous components using serverless database processing technologies to develop and administer the backend platform. The event-trigger model of cloud computing alleviates most server management responsibilities. Because workflows are designed to be scalable and execute automatically, we can focus exclusively on administering the application, with no concern about system scalability during peak hours [12]. The developed backend of microservices covers three services:
• Data storage for information gathered from EV chargers,
• Data storage for information gathered from EVs,
• Data analytics for intelligent computations, in which the processed data is transmitted
to storage and displayed on the frontend for users to access the platform.
For data storage obtained from the use of EVs and EV chargers, non-relational storage
is utilized. It is a model that does not require obvious pattern relationships and has a
flexible schema, making it appropriate for big data and real-time online applications
[13].
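As a concrete illustration of one such service, the sketch below shows a minimal AWS Lambda handler (Python) that stores one telemetry record arriving through the API Gateway in a NoSQL (DynamoDB) table. The table name "EvTelemetry" and the record fields are assumptions made for illustration, not the platform's actual schema.

# Minimal sketch of a storage microservice: Lambda behind API Gateway (proxy integration).
import json
from decimal import Decimal

import boto3

table = boto3.resource("dynamodb").Table("EvTelemetry")  # hypothetical table name

def lambda_handler(event, context):
    # Parse the JSON body sent by the EV/charger; floats become Decimal for DynamoDB.
    record = json.loads(event["body"], parse_float=Decimal)
    table.put_item(Item={
        "ev_id": record["ev_id"],          # assumed partition key
        "timestamp": record["timestamp"],  # assumed sort key
        "soc": record["soc"],              # state of charge
        "lat": record["lat"],
        "lon": record["lon"],
    })
    return {"statusCode": 200, "body": json.dumps({"stored": True})}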
The analytical layer of this platform is its most significant layer. On this layer, the vast quantities of data collected from EVs can be analyzed and investigated in a variety of domains. As the number of EVs increases, the amount of EV data to be collected and analyzed also increases, making a scalable platform necessary. While the collected data can be usefully analyzed in various ways for different purposes, as shown in Fig. 1, three main functions are of interest in this research: CO2 reduction and energy consumption analysis, day-ahead optimal load profile management, and intraday spatial charging demand analysis.
EVcon,i = (SOCstart,i − SOClast,i) × Batti (2)
Input GPS location of EVs, SOC, battery size, and boundary GIS data
Output Carbon dioxide emissions and energy consumption in each area
1 FOR i = 1 to NEV
1-1 FOR j = 1 to NAREA
1-1-1 IF EV GPS, i is in AREAj
1-1-1-1 EV con, i = (SOC start, i – SOC last, i ) × Batt i
1-1-1-2 CO2, i = EV con, i × CO2, Gen
1-1-1-3 IF EV con, j and CO2, j already exist
1-1-1-3-1 EV con, j = EV con, i + EV con, j
1-1-1-3-2 CO2, j = CO2, i + CO2, j
END IF
1-1-1-4 ELSE
1-1-1-4-1 EVcon, j = EVcon, i
1-1-1-4-2 CO2, j = CO2, i
END ELSE
END IF
END FOR
END FOR
where
NEV Number of EVs
NAREA Number of areas of interest based on the GIS data (i.e., provinces in Thailand)
EV GPS, i Position (latitude and longitude in the GIS data) of the ith EV
AREAj jth area
EV con, j Energy consumption in the jth area
CO2,j CO2 emission in the jth area (kg-CO2 ).
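A minimal Python rendering of this aggregation is sketched below. The GIS point-in-polygon test is abstracted away by assuming each EV record already carries an area label, the sample record is invented, and the 0.407 kg-CO2/kWh generation rate used later in Sect. 3 is adopted here.

# Sketch of the per-area energy/CO2 aggregation; all inputs are illustrative.
CO2_GEN = 0.407  # kg-CO2 per kWh of generation

def aggregate(evs):
    # evs: list of dicts with keys 'area', 'soc_start', 'soc_last', 'batt_kwh'
    totals = {}  # area -> [energy_kwh, co2_kg]
    for ev in evs:
        e_con = (ev["soc_start"] - ev["soc_last"]) * ev["batt_kwh"]  # Eq. (2)
        acc = totals.setdefault(ev["area"], [0.0, 0.0])
        acc[0] += e_con
        acc[1] += e_con * CO2_GEN
    return totals

print(aggregate([{"area": "Rayong", "soc_start": 0.9, "soc_last": 0.4, "batt_kwh": 60}]))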
where
NEV charged Number of EVs being charged
NEV charged, j Number of EVs being charged in the jth area
EV power, j Total EV power demand in the jth area.
3 Implementation Results
To assess the efficiency of receiving data, events with different numbers of data sets were simulated and transferred from EVs and chargers using the PUT request method via a REST API gateway. Figure 3 depicts how data are received and sent from a NoSQL database via the REST API gateway using the HTTP method, based on serverless processing and an event-trigger-driven architecture that provides low processing time and cost savings due to an on-demand pricing model. For the sample data shown in Fig. 4, the results of transferring data from EVs and EV chargers are as follows and shown in Fig. 5.
• To save 100 data sets, it takes about 31 s with an average duration of 222 ms per data
set.
• To save 1000 data sets, it takes about 5 min and 5 s with an average duration of 229
ms per data set.
Input GPS location of EVs, SOC, battery size, boundary GIS data and EVs status
Output Number of EVs being charged in each area, power demand for each area
1 FOR i = 1 to NEVcharged
1-1 FOR j = 1 to NAREA
1-1-1 IF EV GPS, i is in AREAj
1-1-1-1 EV power, i = [(SOC last, i – SOC start, i ) × Batt i ] ÷ t change
1-1-1-2 IF NEV charged, j and EV power, j already exist
1-1-1-2-1 NEV charged, j = NEV charged, j + 1
1-1-1-2-2 EV power, j = EV power, i + EV power, j
END IF
1-1-1-3 ELSE
1-1-1-3-1 NEV charged, j = 1
1-1-1-3-2 EVpower, j = EVpower, i
END ELSE
END IF
END FOR
END FOR
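A companion Python sketch of this demand aggregation is given below; the charging duration t_charge and the record layout are assumptions made for illustration.

# Sketch of the per-area charging-demand aggregation; inputs are illustrative.
def charging_demand(evs_charging):
    # evs_charging: dicts with 'area', 'soc_start', 'soc_last', 'batt_kwh', 't_charge_h'
    out = {}  # area -> {'count': EVs being charged, 'power_kw': total demand}
    for ev in evs_charging:
        power = (ev["soc_last"] - ev["soc_start"]) * ev["batt_kwh"] / ev["t_charge_h"]
        acc = out.setdefault(ev["area"], {"count": 0, "power_kw": 0.0})
        acc["count"] += 1
        acc["power_kw"] += power
    return out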
• To save 10,000 data sets, it takes about 51 min and 52 s with an average duration of
233 ms per data set.
The above test results indicate that the REST API gateway has acceptable response speeds of less than 4 s. According to the OCPP 2.0.1 standard [2], the suggested maximum duration for a data connection is 5 s, which the test results satisfy. It can be clearly seen that the average durations of the three sets with different data volumes differ only slightly and are largely independent of the number of data sets. The results confirm that the API Gateway successfully continued to perform its functions properly under heavy demand. The requests were appropriately processed, and the platform operated smoothly with less stress during peak hours because the REST API gateway was designed to be scalable and able to sustain traffic increases.
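The test procedure can be reproduced with a simple client; the sketch below sends N records with the PUT method and reports the average duration per data set. The endpoint URL and payload fields are placeholders, not the platform's real interface.

# Sketch of the data-transfer test client; endpoint and payload are placeholders.
import time

import requests  # third-party HTTP client

ENDPOINT = "https://example.execute-api.ap-southeast-1.amazonaws.com/prod/telemetry"  # hypothetical

def run_test(n):
    total = 0.0
    for i in range(n):
        payload = {"ev_id": f"EV-{i}", "timestamp": time.time(), "soc": 0.5}
        t0 = time.perf_counter()
        requests.put(ENDPOINT, json=payload, timeout=5)
        total += time.perf_counter() - t0
    return total / n  # average seconds per data set

# Example: print(f"{run_test(100) * 1000:.0f} ms per data set")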
The backend manages data obtained from EVs and chargers, performs analytic functions, and controls access for the frontend. The backend performance can be assessed in terms of communication and data connection based on the following criteria.
• Scalability: The platform can automatically scale up or down in response to fluctuating demand; namely, the platform withstood traffic spikes without slowing down or crashing. Due to the NoSQL database optimized for performance and scalability and a serverless database, the platform was able to retrieve data quickly and effectively, as shown in Fig. 6. Figure 7 shows a scalability test of the NoSQL database starting from 1 data set. The PUT method was requested against the NoSQL database for 200 data sets, which took about 5 min. The graph shows that the NoSQL database can comfortably support the increased number of data sets.
• Database optimization: The key-value data storage format utilized by the NoSQL database made data querying and connecting far simpler and more efficient, as shown in Fig. 7, which gives an example of retrieving data from the NoSQL database. The advantage is that this type of data storage uses a cache to improve performance, making the operation speed significantly higher than other types of databases.
Fig. 7. Example of available data queries with key-values stored in No SQL database.
This section presents the data analytics results on the three different functions embedded in the platform: namely, carbon reduction and energy consumption, day-ahead optimal load profile management, and intraday spatial analysis of the power demand. Given a rate of carbon dioxide emissions per unit of electricity generation of 0.407 kg-CO2/kWh [19], the reduction of carbon dioxide and the energy consumption can be calculated based on the algorithm in Table 1. The amount of energy consumption and CO2 emissions from electric vehicles in 5 example areas (5 provinces) is shown in Table 3. As expected, the amount of emissions varies with the number of EVs. In this case, emissions can be evaluated from real-time data and could be useful for carbon markets.
The forecast results of spatial power demand based on the number of EVs being charged in the five provinces are shown in Table 4. With a sampling rate of every 15 min, the platform succeeded in identifying, at the time of data recording, how many EVs were being charged at public stations or at homes and the charging power rate of each EV. It is seen that the forecast power demand for the next 15 min depends generally on the number of EVs being charged.
Table 3. Result of carbon emission and energy consumption for five example areas.

Area         | No. of EVs in area (unit) | No. of EVs being charged (unit) | Power demand (kW)
Chon Buri    | 57,600                    | 2,880                           | 113,346
Rayong       | 96,060                    | 4,803                           | 190,308
Chanthaburi  | 15,660                    | 783                             | 30,544
Trat         | 13,200                    | 660                             | 25,766
Chachoengsao | 17,480                    | 874                             | 32,360
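As a worked example of the emission calculation, applying the 0.407 kg-CO2/kWh factor to the Chon Buri row of Table 3 over an assumed one-hour window:

EMISSION_FACTOR = 0.407            # kg CO2 per kWh of generation [19]

power_demand_kw = 113_346          # Chon Buri row of Table 3
energy_kwh = power_demand_kw * 1   # energy over an assumed one-hour window
co2_kg = energy_kwh * EMISSION_FACTOR
print(f"{co2_kg:,.0f} kg CO2")     # ~46,132 kg CO2 for that hour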
4 Conclusion
This article has presented the architecture and prototype of an EV database platform
that can serve as a data center for storing and centralizing EV and EV charger data
from several available sources with different stakeholders. The developed platform was
integrated with GIS-based maps and is capable of visualizing data such as the real-time
locations of EVs. This platform contains data analytics functionalities featuring three different tasks: calculating the carbon emissions from EVs, managing the day-ahead optimal load profile, and forecasting intraday spatial charging demand. The experimental results demonstrate that the platform can
communicate with EVs and EV chargers in real time with good response times satisfying
the OCPP standard protocol. In addition, the architecture of the platform is based on
serverless and microservices, allowing for scalability and independent modification of
functionality.
The platform’s potential can further be extended to include, for example, data secu-
rity, additional functionalities, and associated technologies like blockchain and smart
contracts. Sharing data with an EV data center requires policies to promote and incentivize
data sharing for specific purposes of analytics without imposing an excessive burden on
data sharers. In addition, because a substantial amount of data can be obtained from
EVs, a system capable of handling big data analytics becomes necessary for optimizing
computation time. The EV data platform implemented in this research can be used in
practice by policymakers and electric power utilities for effective operation and planning
of power system infrastructure to accommodate the wide adoption of EVs in the future.
References
1. Driivz.: EV Charging Standards and Protocols. Retrieved January 2023, from [Link]
com/blog/ev-charging-standards-and-protocols/. 2021, August 12.
2. Open Charge Alliance. (n.d.). Home. Retrieved January 2023, from [Link]
[Link]/
3. Rimal, B., Kong, C., Poudel, B., Wang, Y., Shahi, P.: Smart electric vehicle charging in the
era of internet of vehicles, emerging trends, and open issues. Energies 15(5), 1908 (2022).
[Link]
4. Sharma, S., Kaushik, B.: A survey on internet of vehicles: applications, security issues & solu-
tions. Vehic. Commun. 20, 100182 (2019). [Link]
5. Lv, Z., Qiao, L., Cai, K., Wang, Q.: Big data analysis technology for electric vehicle networks
in smart cities. IEEE Trans. Intell. Transport. Syst. 22(3), 1807–1816 (2021). [Link]
10.1109/TITS.2020.3008884
6. Li, B., Kisacikoglu, M.C., Liu, C., Singh, N., Erol-Kantarci, M.: Big data analytics for electric
vehicle integration in green smart cities. IEEE Commun. Mag. 55(11), 19–25 (2017). https://
[Link]/10.1109/MCOM.2017.1700133
7. Palensky, P., Widl, E., Stifter, M., Elsheikh, A.: Modeling intelligent energy systems: Co-
Simulation platform for validating flexible-demand EV charging management. In: 2014 IEEE
PES General Meeting. Conference & Exposition, National Harbor, MD, USA, 2014, p. 1.
[Link]
8. Ping, J., Chen, S., Yan, Z., Wang, H., Yao, L., Qian, M.: EV charging coordination via
blockchain-based charging power quota trading. In: 2019 IEEE Innovative Smart Grid Tech-
nologies—Asia (ISGT Asia), Chengdu, China, 2019, pp. 4362–4367. [Link]
ISGT-Asia.2019.8881070
9. Qian, Y.M., Ching, T.H., Abidin, Z.M.B.Z.: Mobile application system for chargEV charging
stations. In: 2022 IEEE 2nd International Conference on Mobile Networks and Wireless
Communications (ICMNWC), Tumkur, Karnataka, India, 2022, pp. 1–5. [Link]
1109/ICMNWC56175.2022.10031669
10. Fowler, M.: Microservices. Retrieved January 2023, from [Link]
ices/ (2014)
11. Amazon Web Services. (n.d.). Amazon API Gateway Developer Guide. Retrieved January
2023, from [Link]
12. Serverless, Inc. (n.d.). Serverless Framework. Retrieved January 2023, from [Link]
[Link]/framework/
13. Amazon Web Services. (n.d.). Amazon Web Services: NoSQL databases. Retrieved January
2023, from [Link]
14. Kumar, R., Lamba, K., Raman, A.: Role of zero emission vehicles in sustainable transforma-
tion of the Indian automobile industry. Res. Transp. Econ. 90, 101064 (2021). [Link]
10.1016/[Link].2021.101064
15. Ehsani, M., Singh, K.V., Bansal, H.O., Mehrjardi, R.T.: State of the art and trends in electric
and hybrid electric vehicles. Proc. IEEE 109(6), 967–984 (2021). [Link]
JPROC.2021.3072788
16. Smith, W.J.: Can EV (electric vehicles) address Ireland’s CO2 emissions from transport?
Energy 35(12), 4514–4521 (2010). [Link]
17. Brenna, M., Longo, M., Zaninelli, D., Miceli, R., Viola, F.: CO2 reduction exploiting RES for
EV charging. In: 2016 IEEE International Conference on Renewable Energy Research and
Applications (ICRERA), Birmingham, UK, 2016, pp. 1191–1195. [Link]
ICRERA.2016.7884521
18. Heymann, F., Pereira, C., Miranda, V., Soares, F.J.: Spatial load forecasting of electric vehicle
charging using GIS and diffusion theory. In: 2017 IEEE PES Innovative Smart Grid Tech-
nologies Conference Europe (ISGT-Europe), Turin, Italy, 2017, pp. 1–6. [Link]
1109/ISGTEurope.2017.8260172
19. CO2 statistic. (n.d.). [Link]. Retrieved March 6, 2023, from [Link]
[Link]/en/en-energystatistics/co2-statistic
A Cost-Effective and Energy-Efficient Approach
to Workload Distribution in an Integrated Data
Center
Obaida Jahan1 , Nighat Zerin1 , Nahian Nigar Siddiqua1 , Ahmed Wasif Reza1(B) ,
and Mohammad Shamsul Arefin2(B)
1 Department of Computer Science and Engineering, East West University, Dhaka 1212,
Bangladesh
wasif@[Link]
2 Department of Computer Science and Engineering, Chittagong University of Engineering and
Technology, Chattogram 4349, Bangladesh
sarefin@[Link]
1 Introduction
Demand for online services has grown as the number of services, distributed from large to small providers, has increased exponentially, and every data center has developed a remarkable appetite for electric power. Meeting this demand while keeping the environment healthy is no easy job. Because of the excess waste produced by the growing number of data centers and the poor management of that waste, the world is slowly becoming more polluted. Hence, in the past few years, techniques have been developed to improve both the energy and cost efficiency of data centers. Renewable energy sources include solar cells, windmills, waterwheels, and more [1]. Increasing the use of such green energy will reduce the carbon footprint. Research groups have also considered the electricity market, Internet Data Centers (IDCs), and solutions to the problem of minimizing electricity cost [2]. They have proposed various frameworks, models, and solutions to reduce electricity costs. The proposed model reduces the power consumption cost and enhances the on/off schedule. However, what almost all of them still fail to include is the use of renewable resources and a demonstration of the resulting environmental advantages. Our research also focuses on one more issue: surplus renewable energy can be saved for later use. Based on all these investigations, a new workload distribution strategy is studied in which green energy and brown energy have different impacts on the environment as well as on cost.
Moreover, our proposed algorithm is not based only on the case when local green energy is less than local power consumption; it also covers the situation where, during non-peak time in the data center, the extra renewable energy is saved and then used at peak time before burning coal, gas, or oil (brown energy) [2]. This saves considerable cost and power consumption. We propose a framework to demonstrate the BETTER algorithm, considering each data center, its hourly services, and the whole process. In our algorithm, the data center utilizes green energy as much as possible. When the data center load is relatively small, as in non-peak time, it saves green energy for further utilization and is less dependent on brown energy [3]. After those two steps, when there is no adequate renewable energy left, the data center steps down to brown energy. Moreover, we have kept a variable for saved brown energy to show how much less brown energy is needed due to green energy maximization.
Green versus Brown: We propose the concepts of green workload and green service as opposed to brown workload and brown service. These concepts emphasize the boundary of green energy utilization and maximization and the cost reduction of the total workload. The motive is to design a near-green, low-cost workload strategy [1, 4].
2 Related Work
Areekul et al. [2] have discussed a better solution to the problem of energy price forecasting. They proposed an improved model called Hybrid ARIMA, a cross between ARIMA and ANN architectures. They brought an Artificial Intelligence (AI) aspect into a prediction mechanism powered by the Autoregressive Integrated Moving Average (ARIMA), producing a new ARIMA + ANN architecture for predicting short-term electricity prices. Here, they used a bare ARIMA model and composed it with deep learning mechanisms; the deep learning aspect played a significant role in deciding the prices.
In [2], the authors proposed a model that works on hybrid time series data composed of linear and non-linear relational variables to improve forecasting performance. They found that their Adaptive Wavelet Neural Network (AWNN) model performs better than the ARIMA model. According to them, AWNN is better at modeling non-stationary electricity prices, which would be valuable for data centers. They also argued that it is good at predicting electricity price hikes and claimed that their model is better than the literature they quoted.
The next paper discusses power distribution and the impact of renewable energy sources. Byrnes et al. [5] have shown the changing dynamics in the renewables economy and industry, covering the aspects that affect the related market and industry. Though the concept was conceived many decades ago, awareness and interest have grown lately due to the visibility of negative changes in the climate. They presented the market share of energy sources, including renewables: 10% of the market is held by renewables, 1% by oil products, 20% by gas, 22% by brown coal, 46% by black coal, and 1% by other sources. Australia is investing heavily in this sector through the ARENA, RET, and CEFC programs.
In [6], Soman et al. argued that wind energy price prediction can be made more accurate. They considered different time horizons and tried to cover all the edge cases. Their point is that if wind power consumption and wind speed can be forecasted, supply and demand management will be more efficient; power distribution will then be more efficient, and businesses will grow substantially. They applied conventional methods, estimating wind speed and power from statistical data and numerical weather prediction (NWP) information, and applied ANNs at different time scales to obtain the optimal value, providing evidence in support of their research.
In [7], Ghamkhari et al. took a unique approach to the energy and performance management of green DCs. They argued that profit maximization can be achieved through internal system optimization, emphasizing improvements and optimizations while trying to lessen the trade-offs that come with optimization and costing. They considered practices such as Service-Level Agreements (SLAs) between vendors and clients over a particular period, and studied different pricing models such as Day-Ahead Pricing (DAP), Time-Of-Use Pricing (TOUP), Critical-Peak Pricing (CPP), and Real-Time Pricing (RTP). Furthermore, they gave some ideas on implementing smart grids.
In [8], Khalil et al. discussed workload distribution. They proposed an algorithm named Geographical Load Balancing (GreenGLB), based on greedy algorithm design techniques, to distribute workload and manage individual components. They also discussed taking precautions against the impending energy crisis: big tech companies such as Google, Amazon, and Microsoft pay vast amounts of their operating costs solely in power bills.
Researchers tried to address many issues related to green computing [9–18] to make
our planet safe for living.
The dataset used in this study is one on Kaggle, titled "Hourly Energy Demand Generation and Weather" [19]. It was not a competition dataset; rather, the publishers released it as an open-source contribution.
3.2 Datasets
The dataset includes comprehensive statistics on electricity output, consumption, and pricing over four years in Spain. Red Eléctrica España and ENTSOE are the two public sources from which this information was gathered. Red Eléctrica España is the company running the electricity market in Spain, and ENTSOE is a website that gives access to information about the nation's electrical transmission and service activities. In addition to the electricity statistics, the dataset contains weather data for Spain's five largest cities. It is important to note that this weather data is supplementary and is not part of the primary electricity dataset. The information, made available to the public through the ENTSOE and REE websites, is designed to give a thorough overview of the Spanish electricity market over four years [19]. In the context of this dataset, ENTSOE is a gateway that offers access to data from Transmission Service Operators (TSOs) throughout Europe. Europe has 39 TSOs, and ENTSOE acts as a common clearinghouse for data on the service operations and electrical transmission offered by these companies. Table 1 shows some samples from the dataset.
The authors elaborated on how they collected the data. While compiling the dataset, they took energy consumption-related data from the European TSOs, mostly quoting ENTSOE and REE as their prime sources of data.
We made a number of reasonable assumptions while sampling the data. According to the dataset, Spain had 43 data centers in 2021, with a combined power capacity of over 300 MW. These data centers are distributed across the country, with the majority located in the Madrid and Barcelona regions. According to a report by the same organization (ASCADIC), data centers in Spain consume approximately 3% of the country's total electricity demand. However, it is essential to remember that this amount is only an estimate, and the actual energy consumption of data centers in Spain may differ. Hence, we assumed that there are 43 functioning data centers in Spain and that they consume 3% of the total electricity; a small sketch of this assumption follows. By taking Spain's data as a reference, we estimate the behavior of integrated data centers in any such country worldwide.
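A small sketch of this assumption, with an illustrative national demand figure (the 28,000 MW value is hypothetical, not taken from the dataset):

N_DATACENTERS = 43      # functioning data centers assumed for Spain
DC_SHARE = 0.03         # 3% of total electricity demand

def per_dc_load_mw(total_demand_mw):
    """Estimated load of one data center given national demand."""
    return total_demand_mw * DC_SHARE / N_DATACENTERS

print(round(per_dc_load_mw(28_000), 1))   # ~19.5 MW for a hypothetical 28,000 MW demand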
From the beginning, our approach respected the law, whether in data collection or processing. We processed the data very carefully and avoided any unlawful activity. The dataset we chose is not tightly licensed; the author was very generous about licensing, releasing it under a very permissive tier: CC0 (Public Domain). Despite this, we refrained from any activity that might harm the community or society.
3.6 Framework
In Fig. 1, green energy is saved when it is greater than the total incoming workload; otherwise brown energy is utilized [1, 20]. Sources of green energy are solar, wind, and water, whereas brown energy comes from coal, oil, and gas.
In Fig. 2, green energy is both used and saved when it exceeds the total incoming workload. Brown energy is utilized [1], and excess brown energy is saved when there is a shortage of green energy.
3.7 Algorithm
Better Algorithm
Inputs:
Total incoming workload = li
Total green incoming workload = lgi
Total brown incoming workload = lbi
Outputs:
The allocated workload to each data center.
Green workload, Ygi
Brown workload, Ybi
Total workload, Y

# Assume we are optimizing for each data center and the time slot is divided into hours.
# Assume we have N divisions in time slots.
# For each time slot we will calculate Y.
# Per day savings
SAVE_G = 0
SAVE_B = 0
SAVE = 0
# Total workload, Y (per day)
Y = 0
# For test
Li = 0
Li = Total incoming workload
# Per hour savings
Save = 0
Save_g = 0
Save_b = 0
# Per hour workload
Yi = 0
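The body of the BETTER algorithm is not reproduced in this excerpt, so the following Python sketch only illustrates the allocation policy described in Sect. 1: serve each hour's load with green energy first, bank any surplus, draw on the banked green energy next, and fall back to brown energy last. All names are illustrative.

def allocate_day(loads, green):
    """loads / green: per-hour workload and green supply, in the same units."""
    save_g = 0.0                  # banked surplus green energy (SAVE_G)
    y_green, y_brown = [], []
    for li, gi in zip(loads, green):
        if gi >= li:
            y_green.append(li)
            y_brown.append(0.0)
            save_g += gi - li     # bank the surplus for peak hours
        else:
            from_bank = min(save_g, li - gi)
            save_g -= from_bank
            y_green.append(gi + from_bank)
            y_brown.append(li - gi - from_bank)  # remainder comes from brown energy
    return y_green, y_brown, save_g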
Drops occur at just 2015-01-06 [Link] + 01:00 and 2015-01-28 [Link] + 01:00, which shows lower consumption of energy where the total workload was optimized.
As can be seen in Fig. 7, the total incoming load per month dropped multiple times, at just 2015-01-05 [Link] + 01:00, 2015-01-28 [Link] + 01:00, and many other times, due to a shortage of energy.
Figure 8 inspects the available total incoming load against the required total actual load of a DC. The turbulence means that in some cases we do not meet the requirement; after utilizing the energy, the actual load was minimized, whereas the incoming load increased given an abundance of energy.
Figure 9 describes the fluctuations in green energy prices over three consecutive years (2015–2018). After utilizing energy through the allocation of workload by the proposed algorithm, the total cost was reduced.
4 Result Analysis
Each megawatt of green energy = 51.19 €
Each megawatt of brown energy = 51.19 €
Cost of electricity ci = 51.19 €
Total workload per day = yi
Results have been taken from the 1st and 2nd days and estimated for 1 year.
Saved cost for green energy = 226.77 €
Cost for 7 days = 1587.40 €
Cost for 30 days = 47,622 €
Cost for 1 year = 619,086 €
4.1 Comparison
5 Conclusion
It is suggested to separate the challenges of brown energy cost reduction and green energy utilization maximization by dividing the task between the two types of energy, green and brown, separately. In this context, the terms green workload and service rate, as opposed to brown workload and service rate, have been introduced for data centers. As a result, it has been demonstrated that a new distribution algorithm outperforms the most effective workload allocation methods currently in use in terms of power. To lower the green data center's overall energy costs, we employ a variety of workload and power management techniques. In this paper, we investigated the challenge of reducing total energy expenses while accounting for varying electricity pricing, on-site renewable energy, and the number of active servers. To address the issue, we developed an algorithm to design a strategy for allocating the incoming workload. The studies' findings demonstrated that our suggested method can significantly reduce data centers' overall energy costs. We expect that our suggested effort will positively impact lowering power consumption costs, utilizing renewable resources, reducing the burning of coal, gas, or oil at peak times, utilizing green energy to the greatest extent possible, and becoming less reliant on brown energy.
References
1. Kiani, A., Ansari, N.: Toward low-cost workload distribution for integrated green data centers.
IEEE Commun. Lett. 19(1), 26–29 (2015)
2. Areekul, P., et al.: Notice of violation of IEEE publication principles: a hybrid Arima and
neural network model for short-term price forecasting in a deregulated market. IEEE Trans.
Power Syst. 25(1), 524–530 (2010)
16. Banik, P., Rahat, M.S.A., Rafe, M.A.H., Reza, A.W., Arefin, M.S.: Developing an energy
cost calculator for solar. In: Vasant, P., Weber, G.W., Marmolejo-Saucedo, J.A., Munapo,
E., Thomas, J.J. (eds.) Intelligent Computing & Optimization. ICO 2022. Lecture Notes in
Networks and Systems, vol. 569. Springer, Cham (2023). [Link]
19958-5_75
17. Ahmed, F., Basak, B., Chakraborty, S., Karmokar, T., Reza, A.W., Arefin, M.S.: Sustain-
able and profitable IT infrastructure of Bangladesh using green IT. In: Vasant, P., Weber,
G.W., Marmolejo-Saucedo, J.A., Munapo, E., Thomas, J.J. (eds.) Intelligent Computing &
Optimization. ICO 2022. Lecture Notes in Networks and Systems, vol. 569. Springer, Cham
(2023). [Link]
18. Ananna, S.S., Supty, N.S., Shorna, I.J., Reza, A.W., Arefin, M.S.: A policy framework for
improving e-waste management in Bangladesh. In: Vasant, P., Weber, G.W., Marmolejo-
Saucedo, J.A., Munapo, E., Thomas, J.J. (eds.) Intelligent Computing & Optimization. ICO
2022. Lecture Notes in Networks and Systems, vol. 569. Springer, Cham (2023). [Link]
org/10.1007/978-3-031-19958-5_95
19. Jhana, N.: Hourly Energy Demand Generation and Weather. [Link]
asets/nicholasjhana/energy-consumption-generation-prices-and-weather
20. EPA Report to Congress on Server and Data Center Energy Efficiency. [Link]
[Link]/ia/partners/prod_development/downloads/EPA_Report_Exec_Summary_Final.pdf
Optimization of the Collection
and Concentration of Allelopathically Active
Plants Root Secretions
1 Introduction
The relevance of this work stems from the fact that obtaining the root secretions of plants is important for the study of allelopathy and for the use of plant allelochemicals to create bioherbicides. This direction in biology is relevant from the point of view of the development and implementation of new eco-friendly techniques in agriculture [1–4].
The aim of the work is the development of a device that optimizes the collection of concentrated root exudates of allelopathically active plants.
To achieve this aim, it is necessary to perform a number of tasks:
when molecules of colored impurities are not removed but are chemically turned into colorless molecules); due to the presence of various types of sorption in activated carbon and the possibility of its regeneration, it is a very suitable substance for the extraction of root exudates.
3 Results
To achieve the aim of the research, a schematic diagram and a 3D model of a device for collecting root exudates were developed (Fig. 1). The device allows extraction, concentration, and study of the secretions of the root systems of various plants.
Fig. 1. The developed device for collecting root exudates: (a) schematic diagram of the device; (b) 3D model of the device; 1 – container for plants; 2 – adjustable fastening device; 3 – root discharge filter; 4 – fine particle filter; 5 – peristaltic pump; 6 – large particle filter; 7 – compartment with an absorbent; 8 – pump control unit; 9 – support; 10 – filtered water outlet; 11 – power supply
The device consists of the container for plants 1, fixed on the station support 9. The number and size of the containers can be changed depending on the types of plants from which root exudates must be obtained, as well as the required quantity. Accordingly, the number of containers can be increased, or a larger capacity can be used. The device uses 1 L containers for the convenience of experimental work.
The adjustable fastening device 2 allows the device to be configured as necessary for convenient operation.
To prevent various pollutants of large and small fractions from entering the root secretion solution, filters for small particles 4 and large particles 6 are provided in the device.
The constancy of the water current inside the device is ensured by a peristaltic pump 5, whose uniform operation provides a solution flow at the required preset speed. This ensures high-quality isolation of root exudates from the surface of the plants' root systems and also prevents possible injury to the plants used.
Depending on the type and number of plants, it may be necessary to provide and maintain the required microclimate [10, 11], or to change the flow rate of water through the device. For simplicity and convenience, the device has a pump control unit, where the required rotation speed of the peristaltic pump, and accordingly the required water flow rate, can be set digitally.
Root exudates are collected in the root extraction filter 3, which has a compartment with adsorbent 7 (Fig. 3). This is the key link of the device, since this part collects and concentrates the root secretions of plants for subsequent work.
After the water passes through all the above units of the device, it exits through a special opening 10 – the filtered water outlet. The device has a closed circuit of operation, so after use the water is re-supplied to the root system of the plant, and the process is closed in a cycle.
The closed principle of operation makes the device economical – it is only necessary to ensure that the water level in the device does not fall below the limit values – and the closed cycle allows the most complete collection of dissolved root secretions, thereby avoiding significant losses.
The device is powered by an electrical network, so its design provides for the presence
of a power supply 11.
If necessary, the device can be equipped with modern plant lighting sources [12, 13].
Based on the schematic diagram and 3D model, the device was assembled (Fig. 2) and experimental studies were carried out (Fig. 3).
4 Discussion
The operation of the device is confirmed by experimental data (Fig. 4) showing the effect of root exudates of allelopathically active plants, obtained using the device for collecting and concentrating root secretions, on the growth and development of test plant seedlings.
The data show that root exudates of lupine and sunflower, contained in an aqueous solution obtained with the device and applied as a pre-sowing treatment [14] of germinating cucumber seeds, had a stimulating effect by the 3rd day of the experiment. The experimental data confirm that the amount of the obtained substances contained in the root systems of allelopathically active plants is maximal.
The device works according to the following mechanism:
The donor plants for root secretions are planted in the container, which is filled with a neutral substrate – perlite or expanded clay. The substrate must be pre-washed and disinfected. After planting, the peristaltic pump is turned on, which makes the water inside the device circulate in a closed system. Macro- and microelements necessary for normal plant growth, which will not be sorbed, are added to the water. The water washes the root system of the plants and enters the coarse and fine filters, where various mechanical impurities are removed from it.
Fig. 2. Appearance of the device for collecting and concentrating root secretions of allelopathically active plants
Fig. 3. The compartment with the absorbent – the key link of the device
Then the water with the root secretions dissolved in it reaches the root secretions filter filled with sorbent, where their absorption takes place. The remaining water is supplied to the root system of plants for further irrigation. This closes the cycle.
Fig. 4. The effect of root exudates on the length of the root and shoot of cucumber seedlings: (a) lupine root exudates; (b) sunflower root exudates
To obtain a solution of root secretions, it is necessary to remove the sorbent cartridge from the root secretions filter and rinse it with the appropriate solvent (depending on the type of sorbent). Thus, we obtain a solution of plant root exudates with which further work can be carried out.
Currently, laboratory studies of the root secretions of crops such as oats, lupin, hogweed, and other allelopathically active crops are being conducted at the Russian State Agrarian University – Moscow Timiryazev Agricultural Academy. As an upgrade of the device, for its optimization and automation, it is planned to develop a remote application that outputs the results to a computer and smartphone.
5 Conclusion
A device for optimizing the process of obtaining root secretions of allelopathically active plants has been developed. Compared with analogues, the device has the following advantages: simplicity of design, reliability (any unit of the device can be quickly replaced with a similar one if necessary), high efficiency, automation, low construction cost, and environmental friendliness.
According to the research results, a positive effect of lupine and sunflower root exudates on the growth and development of 10-day cucumber seedlings was revealed. The length of the root system increased by 12.5% (lupin) and 16.6% (sunflower). A similar response was observed in the aboveground part of cucumber seedlings: 44.4% (lupin) and 56.1% (sunflower).
The use of this device opens up the prospect of new studies of plant root exudates and can also help in the development of new environmentally friendly herbicides – bioherbicides – and growth stimulants based on plant root secretions.
Acknowledgements. This research was conducted with the support of the Ministry of Science and
Higher Education of the Russian Federation in accordance with agreement № 075-15-2022-317, 20
April 2022, and a grant in the form of subsidies from the federal budget of the Russian Federation.
The grant was provided for state support for the creation and development of a world-class scientific
center: “Agrotechnologies for the Future”.
References
1. Opender, K., Suresh, W.: Comparing impacts of plant extracts and pure allelochemicals and
implications for pest control. Perspect. Agric. Vet. Sci. Nutr. Nat. Resour. 4(049), 1–30 (2009)
2. Reiss, A., Fomsgaard, I.S., Mathiassen, S.K., Stuart, R.M., Kudsk, P.: Weed suppres-
sion by winter cereals: relative contribution of competition for resources and allelopathy.
Chemoecology 28(4–5), 109–121 (2018). [Link]
3. Zhou, L., Fu, Z.S., Chen, G. F., et al.: Research advance in allelopathy effect and mechanism
of terrestrial plants in inhibition of Microcystis aeruginosa. Chin. J. Appl. Ecol. 29(5), 1715–
1724 (2018). [Link]
4. Gerasimov, A.O., Polyak, Yu. M.: Assessment of the effect of salinization on the allelopathic
activity of micromycetes in sod-podzolic soil. Agro-chemistry 3, 51–59 (2021). [Link]
org/10.31857/S0002188121030078
5. Bertin, C., Yang, X., Weston, L.A.: The role of root exudates and allelochemicals in the
rhizosphere. Plant Soil 256, 67–83 (2003)
6. Vorontsova, E.S.: Description of methods of influence associated with allelopathy and allo-
chemical substances in agriculture. Sci. Electron. J. Meridian 6(40), 261–263 (2020)
7. Kondratiev, M.N., Larikova, Yu.S., Demina, O.S., Skorokhodova, A.N.: The role of seed and
root exudates in interactions between plants of different species in cenosis. Proc. Timiryazev
Agric. Acad. 2, 40–53 (2020). [Link]
8. Cunningham, W., Berkluff, F.A., Felch, C.L., et al.: Patent No. 2723120 C1 Russian Fed-
eration, IPC B01D 61/16, C02F 3/12, C02F 9/14. Systems and methods for cleaning waste
streams that make direct contact between activated carbon and the membrane possible: No.
2019104869: application 20.07.2017: publ. 08.06.2020. Applicant Siemens Energy, Inc.
9. Thomson, A.E., Sokolova, T.V., Navosha, Y., et al.: Composite enterosorbent based on peat
activated carbon. Nat. Manag. 2, 128–133 (2018)
10. Ignatkin, I., et al.: Developing and testing the air cooling system of a combined climate control
unit used in pig farming. Agriculture 13, 334 (2023). [Link]
0334
11. Ignatkin, I.Y., Arkhiptsev, A.V., Stiazhkin, V.I., Mashoshina, E.V.: A method to minimize the
intake of exhaust air in a climate control system in livestock premises. In: IOP Conference
Series: Earth and Environmental Science, Michurinsk, 12 Apr 2021, Michurinsk, p. 012132
(2021). [Link]
12. Erokhin, M.N., Skorokhodov, D.M., Skorokhodova, A.N., et al.: Analysis of modern devices
for growing plants in urban farming and prospects for its development. Agroengineering
3(103), 24–31 (2021). [Link]
13. Skorokhodova, A.N., Anisimov, A.A., Skorokhodov, D.M., et al.: Photosynthetic activity of
wheat - wheatgrass hybrids and winter wheat under salinization. In: IOP Conference Series:
Earth and Environmental Science, Ussurijsk, 20–21 June 2021, Ussurijsk, p. 022134 (2021).
[Link]
14. Vasilyev, A.N., Vasilyev, A.A., Dzhanibekov, A.K., Samarin, G.N.: On the development of
model for grain seed reaction on pre sowing treatment. In: Vasant, P., Zelinka, I., Weber,
G.-W. (eds.) ICO 2019. AISC, vol. 1072, pp. 85–92. Springer, Cham (2020). [Link]
10.1007/978-3-030-33585-4_8
Potential Application of Phosphorus-Containing
Micronutrient Complexates in Hydroponic
System Nutrient Solutions
Abstract. In this study, for the first time, the composition of a nutrient solution for the hydroponic growing of plants with chelated forms of 4 essential trace elements (Fe2+, Zn, Cu, Mn) with the organophosphorus ligand hydroxyethylidenediphosphonic acid (HEDP) was tested on 'Ivolga' variety summer wheat and 'Azart' variety lettuce. The results were compared with plants grown using pure water and a solution with chelates of the same trace elements with a carboxyl-containing ligand, EDTA. The replacement of micronutrients with chelated forms with bisphosphonate led to a number of effects. On the one hand, the experimental seedlings were characterized by lower growth and biomass compared with the EDTA-chelate nutrient solution (36–55%). On the other hand, an increase in root system mass (18–26%) was observed, and the resistance of plants cultivated in the solution with metal bisphosphonates to stress conditions (lack of nutrients) was increased. The absence of bacterial films on surfaces in contact with the solution was also observed. Thus, phosphonate-containing chelate components have great potential for application in hydroponic growing systems and require further extensive and systematic study in various aspects.
1 Introduction
Due to the global trend of cropland reduction and the complexity of modern natu-
ral resource management, there is an urgent need for the development and further
improvement of sustainable cultivation systems, which include hydroponic methods
[1].
Although research in the field of hydroponics has been conducted for more than
70 years, recently, due to new scientific knowledge, new methods and approaches aimed
at obtaining maximum yield and product quality indicators have been introduced into
practice [2]. Optimization of the crop nutrition regime under hydroponic conditions remains one of the key factors of intensification. At the same time, it becomes relevant not only to determine the optimal amounts of the necessary nutrients for specific crops, but also to search for formulas that can additionally activate the internal reserves of plants, which should ultimately affect the target indicators of cultivation [3, 4]. In this context, the use of biostimulating components in nutrient solutions is undoubtedly a progressive step. However, in hydroponic systems, in comparison with traditional methods of agriculture, the use of biostimulants has been little studied for a number of reasons (low buffering of the aquatic environment compared to the soil, high concentrations of nutrients in the compositions, etc.) [5].
In the course of our research, the use of components containing biologically active
phosphonic groupings in nutrient solutions is considered for the first time. It is known
from previous studies that phosphonates are not metabolized by plants into phosphates
and thus cannot enhance growth through the mechanism of nutrition [6–8].
But their effect is unusual: it improves the architecture of the root system, increases
the level of cis-zeatin (cytokinin), the activity of nitrate reductase and improves the dis-
tribution of nutrients and water, resistance to abiotic stress [8, 9]. For example, successful
results (100%) were obtained on the rhizogenesis of hard-to-root stone rootstocks on
the example of VC-13 when all inorganic salts of trace elements were replaced in the
Murashige and Skoog medium with the corresponding chelated forms with bisphos-
phonate as a ligand [10]. In this study, 4 essential trace elements (Fe2+, Zn, Cu, Mn) were introduced in the form of chelated soluble compounds with an organophosphorus ligand – hydroxyethylidenediphosphonic acid (HEDP). In practice, chelated forms of nutrients rely solely on the known carboxyl-containing ligands (EDTA, DTPA, EDDHA, etc.) and are used solely for the purpose of increasing the availability of nutrients and the solubility threshold [3, 5, 11–13]. However, organophosphonate complexes are able not only to provide an accessible soluble form of metal ions, but also to exert a pronounced physiological, regulatory effect on plant development. Another important advantage of HEDP is its ability to form a stable complex with the ferrous cation, which is directly available for metabolic reactions and incorporation into molecular structures [14, 15]. In our study, we aimed to investigate the effect of phosphorus-containing chelates of trace elements on the growth and development of wheat and lettuce under hydroponic conditions [8, 10, 16, 17].
Fig. 1. Plant growing solution variants (control (H2 O), S2 and S3 solutions) placed in the
phytochamber
Statistical processing of the obtained data was carried out using methods of variation statistics: one-factor analysis of variance and calculation of the arithmetic mean, standard deviation, coefficient of variation, and Fisher's criterion, using Microsoft Office Excel 2007 software; a sketch of equivalent processing follows.
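A minimal sketch of equivalent processing, using SciPy in place of Excel; the sample values are illustrative, not the experiment's raw data:

import statistics
from scipy import stats

control = [17.1, 18.6, 16.5]   # e.g. plant height, cm (illustrative values)
s2 = [24.0, 26.9, 24.1]
s3 = [33.2, 36.8, 33.8]

for name, group in (("control", control), ("S2", s2), ("S3", s3)):
    mean = statistics.mean(group)
    sd = statistics.stdev(group)
    print(name, round(mean, 1), round(sd, 2), f"CV = {100 * sd / mean:.1f}%")

f_value, p_value = stats.f_oneway(control, s2, s3)  # one-factor ANOVA
print(f"F = {f_value:.1f}, p = {p_value:.4f}")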
Table 1. Plant growth indicators for 'Ivolga' variety wheat on 14th day of cultivation (July, 2021)

Nutrient solution | Plant height, cm | Fresh leaf mass, g | Dry leaf mass, % | Root fresh mass, g | Leaf surface area, cm2
Control (H2O)     | 17.4 ± 1.3       | 0.12 ± 0.01        | 11.9 ± 0.9       | 0.08 ± 0.02        | 3.9 ± 0.5
S2                | 25.0 ± 3.1       | 0.27 ± 0.05        | 8.9 ± 0.9        | 0.21 ± 0.05        | 9.2 ± 1.6
S3                | 34.6 ± 4.2       | 0.42 ± 0.08        | 8.4 ± 0.8        | 0.17 ± 0.05        | 13.8 ± 2.8
Least significant difference (p < 0.05) | 3.6 | 0.07 | 0.99 | 0.04 | 2.2
Fig. 2. Photos of wheat samples of ‘Ivolga’ variety at the final stage of the experiment (on 14th
day of cultivation) with a decrease in the nutrient content in the hydroponic solution: (a) S1
control solution (H2 O); (b) S2 solution - trace elements Fe2+ , Zn, Mn, Cu in chelate form with
a phosphorus-containing ligand (HEDP); (c) S3 solution - trace elements Fe3+ , Zn, Mn, Cu in a
chelate form with a carboxyl-containing ligand (EDTA).
Similar trends in the accumulation of leaf and root fresh mass, as well as the increase in leaf surface area, were also observed in 'Azart' variety lettuce on the 14th day (Table 2, Fig. 3). The accumulation of dry mass when using solution S2 compared with solution S3 was somewhat higher in lettuce plants (18.6%) than in wheat plants (5.6%), which is associated with the different growth rates and species features of these crops.
Based on the data obtained, we can also assess the phytotoxicity of the various ligands, specifically of bisphosphonates, for plants grown hydroponically. Moreover, the replacement of trace elements with chelated forms with a phosphorus-containing ligand led to a positive technological side effect – the absence of growth of bacterial films and mucus on surfaces in contact with the solution, which is an undoubted advantage in operational terms. Together with the increased resistance of cultivated crops to water and nutrient stress, the absence or very slow growth of bacterial films significantly expands the possibilities for optimizing and intensifying hydroponic crop production.
Table 2. Plant growth indicators for 'Azart' variety lettuce on 14th day of cultivation (August, 2021)

Nutrient solution | Fresh leaf mass, g | Dry leaf mass, % | Root fresh mass, g | Leaf surface area, cm2
Control (H2O)     | 0.015 ± 0.003      | 7.5 ± 1.5        | 0.007 ± 0.003      | 0.46 ± 0.09
S2                | 0.219 ± 0.068      | 5.4 ± 0.5        | 0.051 ± 0.024      | 6.98 ± 2.48
S3                | 0.481 ± 0.174      | 4.3 ± 0.7        | 0.037 ± 0.018      | 15.23 ± 6.35
Least significant difference (p < 0.05) | 0.13 | 1.21 | 0.02 | 4.59
Fig. 3. Photos of lettuce plants of ‘Azart’ variety at the final stage of the experiment (on 14th day of
cultivation) with a decrease in the nutrient content in the hydroponic solution: (a) Control solution
(H2 O); (b) S2 solution - trace elements Fe2+ , Zn, Mn, Cu in chelate form with a phosphorus-
containing ligand (HEDP); (c) S3 solution - trace elements Fe3+ , Zn, Mn, Cu in a chelate form
with a carboxyl-containing ligand (EDTA).
4 Conclusion
Improving the composition of nutrient solutions and their preparation for the cultivation of various crops is a most urgent task for the technological development of protected-ground crop cultivation. Depending on the ligand used, the availability of elements changes, as do the biometric characteristics of plants. It has been shown that the valence of trace elements and the use of different forms of phosphorus-containing and carboxyl-containing ligands play an important role in the germination, growth, and development of plants at different phases of vegetation.
Experimental tests have thus shown that phosphonate-containing chelated components have great potential for use in hydroponic growing systems and require further extensive and systematic study in various aspects: from the selection of individual components and their concentrations for the requirements of various crops to water regime optimization and the improvement of engineering and technological equipment parameters.
Chemical compounds with phosphonate-containing groups, such as HEDP in this case, have a pronounced regulatory effect, which can be effectively used in the practice of hydroponic cultivation of agricultural crops. For example, the use of phosphonate-containing microelement chelates for growing leafy greens, where biomass accumulation is important, is not a suitable decision because of growth inhibition at the first growing stage during the treatment. But the situation changes with regard to fruit and vegetable crops, such as tomatoes, strawberries, etc. In this case, it is possible to increase productivity, shorten ripening, regulate flowering and ripening cycles, and achieve a more saturated color of fruits. Therefore, further research will focus on growing various fruit-bearing crops, selecting the concentrations of phosphonate-containing chelates, and determining the phase of their use in nutrient solutions to achieve the maximum effect.
References
1. Orsini, F., Kahane, R., Nono-Womdim, R., Gianquinto, G.: Urban agriculture in the
developing world: a review. Agron. Sustain. Dev. 33(4), 695–720 (2013)
2. Kozai, T., Niu, G., Takagaki, M.: Plant Factory: An Indoor Vertical Farming System for
Efficient Quality Food Production. Academic Press, San Diego (2015)
3. Jones, J.B.: Hydroponics: A Practical Guide for the Soilless Grower. CRC Press, Boca Raton
(2005)
4. Yakhin, O.I., Lubyanov, A.A., Yakhin, I.A., Brown, P.H.: Biostimulants in plant science: a
global perspective. Front. Plant Sci. 7, 2049 (2017)
5. Raviv, M., Lieth, J.H., Bar-Tal, A.: Soilless Culture, Theory and Practice, 2nd edn. Academic
Press (2019)
6. Rossall, S., Qing, C., Paneri, M., Bennett, M., Swarup, R.: A “growing” role for phosphites
in promoting plant growth and development. Acta Hort. 1148, 61–68 (2016)
7. Verreet, J.A.: Biostimulantien – schlummerndes Potenzial? TopAgrar 8, 56–60 (2019)
8. Bityutskii, N.P.: Effects of carboxylic and phosphonic fe-chelates on root and foliar plant
nutrition. Russ. J. Plant Physiol. 42, 444–453 (1995)
9. Thao, H.T.B., Yamakawa, T.: Phosphite (phosphorous acid): fungicide, fertilizer or bio-
stimulator? Soil Sci. Plant Nutr. 55(2), 228–234 (2009)
10. Tsirulnikova, N.V., Nukulina, E.A., Akimova, S.V., Kirkach, V.V., Glinushkin, A.P., Pod-
kovyrov [Link].: In vitro effect of replacement mineral salts of trace elements with P-containing
chelates to improve rooting of cherry rootstock (cv. VC-13). In: All-Russian Conference with
International Participation Economic and Phytosanitary Rationale for the Introduction of Feed
Plants 2020, IOP Conf. Series: Earth and Environmental Science, vol. 663, p. 012042. IOP
Publishing (2021)
11. Marschner, H.: Mineral Nutrition of Higher Plants. Academic Press, London (1986)
12. Savvas, D.: Nutritional management of vegetables and ornamental plants in hydroponics.
In: Dris, R., Niskanen, R., Jain, S.M. (eds.) Crop Management and Postharvest Handling of
Horticultural Products, Quality Management, vol. 1, pp. 37–87. Science Publishers, Enfield
(2001)
13. Sonneveld, C.: Composition of nutrient solution. In: Savvas, D., Passam, H. (eds.) Hydroponic
Production of Vegetables and Ornamentals, p. 179. Embryo Publisher, Athens (2002)
14. Pierson, E.E., Clark, R.B.: Ferrous iron determination in plant tissue. J. Plant Nutr. 197,
107–116 (1984)
15. Walker, E.L., Connolly, E.L.: Time to pump iron: iron-deficiency-signaling mechanisms of
higher plants. Curr. Opin. Plant Biol. 11(5), 530–535 (2008)
16. Diatlova, N.M., Lavrova, [Link]., Temkina, V.Y., Kireeva, [Link]., Seliverstova, I.A., Rudakova,
[Link]., Tsirulnikova, N.V., Dobrikova E.O.: The use of chelating agents in agriculture.
Overview of Ser. “Reagents and Highly Purified Substances”. NIITEKHIM, Moscow (1984)
17. Nikulina, E., Akimova, S., Tsirulnikova, N., Kirkach, V.: Screening of different Fe(III) and
Fe(II) complexes to enhance shoot multiplication of gooseberry. In: ECOBALTICA “FEB”
2020, IOP Conference Series: Earth and Environmental Science, vol. 578, p. 012015. IOP
Publishing (2020)
18. Grishin, A.P., Grishin, A.A., Semenova, N.A., Grishin, V.A., Knyazeva, I.V., Dorokhov, A.S.:
The effect of dissolved oxygen on microgreen productivity. In: II International Symposium
“Innovations in Life Sciences” (ILS 2020), vol. 30, p. 05002. BIO Web Conf. (2021)
Impact Analysis of Rooftop Solar Photovoltaic Systems in Academic Buildings in Bangladesh
1 Introduction
Energy is the key to the economy; there is a correlation between the growth rate of electricity consumption and the growth rate of GDP. Today, the use of technology is gradually increasing, and smart technology is taking over the world. As energy demand grows, there is a risk of running out of energy, since we are dependent on limited sources. For this reason, renewable energy has become very important in our time, as the cost of energy has increased rapidly in recent years. Our earth receives a large amount of sunlight (an irradiance of nearly 1366 W/m2 at the top of the atmosphere). This energy source is limitless to us, and it is free energy.
For this, we use a solar panel system to transform sunlight into energy. The solar cell, which is the fundamental part of a solar power system, produces energy by absorbing the photons that the sun emits. The primary advantage of solar energy over conventional power sources is that sunlight can be directly transformed into electrical energy using even the smallest photovoltaic (PV) solar cells [1].
Solar panels are considered a renewable energy source because excess energy from solar panel systems can easily be stored in a battery when not used. The solar cell is the most significant component of a solar power system; it captures energy from the sun's photons and turns it into electricity [2]. As a renewable, CO2-free energy source, it has minimal environmental impact. On a daily basis, we receive a practically infinite amount of energy from the Sun, and using solar panel systems we can generate additional energy from it with relative ease. Any location with direct sunlight can be used to generate energy. In contrast to the production of power from renewable resources, some systems still utilize fossil fuels [3].
However, as we use fossil fuels every day, the supply is decreasing. Using the wind or the Sun to create power is a smart concept for reducing our reliance on fossil fuels, because the Sun will always shine on the Earth's surface. After converting sunlight into energy, an endless amount of sunlight remains for the future; this is what makes solar energy a renewable source. Furthermore, solar energy is green since, unlike other systems, it does not emit greenhouse gases. As the need for electricity increases, people are investing more money in solar panel systems to make the panels work better and save more energy. With certain methods, it is possible to make PV systems perform better: increasing the efficiency of solar cells, using maximum power point tracking (MPPT) control algorithms, and implementing solar tracking systems are the three main ways to extract the most energy from the Sun; a minimal MPPT sketch is given after this paragraph. Solar panel systems can also be made more powerful by using concentrated photovoltaic (CPV) cells and more efficient solar panel variants, which saves more energy than before [4].
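As a minimal illustration of an MPPT control algorithm, the sketch below implements one perturb-and-observe (P&O) iteration, a common MPPT variant; read_panel stands in for real voltage/current measurement hardware and is a placeholder:

def mppt_step(state, read_panel, dv=0.1):
    """One perturb-and-observe iteration.

    state = (v_ref, prev_power, direction); thread it through repeated calls.
    read_panel(v_ref) must return the measured (voltage, current) pair.
    """
    v_ref, prev_power, direction = state
    v, i = read_panel(v_ref)
    power = v * i
    if power < prev_power:
        direction = -direction          # power dropped: perturb the other way
    return (v_ref + direction * dv, power, direction)

# e.g. state = (30.0, 0.0, +1); then state = mppt_step(state, read_panel) each control cycle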
As the price of fossil fuels fluctuates, renewable energy is quickly gaining promi-
nence as an energy source. The most abundant source of energy is solar energy. Further-
more, in this paper, we have discussed cost-effectiveness analysis and economics, where
simulations of the results are presented and examined from a financial point of view.
2 Literature Review
In this section, we focus on existing and related work on solar photovoltaic-based power
generation. To enhance our understanding of the application and sustainability of solar
photovoltaic energy in the pursuit of a greener environment, we have investigated a
variety of supplementary materials. In the majority of previous publications, the authors
have investigated and proposed several models based on various parameters.
Islam et al. [5] address the potential of using solar energy to resolve the current power demand imbalance in the world. They also state that Bangladesh's geographical location is considered ideal for the utilization of solar energy. Three types of solar power technology are discussed: concentrated solar power (CSP) technology, grid-connected solar PV systems, and solar home systems.
Ahmed et al. [6] have presented their research on rooftop solar power generation in Bangladesh and its potential. The authors argue that Bangladesh could become one of the leading proponents of rooftop solar installation and assert that the rooftop solar potential in urban Bangladesh is very high. They suggest implementing solar energy plants on the roofs of industrial and commercial buildings, residential buildings, etc. They report that 47.2%, 30.5%, and 4.85% of Bangladesh's total energy consumption is consumed by the industrial, residential, and commercial sectors, respectively, which could be reduced by installing solar energy on the roofs of the respective structures.
Podder et al. [7] have modeled a rooftop photovoltaic system (RPS) for an academic
institution in Bangladesh. The authors asserted that their proposed strategy may create a
moderate amount of electricity and reduce the exorbitant price of electricity. The authors
built four configurations of 46, 64, 91, and 238 kW solar (PV) systems and investigated
their economic, ecological, and sensitivity benefit analysis to determine that the 91 kW
setup is the most optimal. Furthermore, they showed that 91 kW delivers 97% of the
total PV-generated usage, while the grid delivers only 3%. In fact, the 91 kW RPS can
export 26% of the surplus electricity to the grid.
Hasapis et al. [8] have studied the installation of large-scale solar electricity production (PV) systems on university campuses in an effort to achieve energy independence. They created a model that first calculates power usage with a real-time metering system and then collects solar irradiation data to evaluate the solar potential. If the prospect is good, a PV system design is proposed based on the identification of suitable regions. The researchers then select the right technology for the electrical design, such as the photovoltaic modules, inverter, mounting structure, and layout. They demonstrated that a 2 MWp on-grid solar power plant can supply 1899 MWh of yearly power, which meets 47% of campus consumption, saves 1234 tons of CO2, and has a projected payback period of 4.2 years with an LCOE of only 11 cents per kilowatt hour.
Paudel et al. [9] have examined the techno-economic and ecological impact of a
1 MW rooftop solar photovoltaic installation on a college campus using PVsyst. The
authors anticipate that the plant will generate approximately 1660 MWh of sustainable
AC power per year, of which 95% can be sent to the grid. The repayment period for this
solar photovoltaic project was predicted to be 8.4 years.
Baitule et al. [10] demonstrated the viability of an academic campus powered entirely by solar photovoltaics. The study illustrates that a 5 MW solar photovoltaic system can generate 8000 MWh of power annually while reducing the carbon footprint by 173,318 tons annually.
Chakraborty et al. [11] presented their solar photovoltaic technical mapping strategy
for the green campus initiative. The authors analyzed the performance of nine commer-
cially available solar panels and concluded that “a-Si” is the best in terms of power losses,
land requirement, PR, CUF, YF, and cost. Furthermore, they demonstrated that the total
energy produced by solar photovoltaic technology is approximately 8 MWh/day, which
supplies 40% of the campus’s net energy demand/day. They also claimed that the use of
solar photovoltaic energy could reduce electricity costs by approximately 27.4% of the
current annual energy bill.
Barua et al. [12] investigate the feasibility of a rooftop solar photovoltaic system. Given the project location, the model is carried out in the PVsyst software with NASA surface meteorological data obtained through the geographic coordinates of the project site. The simulation results, produced with PVsyst, show that using a solar photovoltaic system to generate power could save 42 tonnes of CO2. The work in [13–22] provides good contributions and guidelines.
To construct this proposed methodology, the above-mentioned research publica-
tions were surveyed. In this paper, we have done a feasibility study to implement solar
photovoltaic panels on East West University’s (EWU’s) rooftop.
A peak sun hour is an hour in which the solar irradiance (sunlight) averages 1000 W of energy per square meter (roughly 10.76 square feet) [23]. Early mornings and late afternoons typically have less than 500 W/m2 of sunlight, whereas under optimum conditions at midday on a bright, sunny day it is possible to receive more than 1000–1100 W/m2. Although solar panels receive an average of 7 h of daylight each day, the average number of peak solar hours is often between 5 and 6, varying from region to region. Solar radiation peaks at solar noon, when the sun is at its highest point in the sky; the sketch below turns peak sun hours into a daily energy estimate.
Rooftop index | Position | Area (m2) | Usable percent (%) | Usable area (m2)
1 | South-West | 377 | 70 | 264
2 | West | 520 | 80 | 416
3 | West | 351 | 70 | 246
4 | North-West | 550 | 70 | 385
5 | North | 882 | 30 | 265
6 | North-East | 570 | 90 | 513
7 | East | 288 | 0 | 0
8 | South-East | 240 | 90 | 216
9 | South | 580 | 30 | 174
Total usable area: 2479 m2
Our study site, East West University, is situated in Dhaka, Bangladesh. The peak sun hours for this region are given in Fig. 4. The annual average of peak sunlight hours is 7 h. The solar path of the university area, imported from the Global Solar Atlas website, is presented in Fig. 5. The average produced energy can be calculated using Eq. (1). The daily peak hours of sunlight are presented in Table 2.
The installation of solar panels in Bangladesh is a significant step toward mitigating catastrophic issues such as climate change. Given the country's current power situation, producing one's own electricity at home is also preferable. Solar panels are relatively inexpensive to install and require little maintenance. Rooftop or exterior panels are installed as demand requires, with standard batteries added for use after the sun goes down. A charge controller ensures that the battery is not overcharged and damaged: as soon as the battery is fully charged, the controller stops sending power from the PV array to the storage device.
We are using a photovoltaic solar panel in our model.
Producing 1 W of capacity from the solar panel costs 75 BDT, so a 244 kW installation will cost 18,300,000 BDT (Table 3).
In our region, the maximum number of peak sunlight hours is 5 h and the average is 7 h (Table 2). During peak hours, the panels deliver about 80% of their rated output. Our proposed array has a capacity of 244 kW. From Eq. (1), we get an average produced energy of 139.43 kWh, which we round to 140 kWh. Therefore, our proposed solar panel system will produce 140 kWh.
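Equation (1) is not reproduced above, but the figures quoted are consistent with scaling the array capacity by the derating factor and the ratio of peak to average sunlight hours. The following Python sketch is a minimal reconstruction under that assumption; the function name and the exact form of the formula are ours, not the paper's.

# Minimal reconstruction of the average-energy estimate (assumed form of Eq. (1)):
# average hourly output = capacity x peak hours x derating factor / average hours.

def average_hourly_energy(capacity_kw, peak_hours, derate, average_hours):
    """Average energy produced per daylight hour (kWh); assumed form of Eq. (1)."""
    return capacity_kw * peak_hours * derate / average_hours

# Values stated in the text: 244 kW array, 5 peak hours, 80% output, 7 average hours.
avg_kwh = average_hourly_energy(244, 5, 0.80, 7)
print(f"Average produced energy: {avg_kwh:.2f} kWh")  # ~139.43 kWh, rounded to 140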
Table 4 shows the load demand calculation of our institution.
As our model works only in daylight, we do not store any energy in a battery, thereby cutting out the cost of the battery. Therefore, we compare our solar energy output with the off-peak energy consumption. In off-peak hours, the energy consumption is 266.67 units, or approximately 267 units.
Table 5 shows the cost of the load demand of our institution. From Tables 2 and 4, we see that the monthly off-peak energy consumption is 192,000 units, costing 1,461,120 BDT per month, i.e., 7.61 BDT per unit.
Table 4. Load demand of the institution.
Load demand variation | Monthly (unit) | Daily (unit) | Per hour (unit)
Total | 264,000 | 8,800 | 366.67
Off-peak | 192,000 | 6,400 | 266.67
Peak | 72,000 | 2,400 | 100
The university consumes 267 units per hour off-peak while our model produces 140 units, which is 52.43% of the consumed energy. Consequently, 766,065 BDT will be saved from the university's electricity bill per month, and the set-up cost is recovered in approximately 2 years. The lifetime of the PV panels used is 25 years; therefore, our proposed model will save 17,619,495 BDT over its lifetime (Fig. 6).
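The saving and payback figures above follow from simple proportions. The sketch below reproduces them from the stated inputs (off-peak consumption of 192,000 units per month at 7.61 BDT per unit, with solar covering 140 of 267 off-peak units per hour); the variable names are ours, and the proportional method is our reading of the text.

# Reproducing the monthly saving and payback estimate from the stated figures.
SETUP_COST_BDT = 18_300_000      # 244 kW at 75 BDT per watt
OFFPEAK_UNITS_MONTH = 192_000    # off-peak consumption (kWh/month), Table 4
RATE_BDT_PER_UNIT = 7.61         # off-peak tariff (BDT/kWh)

solar_share = 140 / 267          # fraction of off-peak load met by solar
monthly_saving = OFFPEAK_UNITS_MONTH * solar_share * RATE_BDT_PER_UNIT
payback_years = SETUP_COST_BDT / (monthly_saving * 12)

print(f"Solar share of off-peak load: {solar_share:.2%}")   # ~52.43%
print(f"Monthly saving: {monthly_saving:,.0f} BDT")         # ~766,130 (paper: 766,065)
print(f"Payback period: {payback_years:.1f} years")         # ~2.0 years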
The overall rooftop area is 4,358 m2, of which we are able to utilize 2,479 m2, or 56.89% of the total area, due to other constraints. If the entire roof could be utilized, more energy could be generated to cover the daytime power demand, which would make the university more sustainable and efficient.
5 Conclusion
The article presents a feasibility study of implementing rooftop solar photovoltaics in an academic building in Dhaka, Bangladesh, to become more self-sufficient, reduce the large cost of electricity, and create a greener environment. Bangladesh is facing a fuel scarcity for electricity generation, and given the world's limited fuel supply, the entire planet may soon face the same problem. It is high time the world focused on producing renewable energy to overcome this issue. We therefore chose solar photovoltaic cells to generate solar energy as a secondary energy source. The advantages of our proposed system are (1) less dependence on grid supplies, (2) lower carbon emissions, and (3) the building of a green energy infrastructure.
Acknowledgment. The authors are thankful to Mr. Nur Mohammad, Executive Engineer and
System Planner of the Bangladesh Power Grid Company for providing vital statistics.
References
1. Shaikh, M.: A review paper on electricity generation from solar energy. Int. J. Res. Appl. Sci.
Eng. Technol. V(IX), 1884–1889 (2017). [Link]
2. Ronay, K., Dumitru, C.: Technical and economical analysis of a solar power system supplying
a residential consumer. Procedia Technol. 22, 829–835 (2016). [Link]
tcy.2016.01.056
3. Choifin, M., Rodli, A., Sari, A., Wahjoedi, T., Aziz, A.: A Study of Renewable Energy and
Solar Panel Literature Through Bibliometric Positioning During Three Decades (2021)
4. Keskar Vinaya, N.: Electricity generation using solar power. Int. J. Eng. Res. Technol. (IJERT)
02(02) (February 2013). [Link]
5. Islam, M., Shahir, S., Uddin, T., Saifullah, A.: Current energy scenario and future prospect of
renewable energy in Bangladesh. Renew. Sustain. Energy Rev. 39, 1074–1088 (2014). https://
[Link]/10.1016/[Link].2014.07.149
6. Ahmed, S., Ahshan, K., Nur Alam Mondal, M.: Rooftop solar: a sustainable energy option for
Bangladesh. IOSR J. Mech. Civ. Eng. (IOSR-JMCE) 17(3), 58–71. [Link]
1684-1703025871
7. Podder, A., Das, A., Hossain, E., et al.: Integrated modeling and feasibility analysis of a
rooftop photovoltaic systems for an academic building in Bangladesh. Int. J. Low-Carbon
Technol. 16(4), 1317–1327 (2021). [Link]
8. Hasapis, D., Savvakis, N., Tsoutsos, T., Kalaitzakis, K., Psychis, S., Nikolaidis, N.: Design
of large scale prosuming in universities: the solar energy vision of the TUC campus. Energy
Build. 141, 39–55 (2017). [Link]
9. Paudel, B., Regmi, N., Phuyal, P., et al.: Techno-economic and environmental assessment of
utilizing campus building rooftops for solar PV power generation. Int J Green Energy 18(14),
1469–1481 (2021). [Link]
10. Baitule, A., Sudhakar, K.: Solar powered green campus: a simulation study. Int. J. Low-Carbon
Technol. 12(4), 400–410 (2017). [Link]
11. Chakraborty, S., Sadhu, P., Pal, N.: Technical mapping of solar PV for ISM -an approach
toward green campus. Energy Sci. Eng. 3(3), 196–206 (2015). [Link]
3.65
12. Barua, S., Prasath, R., Boruah, D.: Rooftop solar photovoltaic system design and assessment
for the academic campus using PVsyst software. Int. J. Electron. Electr. Eng. 2017, 76–83
(2017). [Link]
13. Yeasmin, S., Afrin, N., Saif, K., Reza, A.W., Arefin, M.S.: Towards building a sustainable
system of data center cooling and power management utilizing renewable energy. In: Vas-
ant, P., Weber, G.W., Marmolejo-Saucedo, J.A., Munapo, E., Thomas, J.J. (eds.) Intelligent
Computing & Optimization. ICO 2022. Lecture Notes in Networks and Systems, vol. 569.
Springer, Cham (2023). [Link]
14. Liza, M.A., Suny, A., Shahjahan, R.M.B., Reza, A.W., Arefin, M.S.: Minimizing e-waste
through improved virtualization. In: Vasant, P., Weber, G.W., Marmolejo-Saucedo, J.A.,
Munapo, E., Thomas, J.J. (eds.) Intelligent Computing & Optimization. ICO 2022. Lecture
Notes in Networks and Systems, vol. 569. Springer, Cham (2023). [Link]
978-3-031-19958-5_97
15. Das, K., Saha, S., Chowdhury, S., Reza, A.W., Paul, S., Arefin, M.S.: A sustainable e-waste
management system and recycling trade for Bangladesh in green IT. In: Vasant, P., Weber,
G.W., Marmolejo-Saucedo, J.A., Munapo, E., Thomas, J.J. (eds.) Intelligent Computing &
Optimization. ICO 2022. Lecture Notes in Networks and Systems, vol. 569. Springer, Cham
(2023). [Link]
16. Rahman, M.A., Asif, S., Hossain, M.S., Alam, T., Reza, A.W., Arefin, M.S.: A sustainable
approach to reduce power consumption and harmful effects of cellular base stations. In:
Vasant, P., Weber, G.W., Marmolejo-Saucedo, J.A., Munapo, E., Thomas, J.J. (eds.) Intelligent
Computing & Optimization. ICO 2022. Lecture Notes in Networks and Systems, vol. 569.
Springer, Cham (2023). [Link]
17. Ahsan, M., Yousuf, M., Rahman, M., Proma, F.I., Reza, A.W., Arefin, M.S.: Designing a
sustainable e-waste management framework for Bangladesh. In: Vasant, P., Weber, GW.,
Marmolejo-Saucedo, J.A., Munapo, E., Thomas, J.J. (eds.) Intelligent Computing & Opti-
mization. ICO 2022. Lecture Notes in Networks and Systems, vol. 569. Springer, Cham
(2023). [Link]
18. Mukto, M.M., Al Mahmud, M.M., Ahmed, M.A., Haque, I., Reza, A.W., Arefin, M.S.: A sus-
tainable approach between satellite and traditional broadband transmission technologies based
on green IT. In: Vasant, P., Weber, G.W., Marmolejo-Saucedo, J.A., Munapo, E., Thomas,
J.J. (eds.) Intelligent Computing & Optimization. ICO 2022. Lecture Notes in Networks and
Systems, vol. 569. Springer, Cham (2023). [Link]
19. Meharaj-Ul-Mahmmud, Laskar, M.S., Arafin, M., Molla, M.S., Reza, A.W., Arefin, M.S.:
Improved virtualization to reduce e-waste in green computing. In: Vasant, P., Weber, G.W.,
Marmolejo-Saucedo, J.A., Munapo, E., Thomas, J.J. (eds.) Intelligent Computing & Opti-
mization. ICO 2022. Lecture Notes in Networks and Systems, vol. 569. Springer, Cham
(2023). [Link]
20. Banik, P., Rahat, M.S.A., Rafe, M.A.H., Reza, A.W., Arefin, M.S.: Developing an energy
cost calculator for solar. In: Vasant, P., Weber, G.W., Marmolejo-Saucedo, J.A., Munapo,
E., Thomas, J.J. (eds.) Intelligent Computing & Optimization. ICO 2022. Lecture Notes in
Networks and Systems, vol. 569. Springer, Cham (2023). [Link]
19958-5_75
21. Ahmed, F., Basak, B., Chakraborty, S., Karmokar, T., Reza, A.W., Arefin, M.S.: Sustain-
able and profitable IT infrastructure of Bangladesh using green IT. In: Vasant, P., Weber,
G.W., Marmolejo-Saucedo, J.A., Munapo, E., Thomas, J.J. (eds.) Intelligent Computing &
Optimization. ICO 2022. Lecture Notes in Networks and Systems, vol. 569. Springer, Cham
(2023). [Link]
22. Ananna, S.S., Supty, N.S., Shorna, I.J., Reza, A.W., Arefin, M.S.: A policy framework for
improving e-waste management in Bangladesh. In: Vasant, P., Weber, G.W., Marmolejo-
Saucedo, J.A., Munapo, E., Thomas, J.J. (eds.) Intelligent Computing & Optimization. ICO
2022. Lecture Notes in Networks and Systems, vol. 569. Springer, Cham (2023). [Link]
org/10.1007/978-3-031-19958-5_95
23. Megantoro, P., Syahbani, M., Sukmawan, I., Perkasa, S., Vigneshwaran, P.: Effect of peak sun
hour on energy productivity of solar photovoltaic power system. Bull. Electr. Eng. Inform.
11(5), 2442–2449 (2022). [Link]
24. Hernandez, R., Hoffacker, M., Murphy-Mariscal, M., Wu, G., Allen, M.: Solar energy devel-
opment impacts on land cover change and protected areas. Proc. Natl. Acad. Sci. 112(44),
13579–13584 (2015). [Link]
A Policy Framework for Cost Effective
Production of Electricity Using Renewable
Energy
Sazzad Hossen1 , Rabeya Islam Dola1 , Tohidul Haque Sagar1 , Sharmin Islam1 ,
Ahmed Wasif Reza1(B) , and Mohammad Shamsul Arefin2(B)
1 Department of Computer Science and Engineering, East West University, Dhaka, Bangladesh
{2019-1-60-096,2019-1-60-156,2018-3-60-078}@[Link],
wasif@[Link]
2 Department of Computer Science and Engineering, Chittagong University of Engineering and
Technology, Chattogram 4349, Bangladesh
sarefin_406@[Link], sarefin@[Link]
Abstract. The amount of energy a country uses can be used to measure its level
of development. Bangladesh has struggled to develop sustainably, which calls
for a reliable electricity source. Our nation has a wealth of natural resources,
but because of its reliance on fossil fuels, it has been going through an energy
crisis for a while. The only way to ensure a better overall electricity supply is
to use renewable energy sources in combination with current energy sources.
Renewable energy sources including solar power, solar photovoltaic (PV) cells,
wind energy, and hydroelectricity can be workable alternatives to supply ongoing
energy needs while ensuring complete energy security. The power problem, which
has turned into a substantial obstacle to future economic expansion, is currently
posing significant challenges to the nation’s energy business. The purpose of the
article is to lower energy prices from the consumer's perspective by choosing the renewable energy plant that can best serve a required load for a given length of time. An analytical model of solar, wind, and hydroelectric power plants was first given in order to construct an objective function; cost constraints were then added. Due to the government's fossil-fuel-dependent energy policy and the declining state of the natural environment, finding alternative energy sources has become essential for the country. Conditions are ideal for producing electricity, particularly in the winter when PV or diesel hybrid technology is used. The proposed framework balances the operational expenses of hybrid renewable energy sources such as solar, wind, and hydroelectric power while meeting consumer demand for the electricity load.
1 Introduction
“Clean energy” refers to energy generated from renewable, non-polluting resources that
emit no emissions, as well as energy saved through energy-saving techniques. Renewable
resources are used to generate energy because they can be replenished gradually over
time (Rahman et al. 2022). It comes from a variety of sources, including the sun, breeze,
rain, tides, waves, and geothermal heat. Solar electricity is the most eco-friendly renewable source of energy. Solar power is the conversion of sunlight into energy using photovoltaics or mirrors, and sunlight can be converted into both electrical and thermal energy (Rahman et al. 2022). When using solar energy, no noise is generated. Regular
energy is generated by burning fossil fuels, which causes air pollution, greenhouse gas
emissions, and increased CO2 production (Zhu et al. 2022). It also causes water pollution
and noise pollution by producing irritating sounds, while solar power is pollution-free.
Solar batteries can store power for use on demand, and systems can be installed almost anywhere. Renewable energy sources account for just 3.5% of overall energy generation
in Bangladesh. Bangladesh’s energy sector is primarily based on fossil fuels. Natural
gas provides 75% of the main commercial energy. Day by day Bangladesh has become
increasingly reliant on natural gas. Bangladesh is also heavily dependent on oil. The
country’s total annual electricity production is 423 megawatts (Tajziehchi et al. 2022).
In order to transform Bangladesh into an energy-secure nation with sustainable energy
sources (solar, wind, hydro, biomass, etc.), the government has given renewable energy
a high priority. Solar power is used more in villages of Bangladesh than in cities. More
than a quarter of the rural population in Bangladesh still lacks access to power. After
sundown, daily tasks like cooking, working, and learning become difficult. Solar energy brings power to areas where the traditional grid does not reach. Over 4 million homes and 20 million
people are now powered by small-scale solar home systems in rural areas, making for
roughly one-eighth of the total population of the nation. Bangladesh’s use of solar energy
is expanding quickly and will eventually supply a larger portion of the nation’s energy
needs. Bangladeshi garment manufacturers will install solar panels on factory rooftops
as part of an effort to “green” the sector. The United States has certified over 100
clothing manufacturers as green. There are many types of solar panels: polycrystalline, monocrystalline, and thin-film. They differ from each other in use, efficiency, and setup cost, and each has its own strengths and weaknesses. In 2022,
because of the energy crisis, rooftop solar energy is becoming popular. Our objective is
to use alternative energy sources to green all organizations in Bangladesh. We will be
scaling up net-metered rooftop solar for an organization (Barlev et al. 2011). Bangladesh
has experienced a serious electricity problem in recent years. Day by day, fossil fuels are
being depleted, and Bangladesh should explore other sources of power. Wind, a good source of renewable energy, could help solve this problem. Bangladesh has long coastlines, and seasonal variations in wind patterns can be seen. Wind turbines
should be capable of surviving winds of up to 250 km/h in coastal zones (Emrani et al.
2022). The potential for wind energy has been assessed using previously gathered data
in various locations in Bangladesh. A brief description of the viability of installing wind
turbines on a large scale in various zones for irrigation and energy production is provided.
Also included are the operating concept and design factors for installing wind turbines
(Chu 2011). Although Bangladesh's wind power potential is estimated at more than 20,000 MW, the Kutubdia Hybrid Power Plant and the Feni Wind Power Plant together produce just 3 MW of energy. BPDB has launched several projects to produce 1.3 GW of electricity from wind, and we anticipate that Bangladesh will enhance its wind power output. In the Bay of Bengal, there are
several small inhabited islands, and Bangladesh is surrounded by 724 km of coastline. In
coastal locations, the yearly average wind speed is more than 5 m/s at a height of 30 m
(Yu et al. 2022). In the northeastern region of Bangladesh, the wind velocity is higher
than 4.5 m/s, although in other areas it is only around 3.5 m/s. Installations of small
wind turbines might aid in the research of wind energy production in coastal regions in
real-time (Archer and Jacobson 2005).
Hydropower has the lowest levelized cost of electricity of all the fossil and renewable energy sources and is actually less expensive than energy efficiency alternatives (McKenna et al. 2022). Compared to other significant sources of electrical power that rely on fossil fuels, hydropower is better for the environment (Sen et al. 2022). Hydropower plants do not emit the waste heat and gases common with fossil-fuel-driven installations, which are major contributors to air pollution, global warming, and acid rain. The only hydroelectric plant in the nation, the Karnafuli Hydro Power Station, is situated at Kaptai, about 50 km from Chittagong, a major port city. This plant is one of Bangladesh's largest water resources development projects and was built as part of the Karnafuli Multipurpose Project in 1962. Most countries have easy access to significant amounts of water via rivers and canals. Electricity can be
produced using this renewable resource without polluting the environment. Because of
the increasing demand for energy, it is critical to forecast the future of hydropower. It would also be possible to plan growth using an energy mix and to implement measures to manage the development of the required electricity using sustainable small hydropower systems. We will store the energy from the three systems and use it for an organization. We expect this approach to be more efficient than relying on grid electricity alone (Archer and Jacobson 2005), which is why we want to test the system in this project and determine whether it is efficient for an organization. We can provide electricity to an organization through these three clean energy sources when load shedding occurs at various times. The main objective of the project is to make this clean energy cost considerably less than the amount we pay for electricity monthly or annually (Celikdemir et al. 2017). Hydropower is one of the cleanest forms of energy among those available as
renewable sources that may be used to meet the need for electricity in an educational
institution.
2 Related Work
In this section, we will look at alternative ways to produce electricity, optimize energy
costs, and develop renewable energy for an organization, drawing on existing and related work. Most of the authors proposed a model and researched how to minimize energy consumption costs and use green technology.
Rahman et al. (2022) addressed electrical power facilities that use renewable energy sources (RES). Even though RES are considered to be environmen-
tally friendly, they have some negative effects on the environment. They examined a
few RES-based power plants and tallied the results for each plant individually. As it was
discovered that inappropriate use of RES could affect the environment, a selection guide-
line is offered. They suggested that RES should be carefully chosen and appropriately
implemented when used in electrical power plants.
Zhu et al. (2022) analyzed the effects of green tag prices on investments in renewable
energy. Additionally, they examined the effects of green tag prices on carbon emissions.
It was determined that the price of green tags had a non-monotonic impact on investments
in renewable energy and carbon emissions.
Tajziehchi et al. (2022) discussed a study that looks at the relationship among the
aspects of the environment, business, and community facets in order to determine if
massive hydropower plants will be profitable in the long run. The Environmental Costs
Analysis Model (ECAM) for hydropower was described as a renowned user-friendly
variant of the model with depth information for the benefit plan prediction model.
Barlev et al. (2011) presented the advancements in CSP technology that have been
introduced over the years. The reflector and collector design and construction, heat
transmission and absorption, power generation, and thermal storage have all improved.
While keeping in mind the benefits that reduced fossil fuel consumption provides for
the environment, several applications that may be connected with CSP regimes to save
(and occasionally create) power have been developed and implemented.
Emrani et al. (2022) aimed to enhance the technical and economic competitiveness
of a hybrid PV-Wind power plant by deploying a large-scale GES (gravity energy storage)
system. The study found that the GES system outperformed battery energy storage in
terms of its high depth of discharge (DOD) and longer lifespan, as well as its superior
efficiency. The study’s findings indicate that incorporating GES into a hybrid PV-Wind
power plant can improve performance and cost-effectiveness.
Chu (2011) wished to comprehend and compare the fundamental working principles
of numerous extensively researched solar technologies in order to select the best solar
system for a given geographic area. This study also assists in reducing future long-term
switching costs and improving the performance of solar systems. The study analyzes each technology and determines how likely it is to be implemented commercially.
Yu et al. (2022) investigated the growing issues regarding climate change and the
requirement to cut greenhouse gas emissions in order to lessen its effects. According
to the authors, the use of solar-based renewable energy can be extremely important
for lowering CO2 emissions and combating climate change. Additionally, they analyze
how solar energy might lower CO2 emissions in many industries, including building
construction, transportation, and power generation.
McKenna et al. (2022) give an overview of the many approaches and models used to calculate the potential for wind energy, as well as the data sources needed to complete these calculations. The availability and quality of data, the effects of climate change on wind energy supplies, and the possible environmental effects of large-scale wind farms are some of the difficulties the authors examine in these analyses.
The research of Sen et al. (2022) indicates that there is a significant chance that small or
micro-hydropower plants might be established using indigenous technologies, making
it possible to electrify a sizable area of the Chattogram Hill Tracts.
The goal of Celikdemir et al. (2017) is to assess hydropower potential with regard to the technical and financial feasibility of building medium and large hydropower plants. Similar to many other nations, Turkey's mini and micro hydropower potential is not completely assessed. Their work aims to facilitate economic analysis by developing an empirical formula for mini and micro hydropower plants, which are becoming more and more significant in this context.
de Barbosa et al. (2017) projected power networks for South and Central America in 2030 under a 100% renewable energy scenario. The model's objective was to minimize the energy system's yearly total cost (Sen et al. 2022). According to the findings of this
study, the development of a renewable electricity grid in these areas in the near future can
be sped up by present laws governing renewable energy and low-carbon development
methods.
Zalhaf et al. (2022) created models for a 100 km transmission system line and a high voltage direct current (HVDC) transmission line.
Vartiainen et al. (2022) address the cost of solar photovoltaics (PV). They present both the global cumulative PV volume and the levelized cost of hydrogen (LCOH), and they investigate the use of green hydrogen in transportation, industrial processes, and electricity generation.
Al-Quraan and Al-Mhairat (2022) proposed several mathematical models to estimate the energy extracted by wind farms. Five turbine models and cost analyses were created by the authors, who also assess Jordan's wind energy capacity.
Table 1 compares the various types of solar panels. MSSP is the most efficient, but it is also the costliest. The efficiency and lifespan of PSSP are lower than those of MSSP, and TFSP are intended for light users. We can see from Table 1 that MSSP is the best option if the organization's available area is small and temperatures are low, while PSSP is the better option when space and temperature requirements are high but the budget is limited.
3 Methodology
The power system model used in this research is based on the direct optimization of power distribution variables under specified initial conditions. It includes a number of technologies for energy production, storage, and conversion, as well as water desalination and the production of synthetic natural gas (SNG) via power-to-gas (PtG) for synthetic fuel usage, operable in a flexible manner as needed (Müller and Fichter 2022).
Modeling a fully integrated scenario that also accounts for heat and mobility demand would be necessary to grasp the entire energy system, although this is outside the scope of this study. The applied energy system model, its input data, and the relevant technologies have been described previously, so a detailed explanation is not provided in the following sections; only the additional information assumed by the model in the present investigation is summarized. Further technical and financial assumptions are given in the supporting data for this work (de Barbosa et al. 2017).
Grid structure and region subdivision are shown in Fig. 1. This study examined the South American continent and the Central American nations that link it to North America (Panama, Costa Rica, Nicaragua, Honduras, El Salvador, Guatemala, and Belize). The superregion has been broken up into 15 subregions: Central America (representing Panama, Costa Rica, Honduras, El Salvador, Nicaragua, Guatemala, and Belize), Colombia, Venezuela (representing Guyana, French Guiana, Venezuela, and Suriname), Ecuador, Peru, Latin South Central (including Paraguay and Bolivia), South Brazil, Brazil São Paulo, Southeast Brazil, North Brazil, Northeast Brazil, Northeast Argentina (including Uruguay), East Argentina, West Argentina, and Chile. Brazil and Argentina, the two countries with the largest populations and numbers of households, were split into five and three subregions respectively, each with its own area, population, and access to the public grid. This document presents four concepts for energy generation possibilities (Zhou et al. 2022):
• A regional scenario, in which all regions are autonomous (without links to the HVDC network) and the required amount of power must be produced locally to meet demand (Liang and Abbasipour 2022).
• An internal high voltage direct current (HVDC) grid connecting regional power systems to each country's main power grid (Zalhaf et al. 2022).
• A wide-area energy network that links together the regional energy systems.
• An integrated scenario for the entire region that adds seawater reverse osmosis (SWRO) desalination and synthetic natural gas demand. In this scenario, nodal sources coupled with PtG technology are employed as sector-coupling technologies in the power industry to satisfy the desalination and synthetic gas demands, enhancing the flexibility of the system.
Figure 1 illustrates the grid and network configuration in both South and Central America; HVDC links between regional power systems are depicted as dotted lines. The HVDC network structure is based on the network configuration of South and Central America (de Barbosa et al. 2017).
The model was optimized for its technical and financial status in 2030, expressed in 2015 monetary terms. The late-structure approach, which is common in nuclear power, was considered. Table A of the supplementary material S1 presents the financial assumptions for capital costs, operational costs, and the lifetimes of all components for all scenarios. In all scenarios, the weighted average cost of capital (WACC) is set at 7%; however, for residential PV prosumers the WACC is set at 4% due to lower financial return requirements (Vartiainen et al. 2022). Tables A, B, and C of the supplementary material S1 contain the technical assumptions for power generation and energy storage technologies, including the efficiencies of production and transmission technologies and the power losses in HVDC transformers and lines. When evaluating the project's electricity model, the aggregate prices of residential, commercial, and industrial electricity consumers by region for 2030 are required; the prices for Suriname, Ecuador, Venezuela, Guyana, and French Guiana are obtained from the original sources.
Electricity price escalations are displayed in Table E of the supplementary material S1. Electricity prices vary by country; within Argentina and within Brazil, the same price is assumed across each country's subregions. Because the production and consumption of electricity are not simultaneous, prosumers do not consume all of the electricity generated by their solar photovoltaic systems. Surplus electricity from prosumers is sold to the grid at a transfer price of 2 cents per kWh (Al-Quraan and Al-Mhairat 2022). Prosumers are not allowed to sell more energy to the grid than they consume.
In developing countries like Bangladesh, small communities and institutions such as the one shown in Fig. 2 are typically off-grid. According to the proposed studies (Zalhaf et al. 2022), these villages can be interconnected, establishing an energy hub and a distribution line serving a group of organizations. Renewable resources can be integrated with solar energy, oil, and other available energy conversion facilities, depending on the local conditions of the territory and the possibility of permitting (Al-Quraan and Al-Mhairat 2022).
Solar energy (thermal and photovoltaic), wind energy, and hydraulic energy are all
part of the proposed system. It is suggested that solar energy be used only for hot water
supply and that other renewable energy sources be used only for electricity generation
(Celikdemir et al. 2017). Figure 2 shows how these resources are used to generate energy.
In the context of renewable energy, Particle Swarm Optimization (PSO) is used to optimize the operation of renewable energy systems, such as wind turbines, solar panels, and hydroelectric generators. The objective function in this case maximizes the power
output of the renewable energy system while minimizing the cost of operation. PSO
works by simulating the behavior of a swarm of particles in a search space. Each particle
represents a potential solution to the optimization problem, and the swarm collectively
searches the solution space by adjusting its position and velocity according to a set of
rules. The procedure is as follows: (1) identify the decision variables that affect the objective function, for example the availability and capacity of renewable energy sources, the location of power plants, the price of fuel, and the price of electricity; (2) determine the fitness of each particle by computing the value of the objective function at the particle's position in the search space; (3) update the particle positions and velocities using the PSO algorithm, where each particle's new position combines its present location, its previous best location, and the best location found by any particle in the swarm; and (4) evaluate the PSO algorithm's output to identify the set of decision variables that maximizes the use of renewable energy sources while minimizing the cost of producing electricity.
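To illustrate the procedure just described, the following is a minimal PSO sketch in Python for a toy dispatch problem. The objective function, bounds, cost coefficients, and shortfall penalty are invented for illustration only; they do not come from the paper.

import random

def pso(objective, bounds, n_particles=30, iters=100, w=0.7, c1=1.5, c2=1.5):
    """Minimize `objective` over the box `bounds` with a basic particle swarm."""
    dim = len(bounds)
    pos = [[random.uniform(lo, hi) for lo, hi in bounds] for _ in range(n_particles)]
    vel = [[0.0] * dim for _ in range(n_particles)]
    pbest = [p[:] for p in pos]                      # each particle's best position
    pbest_val = [objective(p) for p in pos]
    gbest = pbest[min(range(n_particles), key=lambda i: pbest_val[i])][:]
    for _ in range(iters):
        for i in range(n_particles):
            for d in range(dim):
                r1, r2 = random.random(), random.random()
                vel[i][d] = (w * vel[i][d]
                             + c1 * r1 * (pbest[i][d] - pos[i][d])
                             + c2 * r2 * (gbest[d] - pos[i][d]))
                pos[i][d] = min(max(pos[i][d] + vel[i][d], bounds[d][0]), bounds[d][1])
            val = objective(pos[i])
            if val < pbest_val[i]:                   # update personal best
                pbest[i], pbest_val[i] = pos[i][:], val
                if val < objective(gbest):           # update global best
                    gbest = pos[i][:]
    return gbest

# Toy objective: dispatch solar/wind/hydro quantities x to meet a 100-unit demand
# at minimum cost, with a penalty for any shortfall (all numbers hypothetical).
costs = [7.0, 8.5, 6.0]
def total_cost(x):
    return sum(c * q for c, q in zip(costs, x)) + 50 * max(0.0, 100 - sum(x))

best = pso(total_cost, bounds=[(0, 60), (0, 60), (0, 60)])
print([round(q, 1) for q in best], round(total_cost(best), 1))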
4 Implementation
Bangladesh experiences vast periods of sunlight with a 4–6.5 kWh/m2 /day solar intensity,
yet the majority of the solar energy is not used. The months that get the highest and least
solar energy respectively are March and April. Solar energy technology comes in a
variety of forms, including concentrated solar power, solar home systems, and solar PV.
Compared to other energy systems, solar provides a number of advantages for rural living in developing countries like Bangladesh. Things are also developing swiftly in the sector of wind energy, which is a renewable resource (Li et al. 2022). If we install solar in an educational institution, it can be implemented following the ideas below.
Because more power is needed for cooling and ventilation during the summer, the
amount of electricity used fluctuates depending on the month of the year. The quantity
of connected users’ equipment in use also affects consumption. How fast the mini-grids
reach their maximum capacity is predicted by how rapidly the load is taken on in the
first year.
Annual Cost = Energy required for PV system (per year) / Energy output of PV system (per year)    (4)
Table 2. Solar installation cost and per-day produced electricity and cost (cost per unit = 8.41 TK).
Inverter and battery cost (tk) | Installation cost (tk) | Electricity produced (kWh) | Producing cost (tk)
205,000 | 290,000 | 5,250 | 44,152.50
These values are planned to be utilized to construct a compliance factor for the establishment of the wind power plant. One unit (1000 Wh) of wind energy can be generated if the wind speed is 2.3–2.5 m/s; however, the wind turbine generates profitable wind energy if the wind speed is 5–6 m/s.
Electricity produced by turbines = (turbine capacity × electricity produced per turbine)    (5)
Table 3. Wind installation cost and per-day produced electricity and cost.
Turbine cost (TK) | Installation cost (TK) | Electricity produced (kWh) | Producing cost (TK)
395,500 | 300,500 | 4,583 | 33,845.83
Using turbines, wind energy is transformed into electricity. Table 3 lists the turbine cost, installation cost, electricity production per day, and electricity producing cost.
In spite of having an abundance of water resources, Bangladesh’s ability to construct
a sizable hydroelectric power plant is now restricted. Bangladesh has the lowest hydro-
electric output in the whole South Asian area, at 230 MW. So, for building a hydroelectric
plant in an educational institution, the following costs are counted. The capacity factor is the proportion of the actual power output to the maximum power output (it is around 0.5 for a micro hydropower plant).
Monthly Energy Output = (Max power output × hours per day × 30)    (8)
Table 4. Hydroelectricity installation cost and per-day produced electricity and cost.
Hydro turbine cost (tk) | Installation cost (tk) | Electricity produced (kWh) | Producing cost (tk)
325,900 | 250,500 | 2,935 | 24,557.20
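To make the producing-cost figures in Tables 2, 3, and 4 easy to check, the sketch below recomputes each source's daily producing cost from its daily output at the stated tariff of 8.41 TK per unit and applies Eq. (8); the variable names are ours. Note that only the solar row matches the 8.41 TK rate exactly, so the wind and hydro rows appear to use slightly lower effective rates.

# Cross-checking Tables 2-5 at the stated tariff of 8.41 TK per unit.
TARIFF_TK = 8.41
daily_output_kwh = {"solar": 5250, "wind": 4583, "hydro": 2935}  # Tables 2-4

for source, kwh in daily_output_kwh.items():
    daily_cost = kwh * TARIFF_TK
    monthly_kwh = kwh * 30  # Eq. (8), with max power x hours = daily output
    print(f"{source}: {daily_cost:,.2f} TK/day, {monthly_kwh:,.0f} kWh/month")

# Solar reproduces Table 2 exactly (44,152.50 TK/day); wind and hydro give
# 38,543.03 and 24,683.35 TK/day versus the tabulated 33,845.83 and 24,557.20.
# Monthly outputs match Table 5: 157,500 / 137,490 / 88,050 kWh.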
5 Mathematical Analysis
The goal of the mathematical optimization model is to combine the many renewable
energy sources that are present in an area while taking into consideration both their
strengths and weaknesses (Aboagye et al. 2021). Solar and wind energy are utilized in
accordance with their daily availability and are temporarily stored.
Is = Ib × Fb + Id × Fd + (Ib + Id) × Fr
where Is = total solar radiation (kWh/m2); Ib, Id = the direct (beam) and diffuse components of solar radiation (kWh/m2); Fb, Fd, Fr = factors for the beam, diffuse, and reflected parts of solar radiation.
Pwt = Pw × Aw × Rw
where Pwt = electric power obtained by the wind turbine (kWh), Pw = power of the wind generator, Aw = total area, Rw = overall efficiency of the generator.
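Since the displayed equations were lost in typesetting, the sketch below implements the forms implied by the variable definitions: the standard beam/diffuse/reflected decomposition of total solar radiation and a simple product form for wind turbine output. Both forms are our reconstruction, not the paper's verbatim equations, and the numbers are illustrative only.

# Reconstructed forms implied by the variable definitions (our assumption):
#   Is  = Ib*Fb + Id*Fd + (Ib + Id)*Fr   (total radiation on a tilted surface)
#   Pwt = Pw * Aw * Rw                   (wind turbine electric output)

def total_solar_radiation(Ib, Id, Fb, Fd, Fr):
    """Total solar radiation Is (kWh/m2) from beam, diffuse and reflected parts."""
    return Ib * Fb + Id * Fd + (Ib + Id) * Fr

def wind_turbine_output(Pw, Aw, Rw):
    """Electric energy Pwt (kWh) from wind power density Pw, area Aw, efficiency Rw."""
    return Pw * Aw * Rw

# Illustrative numbers only (not from the paper):
print(total_solar_radiation(Ib=4.0, Id=1.5, Fb=0.95, Fd=0.9, Fr=0.05))
print(wind_turbine_output(Pw=0.3, Aw=50.0, Rw=0.35))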
Fig. 3. Electricity production of the three renewable energy systems.
In Fig. 4, we can see the total installation cost as a pie chart (obtained from Fig. 3 and Table 5). The need for energy fluctuates throughout the day in Bangladesh. Peak hours are usually counted from 5 p.m. to 11 p.m., since this is when demand for power is greatest. As a result, the price of power is high from 5 to 11 p.m. to persuade users to consume less.
Table 5. Total installation cost and monthly produced electricity and cost.
Renewable energy | Total installation cost (tk) | Monthly produced electricity (kWh) | Monthly saving cost (tk)
Solar | 4,350,000 | 157,500 | 1,324,575
Wind | 7,512,500 | 137,490 | 1,015,375
Hydroelectricity | 2,505,000 | 88,050 | 736,716
Table 6. Electricity bill data collected from East West University.
Common/check meter use | Unit (kWh) | Amount (TK) | Per unit (TK)
Energy charge (off-peak) | 192,000 | 1,461,120 | 7.61
Energy charge (peak) | 72,000 | 760,320 | 10.56
Total | 264,000 | 2,221,440 | 8.41
After collecting the data from East West University in Table 6, we obtained the energy charges for peak hours and off-peak hours. Table 6 also shows the total amount, total units, and per-unit cost.
Comparing the electricity cost for East West University (EWU) from Table 7, and according to Eq. (15) below, we obtain the cover-up (payback) installation cost for EWU. In Fig. 5, we can see that renewable energy costs less than local electricity when EMC and RE are compared. Equation (15) gives the cover-up installation cost for EWU:
Cover-up Installation Cost for EWU = Total Installation Cost / Monthly Saving Cost    (15)
From Eq. (15), we use Eq. (16) to determine how much money is saved annually by using renewable energy:
Annually saved money using renewable energy = EAC − (Monthly Saving Cost × 12)    (16)
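A short sketch applying Eqs. (15) and (16) to the figures in Tables 5 and 6 follows. Reading EAC as the university's annual electricity bill from Table 6 is our assumption, since the abbreviation is not defined in the text.

# Applying Eq. (15) (payback of installation cost, in months of saving) and
# Eq. (16) to the Table 5/6 figures. EAC is read as the annual electricity bill
# from Table 6; this interpretation is our assumption.
install = {"solar": 4_350_000, "wind": 7_512_500, "hydro": 2_505_000}        # TK
monthly_saving = {"solar": 1_324_575, "wind": 1_015_375, "hydro": 736_716}   # TK

total_install = sum(install.values())
total_monthly_saving = sum(monthly_saving.values())

coverup = total_install / total_monthly_saving          # Eq. (15): months of saving
eac = 2_221_440 * 12                                    # annual bill (Table 6), TK
annual_saving_eq16 = eac - total_monthly_saving * 12    # Eq. (16)

print(f"Cover-up installation cost: {coverup:.1f} months of saving")
# A negative Eq. (16) value means the renewable saving exceeds the current bill.
print(f"Eq. (16) result: {annual_saving_eq16:,.0f} TK")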
The total clean energy cost will be less than the local electricity cost; that is our expected result. The annual cost of grid electricity may be lower than the total clean energy cost initially, but the clean energy system becomes profitable within a few years of operation.
6 Conclusion
Developing nations’ ability to electrify themselves is seriously under threat. One of the
biggest challenges to obtaining 100% energy availability throughout these nations is
getting power to remote areas from the main grid. This problem exists in Bangladesh,
a South Asian developing country with low per-capita energy usage. Bangladesh's economic expansion is being hampered by the nation's energy problem. Approximately 4% of the population still does not have access to electricity, a problem that cannot be handled without addressing the issue of powering remote areas. People who live in electrified areas are also impacted by load shedding. Renewable energy may greatly aid
in sustainably supplying Bangladesh’s needs. In order to implement these, Bangladesh’s
outdated electrical power infrastructure needs to be reorganized, including attempts to
ensure energy efficiency within the context of environmental sustainability as well as the
development of an electricity market that encourages competition. A reliable, efficient,
and responsible energy production, transmission, and distribution system is necessary;
these problems compelled electrical power corporations, governmental organizations,
and the community to discover an appropriate solution. Renewable energy sources affect
occupational health and safety across the entire system but cause relatively small average harm to individual workers. Given the energy and labor intensity per unit of energy provided, a typical coal energy cycle has greater systemic effects than almost all of the examined renewable energy sources; yet, for almost half of the renewable energy sources, the average risk to an individual worker is still lower.
reduce energy costs from the consumers' point of view. We applied the Particle Swarm Optimization (PSO) method in this paper to satisfy the objective function.
References
Aboagye, B., Gyamfi, S., Ofosu, E.A., Djordjevic, S.: Status of renewable energy resources for
electricity supply in Ghana. Sci. Afr. 11, e00660 (2021)
Al-Quraan, A., Al-Mhairat, B.: Intelligent optimized wind turbine cost analysis for different wind
sites in Jordan. Sustainability 14, 3075 (2022). [Link]
Archer, C.L., Jacobson, M.Z.: Evaluation of global wind power. J. Geophys. Res. D: Atmos. 110,
1–20 (2005). [Link]
Barlev, D., Vidu, R., Stroeve, P.: Innovation in concentrated solar power. Sol. Energy Mater. Sol.
Cells 95, 2703–2725 (2011)
Celikdemir, S., Yildirim, B., Ozdemir, M.T.: Cost analysis of mini hydro power plant using bacterial
swarm optimization. Int. J. Energy Smart Grid 2, 64–81 (2017). [Link]
IJESG.2017.2.2.05
Chu, Y.: Review and Comparison of Different Solar Energy Technologies. Global Energy Network
Institute (2011)
de Barbosa, L.S.N.S., Bogdanov, D., Vainikka, P., Breyer, C.: Hydro, wind and solar power as
a base for a 100% renewable energy supply for South and Central America. PLoS ONE 12,
e0173820 (2017). [Link]
Emrani, A., Berrada, A., Bakhouya, M.: Optimal sizing and deployment of gravity energy storage
system in hybrid PV-Wind power plant. Renew. Energy 183, 12 (2022). [Link]
1016/[Link].2021.10.072
Li, S., Gong, W., Wang, L., Gu, Q.: Multi-objective optimal power flow with stochastic wind
and solar power. Appl. Soft Comput. 114, 8328 (2022). [Link]
108045
Liang, X., Abbasipour, M.: HVDC transmission and its potential application in remote com-
munities: current practice and future trend. In: IEEE Transactions on Industry Applications
(2022)
McKenna, R., Pfenninger, S., Heinrichs, H., et al.: High-resolution large-scale onshore wind energy
assessments: a review of potential definitions, methodologies and future research needs. Renew.
Energy 182, 659–684 (2022)
Müller, M., Fichter, C.: Zeitschrift für Energiewirtschaft 46(1), 21–26 (2022). [Link]
1007/s12398-021-00314-z
Rahman, A., Farrok, O., Haque, M.M.: Environmental impact of renewable energy source based
electrical power plants: Solar, wind, hydroelectric, biomass, geothermal, tidal, ocean, and
osmotic. Renew. Sustain. Energy Rev. 161, 112279 (2022)
Sen, S.K., al Nafi Khan, A.H., Dutta, S., et al.: Hydropower potentials in Bangladesh in context of
current exploitation of energy sources: a comprehensive review. Int. J. Energy Water Resour.
6 (2022). [Link]
Tajziehchi, S., Karbassi, A., Nabi, G., et al.: A cost-benefit analysis of Bakhtiari hydropower dam
considering the nexus between energy and water. Energies 15, 871 (2022). [Link]
3390/en15030871
Vartiainen, E., Breyer, C., Moser, D., et al.: True cost of solar hydrogen. Solar RRL 6 (2022).
[Link]
Yu, J., Tang, Y.M., Chau, K.Y., et al.: Role of solar-based renewable energy in mitigating CO2 emis-
sions: evidence from quantile-on-quantile estimation. Renew. Energy 182, 216–226 (2022).
[Link]
Zalhaf, A.S., Zhao, E., Han, Y., et al.: Evaluation of the transient overvoltages of HVDC trans-
mission lines caused by lightning strikes. Energies 15, 1452 (2022). [Link]
en15041452
Zhou, H., Yao, W., Ai, X., et al.: Comprehensive review of commutation failure in HVDC
transmission systems. Electr. Power Syst. Res. 205, 107768 (2022)
Zhu, Q., Chen, X., Song, M., et al.: Impacts of renewable electricity standard and renewable energy
certificates on renewable energy investments and carbon emissions. J. Environ. Manage. 306,
114495 (2022). [Link]
Technology of Forced Ventilation of Livestock
Premises Based on Flexible PVC Ducts
Abstract. Modern dairy farming has a number of urgent problems. In the most
relevant works, the authors suggest using natural ventilation in cowsheds, but an
insufficient number of exhaust and supply shafts create unfavorable microclimate
conditions throughout the year. Today in Russia and Belarus, duct-type supply
ventilation systems have been put into operation piece by piece and actively con-
tinue to be implemented by «RusAgroSystem», «Continental Technologies», etc.
The duct circuit was developed in Compass-3D. Modeling of air movement was
carried out in the SolidWorks 2020. The authors have developed a functional and
structural scheme of forced ventilation in livestock premises based on flexible
PVC ducts. Two modes of operation are proposed depending on the time of year
(winter and summer). In summer, wide openings located at the bottom of the bag
are used, through which air passes. In winter, the upper part of the PVC bag with
small holes is used.
1 Introduction
Modern dairy farming has a number of urgent problems. These include the control of
optimal conditions for keeping animals [1–4]. Due to the high concentration of livestock
on farms, pollutant concentrations in the indoor air of most domestic farms, especially those actively built in the 1980s, exceed all maximum permissible concentrations (MPC), as established by the results of our own research, especially in winter. In the summer, animals often experience heat stress. Indicators exceeding the norms include the content of carbon dioxide, hydrogen sulfide, ammonia, and dust, and the concentration of pathogenic microflora in the air. Such farms still house most of the dairy herd, while there is no widespread technical solution to improve the air quality inside the livestock premises. It is not possible for most farms to build new premises in order to
improve air quality, so this work is aimed at offering a solution for the modernization of
microclimate systems, which, when implemented, will improve the air quality situation.
In addition to the fact that it is necessary to reduce the concentration of the gases
already listed, it is also necessary to additionally monitor the following parameters:
temperature, humidity, and air flow velocity; monitoring them makes it possible to set the correct operating modes of actuators to normalize the indoor microclimate. The results obtained in the course of many years of research, both our own and that of other domestic researchers, indicate
that non-compliance with the above standards reduces the productivity of livestock,
which leads to financial losses.
In the most relevant works, the authors propose to use natural ventilation in cowsheds
[5, 6]. Analysis of the results of the study [7] makes it possible to assert that in cowsheds
of loose and tethered type of content, with a natural ventilation system, changes in
microclimate parameters directly depend on the season of the year and the area of the
room. Insufficient number of exhaust and supply shafts create unfavorable microclimate
conditions throughout the year.
The authors of [8] argue in favor of natural ventilation, citing statistics that, to purify the air in livestock premises from the microorganisms formed in it, about 2 billion kWh of electricity is spent on ventilation alone per year; in addition, 1.8 billion kWh of electricity, 0.6 million m3 of natural gas, 1.3 million tons of liquid fuel, and 1.7 million tons of solid fuel are spent on heating the premises.
On domestic farms, natural ventilation often does not work efficiently enough in
the temperate zone of Russia (established on the basis of our own research), therefore it
is necessary to resort to additional forced ventilation. One of the disadvantages of forced
ventilation is that it is necessary to maintain a balance between the efficiency of the
ventilation system, which increases the productivity of animals and the energy expended,
which ensures the operation of the actuators. Let’s consider some of the research results
of foreign scientists who describe the influence of microclimate on productivity.
Microclimate as a constantly acting environmental factor has a great impact on the
health and productivity of farm animals. The nature and intensity of the processes of heat
regulation, gas exchange, physiological and other vital functions of the body depend on
it.
If the microclimate parameters deviate from the established norms, it leads to a
decrease in milk yields in cows by 10–20%, a decrease in live weight gain by 20–30%,
an increase in mortality among young animals by 5–40%, a decrease in productivity by 30–35%, and a reduction in service life by 15–20%. The costs of feed and labor per unit of production increase, as does the cost of repairing technological equipment. The service life of livestock buildings is reduced threefold
[9, 10].
Foreign authors [11] propose a solution to the problem of heat stress in dairy cows
bred in a free stall using an automatic irrigation system that is installed on a network of
water pipes in which water is constantly located. The system also involves the inclusion
of forced ventilation.
Such a system has been tested by several researchers, for example, Nordlund [12].
In his work, he found that the natural ventilation system, supplemented by a duct-type
supply ventilation system, provides an optimal cooling effect. The study also provides
recommendations on the diameter of pipes, the height of their location above the floor,
the ventilation rate and the optimal distance between the holes.
Also, scientists [13] have proved that a properly designed, installed and timely ser-
viced PPTV (positive-pressure tubular ventilation) system can effectively cool each
animal individually.
A researcher from the USA, based on scientific work [14], suggests using a duct-type
supply ventilation system, since such a system is a PPTV system. This system can have
a significant positive impact on the reduction of respiratory diseases of cattle (BRD) in
the premises for keeping calves.
The central part of Russia has weather conditions significantly different from those
in Europe, where PVC duct systems are already widely used.
At the moment, in Russia and Belarus, duct-type supply ventilation systems have
been put into operation by the companies «RusAgroSystem» and «Continental Tech-
nologies» and are actively continuing to be implemented. Such a system is little known
and not very widespread in the territory of the Russian Federation because of its relative
novelty, but despite all the shortcomings, the system is promising because of its efficiency
and cost. A study of the RusAgroSystem flexible duct ventilation solution showed that it has disadvantages and lacks the functions needed for permanent all-season operation.
Fig. 1. Examples of flexible PVC ducts on existing farms in the Russian Federation. A, C—PVC ducts installed above the boxes for keeping calves; B—PVC duct installed in the milking parlor.
With poor ventilation, the humidity of the air in the room increases rapidly due to the large amount of water vapor released by the cows when breathing.
Thus, the development of technology for the operation of a forced ventilation system
using PVC bags on the territory of the central part of Russia can contribute to the
popularization and widespread use of this type of ventilation system.
The problems described above suggest the development of a forced ventilation sys-
tem technology based on flexible PVC ducts. The technology developed in this work comprises a scheme of a flexible PVC duct, the selection of operating modes using simulation modeling to check the quality of the designed system, the operating modes of the system, and a decision-making algorithm for choosing the operating mode.
The duct circuit was developed in solid-state modeling programs such as Compass-3D.
Modeling of air movement was carried out in the SolidWorks 2020 software package.
The boundary conditions for the theoretical modeling of indoor air movement in the SolidWorks 2020 software package were set as follows: a cross-section of the building was taken, displaying a window acting as a supply channel.
Figure 2 shows the model of the flexible PVC duct that defines the circuit and on which simulation modeling was carried out.
Figure 2 shows a diagram of a flexible PVC duct. Under the number 1, an industrial
axial pressure fan is installed, with a blade rotation speed of 500 rpm, the diameter of the
outer sleeve is 0.8 m, the diameter of the central sleeve is 0.1 m, the direction of rotation
is counterclockwise. The established properties of the fan in terms of volume flow and
pressure drop are shown in A3. Air flow guides are installed under the number 2, which
exclude the swirling flow (5) presented on A2 and turn it into a turbulent but directional
flow (6). Under the number 3 is a confuser with a local resistance coefficient of 0.29,
and a pressure loss of 11 Pa. Under the number 4 is a metal duct without holes 6 m long,
7 is a flexible PVC duct with a diameter of 700 mm, 8/V are holes with a diameter of
100 mm for summer mode. The total length of the flexible duct is 27 m.
During the calculations, the total air temperature was set to 24 °C, the flow is turbulent, the humidity is 70%, the working medium is air, the ambient pressure is 101,325 Pa, and gravity is enabled.
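The 11 Pa loss quoted for the confuser can be cross-checked with the standard local-loss relation Δp = ζ·ρ·v²/2. The sketch below applies it, assuming standard air density, and back-solves the velocity implied by ζ = 0.29 and Δp = 11 Pa; this cross-check is ours, not a calculation from the paper.

import math

RHO_AIR = 1.2  # kg/m^3, assumed air density at roughly 20-24 degrees C

def local_pressure_loss(zeta, velocity_m_s):
    """Local pressure loss dp = zeta * rho * v^2 / 2, in Pa."""
    return zeta * RHO_AIR * velocity_m_s ** 2 / 2

# Velocity implied by zeta = 0.29 and dp = 11 Pa at the confuser:
v_implied = math.sqrt(2 * 11 / (0.29 * RHO_AIR))
print(f"Implied velocity at the confuser: {v_implied:.1f} m/s")        # ~8 m/s
print(f"Loss at the stated 12 m/s inlet: {local_pressure_loss(0.29, 12):.1f} Pa")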
Fig. 3. Functional and structural diagram of the developed system based on flexible PVC ducts
Taking into account the throughput of the PVC duct, the noise level does not exceed 75 dB.
Simulation modeling has shown that it is impractical to place animals under the first part of the duct (the first 10 m of the duct with holes), since targeted blowing of the animals at a speed of 2 m/s is not achieved there. Animals should be placed in the second and third zones of the duct, but it is necessary to reduce the fan power, since the blowing speed of 6 m/s is excessive. This speed (6 m/s) is acceptable when airing rooms in
summer, especially those rooms where the technology of keeping animals on a leash is
used. In further studies, an increase in the size of the bag, both in length and in internal volume, will be modeled, as well as supply fans of different performance.
Based on the information presented in Fig. 5, it was found that in order to achieve
optimal blowing of animals and reduce the likelihood of heat stress while maintaining
productivity, the following operating modes were established:
Fig. 5. The value of the heat stress index depending on the temperature and relative humidity of
the air
– at initial heat stress, the system operates at an installed capacity of 10–15 thousand m³/hour of air;
– at mild heat stress, at 16–20 thousand m³/hour of air;
– at moderate heat stress, at 21–25 thousand m³/hour of air;
– at strong heat stress, at 26–30 thousand m³/hour of air;
– at extreme heat stress, at 30–36 thousand m³/hour of air.
It is also planned to use the flexible PVC duct system in combination with indoor temperature and humidity sensors that drive the choice of operating mode, as sketched below.
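A minimal sketch of this decision-making logic is given below, in Python. The airflow bands are the ones listed above; the temperature-humidity index (THI) formula is a commonly used form, and the thresholds are illustrative assumptions, since the paper's own index (Fig. 5) is not reproduced here.

# Minimal sketch of the mode-selection logic driven by temperature and
# humidity sensors. Airflow bands (thousand m^3/hour) follow the text;
# the THI formula and thresholds are illustrative assumptions.

def thi(temp_c: float, rh_percent: float) -> float:
    """A commonly used temperature-humidity index (assumed form)."""
    return (1.8 * temp_c + 32) - (0.55 - 0.0055 * rh_percent) * (1.8 * temp_c - 26)

def airflow_band(index: float) -> tuple:
    """Map a THI value to a target airflow band; thresholds are assumed."""
    if index < 72:
        return (10, 15)  # initial heat stress
    if index < 78:
        return (16, 20)  # mild heat stress
    if index < 82:
        return (21, 25)  # moderate heat stress
    if index < 86:
        return (26, 30)  # strong heat stress
    return (30, 36)      # extreme heat stress

low, high = airflow_band(thi(30.0, 70.0))
print(f"Target airflow: {low}-{high} thousand m^3/hour")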
4 Conclusions
1. The operating mode of a flexible PVC duct is proposed and described (L = 27 m, D = 0.7 m, fan capacity 36,900 m³/h); at the maximum power of the system, the air velocity in the 2nd and 3rd zones at a height of ~0.2 m reaches 6 m/s;
2. The simulation showed that, for a PVC duct length of 27 m, the optimal design has the following characteristics: D = 700 mm; PVC duct material; two holes 1.2 m apart, located at an angle of 80°; hole diameter 100 mm. The speed at the duct inlet is 12 m/s, and the noise level does not exceed 75 dB;
3. A functional and structural scheme of the system based on flexible PVC ducts has been developed.
Author Index
© The Editor(s) (if applicable) and The Author(s), under exclusive license
to Springer Nature Switzerland AG 2023
P. Vasant et al. (Eds.): ICO 2023, LNNS 852, pp. 361–362, 2023.
[Link]
P
Pal, Subham 216
Panchenko, V. 183, 272
Pavkin, Dmitry Y. 353
Phongsirimethi, Nitirut 157
Piamvilai, Nattavit 282
Polikanova, Alexandra A. 353
Pradhan, Satish 37
Pranata, Maria Shappira Joever 192
Preeti, S. H. 97

R
Rahman, Abdur 325
Rani, Sunanda 126
Rattananatthawon, Ongorn 282
Reza, Ahmed Wasif 53, 168, 248, 262, 295, 325, 338
Rodriguez-Aguilar, Roman 19
Rosales, Marife 43
Roy, Manish Kumar 37

S
Sagar, Tohidul Haque 338

T
Tarasova, Elizaveta 147
Tokarev, K. 183
Tsirulnikova, N. V. 317
Türkoğlu, Hani Kerem 86

V
Vasant, Pandian 19

W
Wattanakitkarn, Tanachot 282
Wiecek, Margaret M. 136

X
Xaba, Siyanda 126

Y
Yeoh, Kenneth Tiong Kim 209
Yurochka, Sergey S. 353

Z
Zaman, Shamsun Nahar 53
Zerin, Nighat 295
Photovoltaic solar panels provide a renewable, sustainable, and eco-friendly energy source that can significantly contribute to alleviating energy crises in developing countries such as Bangladesh. By harnessing abundant sunlight, solar panels reduce dependency on fossil fuels and lower energy costs. They also offer a scalable solution to meet both current and future power demands, supporting economic growth and energy security.
The integration of AI technologies in design processes is transforming the roles of designers by making tasks more efficient and data-driven while demanding new skillsets. Designers using AI must blend technical abilities in AI tools with creative skills to leverage data sets and algorithms for generating designs, allowing for a more analytical approach compared to conventional methods. This shift enables designers to automate routine tasks and rapidly explore multiple design options, fostering greater innovation. Despite AI’s efficiency in generating design options, it lacks the human element, such as emotional connectivity and creativity, thus requiring designers to balance AI capabilities with human intuition to maintain creativity and originality in their work. Future designers will also need to understand the ethical implications of AI, such as potential biases, and ensure its responsible use to produce inclusive and fair designs. Consequently, designers will need to develop competencies in both AI technology and creative processes to effectively collaborate with AI in producing innovative and effective designs.
Imaging technology enables detailed analysis of seedling characteristics by capturing high-resolution images for evaluating the strength and quality of soybean seeds. Ilastik software and hyperspectral imaging improve classification by focusing on specific visual traits like color and structure, leading to a more accurate assessment of seed vitality and viability. This approach helps differentiate healthy seedlings from those with low physiological quality due to factors such as disease or mechanical injury.
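For concreteness, descriptors of this kind can be computed with off-the-shelf tools. The sketch below (scikit-image, with a hypothetical input file and threshold) extracts a few simple colour and shape traits from a segmented seedling image; it is not the cited Ilastik/hyperspectral pipeline.

# Illustrative sketch only: simple colour and shape descriptors from a
# seedling photo, of the kind used to separate vigorous seedlings from
# low-quality ones. The file name and threshold are hypothetical.
from skimage import io, color, measure

img = io.imread("seedling.png")[..., :3]   # hypothetical RGB image
mask = color.rgb2gray(img) > 0.3           # crude foreground threshold
region = max(measure.regionprops(measure.label(mask)), key=lambda r: r.area)
features = {
    "mean_rgb": img[mask].mean(axis=0),    # colour trait
    "area_px": region.area,                # size trait
    "eccentricity": region.eccentricity,   # structural trait
}
print(features)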
The challenges in managing e-waste within the context of green IT primarily involve the environmental and health impacts associated with inadequate e-waste recycling and disposal methods. Such methods can lead to the release of toxic substances into the environment and pose health risks, particularly in densely populated urban areas where green spaces are limited. Additionally, the rapid advancement of technology and increased consumption of electronic devices have exacerbated the problem, making the recycling and management of these wastes more complex. Potential solutions include promoting the use of renewable energy sources and energy-efficient systems within IT infrastructures, such as data centers, to reduce dependency on non-renewable resources and to minimize electronic waste. Implementing strategies for better workload distribution in data centers using green energy can also enhance sustainability. Moreover, remote sensing technology can be employed to better manage and monitor green spaces, which play a crucial role in mitigating pollution and enhancing urban environmental quality. Effective policy measures and incentives for recycling and using green technologies are also essential to address these challenges comprehensively.
Hyperspectral imaging provides detailed information across spectral bands which, when combined with machine learning techniques, helps in accurately distinguishing between different seed varieties. In particular, Convolutional Neural Networks (CNNs) greatly benefit from this integration, as they can extract and utilize complex features from high-dimensional data to categorize seeds with approximately 99% accuracy.
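A minimal sketch of such a network is shown below in PyTorch, treating the spectral bands as input channels so that the convolutions mix spatial and spectral information. The band count, patch size, and class count are placeholders, not the cited experimental setup.

# Hedged sketch: a small CNN over hyperspectral seed patches, with the
# spectral bands treated as input channels. All sizes are placeholders.
import torch
import torch.nn as nn

class SeedCNN(nn.Module):
    def __init__(self, bands: int = 100, n_classes: int = 5):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(bands, 32, kernel_size=3, padding=1), nn.ReLU(),
            nn.MaxPool2d(2),
            nn.Conv2d(32, 64, kernel_size=3, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1), nn.Flatten(),
            nn.Linear(64, n_classes),
        )

    def forward(self, x):  # x: (batch, bands, height, width)
        return self.net(x)

model = SeedCNN()
dummy = torch.randn(4, 100, 32, 32)  # four fake seed patches
print(model(dummy).shape)            # torch.Size([4, 5])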
To improve energy efficiency in cloud data centers, methods such as dynamic workload distribution, server consolidation, and the integration of low-power modes are employed. These efforts lead to reduced energy consumption and operational costs while also minimizing heat generation, thus preventing hardware failures. Energy-efficient practices support sustainable cloud computing growth and align with environmental goals by reducing fossil fuel dependence.
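Server consolidation, one of the named methods, is essentially a bin-packing problem: pack virtual-machine loads onto as few servers as possible so that the remainder can be switched to a low-power mode. The sketch below uses the classic first-fit-decreasing heuristic with made-up load numbers; production schedulers additionally weigh migration cost, service-level agreements, and thermal limits.

# First-fit-decreasing consolidation sketch: assign VM loads to the
# fewest servers so idle machines can enter a low-power mode.
# Capacities and loads are made-up illustrative numbers.
def consolidate(loads, capacity):
    servers = []     # remaining capacity of each active server
    placement = {}   # vm index -> server index
    for i, load in sorted(enumerate(loads), key=lambda p: -p[1]):
        for s, free in enumerate(servers):
            if load <= free:             # first server it fits on
                servers[s] -= load
                placement[i] = s
                break
        else:                            # no fit: power on a new server
            servers.append(capacity - load)
            placement[i] = len(servers) - 1
    return placement, len(servers)

placement, n_active = consolidate([0.6, 0.3, 0.5, 0.2, 0.4], capacity=1.0)
print(f"{n_active} active servers:", placement)  # 2 active servers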
Accurate seed quality classification ensures the selection of viable and robust seeds, directly influencing crop yield and sustainability. It allows for early identification of superior plant characteristics and facilitates the elimination of poor-quality seeds. This leads to more uniform plant stands, better resource use efficiency, and enhanced resilience against environmental pressures, thereby supporting overall agricultural productivity and ecological sustainability.
Neural networks and computer vision use visual features such as color, shape, and texture to classify pepper seeds. Images are captured, and features are extracted and reduced for effective analysis. Neural networks then classify the seeds with improved accuracy by learning patterns from these reduced features, achieving an accuracy rate of 84.94%. This process is enhanced by cross-validation, ensuring reliable classification results.
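A hedged sketch of this kind of pipeline is shown below, using scikit-learn with synthetic data standing in for the pepper-seed features; the study's actual features, reduction method, and network differ. Note the cross-validation step at the end.

# Sketch: feature reduction (PCA) + neural network + 5-fold
# cross-validation. Synthetic data replaces the real seed features.
from sklearn.datasets import make_classification
from sklearn.decomposition import PCA
from sklearn.model_selection import cross_val_score
from sklearn.neural_network import MLPClassifier
from sklearn.pipeline import make_pipeline

X, y = make_classification(n_samples=300, n_features=40, n_classes=3,
                           n_informative=10, random_state=0)
pipe = make_pipeline(
    PCA(n_components=10),                      # reduce extracted features
    MLPClassifier(hidden_layer_sizes=(32,),    # small neural network
                  max_iter=1000, random_state=0),
)
scores = cross_val_score(pipe, X, y, cv=5)     # cross-validated accuracy
print(f"mean accuracy: {scores.mean():.3f}")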
Deep learning technology simplifies feature extraction by automatically learning representations from raw input data, leading to improved classification accuracy in seed classification tasks compared to traditional machine learning models. While methods like KNN and SVM require manual feature engineering, deep learning models like CNNs can achieve superior accuracy, reported at 99%, by leveraging hierarchical feature extraction.
Multispectral imaging techniques offer significant benefits for classifying eggplant seed varieties by capturing a wide range of spectral features that improve discrimination between different cultivars. From multispectral images, 78 distinct features of individual eggplant seeds were extracted. Using these features with advanced classification models like SVM and CNN, high classification accuracies are achieved, ranging from 90.12% to 100% for SVM and 90.67% for a 2D CNN, indicating robust performance. These techniques allow for the analysis of genetic and environmental factors affecting seed coats, which conventional methods might miss. This method enhances the speed and accuracy of classification compared to manual techniques, thus facilitating more efficient seed sorting and quality control. Multispectral imaging contributes to improving seed quality assessment by allowing detailed morphological analysis, potentially increasing productivity and ensuring better adaptation of seed varieties to local environments.
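As a rough illustration of the feature-based route, the sketch below trains an SVM on a synthetic table with 78 features per seed, standing in for the extracted multispectral features; the accuracies cited above come from real data and tuned models.

# Sketch: SVM on a table of 78 per-seed features (synthetic stand-in
# for the multispectral features), with a held-out test split.
from sklearn.datasets import make_classification
from sklearn.model_selection import train_test_split
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVC

X, y = make_classification(n_samples=400, n_features=78, n_classes=4,
                           n_informative=20, random_state=0)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.25,
                                          random_state=0)
clf = make_pipeline(StandardScaler(), SVC(kernel="rbf"))
clf.fit(X_tr, y_tr)
print(f"test accuracy: {clf.score(X_te, y_te):.3f}")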