MethodsX 10 (2023) 102152

Contents lists available at ScienceDirect

MethodsX

journal homepage: www.elsevier.com/locate/mex

A non-dominated sorting based multi-objective neural network algorithm

Deepika Khurana^a, Anupam Yadav^a,*, Ali Sadollah^b

^a Department of Mathematics, Dr. B.R. Ambedkar National Institute of Technology, Jalandhar, 144027, India
^b School of Engineering, University of Science and Culture (USC), Tehran, Iran

ARTICLE INFO

Method name: Multi-objective Neural Network Algorithm
Keywords: Neural network algorithm, Multi-objective, Non-dominated sorting

ABSTRACT

The Neural Network Algorithm (NNA) is a recently proposed metaheuristic that is inspired by the idea of artificial neural networks. The performance of the NNA on single-objective optimization problems is very promising and effective. In this article, a maiden attempt is made to restructure the NNA to address multi-objective optimization problems (MOPs). To make the NNA suitable for MOPs, several fundamental changes to the original NNA are proposed. A novel concept is proposed to initialize the candidate solutions, update positions, and select the target solution. To examine the optimization ability of the proposed scheme, it is tested on several benchmark problems and the results are compared with eight state-of-the-art multi-objective optimization algorithms. Inverse generational distance (IGD) and hypervolume (HV) metrics are also calculated to understand the optimization ability of the proposed scheme. The results are statistically validated using the Wilcoxon signed rank test. It is observed that the overall optimization ability of the proposed scheme to solve MOPs is very good.

• This paper proposes a method to solve multi-objective optimization problems.
• A multi-objective Neural Network Algorithm method is proposed.
• The proposed method solves difficult multi-objective optimization problems.

Specification table

Subject area: Mathematics and Computing
Name and reference of original method: Neural Network Algorithm [1], NSGA-II [2]

Introduction

An optimization problem that deals with one objective is called a single-objective optimization problem, whereas if it deals with more than one objective, it is referred to as a multi-objective optimization problem. In our daily life, we face many problems which include numerous objectives. For instance, a car buyer wants to minimize the cost but also wants to maximize comfort. Many problems in engineering and science deal with multiple objectives (MOPs), so the study of MOPs will remain a continuous process.

* Corresponding author. E-mail address: anupam@nitj.ac.in (A. Yadav).

https://doi.org/10.1016/j.mex.2023.102152
Received 8 February 2023; Accepted 23 March 2023; Available online 26 March 2023
2215-0161/© 2023 The Author(s). Published by Elsevier B.V. This is an open access article under the CC BY-NC-ND license (http://creativecommons.org/licenses/by-nc-nd/4.0/).

Mathematically, the formulation of a MOP is as follows:

Optimize Φ(x̄) = (φ_1(x̄), φ_2(x̄), …, φ_m(x̄))
subject to ψ_i(x̄) ≤ 0, i = 1, 2, …, p, and η_j(x̄) = 0, j = 1, 2, …, q,   (1)

where ψ_i(x̄) and η_j(x̄) are constraints, x̄ = (x_1, x_2, …, x_D)^T is the decision variable vector, and φ_i(x̄) are the objectives. One needs to optimize all objectives simultaneously to identify the optimal solution. Unlike single-objective optimization problems, MOPs have a solution set that satisfies all conflicting objectives. Therefore, the challenge is to identify a solution set that captures the trade-off between a number of objectives rather than a single optimal solution [5]. Early-developed optimization algorithms were aimed at single-objective optimization problems.
Therefore, to find the optimal solution for MOPs, researchers converted the multi-objective problem into a single-objective one by means of user-defined parameters and solved it using single-objective optimization algorithms [4] in the following way. The problem

Optimize Φ(x̄) = (φ_1(x̄), φ_2(x̄), …, φ_m(x̄))   (2)

will be formulated as given below, where it is called the weighted sum method:

G(x̄) = Σ_{i=1}^{m} λ_i φ_i(x̄),   (3)

where λ_i are weights used as constant parameters. Therefore, G(x̄) will be optimized instead of Φ(x̄). By changing the values of the constant parameters, one can find the required set of solutions. However, being parameter dependent, the results of these types of problems fluctuate as the parameters fluctuate [4]. To overcome these issues, many researchers suggest that instead of parameter-dependent techniques, one can use population-based techniques to find the solution of these types of problems [3]. Population-based algorithms initiate with a set of individuals, and their result is also a population. This property makes them popular for MOPs. With the help of a population-based algorithm, one can find a non-dominated solution set of a MOP, which is named the Pareto Optimal Set (PS), and its mapping onto objective space is called the Pareto Front (PF) [3].

Unlike single-objective optimization problems, MOPs have two goals to achieve: to find a non-dominated solution set that concurrently satisfies all objectives, and to distribute it evenly over the approximate PF. Convergence and diversity are the two main factors that show the quality of the solution. The first goal can be achieved by choosing an appropriate external archive that stores non-dominated solutions with high-quality metric values [3], while the other can be achieved by choosing an appropriate density estimator [3]. The density estimator chooses a target solution from the external archive, which directs the set of solutions towards the true PF and guides it to distribute evenly over the approximate PF.
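The weighted sum scalarization of Eqs. (2)-(3) can be sketched in a few lines of Python. The bi-objective functions f1, f2 and the equal weights below are illustrative assumptions, not taken from the paper:

```python
import numpy as np

def weighted_sum(objectives, weights):
    """Scalarize objectives phi_i into G(x) = sum_i lambda_i * phi_i(x), cf. Eq. (3)."""
    def G(x):
        return sum(w * phi(x) for w, phi in zip(weights, objectives))
    return G

# Hypothetical bi-objective problem: minimize both f1 and f2.
f1 = lambda x: float(np.sum(x ** 2))
f2 = lambda x: float(np.sum((x - 2.0) ** 2))

G = weighted_sum([f1, f2], weights=[0.5, 0.5])
print(G(np.array([1.0, 1.0])))  # 0.5*2 + 0.5*2 = 2.0
```

Varying the weight vector and re-optimizing G yields different trade-off solutions, which is exactly the parameter dependence criticized above.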
The remainder of this paper is organized as follows. Section "Related works" collects recent literature concerning MOPs and corresponding optimizers. Afterward, the standard NNA, its searching strategies, and its steps are explained in detail in Section "Neural network algorithm". In Section "Multi-objective neural network algorithm", the proposed modified multi-objective NNA along with its new concepts is provided. Section "Numerical results and discussions" describes the considered benchmarks, performance metrics, and numerical results for different optimizers, accompanied by statistical tests. Finally, Section "Conclusions and future directions" draws a conclusion and provides some future directions.

Related works

Meta-heuristic algorithms have shown great efficiency and potential in finding solutions for MOPs. Evolutionary algorithms and swarm-based algorithms are meta-heuristic algorithms that have demonstrated their ability on single-objective optimization problems, and they have also shown great performance in providing the optimal solutions of MOPs. Genetic Algorithm (GA) [5], Differential Evolution (DE) [6], and Particle Swarm Optimization (PSO) [7] are the most used and popular algorithms which have been modified and utilized to solve MOPs. Multi-objective Genetic Algorithm (MOGA) [8], Non-dominated Sorting Genetic Algorithm (NSGA) [9], NSGA-II [2], and NSGA-III [10] are various versions of GA which have been used for tackling MOPs.

The MOGA is the first known modification of GAs implemented for solving MOPs. In 1994, Kalyanmoy Deb introduced the concept of non-dominated sorting [9] and proposed it with GA to solve MOPs. The concept of non-dominated sorting is still popular among multi-objective optimization algorithms for separating dominated and non-dominated solutions.
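The dominance relation underlying non-dominated sorting can be sketched as follows (minimization is assumed, as in the benchmarks used later in the paper):

```python
import numpy as np

def dominates(a, b):
    """True if objective vector a Pareto-dominates b (minimization):
    a is no worse than b in every objective and strictly better in at least one."""
    a, b = np.asarray(a), np.asarray(b)
    return bool(np.all(a <= b) and np.any(a < b))

print(dominates([1.0, 2.0], [2.0, 2.0]))  # True
print(dominates([1.0, 3.0], [2.0, 2.0]))  # False: a trade-off, neither dominates
```

Solutions that no other solution dominates form the first non-dominated front; removing them and repeating yields the subsequent ranks.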
However, the density estimator used in NSGA [9] is not well suited in all cases; hence the concept of crowding distance was introduced along with a new non-dominated sorting technique in NSGA-II [2]. The NSGA-II is a very well-known optimizer for solving MOPs. On one hand, non-dominated sorting sorts the non-dominated particles, and on the other hand, crowding distance helps to maintain diversity within the particles. Later on, to tackle MOPs with more than three objectives, the NSGA-III was developed, which works on a reference point-based strategy.

As with GAs, PSO has also shown excellent efficiency in solving MOPs. PSO is inspired by the food-finding behavior of swarms such as bird flocks. It is one of the most prominent members of the family of population-based optimization algorithms, and it has contributed to the field of MOPs as well. The variants of PSO for tackling MOPs include, but are not limited to, Multi-Objective Particle Swarm Optimization (MOPSO) [11], Speed-constrained Multi-objective Particle Swarm Optimization (SMPSO) [12], Dual-Objective Multi-Objective Particle Swarm Optimization (DMOPSO) [13], and Multi-Objective Particle Swarm Optimization based on a Two-Archive Mechanism (MOPSOTA) [14]. In MOPSO [11], a non-dominated sorting strategy along with an adaptive grid method is used, while in SMPSO [12], velocity restriction and a mutation operator are utilized as turbulence. In DMOPSO [13], dual objectives (i.e., primary and secondary objectives) are considered, and in MOPSOTA [14], a two-archive mechanism is proposed to tackle the multiple objectives.
Similar to GA and PSO, there are many recently developed optimizers that have shown their potential for the optimal solution of MOPs, for instance, Strength Pareto Evolutionary Algorithm (SPEA) [15], Pareto Archived Evolution Strategy (PAES) [16], and Indicator-Based Evolutionary Algorithm (IBEA) [17]. These algorithms have played a vital role in our lives, as most of the optimization problems we handle in daily life include multiple objectives. The applications of multi-objective optimization algorithms can be seen in finance [18], machine learning [19], medical sciences [20], transportation problems [21], supply chain problems [22], etc. In the field of machine learning, the participation of multi-objective optimization algorithms is prominent: they assist machine learning algorithms in optimizing their hyper-parameters, usually under conflicting performance objectives, and in selecting the best model for a given task. For instance, [23] conducted a study on different types of multi-objective PSO for feature selection and created an FS scheme to tackle classification problems. Acampora et al. [24] proposed a multi-objective optimization scheme for the training set selection problem. In a similar manner, in the fields of medical sciences, engineering, etc., the role of multi-objective optimization algorithms is equally important, and many optimization algorithms participate in these fields to improve their efficiency.

However, multi-objective optimization still remains an open research area for various reasons. Hence, the need for and development of better multi-objective algorithms shall remain a continuous process. In this article, we design a non-dominated sorting-based Multi-Objective Neural Network Algorithm (MO-NNA).
The Neural Network Algorithm (NNA) [1] is a recently proposed meta-heuristic for single-objective optimization problems. The NNA has proved its ability in solving various types of single-objective optimization problems [25-32]. In this paper, the NNA has been restructured based on new search mechanisms and converted into a multi-objective optimizer to find the optimal solutions of MOPs. In order to assess the performance of the proposed optimizer, various well-defined MOPs have been considered and a comparison has been made with state-of-the-art multi-objective optimization algorithms available in the literature.

Neural network algorithm

The Neural Network Algorithm is a meta-heuristic optimization algorithm motivated by the distinctive structure of artificial neural networks (ANNs). The NNA employs the concept and structure of ANNs to update the positions of individuals to achieve optimal solutions [1]. As in ANNs, the NNA uses bias and transfer function operators to update the positions and weights of individuals. Similar to other meta-heuristic optimization algorithms, the NNA initiates with the initialization of individual positions, the so-called pattern solutions. In the NNA, along with positions, the weight matrix is also initialized and restricted by a constraint. Just as ANNs map input data onto target data by iteratively choosing values of weights and reducing the error at each step, the NNA chooses weights and applies bias and transfer function operators to the positions of pattern solutions to reduce the error and converge the individuals towards the target solution (i.e., the best solution obtained so far) at each iteration. The detailed steps of the standard NNA are explained in the following sections.

Steps of NNA

(i) In the first step, the position matrix of pattern solutions is initialized as follows:

X = [x_{i,j}], i = 1, 2, …, nPop, j = 1, 2, …, K.   (4)

Here, x_{i,j} are decision variables of dimension K.
The order of the matrix is nPop × K, where nPop is the size of the population and K is the dimension of the decision variables.

(ii) Then, the weight matrix of individuals of size nPop × nPop is initialized. Each pattern solution is assigned an array of weights of size nPop. Hence, the weight matrix of the population of size nPop × nPop is represented as follows:

W = [w_{i,j}], i, j = 1, 2, …, nPop.   (5)

Here, w_{i,j} ∈ (0, 1). Besides, each column of Eq. (5) is restricted by the constraint given in the following relation:

Σ_{i=1}^{nPop} w_{i,j} = 1, j = 1, 2, …, nPop.   (6)

(iii) After creating a random initial population of pattern solutions, their positions, and the corresponding weights, the cost of each individual is evaluated using the objective function as shown in the following equation:

Cost_i = f(x_{i,1}, x_{i,2}, …, x_{i,K}), i = 1, 2, …, nPop.   (7)

The individual having the minimum cost is chosen as the target solution, and its corresponding weight vector is considered the target weight.

(iv) Afterwards, the position and weight matrices are updated. The position of a pattern solution is updated as follows:

X_j^{new}(t+1) = Σ_{i=1}^{nPop} w_{ij}(t) X_i(t), j = 1, 2, …, nPop,   (8)
X_j(t+1) = X_j(t) + X_j^{new}(t+1), j = 1, 2, …, nPop,   (9)

where t is the iteration index. Accordingly, the individuals' weights are updated using the following equation:

W_i(t+1) = W_i(t) + 2 × rand × (W_{target}(t) − W_i(t)), i = 1, 2, …, nPop.   (10)

Note that the weight matrix must always satisfy the constraint given in Eq. (6).

(v) To achieve the optimal solution, improve convergence, and maintain diversity within the population, a certain portion of the individuals within the population is modified using the bias operator. The position and weight matrices are updated using the bias operator given in Table 1. Here, β is the modification factor and acts as a decision maker; it is a pre-defined factor. Initially, it is set to one (i.e., the entire population is modified at the first iteration), and its value is reduced adaptively using the following equation:

β(t+1) = β(t) × 0.99, t = 1, 2, 3, …, Maximum iterations.   (11)
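The position and weight updates of Eqs. (8)-(10), together with the constraint of Eq. (6), can be sketched in NumPy as follows. Treating Eq. (10) with one random number per weight vector, taking absolute values, and renormalizing the columns to restore Eq. (6) are simplifying assumptions of this sketch:

```python
import numpy as np

rng = np.random.default_rng(0)

def nna_update(X, W, target_idx):
    """One NNA update, a sketch of Eqs. (8)-(10).
    X is nPop x K; W is nPop x nPop with each column summing to one (Eq. (6)).
    Each new solution is a weighted combination of all current solutions, and
    every weight vector (column) is pulled towards the target's weight vector."""
    X_new = W.T @ X                      # Eq. (8): X_new_j = sum_i w_ij * X_i
    X = X + X_new                        # Eq. (9)
    r = rng.random(W.shape[1])           # one random number per column
    W = np.abs(W + 2.0 * r * (W[:, [target_idx]] - W))   # Eq. (10)
    W /= W.sum(axis=0, keepdims=True)    # restore the column constraint, Eq. (6)
    return X, W

nPop, K = 5, 3
X = rng.uniform(-10.0, 10.0, size=(nPop, K))
W = rng.random((nPop, nPop))
W /= W.sum(axis=0, keepdims=True)
X, W = nna_update(X, W, target_idx=0)
print(np.allclose(W.sum(axis=0), 1.0))   # True: columns still sum to one
```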
Table 1
The bias operator used in the standard NNA.

for i = 1 to nPop
    if rand ≤ β
        Apply Bias Operator
    end if
end for

Here, t is the iteration index. It is worth mentioning that any other reduction equation could also be used instead of Eq. (11) for this purpose.

(vi) Not only the bias operator but also the transfer function operator plays a vital role in the searching strategy of the NNA. Indeed, the bias operator handles the exploration phase of the NNA, while the transfer function operator plays a vital role in the exploitation phase. The transfer function operator helps in reducing the error and improving the convergence of pattern solutions. It updates the positions of individuals using Eq. (12), as suggested in [1]:

X_i(t+1) = X_i(t) + 2 × rand × (X_{target}(t) − X_i(t)), i = 1, 2, 3, …, nPop.   (12)

As with the bias operator, some percentage of the population is modified using the transfer function operator, with β as the deciding factor, as given in Table 2.

(vii) Now, the objective values of all updated pattern solutions are calculated using the objective function.
(viii) Using comparison and selection, update the target solution and its corresponding target weight.
(ix) The value of β is also updated using the suggested reduction formula (see Eq. (11)).
(x) Finally, check the stopping criterion (e.g., the maximum number of function evaluations). If it is satisfied, the NNA stops; otherwise, return to Step (iv).

Furthermore, for the sake of clarity, Fig. 1 illustrates the detailed steps of the standard NNA in a graphical manner. In the literature, the NNA has demonstrated its capacity to address single-objective optimization problems. Some of the applications are: 3D-printed acrylonitrile butadiene styrene polymer [25], PEM fuel cells [26,27], engineering optimization problems [27-29], robot manipulator arms [30], shale reservoirs [31], and natural gas liquefaction plants [32].
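The two searching operators and the reduction schedule of Eq. (11) can be sketched as follows. The bound values, the use of β both as the selection probability and as the fraction of modified variables, and the random seed are illustrative assumptions of this sketch:

```python
import numpy as np

rng = np.random.default_rng(1)

def bias_operator(x, lb, ub, beta):
    """Bias (exploration): re-draw roughly a fraction beta of the variables
    of x uniformly inside the bounds [lb, ub]."""
    x = x.copy()
    n_mod = int(np.ceil(beta * x.size))
    idx = rng.choice(x.size, size=n_mod, replace=False)
    x[idx] = rng.uniform(lb, ub, size=n_mod)
    return x

def transfer_operator(x, x_target):
    """Transfer function (exploitation), Eq. (12):
    x(t+1) = x(t) + 2*rand*(x_target - x(t))."""
    return x + 2.0 * rng.random(x.size) * (x_target - x)

beta = 1.0
x = np.zeros(4)
x_target = np.ones(4)
for _ in range(3):
    if rng.random() <= beta:
        x = bias_operator(x, lb=-5.0, ub=5.0, beta=beta)
    else:
        x = transfer_operator(x, x_target)
    beta *= 0.99          # adaptive reduction, cf. Eq. (11)
print(x.shape, round(beta, 6))
```

Because β starts at one and shrinks geometrically, exploration dominates early iterations and exploitation takes over later, which is the behavior the modified algorithm below deliberately changes.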
However, so far, its multi-objective version and its applications for solving MOPs have not been investigated. In this paper, to convert the standard NNA into a multi-objective optimizer, some modifications have been made to the searching operators. In the next section, detailed explanations of the modified NNA are given.

Multi-objective neural network algorithm

As explained in Section "Neural network algorithm", the NNA is a newly designed algorithm based on the working of the human nervous system. It uses bias and transfer function operators to alter pattern solutions and explore unvisited solutions. The bias operator increases the exploration ability of the algorithm, while the transfer function operator increases its exploitation ability. In early iterations, the bias operator plays a vital role in altering the pattern solutions and exploring the objective space. As iterations continue (mostly in the final iterations), the transfer function operator comes to the rescue and avoids premature convergence. However, this concept does not work well in the case of MOPs, in which premature convergence needs to be avoided at a very early stage. Also, exploration of the objective space is much needed due to the number of objectives. Hence, the bias and transfer function operators need to work in parallel and begin to perform at a very early stage. Therefore, the following key changes are proposed in the NNA to make it suitable for MOPs.

Initialization of candidate solutions

Similar to the standard NNA, the proposed modified NNA also starts with the initialization of pattern solutions and their corresponding weights. Positions of individuals are initialized from a Gaussian distribution, and weights are initialized from (0, 1), restricted by the constraint given in Eq. (6). The primary distinction between a MOP and a single-objective optimization problem is the number of objectives.
Therefore, the challenge is how to choose the target solution, which plays an important role in finding the final optimal solution.

Fig. 1. Block diagram of the NNA.

Fig. 2. The concept of the non-dominated sorting strategy for multi-objective optimizers.

To select the best pattern solution (i.e., the target solution), the concept of non-dominated sorting is used, which was introduced in NSGA-II [2]. With a computational complexity of O(MN²), this notion presents a novel way to distinguish between dominated and non-dominated particles, where M is the number of objectives and N is the size of the population. The concept of the non-dominated sorting strategy is illustrated in Fig. 2.

However, the non-dominated sorting method by itself cannot help us choose the best pattern solution. Hence, we use the crowding distance mechanism as a density estimator to reduce the crowding among the obtained solutions on the PF. Finding a non-dominated solution set that is evenly dispersed along the PF curve is the aim of MOPs. The concept of crowding distance was also introduced in NSGA-II [2]. It captures the distribution of non-dominated solutions around a specific non-dominated solution. Fig. 3 depicts a schematic view of the crowding distance operator. The crowding distance of the i-th individual is calculated based on the average side length of the cuboid formed by its neighboring individuals, with boundary solutions assigned an infinite crowding distance. A low crowding distance indicates a densely populated area of solutions, while a high crowding distance indicates a sparse region.
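A compact Python sketch of the two mechanisms described above (non-dominated sorting with O(MN²) pairwise dominance comparisons, and the crowding distance with infinite values at the front boundaries) might look as follows:

```python
import numpy as np

def fast_non_dominated_sort(F):
    """Rank objective vectors F (N x M, minimization) into Pareto fronts,
    in the style of NSGA-II."""
    N = len(F)
    S = [[] for _ in range(N)]     # indices each solution dominates
    n = np.zeros(N, dtype=int)     # how many solutions dominate i
    fronts = [[]]
    for i in range(N):
        for j in range(N):
            if i == j:
                continue
            if np.all(F[i] <= F[j]) and np.any(F[i] < F[j]):
                S[i].append(j)
            elif np.all(F[j] <= F[i]) and np.any(F[j] < F[i]):
                n[i] += 1
        if n[i] == 0:
            fronts[0].append(i)
    k = 0
    while fronts[k]:
        nxt = []
        for i in fronts[k]:
            for j in S[i]:
                n[j] -= 1
                if n[j] == 0:
                    nxt.append(j)
        fronts.append(nxt)
        k += 1
    return fronts[:-1]

def crowding_distance(F):
    """Average side length of the cuboid through a point's neighbours;
    boundary points receive infinite distance."""
    F = np.asarray(F, dtype=float)
    N, M = F.shape
    d = np.zeros(N)
    for m in range(M):
        order = np.argsort(F[:, m])
        d[order[0]] = d[order[-1]] = np.inf
        span = F[order[-1], m] - F[order[0], m]
        if span == 0:
            continue
        for k in range(1, N - 1):
            d[order[k]] += (F[order[k + 1], m] - F[order[k - 1], m]) / span
    return d

F = np.array([[1.0, 4.0], [2.0, 2.0], [4.0, 1.0], [3.0, 3.0]])
print(fast_non_dominated_sort(F))  # [[0, 1, 2], [3]]
```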
Fig. 3. Schematic view of how the crowding distance performs.

Fig. 4 summarizes the update rules used in the modified algorithm:

X_i(t+1) = X_i(t) + X_i^{new}(t+1), i = 1, 2, …, nPop,
W_i(t+1) = W_i(t) + 2 × rand × (W_{target}(t) − W_i(t)), i = 1, 2, …, nPop,
X_i(t+1) = Bias(X_i(t)), i = 1, 2, …, nPop,
W_i(t+1) = Bias(W_i(t)), i = 1, 2, …, nPop,
X_i(t+1) = X_i(t) + 2 × rand × (X_{target}(t) − X_i(t)), i = 1, 2, 3, …, nPop,

where Bias is the operator given in Table 1.

Fig. 4. Block diagram of the modified Neural Network Algorithm used for solving MOPs.

After applying the non-dominated sorting and crowding distance operators, the NNA chooses the non-dominated pattern solution having the maximum crowding distance as the target solution and proceeds to the next population (i.e., iteration). Besides, the external archive also plays a vital role in multi-objective optimizers. It stores the non-dominated solutions to retain the past history and helps to avoid unnecessary disturbances which may occur during the optimization process.

Position and weight updating

The target solution chosen in Section "Initialization of candidate solutions" acts as a leader for all individuals in the population. Positions and weights of pattern solutions (i.e., individuals) are updated in such a way that all pattern solutions converge towards the target solution. As in the NNA, positions and weights of individuals are updated using Eqs. (8), (9), and (10); in the proposed MO-NNA scheme, they are updated in a similar manner. However, in the MO-NNA, the bias and transfer function operators take part at very early stages. Unlike in the standard NNA, the bias and transfer function operators alter the whole population in the MO-NNA. In fact, the objective space of the MO-NNA is of higher dimension than the single objective used in the original NNA. To avoid premature convergence and unnecessary disturbances, the bias and transfer function operators act in parallel and begin to perform at the initial stages.
Indeed, the bias operator creates mutation among individuals to maintain randomness, while the transfer function operator helps pattern solutions approach the chosen target solution. At each iteration, they work in parallel and create a set of non-dominated solutions which are evenly distributed, reaching the optimal PF faster and with more accuracy. Fig. 4 explains how these two operators work in the proposed MO-NNA scheme.

Selection of target solution

After updating the positions and weights of pattern solutions, the new population takes part in non-dominated sorting and, further, in the crowding distance operator. The non-dominated PF is kept in the external archive to preserve its historical data. Mostly, the external archive stores the non-dominated solutions.

Table 5
Statistical optimization results (best, worst, mean, and SD) of the studied performance metrics obtained by the MO-NNA for all benchmark test problems. SD represents standard deviation.

The HV is defined as HV = μ(∪_{i} δ_i), where μ is the Lebesgue measure used to find the volume of the hyper-cubes, and δ_i denotes the hyper-cube formed by the i-th solution and the corresponding reference point.
Using the ideal point z_min and the nadir point z_max, the objective values of the obtained non-dominated solutions are first normalized before being used to calculate the HV. The ideal point and nadir point of the real PF are z_min and z_max, respectively. Following that, (1.1, 1.1, …, 1.1) is taken as the reference point.

Obtained numerical results using the MO-NNA

In this section, the MO-NNA is compared with eight existing state-of-the-art optimizers from the literature over the studied benchmark problems described in Section "Multi-objective benchmark problems". NSGA-II [2], Novel Decomposition-based NSGA-II (DN-NSGA-II) [38], Multi-Objective Particle Swarm Optimization for Multi-Modal Problems (MOPSO-MM) [39], Multi-Objective Particle Swarm Optimization using Ring Topology (MO-RING-PSO) [40], Multi-Objective Ant Lion Optimizer (MALO) [41], Multi-Objective Grey Wolf Optimizer (MOGWO) [42], Multi-Objective Multi-Verse Optimizer (MOMVO) [43], and Multi-Objective Particle Swarm Optimization (MOPSO) [11] are the multi-objective optimizers used for comparison with the obtained results of the MO-NNA. These algorithms have already shown their potential and abilities in solving MOPs. The numerical optimization results obtained by these algorithms are compared using the performance metrics (i.e., IGD and HV) discussed in Section "Performance metrics". All considered algorithms are run with a population size of 100 and a maximum of 100 iterations, i.e., 10,000 function evaluations are used to verify the results. The parameters used in the MO-NNA are the probabilities of the bias operator and the transfer function operator, taken as p_b = 0.3 and p_t = 0.2; for fair comparisons, the parameter settings of all other algorithms are set to their defaults. Twenty independent runs are executed to average out randomness for every studied optimizer.
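For reference, the IGD metric used in the comparisons can be sketched as below; the small two-objective fronts are illustrative only:

```python
import numpy as np

def igd(approx_front, true_front):
    """Inverse generational distance: mean Euclidean distance from each
    reference (true PF) point to its nearest obtained point; lower is better."""
    A = np.asarray(approx_front, dtype=float)
    T = np.asarray(true_front, dtype=float)
    # pairwise distances, shape (len(T), len(A))
    D = np.linalg.norm(T[:, None, :] - A[None, :, :], axis=2)
    return float(D.min(axis=1).mean())

true_pf = np.array([[0.0, 1.0], [0.5, 0.5], [1.0, 0.0]])
print(igd(true_pf, true_pf))        # 0.0: a perfect approximation
print(igd(true_pf + 0.1, true_pf))  # > 0: a shifted front scores worse
```

Because IGD averages over reference points spread along the true PF, it penalizes both poor convergence and poor coverage, although, as noted below, it cannot fully separate the two.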
The stopping criterion is the maximum number of iterations, applied to all optimization methods. First, the statistical optimization results (i.e., best, worst, mean, and standard deviation) of the studied performance metrics for the proposed MO-NNA on all benchmark test problems are provided in Table 5.

As explained in Section "Performance metrics", a minimum value of IGD and a maximum value of HV are required for any algorithm to show excellent performance. By observing Table 5, it can be interpreted that the MO-NNA performs well on the CEC 2020 benchmark test problems. However, the true nature of an algorithm is revealed when it is compared to other optimization methods, as done in the following sections.

Comparison with the other optimizers

To draw a better conclusion about the MO-NNA, the same set of benchmarks is run on the eight different optimization methods of various natures, and the obtained results are compared with those of the proposed MO-NNA. Comparisons have been made over the performance metrics (i.e., IGD and HV) in quantitative (tables) and qualitative (figures) manners. Concerning the quantitative analysis of all applied algorithms, the average results accompanied by the SD of IGD for each benchmark function are tabulated in Table 6.

Table 6 shows that the MO-NNA performs better than the other algorithms with respect to the IGD values. The last row of Table 6 reports the number of best values achieved by each algorithm over the 20 studied benchmarks. The MO-NNA surpasses its competitors on 14 out of 20 benchmarks (see Table 6), while some methods did not perform best on any benchmark.

Table 6
The obtained optimization results using nine optimizers against the benchmarks with respect to the IGD indicator.
NA represents not available.

Fig. 5. Approximated PFs vs. true PFs obtained by the MO-NNA and the other algorithms for the MOP2 benchmark.

Although performance metrics are enough to identify the best-performing algorithm, graphical representations have their own importance. Along with the quantitative analysis, qualitative studies (equally important) are also conducted to show the prominence of the described algorithms. The PF graphs with respect to the true PF for some of the used algorithms are presented in Figs. 5 and 6 for the benchmark test problems MOP2 and ZDT1, respectively. In Fig. 5, the MO-NNA is compared with, among others, the MALO and MOMVO on MOP2. It is apparent from Fig. 5 that the MO-NNA is superior to its peers. Similarly, Fig. 6 presents the performance of the various algorithms on ZDT1. The approximated PF obtained by the MO-NNA shows excellent accuracy against its competitors (see Fig. 6).
Fig. 6. Approximated PFs vs. true PFs obtained by the MO-NNA and its peers for the ZDT1 benchmark.

Table 7
The obtained optimization results using nine optimizers against the studied benchmarks with respect to the HV indicator. NA represents not available.

As discussed above, the IGD by itself cannot judge convergence and diversity simultaneously. As a result, the convergence and diversity of a given algorithm are also assessed using the HV measure. The higher the value of HV, the better the algorithm. Table 7 lists the HV values for all algorithms on the chosen benchmark functions. A higher value of HV represents a larger volume covered by the hyper-cubes; hence, a larger value of HV is desired by any algorithm to show its potential. Looking at Table 7, it can be concluded that the proposed MO-NNA performs much better than the other eight algorithms on the given set of benchmarks. From the last row of Table 7, it can be concluded that the MO-NNA is best on 13 out of 20 benchmarks in terms of HV values, while the NSGA-II, MOGWO, and MOMVO perform best on 4, 2, and 1 out of 20 benchmarks, respectively.

Fig. 7. Approximated PFs vs. true PFs obtained by the MO-NNA and its peers for the FON benchmark.

Fig. 8. Approximated PFs vs. true PFs obtained by the MO-NNA and its competitors for the MMF10 benchmark.
Table 8
The Wilcoxon signed rank test values of various algorithms vs. the MO-NNA with respect to the IGD metric. NA represents not available.

Table 9
The Wilcoxon signed rank test values of various algorithms vs. the MO-NNA with respect to the HV metric. NA represents not available.

It is worth mentioning that the remaining multi-objective optimizers have shown the worst results. Furthermore, graphical representations of the various algorithms on the FON and MMF10 benchmarks are given in Figs. 7 and 8, respectively. Looking at the reported results, the MO-NNA shows superiority with respect to the IGD and HV metrics compared with the considered comparison pool. In light of the obtained results, the MO-NNA can be a suitable alternative optimizer for challenging real-life applications of a multi-objective nature.

Wilcoxon signed rank test

To obtain more comprehensive comparative statistics and to show that the attained results are statistically significant, the Wilcoxon signed rank test has been conducted with a 95% confidence interval. Using this test, the MO-NNA is compared with its comparison pool. The alternative hypothesis assumes that there is a significant difference across algorithms, in contrast to the null hypothesis' assumption that all algorithms perform equally well. The results are recorded in Tables 8 and 9 in the form +/=/−, where '+' represents that the MO-NNA is superior, '=' represents that both algorithms have equivalent performance, and '−' represents that the MO-NNA has inferior performance compared to the competitive algorithm. The values of the Wilcoxon signed rank test are judged on the basis of p-values.
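The mechanics of the test can be sketched as follows. This is a simplified normal-approximation version (no tie handling, two-sided p-value); in practice a library routine such as scipy.stats.wilcoxon would normally be used, and the sample values below are synthetic, not the paper's data:

```python
import numpy as np
from math import erf, sqrt

def wilcoxon_signed_rank(a, b):
    """Paired Wilcoxon signed-rank test, normal approximation.
    Returns (stat, p): stat is the smaller signed-rank sum, p a two-sided
    p-value. A sketch only: ties and exact small-sample p-values are not
    handled (SciPy's scipy.stats.wilcoxon does this properly)."""
    d = np.asarray(a, float) - np.asarray(b, float)
    d = d[d != 0]                                      # drop zero differences
    n = len(d)
    ranks = np.argsort(np.argsort(np.abs(d))) + 1.0    # ranks 1..n (no tie averaging)
    w_pos = ranks[d > 0].sum()
    w_neg = ranks[d < 0].sum()
    stat = min(w_pos, w_neg)
    mu = n * (n + 1) / 4.0
    sigma = sqrt(n * (n + 1) * (2 * n + 1) / 24.0)
    z = (stat - mu) / sigma                            # z <= 0 by construction
    p = 1.0 + erf(z / sqrt(2.0))                       # 2 * Phi(z), two-sided
    return stat, min(p, 1.0)

rng = np.random.default_rng(2)
igd_a = rng.normal(0.02, 0.005, size=20)   # e.g. 20 runs of one optimizer
igd_b = igd_a + 0.01                        # a consistently worse optimizer
stat, p = wilcoxon_signed_rank(igd_a, igd_b)
print(p < 0.05)  # True: the difference is significant
```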
If the p-value is less than 0.05, the MO-NNA is superior (i.e., the '+' sign); if the p-value lies between 0.05 and 0.10, the two comparing algorithms have equivalent performance, represented by the '~' symbol; and if the p-value is greater than 0.10, the MO-NNA is inferior to that algorithm, represented by the '−' sign. Table 8 shows the results of the statistical test attained by the Wilcoxon signed rank test for all reported benchmarks w.r.t. the IGD indicator, and Table 9 shows the results w.r.t. the HV indicator. Table 8 clearly demonstrates the excellent efficiency of the MO-NNA over its peers, as indicated by the '+' sign for each optimizer.

Conclusions and future directions

This paper proposes a modified form of the neural network algorithm (NNA) for solving multi-objective optimization problems (MOPs). Twenty benchmark problems reported by CEC 2020 have been considered for evaluation purposes, along with widely used performance metrics. The proposed algorithm has shown its ability to solve them and beat the existing state-of-the-art algorithms. Just as NNA has shown remarkable results on single-objective optimization problems, it performs well in solving MOPs. For a fair assessment, eight multi-objective optimizers of different types and natures have been included in the comparison pool. To prove that the obtained optimization results are statistically significant, the Wilcoxon signed rank test has been applied between the proposed algorithm and its competitors. Numerical results through performance metrics verify that the proposed algorithm can find an approximated Pareto front very close to the true one for the majority of the studied cases. Therefore, the MO-NNA can be utilized as an alternative multi-objective optimizer for solving challenging real-life applications.

In future research, the standard NNA can be improved using different search operators for solving MOPs. Besides the non-dominated sorting method for handling MOPs, other strategies can be investigated with the MO-NNA.
Constrained and real-life applications with a multi-objective nature can be considered as future directions using the proposed method. Besides, hybridizing the modified NNA with other multi-objective optimizers may lead to better results.

Efficiency and limitations

Although the MO-NNA has shown great results in dealing with multi-objective optimization problems, it has some limitations as well, which can be addressed in future research. The computational complexity of the MO-NNA is higher than that of NSGA-II; also, the technique used in the MO-NNA to maintain diversity can be replaced to improve its performance on many-objective optimization problems.

As far as the applicability of the MO-NNA is concerned, it can be used as a multi-objective optimization algorithm in machine learning, transportation problems, support vector machines, etc. From the results presented in Section "Numerical results and discussions", it can be concluded that the use of the MO-NNA in major applications will be much better than other state-of-the-art algorithms.

Ethics statements

Not applicable.

Declaration of Competing Interest

The authors declare that they have no known competing financial interests or personal relationships that could have appeared to influence the work reported in this paper.

CRediT authorship contribution statement

Deepika Khurana: Data curation, Formal analysis, Writing – original draft. Anupam Yadav: Conceptualization, Supervision, Writing – review & editing. Ali Sadollah: Software, Validation, Writing – review & editing.

Data availability

No data was used for the research described in the article.

Acknowledgments

The authors are thankful to Dr. B.R. Ambedkar NIT Jalandhar, the Council of Scientific & Industrial Research (CSIR-HRDG, Grant No. 09/1127(0008)/2020-EMR-I), and the Science & Engineering Research Board (SERB, Grant No. MTR/2021/000503) for providing the necessary support for this work.

References

[1] A. Sadollah, H. Sayyaadi, A. Yadav, A dynamic metaheuristic optimization model inspired by biological nervous systems: neural network algorithm, Appl. Soft Comput. 71 (2018) 747-782.
[2] K. Deb, A. Pratap, S. Agarwal, T. Meyarivan, A fast and elitist multiobjective genetic algorithm: NSGA-II, IEEE Trans. Evol. Comput. 6 (2) (2002) 182-197.
[3] K. Deb, Multi-objective optimisation using evolutionary algorithms: an introduction, in: Multi-Objective Evolutionary Optimisation for Product Design and Manufacturing, Springer, 2011, pp. 3-34.
[4] I. Das, J.E. Dennis, A closer look at drawbacks of minimizing weighted sums of objectives for Pareto set generation in multicriteria optimization problems, Struct. Optim. 14 (1) (1997) 63-69.
[5] D.E. Goldberg, Genetic Algorithms, Pearson Education India, 2013.
[6] R. Storn, K. Price, Differential evolution – a simple and efficient heuristic for global optimization over continuous spaces, J. Glob. Optim. 11 (4) (1997) 341-359.
[7] J. Kennedy, R. Eberhart, Particle swarm optimization, in: Proceedings of ICNN'95 – International Conference on Neural Networks, Vol. 4, IEEE, 1995, pp. 1942-1948.
[8] T. Murata, H. Ishibuchi, et al., MOGA: multi-objective genetic algorithms, in: IEEE International Conference on Evolutionary Computation, Vol. 1, IEEE, Piscataway, NJ, USA, 1995, pp. 289-294.
[9] N. Srinivas, K. Deb, Multiobjective optimization using nondominated sorting in genetic algorithms, Evol. Comput. 2 (3) (1994) 221-248.
[10] K. Deb, H. Jain, An evolutionary many-objective optimization algorithm using reference-point-based nondominated sorting approach, part I: solving problems with box constraints, IEEE Trans. Evol. Comput. 18 (4) (2014) 577-601.
[11] C.A.C. Coello, M.S. Lechuga, MOPSO: a proposal for multiple objective particle swarm optimization, in: Proceedings of the 2002 Congress on Evolutionary Computation, CEC'02 (Cat. No. 02TH8600), Vol. 2, IEEE, 2002, pp. 1051-1056.
[12] A.J. Nebro, J.J. Durillo, J. García-Nieto, C.A.C. Coello, F. Luna, E. Alba, SMPSO: a new PSO-based metaheuristic for multi-objective optimization, in: 2009 IEEE Symposium on Computational Intelligence in Multi-Criteria Decision-Making (MCDM), IEEE, 2009, pp. 66-73.
[13] S. Zapotecas Martínez, C.A. Coello Coello, A multi-objective particle swarm optimizer based on decomposition, in: Proceedings of the 13th Annual Conference on Genetic and Evolutionary Computation, 2011, pp. 69-76.
[14] Y. Cui, X. Meng, J. Qiao, A multi-objective particle swarm optimization algorithm based on two-archive mechanism, Appl. Soft Comput. 119 (2022) 108532.
[15] E. Zitzler, L. Thiele, An Evolutionary Algorithm for Multiobjective Optimization: The Strength Pareto Approach, TIK-Report 43, 1998.
[16] J. Knowles, D. Corne, The Pareto archived evolution strategy: a new baseline algorithm for Pareto multiobjective optimisation, in: Proceedings of the 1999 Congress on Evolutionary Computation, CEC99 (Cat. No. 99TH8406), Vol. 1, IEEE, 1999, pp. 98-105.
[17] D.H. Phan, J. Suzuki, R2-IBEA: R2 indicator based evolutionary algorithm for multiobjective optimization, in: 2013 IEEE Congress on Evolutionary Computation, IEEE, 2013, pp. 1836-1845.
[18] Y. Zhou, W. Chen, D. Li, Design of optimum portfolio scheme based on improved NSGA-II algorithm, Comput. Intell. Neurosci. 2022 (2022).
[19] R. Li, Y. Jin, A wind speed interval prediction system based on multi-objective optimization for machine learning method, Appl. Energy 228 (2018) 2207-2220.
[20] T. Wen, Z. Zhang, M. Qiu, Q. Wu, C. Li, A multi-objective optimization method for emergency medical resources allocation, J. Med. Imaging Health Inf. 7 (2) (2017) 393-399.
[21] H.C.W. Lau, T.M. Chan, W.T. Tsui, F.T. Chan, G.T. Ho, K.L. Choy, A fuzzy guided multi-objective evolutionary algorithm model for solving transportation problem, Expert Syst. Appl. 36 (4) (2009) 8255-8268.
[22] F. Altiparmak, M. Gen, L. Lin, T. Paksoy, A genetic algorithm approach for multi-objective optimization of supply chain networks, Comput. Ind. Eng. 51 (1) (2006) 196-215.
[23] B. Xue, M. Zhang, W.N. Browne, Particle swarm optimisation for feature selection in classification: a multi-objective approach, IEEE Trans. Cybern. 43 (6) (2012) 1656-1671.
[24] G. Acampora, F. Herrera, G. Tortora, A. Vitiello, A multi-objective evolutionary approach to training set selection for support vector machine, Knowl.-Based Syst. 147 (2018) 94-108.
[25] J.S. Chohan, N. Mittal, R. Kumar, S. Singh, S. Sharma, J. Singh, K.V. Rao, M. Mia, D.Y. Pimenov, S.P. Dwivedi, Mechanical strength enhancement of 3D printed acrylonitrile butadiene styrene polymer components using neural network optimization algorithm, Polymers 12 (10) (2020) 2250.
[26] M.S. AbouOmar, H.J. Zhang, Y.X. Su, Fractional order fuzzy PID control of automotive PEM fuel cell air feed system using neural network optimization algorithm, Energies 12 (8) (2019) 1435.
[27] A. Fathy, A.A. El-Fergany, H.M. Hasanien, Effective methodology based on neural network optimizer for extracting model parameters of PEM fuel cells, Int. J. Energy Res. 43 (14) (2019) 8136-8147.
[28] Y. Zhang, Z. Jin, Y. Chen, Hybridizing grey wolf optimization with neural network algorithm for global numerical optimization problems, Neural Comput. Appl. 32 (14) (2020) 10451-10470.
[29] Y. Zhang, Chaotic neural network algorithm with competitive learning for global optimization, Knowl.-Based Syst. 231 (2021) 107405.
[30] M. Elsisi, K. Mahmoud, M. Lehtonen, M.M.F. Darwish, An improved neural network algorithm to efficiently track various trajectories of robot manipulator arms, IEEE Access 9 (2021) 11911-11920.
[31] H. Zia, J.J. Sheng, Complex fracture network simulation and optimization in naturally fractured shale reservoir based on modified neural network algorithm, J. Nat. Gas Sci. Eng. 95 (2021) 104282.
[32] K. Qadeer, A. Ahmad, A. Naquash, M.A. Qyyum, K. Majeed, T. Zhou, T. He, A.S. Nizami, M. Lee, Neural network-inspired performance enhancement of synthetic natural gas liquefaction plant with different minimum approach temperatures, Fuel 308 (2022) 121858.
[33] J.J. Liang, B.Y. Qu, D.W. Gong, C.T. Yue, Problem Definitions and Evaluation Criteria for the CEC 2019 Special Session on Multimodal Multiobjective Optimization, Computational Intelligence Laboratory, Zhengzhou University, 2019.
[34] E. Zitzler, K. Deb, L. Thiele, Comparison of multiobjective evolutionary algorithms: empirical results, Evol. Comput. 8 (2) (2000) 173-195.
[35] S. Huband, P. Hingston, L. Barone, L. While, A review of multiobjective test problems and a scalable test problem toolkit, IEEE Trans. Evol. Comput. 10 (5) (2006) 477-506, doi:10.1109/TEVC.2005.861417.
[36] L. Wang, X. Pan, X. Shen, P. Zhao, Q. Qiu, Balancing convergence and diversity in resource allocation strategy for decomposition-based multi-objective evolutionary algorithms, Appl. Soft Comput. 100 (2021) 106868.
[37] N. Chen, W.N. Chen, Y.J. Gong, Z.H. Zhan, J. Zhang, Y. Li, Y.S. Tan, An evolutionary algorithm with double-level archives for multiobjective optimization, IEEE Trans. Cybern. 45 (9) (2014) 1851-1863.
[38] W. Li, Y. Li, et al., A novel decomposition-based multimodal multi-objective evolutionary algorithm, in: International Conference on Intelligent Computing, Springer, 2020, pp. 541-552.
[39] J. Liang, Q. Guo, C. Yue, B. Qu, K. Yu, A self-organizing multiobjective particle swarm optimization algorithm for multimodal multi-objective problems, in: International Conference on Swarm Intelligence, Springer, 2018, pp. 550-560.
[40] C. Yue, B. Qu, J. Liang, A multiobjective particle swarm optimizer using ring topology for solving multimodal multiobjective problems, IEEE Trans. Evol. Comput. 22 (5) (2017) 805-817.
[41] S. Mirjalili, P. Jangir, S. Saremi, Multi-objective ant lion optimizer: a multi-objective optimization algorithm for solving engineering problems, Appl. Intell. 46 (1) (2017) 79-95.
[42] S. Mirjalili, S. Saremi, S.M. Mirjalili, L.d.S. Coelho, Multi-objective grey wolf optimizer: a novel algorithm for multi-criterion optimization, Expert Syst. Appl. 47 (2016) 106-119.
[43] S. Mirjalili, P. Jangir, S.Z. Mirjalili, S. Saremi, I.N. Trivedi, Optimization of problems with multiple objectives using the multi-verse optimization algorithm, Knowl.-Based Syst. 134 (2017) 50-71.
