Department of Civil Engineering
Talamban, Cebu City, Philippines 6000
Term/Academic Year: First Semester/AY 2018-2019
Research Proposal Endorsement and Approval

I/we have read and agreed to the content of the research proposal entitled

GROUPED GREY WOLF OPTIMIZATION (GGWO) ALGORITHM WITH RANDOM WALK USED ON SIZE OPTIMIZATION OF STEEL TRUSS STRUCTURES

Prepared and submitted by:

BACAY, VINZ MARTINA B.
GOLO, JOHN EMERALD D.
NOVAL, GERARD A.

I/we affirm that the same complies with the standards prescribed for the research proposal requirement.

In view thereof, I/we hereby endorse the said research proposal for review and oral presentation.

Endorsed by:

Engr. Nophi Ian Biton
Name and Signature of Undergraduate Thesis Adviser
Date Endorsed:
Research Proposal Approval

GROUPED GREY WOLF OPTIMIZATION (GGWO) ALGORITHM WITH RANDOM WALK USED ON SIZE OPTIMIZATION OF STEEL TRUSS STRUCTURES

Proponents:

BACAY, VINZ MARTINA B.
GOLO, JOHN EMERALD D.
NOVAL, GERARD A.

Undergraduate Thesis Committee
The Undergraduate Thesis Committee is constituted by qualified faculty members of the Department of Civil Engineering (or from other departments), according to the Manual of Regulations for Private Higher Education (MORPHE), who have an ample track record in research. The committee includes at least two senior faculty members, the undergraduate thesis adviser, the undergraduate thesis co-adviser (if any), and a committee chair (Institutional Guidelines for Thesis and Dissertation 2015).
CE 511GL FORM-1c:
Undergraduate Research Proposal Template
Cover Page
Project Duration: 5 months
Project Cost: Php 5,000.00
I. Introduction

In the field of civil engineering, one of the challenges continually faced by structural designers is finding the optimum design of a structure that meets safety, serviceability, and economy requirements. Among these three, safety should be of utmost importance and prioritized in the design process, followed by the serviceability of the design to the occupants. Lastly, the economy, or cost, of the structure should be considered. The cost of a project or structure can be economized in several ways, such as minimizing material and labor costs. Another way is to optimize the design of the structure itself without compromising safety and serviceability.

One way of optimizing the design of a structure involves using the minimum area or dimensions required for the structural members while meeting design specifications and safety and serviceability requirements. However, the design of a structure is a complex, iterative procedure with many variables to consider and specifications to meet. For this reason, the researchers propose the use of a metaheuristic algorithm in optimizing the size of a structure.

A metaheuristic algorithm is an optimization technique for solving complex, high-dimensional optimization problems in a reasonable time with acceptable results (Talbi, 2009). A metaheuristic algorithm does not use mathematical or calculus-based models for minimization or maximization; instead, it uses a set of ideas, guidelines, or rules of thumb to find good solutions in a limited time. Due to their flexibility, speed, and considerable accuracy, the researchers aim to study the efficiency of using metaheuristic algorithms to optimize the size of steel truss structures.

There are many metaheuristic algorithms in the literature that can solve a wide range of optimization problems. However, according to the No Free Lunch (NFL) theorem (Wolpert & Macready, 1997), no single algorithm can solve all optimization problems; an algorithm may be effective and efficient in solving one kind of problem and not in another. Therefore, new variations of different algorithms are always welcome. In this study, the researchers aim to improve an existing metaheuristic algorithm called the Grey Wolf Optimizer (Mirjalili et al., 2014) to optimize the size of steel truss structures.
availability and material costs need to be taken into account to produce a more economical design.

Structural optimization is a widely used tool in different engineering fields. It is also fast emerging in the field of structural engineering, to aid structural designers in finding the most appropriate size, shape, or topology for better utilization of materials, in order to reduce structural weight and save construction costs. If structural optimization is used properly, with the guidance and intuition of the designer, it can be a very powerful, even necessary, tool in structural design.

Briseghella et al. (2012) applied structural optimization to the Granatieri di Sardegna Bridge in Italy, where the bridge superstructure was decreased in size by removing unnecessary material at the bottom flange through the insertion of cavities. Results showed that 40% of the superstructure's weight was removed, which avoided the retrofitting of the existing substructures such as the foundations and abutments. The engineers used an innovative design with holes located in the bottom flange, which had never been done before, while still meeting the specifications of their updated seismic code. The design and construction of the well-known urban redevelopment project named "Three Pacific Place" in Hong Kong, China also applied structural optimization, which ultimately reduced the concrete volume of the tower structure and accelerated the completion of the entire construction project (Cheung and Chau, 2005). Results showed a reduction of the reinforced concrete volume by 3,502 cubic meters and an increase in the usable floor area by 606 square meters. The much lighter structure primarily reduced the foundation cost, and the entire structure was finished on time and within budget.

Bai et al. (2010) also developed computer software for the structural optimization of aqueducts based on a Hybrid Genetic Algorithm. Two aqueducts, one in the East River-Shenzhen Water Supply Improvement Project in Shenzhen, China, and one on the Yellow River in the South-to-North Water Transfer Project, were optimized with the project cost taken as the objective function. More than 55.5 million yuan was saved on the entire construction cost of the two aqueducts, showing remarkable economic efficiency.

The Micro-Genetic Algorithm was also used in optimizing a 20-storey building subject to horizontal deflection constraints (Cheung and Chau, 2005). A linear penalty function was used to penalize any excess in deflection. The result was an optimized structure with an optimum design cost that lies within the feasible values of the deflection limits.

2.2 Optimization Problem Statement
An optimization problem statement contains the input variables needed to maximize or minimize a desired output while satisfying any existing constraints. The main components of an optimization problem statement are the objective function (Eq. 2.2.1), the domain (Eq. 2.2.2), and the constraints (Eqs. 2.2.3, 2.2.4). The general form of an optimization problem is given in Eqs. 2.2.1 - 2.2.4:

    minimize f(x)                                    (2.2.1)
    with respect to x = (x1, x2, ..., xn)            (2.2.2)
    subject to the constraints
    hi(x) = 0,  i = 1, 2, ..., m                     (2.2.3)
    gj(x) ≤ 0,  j = 1, 2, ..., p                     (2.2.4)
Figure 2.2.1 - A two-bar tubular truss (Fox, 1971).
The explicit mathematical expressions needed for analysis are given by the following equations:

    Minimize  Weight = ρ ∙ 2π ∙ d ∙ t ∙ sqrt((B/2)² + H²)              (2.2.1.1)
    Subject to
    Stress = P ∙ sqrt((B/2)² + H²) / (2π ∙ d ∙ t ∙ H)                  (2.2.1.3)
    Buckling Stress = π²E(d² + t²) / (8[(B/2)² + H²])                  (2.2.1.4)
The values of the height (H), horizontal distance (B), joint load (P), Young's modulus (E), and material density (ρ) are constant and given in Table 2.2.1.1. The remaining variables, diameter (d) and thickness (t), are the design variables. The values of the design variables determine the optimality of the design.

Table 2.2.1.1 Data and specifications of the two-bar tubular truss
    H (in)        30
    B (in)        60
    E (lbs/in²)   30,000
    ρ (lbs/in³)   0.3
    P (lbs)       66,000
Table 2.2.1.2 shows a list of commercially available hollow circular sections. One method for obtaining the minimum weight of the truss is the brute force method: trying every value of diameter and thickness and comparing all the results that satisfy the constraints. Another is substituting the allowable stresses and deflections into the constraint equations to get the required values of the diameter and thickness of the tubes. From these, the critical diameter and thickness based on Eqs. 2.2.1.1 to 2.2.1.4 will govern the design.

Table 2.2.1.2 Commercially available hollow circular sections
Diameter (d) (mm) Thickness (t) (mm)
21.3 2.5, 3.2
26.9 2.5, 3.2
33.7 2, 2.6, 3.2, 4
42.4 2, 2.6, 3.2, 4
48.3 2, 2.6, 3.2, 4, 5, 6.3
60.3 2, 2.6, 3.2, 4, 5, 6.3, 8, 10
76.1 2, 2.6, 3.2, 4, 5, 6.3, 8, 10

From this, it can be seen that there are different ways of obtaining the optimum, and the appropriate method can depend on the type of problem being tackled, its complexity, and the number of variables. For simple problems, direct calculations such as those shown in example 2.2.1 are effective and efficient in solving for the optimum. However, with increasing variables and problem complexity, these direct calculations become inefficient and ineffective. In light of this, the researchers propose a more efficient and effective way of solving for the optimum of high-dimensional and complex structural optimization problems through the use of optimization algorithms.
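As a concrete illustration of the brute force method described above, the sketch below enumerates a small hypothetical catalog of (d, t) pairs for the two-bar truss of example 2.2.1 and keeps the lightest section that satisfies the stress and buckling constraints. The catalog values, the allowable stress, and the reading of the table's E = 30,000 as kips/in² (i.e., 30×10⁶ lbs/in²) are assumptions for illustration only, not data from this study.

```python
import math

# Constants for the two-bar tubular truss (Table 2.2.1.1); E is taken as
# 30e6 lbs/in^2, reading the table's 30,000 as kips/in^2 (an assumption).
H, B, P, rho, E = 30.0, 60.0, 66_000.0, 0.3, 30e6
SIGMA_ALLOW = 100_000.0  # hypothetical allowable stress, for illustration only

# Hypothetical discrete catalog of (diameter, thickness) pairs in inches
catalog = [(d, t) for d in (1.0, 1.5, 2.0, 2.5, 3.0)
                  for t in (0.05, 0.10, 0.15, 0.20)]

L = math.sqrt((B / 2) ** 2 + H ** 2)   # member length

def weight(d, t):            # Eq. 2.2.1.1
    return rho * 2 * math.pi * d * t * L

def stress(d, t):            # Eq. 2.2.1.3
    return P * L / (2 * math.pi * d * t * H)

def buckling(d, t):          # Eq. 2.2.1.4
    return math.pi ** 2 * E * (d ** 2 + t ** 2) / (8 * L ** 2)

# Brute force: test every catalog entry, keep the lightest feasible one
best = None
for d, t in catalog:
    if stress(d, t) <= min(SIGMA_ALLOW, buckling(d, t)):
        if best is None or weight(d, t) < weight(*best):
            best = (d, t)

print(best, round(weight(*best), 2))
```

Even this tiny 20-entry catalog requires checking every combination; with many members and large section tables, the number of combinations grows multiplicatively, which is exactly why such enumeration becomes impractical.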
2.2.2 Linear and Nonlinear Programming Problems
An optimization problem can be classified as linear or nonlinear depending on the nature of the equations of its components, specifically the objective function and the constraints (Talbi, 2009). A linear programming problem is an optimization problem with a linear objective function and linear constraints, while a nonlinear programming problem consists of a nonlinear objective function and nonlinear constraints (Talbi, 2009). This classification is important since it is used to identify the problem complexity and the optimization technique to be used in solving the problem.

2.2.3 Components of an Optimization Problem
2.2.3a Objective Function
An objective function is an equation, expressed as a function of all the design variables, that calculates the fitness of the obtained solution. Fitness is a measurement of the quality or usefulness of the solution to the problem (Bostian et al., 2016). It is also referred to as merit or criterion (Rao, 2009) since it evaluates the quality of a solution considering any present constraints. In example 2.2.1, the objective function is given by Eq. 2.2.1.1.

Single-objective and Multi-objective Functions
An objective function can be classified according to the number of objectives in the optimization problem. A single-objective problem consists of only one objective function, while a multi-objective problem consists of more than one objective function to be satisfied. The minimization of the weight of a truss structure is an example of a single-objective optimization problem. On the other hand, the minimization of the weight of a truss structure while also minimizing deflection is an example of a multi-objective optimization problem. This study deals with a single objective function since the aim is to optimize the size of steel truss structures.

2.2.3b Domain
The domain is a set of quantities called design variables, which are the independent variables in a problem statement. Design variables are also called decision variables since they are the parameters that are continually changed in the optimization process to achieve the objective function (Rao, 2009). The domain is represented as the design vector X = [x1, x2, ..., xn]^T. In example 2.2.1, the design variables are the diameter and thickness of the tubes.

When each coordinate axis of an n-dimensional Cartesian space is represented by a design variable, the space is called a design space. Every point in the design space is a design point, which could be a feasible or an infeasible solution to the problem. A feasible solution does not violate any constraints and qualifies as a possible solution to the problem, while an infeasible solution violates a constraint and does not qualify as a solution to the problem (Rao, 2009).
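A feasibility check can be sketched directly from the definition above: a design point is feasible when it violates none of the constraints in the general form of Eqs. 2.2.3 and 2.2.4. The sample constraints below are hypothetical, chosen only to illustrate the check.

```python
# Feasibility check for a design point, following the general form
# h_i(x) = 0 (equality) and g_j(x) <= 0 (inequality) of Eqs. 2.2.3-2.2.4.

def is_feasible(x, eq_constraints, ineq_constraints, tol=1e-9):
    """Return True if x violates no constraint (a feasible design point)."""
    return (all(abs(h(x)) <= tol for h in eq_constraints)
            and all(g(x) <= tol for g in ineq_constraints))

# Hypothetical example: x1 + x2 = 10 (equality) and x1 <= 8 (inequality)
eqs = [lambda x: x[0] + x[1] - 10]
ineqs = [lambda x: x[0] - 8]

print(is_feasible([4, 6], eqs, ineqs))   # satisfies both constraints
print(is_feasible([9, 1], eqs, ineqs))   # violates x1 <= 8
```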
One-dimensional and Multi-dimensional Design Variables
The domain can be classified according to the number of design variables. A one-dimensional domain is composed of only one design variable, while a multi-dimensional domain is composed of more than one design variable, which is common in real-world problems. In this study, multi-dimensional design variables are tackled in the size optimization of steel truss structures since the design variables are the cross-sectional areas of the truss members.

Design Variables in Integer Programming Problems and Real-valued Programming Problems
The domain may also be classified according to the permissible values of the design variables. A real-valued programming problem permits all design variables to take any real values; design variables in this kind of problem are therefore called continuous variables. On the other hand, an integer programming problem restricts the design variables to integer (or, more generally, discrete) values; design variables in this type of problem are therefore called discrete variables.

This study focuses on optimizing the size of steel truss structures, in which the design variables are the cross-sectional areas of the truss members. To have a more economical design, the member sizes are to be selected from sets available in the market, which leads to a discrete design space. Therefore, the design variables are discrete variables.

2.2.3c Constraints
The last component of an optimization problem statement is the constraint, which limits the range of possible solutions in the design space. An optimization problem can be either constrained or unconstrained, although most real-world problems are constrained. Even so, unconstrained optimization problems are still studied since they give the basic ideas needed to understand the more complex constrained optimization problems (Rao, 2009). In any constrained optimization problem with the problem statement

    Minimize f = f(X)                                (2.2.3.1)
    Subject to:
    gj(X) = 0,  j = 1, 2, ..., m                     (2.2.3.2)

Eq. 2.2.3.2 is the problem constraint. Constraints are uncontrollable factors in the design function. They arise from material properties, design codes, and the nature of the optimization problem itself (Zhang et al., 2015). The presence of constraints limits the design space of the optimization problem. In Figure 2.2.3 (Rao, 2009), for simplicity and easy visualization, a hypothetical two-dimensional (x1 and x2) space is assumed. The space enclosed by the constraint functions is called the feasible region, while the area outside the constraint functions is the infeasible region. The feasible region is where the optimum can be found.

Fig. 2.2.3 - Design Space with Constraints

Equality and Inequality Constraints
A constraint can be either an equality or an inequality constraint. In the optimization statement below (Soliman and Mantawy, 2012), Eq. 2.2.3.4 is the equality constraint and Eq. 2.2.3.5 is the inequality constraint:

    Minimize f(x1, ..., xn)                          (2.2.3.3)
    Subject to
    φi(x1, ..., xn) = 0,  (i = 1, ..., l)            (2.2.3.4)
    ψj(x1, ..., xn) ≤ 0,  (j = 1, ..., m)            (2.2.3.5)

Equality constraints require that the number of equality constraints be less than the number of design variables. Furthermore, the function has to be exactly equal to the right side of the equation, so equality constraints severely restrict the design space. Inequality constraints use ≤ or ≥ signs and divide the design space into feasible and infeasible regions. Most engineering optimization problems have inequality constraints, which are much easier to satisfy than equality constraints (Parkinson et al., 2013).

Constraints can also be linear or nonlinear. Linear constraints are easier to satisfy than nonlinear constraints since the boundaries made by linear constraints in a two-dimensional design space are straight lines (Hamming, 1973). However, most real-world optimization problems have objective functions that are subjected to nonlinear constraints, which makes the optimization problem a nonlinear programming problem. In this study, the constraints are composed of code-based limitations and allowable properties of structural steel from the National Structural Code of the Philippines 2015 (NSCP 2015).

2.2.3d Problem Complexity
An optimization problem can be classified according to its complexity. There are many techniques that can be used in solving an optimization problem, and the question of which technique to use can be answered by computational complexity theory.

Complexity theory classifies problems according to the resources needed to solve them relative to their size, which can depend on the number of design variables, objective functions, and constraints (Arora and Barak, 2007). Problem complexity can be further classified into space and time complexity. Space complexity refers to the memory needed to solve the problem, while time complexity refers to the time needed to execute the solution. Generally, the memory required by an algorithm is less than the capacity of the computer being used; the running time of an algorithm, however, depends on its framework, or the steps required to solve the problem. Therefore, time complexity is more critical than space complexity (Wang et al., 2009).

The most common problem complexity classification from complexity theory is that of P and NP problems. Class P problems can be solved by a deterministic algorithm in polynomial time, while Class NP problems can be solved by a nondeterministic algorithm in polynomial time (Whitley and Watson, 2005). Polynomial time means that the time needed for an algorithm to solve a given problem is bounded by a polynomial function of the problem size (Cormen et al., 2009). From these classifications, the optimization technique to use for a certain type of optimization problem can be determined. It is also worth noting that most real-world problems fall under Class NP (Talbi, 2009).
2.3 Optimization Techniques
An optimization technique is the method, strategy, or algorithm used to solve an optimization problem (Rao, 2009). Once the optimization problem has been formulated and its complexity has been identified, the optimization technique to solve the problem can be determined.

2.3.1 Classical and Non-classical Optimization Methods
Based on the algorithm's model, optimization methods can be classified as classical or non-classical. Classical optimization methods are analytical in nature and utilize differential calculus to find the optimum solution (Rao, 2009). These classical methods can only be applied to problems whose objective functions are differentiable (Dincer & Rosen, 2013). Non-classical optimization methods are also known as modern or non-traditional optimization methods. Unlike the classical optimization methods, these non-classical methods are based on natural phenomena rather than differential calculus (Rao, 2009). They are used in solving complex problems, especially real-world problems with high dimensionality, since they do not require derivatives, only the objective function values, to obtain the optimum (Rao, 2009).

2.3.2 Exact and Approximate Optimization Methods
Based on the accuracy of the solutions obtained, optimization methods can be classified as exact or approximate. Exact methods ensure that the solution obtained is the global optimum in a finite amount of time (Talbi, 2009). These methods inspect every solution in the search space until the global optimum is found. However, they get trapped in local optima when the objective function is non-differentiable or when the problem becomes highly complex due to high dimensionality (Radosavljević, 2018). Some examples are dynamic programming, branch and X algorithms, constraint programming, and the augmented Lagrangian method.

On the other hand, approximate methods obtain good solutions in a reasonable time without knowledge of their proximity to the global optimum. These approximate methods can be further divided into approximation algorithms and heuristic algorithms. Approximation algorithms guarantee that the solution obtained is within a bound or percentage of the global optimum when certain conditions for each problem are satisfied (Talbi, 2009). However, when those conditions are not met, the approximations can be too far from the global optimum (Qin & Li, 2015). Because of this, they cannot be applied to a wide range of problems.

applications since they are not bounded by limits of differentiability (Radosavljević, 2018). Since most real-world problems are classified as NP-hard problems with increased time and space complexity, metaheuristics can be used to solve them and obtain a near-optimal solution (Talbi, 2009).

The two main characteristics of metaheuristics are exploration and exploitation. In exploration, the aim is to search and discover unexplored regions, ensuring that the search landscape is evenly covered and that the search is not confined to a single or small area where it might be trapped in a local optimum. In exploitation, the promising regions which are closer to the global optimum are searched thoroughly to find the optimum solution of that region. The core of a metaheuristic algorithm is the use of algorithmic operators and parameters with stochastic search mechanisms; the effectiveness and efficiency of these methods depend on the proper setting of the parameter values (Radosavljević, 2018).

The basic elements of a metaheuristic algorithm are the following: the agent, the population, and the design space (Radosavljević, 2018).

The agent, X(t), is a possible solution represented by an n-dimensional vector, where n is the number of variables. At a certain iteration or time t, the agent can be expressed as:

    Xi(t) = [xi^1(t), ..., xi^d(t), ..., xi^n(t)]    (2.3.2.1.1)

where xi^d(t) is the position of the ith agent with respect to the dth dimension, or simply the value of the dth dimension for the ith possible solution at iteration t.
The population, POP(t), is the set of agents at a certain iteration time t. It can be expressed in equation form or matrix form as follows:

    POP(t) = [X1(t), ..., XN(t)]^T                   (2.3.2.1.2)

             | x1^1(t)  x1^2(t)  ...  x1^d(t)  ...  x1^n(t) |
             | x2^1(t)  x2^2(t)  ...  x2^d(t)  ...  x2^n(t) |
             |    .                                         |
    POP(t) = | xi^1(t)  xi^2(t)  ...  xi^d(t)  ...  xi^n(t) |    (2.3.2.1.3)
             |    .                                         |
             | xN^1(t)  xN^2(t)  ...  xN^d(t)  ...  xN^n(t) |

Lastly, the design space is the n-dimensional space of possible solutions and is denoted as X. It is defined by the upper and lower limits of the variables.
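The elements above can be sketched in code: an agent is one n-dimensional candidate solution, and POP(t) is an N × n matrix of agents sampled inside the bounds that define the design space. The dimensions and bounds below are assumed values for illustration.

```python
import random

n = 3                      # number of design variables (dimensions)
N = 5                      # population size
lower = [0.0, 0.0, 0.0]   # design-space lower limits (hypothetical)
upper = [10.0, 5.0, 2.0]  # design-space upper limits (hypothetical)

def random_agent():
    """One agent X_i(t): a position x_i^d for every dimension d (Eq. 2.3.2.1.1)."""
    return [random.uniform(lower[d], upper[d]) for d in range(n)]

# POP(t): an N x n matrix of agents (Eq. 2.3.2.1.3)
population = [random_agent() for _ in range(N)]

# Every agent must lie inside the design space
assert all(lower[d] <= x[d] <= upper[d] for x in population for d in range(n))
```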

The choice of the best solution among the population of solutions is evaluated using a fitness function or objective function. Radosavljević (2018) defines fitness as a direct measurement of the performance of the individual agents of the population, calculated using the objective function.

The general structure of a metaheuristic algorithm can be divided into two stages: initialization and iteration. The first step in the initialization stage is the formulation of the objective function and the setting of the upper and lower limits of the search space. The next step is the generation of the initial population. In the iteration stage, the fitness of each agent in the population is calculated and evaluated. Afterwards, a new population is generated by applying the algorithmic operators and parameters of the algorithm to the search agents of the current population. The fitness of the new population is then evaluated again. This procedure is repeated until the stopping criterion has been met, after which the optimal solution among those obtained is chosen (Radosavljević, 2018).
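The two-stage structure just described can be sketched as a generic loop. The random perturbation used here is only a placeholder operator, not any particular metaheuristic, and the sphere objective, bounds, and parameter values are assumptions for illustration.

```python
import random

def sphere(x):                          # assumed example objective function
    return sum(v * v for v in x)

def optimize(f, lower, upper, pop_size=20, iterations=200, step=0.1):
    n = len(lower)
    # --- Initialization stage: random population inside the bounds ---
    pop = [[random.uniform(lower[d], upper[d]) for d in range(n)]
           for _ in range(pop_size)]
    best = min(pop, key=f)
    # --- Iteration stage: apply an operator, clamp to bounds, re-evaluate ---
    for _ in range(iterations):
        pop = [[min(max(x[d] + random.gauss(0, step), lower[d]), upper[d])
                for d in range(n)] for x in pop]
        candidate = min(pop, key=f)
        if f(candidate) < f(best):      # keep the best solution found so far
            best = candidate
    return best

best = optimize(sphere, lower=[-5.0, -5.0], upper=[5.0, 5.0])
print(best, sphere(best))
```

Real metaheuristics differ only in the update rule applied inside the loop; the surrounding initialize-evaluate-update-repeat skeleton stays the same.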
2.3.2.1.1 Classifications of Metaheuristics
Population-based and Single solution-based Search Algorithms
Population-based search algorithms start with a whole population of solutions which are evolved over the course of the run. Single solution-based algorithms begin with a single solution which is improved and transformed over the course of the run. This type of algorithm focuses more on exploitation, while population-based search algorithms focus more on exploration.

Deterministic and Stochastic Algorithms
Deterministic algorithms return the same final solution when given the same initial solution. They require initial knowledge about the problem to have an idea of where the global optimum is located, which is why the parameter tuning and initial population for these types of algorithms are critical. Conversely, stochastic algorithms apply random rules that make the final solutions vary even with the same initial solutions. They are more effective when the search landscape is unknown since they have a random component.

Memory Usage and Memoryless Algorithms
Algorithms that utilize memory store recent solutions of the search agents and use these in determining the next step or position of the agents in the search space. In contrast, memoryless methods do not store solutions since the next step or position is determined only by the current solutions (Bhattacharyya & Dutta, 2015).

Nature-inspired and Non-nature-inspired Algorithms
Algorithms can also be classified according to their inspiration. Non-nature-inspired algorithms are those based on physics and human behavior, while nature-inspired algorithms are those inspired by natural phenomena such as evolution and the behavior of animals. These algorithms were designed to optimize real-world problems with increased complexity, dimensions, and variables (Agarwal and Mehta, 2014). Nature-inspired algorithms are mainly classified as evolutionary algorithms and swarm intelligence based algorithms.

Evolutionary algorithms are based on concepts of biological evolution and natural selection. Most evolutionary algorithms are population-based search algorithms. The initial population of these algorithms is randomized and then evolved using mutation and recombination operators to improve the fitness of the solutions. Some examples of evolutionary algorithms are the genetic algorithm (GA) (Holland, 1975), genetic programming (GP) (Koza, 1992), differential evolution (DE) (Storn & Price, 1997), and the biogeography-based optimizer (BBO) (Simon, 2008).
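A minimal sketch of the two variation operators mentioned above. The one-point crossover, the Gaussian mutation, and the rates used are illustrative choices, not taken from any specific published implementation.

```python
import random

def recombine(parent_a, parent_b):
    """One-point crossover: child takes the head of one parent, tail of the other."""
    point = random.randint(1, len(parent_a) - 1)
    return parent_a[:point] + parent_b[point:]

def mutate(individual, rate=0.1, sigma=0.5):
    """Gaussian mutation: perturb each gene with probability `rate`."""
    return [g + random.gauss(0, sigma) if random.random() < rate else g
            for g in individual]

# Two parent solutions combined and then mutated into one child solution
child = mutate(recombine([1.0, 2.0, 3.0, 4.0], [5.0, 6.0, 7.0, 8.0]))
print(child)
```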

Swarm intelligence based algorithms mimic the collective behavior of groups of animals or communities in nature. Most swarm intelligence based algorithms simulate or model the interactions or behavior of the swarm, its lifestyle, and the relations among the members of the swarm as they hunt for food. These algorithms are population-based, with a randomized initial population. The solutions are then improved over the course of the search through exploration and exploitation based on the behavior of the swarm or group of animals being mimicked. The optimum is the point where the swarm converges. Some examples of swarm optimization techniques are particle swarm optimization (PSO) (Kennedy & Eberhart, 1995), cuckoo search (CS) (Yang & Deb, 2009), ant colony optimization (ACO) (Dorigo et al., 2006), artificial bee colony (ABC) (Karaboga et al., 2006), the firefly algorithm (FA) (Yang, 2010), the dragonfly algorithm (DA) (Mirjalili, 2016), and the grey wolf optimizer (GWO) (Mirjalili et al., 2014).

Metaheuristic algorithms are applicable to a wide range of real-world problems. This is because they do not use calculus-based models that require derivatives, but are based on a set of guidelines or rules of thumb to obtain a good solution in a reasonable time. Since this research involves the optimization of the design of a truss structure, which is a complex and iterative process, the researchers aim to study the efficiency of using a metaheuristic algorithm in minimizing the weight of steel truss structures.

The Grey Wolf Optimizer (GWO) (Mirjalili et al., 2014) is a unique metaheuristic algorithm since it incorporates the leadership hierarchy of grey wolves, which has led to its strong exploitation capabilities. The quality of the solution obtained by the algorithm is considerably closer to the optimum compared to other algorithms, as shown by Mirjalili et al. (2014). This is because the possible solutions in the design space are influenced by the top three best solutions, which is how the leadership hierarchy is applied. Due to the optimality of the solutions obtained by the algorithm, the researchers chose to use the Grey Wolf Optimizer in minimizing the size of steel truss structures.
III. Related Studies

3.1 Grey Wolf Optimizer (Mirjalili et al., 2014)
The Grey Wolf Optimizer (GWO) is a swarm intelligence based algorithm developed by Mirjalili et al. in 2014. It was inspired by the nature of grey wolves as they hunt for prey. GWO is a unique swarm intelligence based algorithm since it was the first to incorporate the leadership hierarchy of grey wolves into the algorithm (Mirjalili et al., 2014). Long et al. (2017) state that GWO is a memoryless, population-based, stochastic optimization technique. The algorithm is fairly new and has strong exploitation capabilities. However, it has the tendency to get trapped in local optima; therefore, its exploration capabilities can still be improved.

The GWO algorithm is based on the social intelligence of grey wolves in leadership and hunting. They follow a social hierarchy with the alpha wolf at the top, followed by the beta wolf, the delta wolf, and the omega wolf. In the algorithm, this hierarchy is reflected by ranking the four fittest values of the objective function, from the fittest down to the least fit among the four. This hierarchy is shown in Figure 3.1.1.
500
Al
Be
De
O Fit
2n
3r
4t
ph
ta
lta
m tes
d
h
Wa
eg Fit
t
W
olf
a So
tes
olf
W t
lut
olf So
io
lut
lu
n
io
tio
nn
501 Figure 3.1.1. - Social hierarchy of grey wolves and its mathematical representation in GWO.
The hunting mechanism of grey wolves includes tracking, encircling and attacking the prey. These behaviors are mathematically modeled as follows:
3.1.1 Encircling prey
During a hunt, grey wolves encircle their prey. This behavior is mathematically modeled by expressing the position of the wolves around the prey in terms of a distance vector \vec{D}:

\vec{D} = |\vec{C} \cdot \vec{X}_p(t) - \vec{X}(t)|   (3.1.1)
\vec{X}(t+1) = \vec{X}_p(t) - \vec{A} \cdot \vec{D}   (3.1.2)

In the equations above, \vec{D} represents the distance between the wolf and the prey, \vec{X}_p is the position of the prey, and \vec{X} is the position of the wolf at the current iteration t. The coefficient vectors \vec{A} and \vec{C} are computed as:

\vec{A} = 2\vec{a} \cdot \vec{r}_1 - \vec{a}   (3.1.3)
\vec{C} = 2 \cdot \vec{r}_2   (3.1.4)

where the components of \vec{a} are linearly decreased from 2 to 0 over the course of the iterations and \vec{r}_1 and \vec{r}_2 are random vectors in [0, 1].
3.1.2 Hunting
Recognizing the location of the prey is part of the hunting behavior of grey wolves, and the hunt is usually led by the alpha wolf. In modelling this mechanism in an abstract search space, it is assumed that the alpha wolf knows best where the prey is, since it leads the pack. Because the prey represents the optimum solution, the alpha wolf holds the best solution of the pack; following the social hierarchy, the beta and delta wolves hold the second- and third-best solutions, respectively.

Therefore, the positions of the omega wolves (the remaining search agents) are influenced by the positions of the alpha, beta and delta wolves, which lead the pack and have the best estimate of where the prey, i.e., the optimum solution, might be located. This is reflected in the following equations:
\vec{D}_\alpha = |\vec{C}_1 \cdot \vec{X}_\alpha(t) - \vec{X}(t)|   (3.1.5)
\vec{D}_\beta = |\vec{C}_2 \cdot \vec{X}_\beta(t) - \vec{X}(t)|   (3.1.6)
\vec{D}_\delta = |\vec{C}_3 \cdot \vec{X}_\delta(t) - \vec{X}(t)|   (3.1.7)
\vec{X}_1 = \vec{X}_\alpha - \vec{A}_1 \cdot \vec{D}_\alpha   (3.1.8)
\vec{X}_2 = \vec{X}_\beta - \vec{A}_2 \cdot \vec{D}_\beta   (3.1.9)
\vec{X}_3 = \vec{X}_\delta - \vec{A}_3 \cdot \vec{D}_\delta   (3.1.10)
\vec{X}(t+1) = (\vec{X}_1 + \vec{X}_2 + \vec{X}_3) / 3   (3.1.11)
In the equations above, the position of each remaining wolf in the pack is influenced by the positions of the alpha, beta and delta wolves. These top three wolves estimate the position of the prey, and the other wolves update their positions randomly around this estimate. This is shown graphically in Figure 3.1.2.
Figure 3.1.2. - Graphical representation of the position updating in the GWO algorithm.
3.1.3 Attacking prey (exploitation phase)
Grey wolves finish their hunt by approaching and attacking their prey. This behavior is mathematically modelled by decreasing the value of \vec{a} from 2 to 0 over the course of the iterations, so that \vec{A} is a random value in [-2a, 2a]. When |\vec{A}| < 1, the wolves approach and attack the prey. This is the exploitation phase and is graphically presented in Figure 3.1.3(a).
Figure 3.1.3. – The exploitation and exploration transition of vector \vec{A}.
The pseudocode for the GWO algorithm (Mirjalili et al., 2014) is presented in Figure 3.1.4.
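To make the position-updating rules concrete, the GWO step of Eqs. (3.1.3)-(3.1.11) can be sketched in Python. This is an illustrative sketch, not the MATLAB implementation planned in this study; the sphere objective, population size and iteration count are assumptions chosen for demonstration.

```python
import random

def gwo_step(wolves, fitness, a):
    """One GWO iteration: each wolf moves toward the average of the
    positions suggested by the alpha, beta and delta wolves
    (Eqs. 3.1.5-3.1.11). Minimization is assumed."""
    ranked = sorted(wolves, key=fitness)
    leaders = ranked[:3]                         # alpha, beta, delta
    dim = len(wolves[0])
    new_wolves = []
    for X in wolves:
        new_X = []
        for d in range(dim):
            estimate = 0.0
            for leader in leaders:
                A = 2 * a * random.random() - a  # Eq. 3.1.3
                C = 2 * random.random()          # Eq. 3.1.4
                D = abs(C * leader[d] - X[d])    # Eqs. 3.1.5-3.1.7
                estimate += leader[d] - A * D    # Eqs. 3.1.8-3.1.10
            new_X.append(estimate / 3)           # Eq. 3.1.11
        new_wolves.append(new_X)
    return new_wolves

# Usage: minimize the sphere function with 20 wolves in 5 dimensions.
random.seed(1)
sphere = lambda x: sum(v * v for v in x)
wolves = [[random.uniform(-100, 100) for _ in range(5)] for _ in range(20)]
max_iter = 200
for t in range(max_iter):
    a = 2 * (1 - t / max_iter)  # a decreases linearly from 2 to 0
    wolves = gwo_step(wolves, sphere, a)
best = min(wolves, key=sphere)
```

Because the leaders attract the whole pack while a shrinks, the population contracts around the best solutions found, which is the exploitation behavior described above.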
(α), two beta wolves (β) and three hunter delta wolves (δ1). This is mathematically modelled by taking the alpha wolf (α) as the best or fittest solution, the second and third fittest solutions as the beta (β) wolves, and the fourth, fifth and sixth fittest solutions as the hunter delta (δ1) wolves. The rest of the solutions are considered omega wolves (ω) or scout delta wolves (δ2). This hierarchy is graphically presented in Figure 3.2.1.
Figure 3.2.1. - Leadership hierarchy of grey wolves in the GGWO algorithm (alpha wolf, beta wolves, hunter delta wolves, scout delta wolves and omega wolves).
The random scout group aims to improve the exploration capability of the algorithm by employing the scout delta wolves (δ2) to continuously explore the search space for better solutions. When a scout delta wolf discovers a solution better than one held by the alpha, beta or hunter delta wolves, it swaps roles with the corresponding wolf. The scout delta wolves can therefore move into the alpha, beta or hunter delta positions of the hierarchy according to their fitness values. This is graphically presented in Figure 3.2.2.

Figure 3.2.2. - Role swapping between the cooperative hunting group and the random scout group.
omega wolves (ω) are influenced by the positions of the leading wolves in the hierarchy, as shown in the equations below:

\vec{D}_\alpha = |\vec{C}_1 \cdot \vec{X}_\alpha(t) - \vec{X}(t)|   (3.2.5)
\vec{D}_{\beta 1} = |\vec{C}_2 \cdot \vec{X}_{\beta 1}(t) - \vec{X}(t)|   (3.2.6)
\vec{D}_{\beta 2} = |\vec{C}_2 \cdot \vec{X}_{\beta 2}(t) - \vec{X}(t)|   (3.2.7)
\vec{D}_{\delta 1} = |\vec{C}_3 \cdot \vec{X}_{\delta 1}(t) - \vec{X}(t)|   (3.2.8)
\vec{D}_{\delta 2} = |\vec{C}_3 \cdot \vec{X}_{\delta 2}(t) - \vec{X}(t)|   (3.2.9)
\vec{D}_{\delta 3} = |\vec{C}_3 \cdot \vec{X}_{\delta 3}(t) - \vec{X}(t)|   (3.2.10)
\vec{X}_1 = \vec{X}_\alpha - \vec{A}_1 \cdot \vec{D}_\alpha   (3.2.11)
\vec{X}_{21} = \vec{X}_{\beta 1} - \vec{A}_2 \cdot \vec{D}_{\beta 1}   (3.2.12)
\vec{X}_{22} = \vec{X}_{\beta 2} - \vec{A}_2 \cdot \vec{D}_{\beta 2}   (3.2.13)
\vec{X}_{31} = \vec{X}_{\delta 1} - \vec{A}_3 \cdot \vec{D}_{\delta 1}   (3.2.14)
\vec{X}_{32} = \vec{X}_{\delta 2} - \vec{A}_3 \cdot \vec{D}_{\delta 2}   (3.2.15)
\vec{X}_{33} = \vec{X}_{\delta 3} - \vec{A}_3 \cdot \vec{D}_{\delta 3}   (3.2.16)
\vec{X}(t+1) = k_\alpha \vec{X}_1 + k_\beta \frac{\vec{X}_{21} + \vec{X}_{22}}{2} + k_\delta \frac{\vec{X}_{31} + \vec{X}_{32} + \vec{X}_{33}}{3}   (3.2.17)
k_\alpha + k_\beta + k_\delta = 1, \quad k_\alpha \ge 0, \; k_\beta \ge 0, \; k_\delta \ge 0   (3.2.18)
3.2.4 Random scout strategy
The scout delta wolves are employed to find better solutions in the search space. Their position-updating equation is randomized using a random vector \vec{r}_{\delta 2}, which is arbitrary but limited by the upper and lower bounds of the control variables:

\vec{X}(t+1) = \vec{X}(t) + \vec{r}_{\delta 2}   (3.2.19)
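A minimal sketch of the two GGWO updates, the weighted recombination of Eq. (3.2.17) and the scout move of Eq. (3.2.19), is shown below. The default weight values and the clamping of the scout step to the variable bounds are illustrative assumptions, not values prescribed by the GGWO authors.

```python
import random

def ggwo_combine(X1, X21, X22, X31, X32, X33,
                 k_alpha=0.5, k_beta=0.3, k_delta=0.2):
    """Weighted recombination of the leader-guided candidates (Eq. 3.2.17).
    The weights must satisfy Eq. 3.2.18; the defaults are illustrative."""
    assert abs(k_alpha + k_beta + k_delta - 1.0) < 1e-9
    return [k_alpha * x1
            + k_beta * (x21 + x22) / 2
            + k_delta * (x31 + x32 + x33) / 3
            for x1, x21, x22, x31, x32, x33
            in zip(X1, X21, X22, X31, X32, X33)]

def scout_step(X, lower, upper):
    """Random scout move (Eq. 3.2.19): add a random vector bounded by the
    control-variable limits, then clamp back into the search space
    (the clamping is an assumption about bound handling)."""
    moved = [x + random.uniform(lo, up) for x, lo, up in zip(X, lower, upper)]
    return [min(max(v, lo), up) for v, lo, up in zip(moved, lower, upper)]
```

With k_alpha largest, the recombined position leans toward the alpha's guidance while still averaging in the beta and hunter delta estimates.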
3.3 Inspired Grey Wolf Optimizer (Long et al., 2018)
Long et al. (2018) also developed a variant of the Grey Wolf Optimizer called the Inspired Grey Wolf Optimizer (IGWO). This algorithm introduces a nonlinear adjustment strategy for \vec{a} and a modified position-updating equation inspired by the Particle Swarm Optimization (PSO) algorithm.

3.3.1 Nonlinear adjustment strategy of \vec{a}
Long et al. (2018) pointed out that the control parameter \vec{a} plays a role similar to the inertia weight w in the PSO algorithm, in that both balance exploration and exploitation. A study by Chatterjee and Siarry (2006) suggests that a time-varying, nonlinearly decreasing inertia weight performs better than the linearly decreasing strategy. Accordingly, a logarithmic decay function is used to adjust the value of \vec{a}:
\vec{a}(t) = \vec{a}_{initial} - (\vec{a}_{initial} - \vec{a}_{final}) \times \log\left(1 + (e - 1)\,\frac{t}{max\_iter}\right)   (3.3.1)
3.3.2 Modified position updating equation
Following the PSO algorithm, the IGWO employs the personal historical best (pbest) and the global best position (gbest) to add diversity among the search agents and improve the exploration capability of the algorithm, helping it avoid local-optima stagnation. The modified position-updating equation is:

\vec{X}(t+1) = w \cdot \frac{\vec{X}_1 + \vec{X}_2 + \vec{X}_3}{3} + c_1 r_3 (\vec{X}_{pbest} - \vec{X}) + c_2 r_4 (\vec{X}_1 - \vec{X})   (3.3.2)

In equation (3.3.2), t is the current iteration, r_3 and r_4 are random vectors in [0, 1], c_1 \in [0, 1] is the individual memory coefficient and c_2 \in [0, 1] is the population communication coefficient. \vec{X}_{pbest} is the personal historical best position and w is the inertia weight, linearly decreased from an initial value w_{initial} to w_{final} according to:

w(t) = \frac{max\_iter - t}{max\_iter} (w_{initial} - w_{final}) + w_{final}   (3.3.3)
The first term of equation (3.3.2) provides the momentum the search agents need to move through the search space. The second term is the cognitive component, which simulates the individual thinking of each agent as it moves toward its personal historical best. The third term is the social component, which pulls all the search agents toward the best solution found so far.
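The three IGWO equations above can be sketched as small Python helpers. The default end values of the inertia weight (0.9 to 0.4) are common PSO choices and are assumptions here, not values from Long et al.

```python
import math
import random

def a_nonlinear(t, max_iter, a_initial=2.0, a_final=0.0):
    """Logarithmic decay of the control parameter a (Eq. 3.3.1):
    equals a_initial at t = 0 and a_final at t = max_iter."""
    return a_initial - (a_initial - a_final) * math.log(1 + (math.e - 1) * t / max_iter)

def w_linear(t, max_iter, w_initial=0.9, w_final=0.4):
    """Linearly decreasing inertia weight (Eq. 3.3.3)."""
    return (max_iter - t) / max_iter * (w_initial - w_final) + w_final

def igwo_position(X, X1, X2, X3, pbest, w, c1, c2):
    """Modified position update (Eq. 3.3.2): inertia-weighted mean of the
    leader-guided positions plus cognitive and social terms."""
    r3, r4 = random.random(), random.random()
    return [w * (x1 + x2 + x3) / 3
            + c1 * r3 * (pb - x)
            + c2 * r4 * (x1 - x)
            for x, x1, x2, x3, pb in zip(X, X1, X2, X3, pbest)]
```

Note that because log(1 + (e - 1)) = 1, the decay in a_nonlinear lands exactly on a_final at the last iteration, which is what makes the logarithmic schedule convenient.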
3.4 Random Walk Grey Wolf Optimizer (Gupta & Deep, 2017)
Gupta and Deep (2017) developed a modified GWO algorithm called the Random Walk Grey Wolf Optimizer (RW-GWO), which improves the search ability of the grey wolves by incorporating the principles of random walk. The algorithm addresses a drawback of the leading wolves, namely the alpha, beta and delta wolves, since all other wolves update their positions based on the positions of these leaders.

3.4.1 Reason for improving the search ability of the leaders
Gupta and Deep (2017) highlighted a major drawback of the GWO algorithm: the alpha wolf guides the lesser wolves, beta and delta, in updating their positions, but no wolf guides the position updating of the alpha wolf. This is the main reason for the difficulties of the algorithm in converging to the global optimum, and it often occurs when the algorithm cannot make a smooth transition from exploration to exploitation across the iterations. Although many engineering problems have been tackled with the traditional GWO, modifications to its basic searching principles can improve search performance and reduce convergence errors (Heidari and Pahlavani, 2017). The addition of a random walk to the position-updating equation of the leading wolves therefore aims to reduce local-optima stagnation and premature convergence.
3.4.2 Design of the random walk based GWO: RW-GWO
A random walk is a simulation of successive random steps. It is used to examine the various possible outcomes of an event by controlling the starting point of the simulation and the probability distribution of the steps. Random walk belongs to the Monte Carlo methods, the family of random sampling algorithms originated in the 1940s by Stanislaw Ulam; these methods are useful in many kinds of optimization problems (Cuesta, 2013).

A random walk can be mathematically expressed as:

W_N = \sum_{i=1}^{N} S_i   (3.4.1)

where S_i is the random step drawn from the chosen probability distribution. The relationship between successive states of the walk can be expressed as:

W_N = \sum_{i=1}^{N} S_i = W_{N-1} + S_N   (3.4.2)

The step size S_i may be fixed or varying. The equation above shows that the next state depends on the present state.
The RW-GWO algorithm (Gupta & Deep, 2017) uses a random walk with steps drawn from the Cauchy distribution. This distribution was chosen because its variance is infinite, which allows occasional long jumps. These jumps help the wolves escape local-optima stagnation, let the entire population explore the search space more thoroughly, and lead the other wolves toward more promising regions. Gupta and Deep (2017) also added a greedy selection between the old and new positions of the wolves to ensure that each step leads to an equal or better objective function value.
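The Cauchy random walk with greedy selection can be sketched as follows. Drawing Cauchy steps by inverse-transform sampling is a standard construction and an assumption here, since the sampling routine is not prescribed in the text above; by the greedy selection, the leader's objective value never worsens.

```python
import math
import random

def cauchy_step(scale=1.0):
    """Cauchy-distributed step via inverse-transform sampling; the heavy
    tails of the distribution occasionally produce very long jumps."""
    return scale * math.tan(math.pi * (random.random() - 0.5))

def random_walk_leader(X, fitness, scale=1.0):
    """Move a leading wolf by a Cauchy random walk and keep the new
    position only if it improves the objective (greedy selection)."""
    candidate = [x + cauchy_step(scale) for x in X]
    return candidate if fitness(candidate) < fitness(X) else X

# Usage: improve a leader on the sphere function (illustrative objective).
random.seed(7)
sphere = lambda x: sum(v * v for v in x)
leader = [5.0, 5.0]
start = sphere(leader)
for _ in range(50):
    leader = random_walk_leader(leader, sphere)
```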
3.5 Modified GWO with Levy flight (Heidari & Pahlavani, 2017)
Heidari and Pahlavani (2017) developed another variant of GWO by reconstructing the searching and hunting mechanisms of the grey wolves using Levy flight-based patterns. Levy flights are randomly oriented, scale-free walks that follow the Levy distribution. Humphries et al. (2010) discovered that numerous animals behave in ways that can be represented with the Levy flight concept, and suggested that searching techniques among organisms could be improved by developing the best Levy-triggered patterns.

Several nature-inspired optimization algorithms have incorporated the Levy flight concept to make them more robust. Hakli and Uguz (2014) and Jensi and Jiji (2016) enhanced the original Particle Swarm Optimizer using Levy flight, while Hussein et al. (2014) applied it to the Artificial Bee Colony algorithm. Furthermore, Xu and Liu (2017) improved the Grey Wolf Optimizer by integrating cuckoo search and Levy-flight-style steps. These studies have shown that Levy flight can greatly enhance the performance of different optimization algorithms.
The Levy distribution can be expressed as a power law:

L(s) \sim |s|^{-1-\beta}, \quad 0 < \beta \le 2   (3.5.1)

where \beta is the Levy index and s is the step variable. The Levy distribution can also be expressed as:

L(s, \gamma, \mu) = \sqrt{\frac{\gamma}{2\pi}} \exp\left[\frac{-\gamma}{2(s - \mu)}\right] \frac{1}{(s - \mu)^{3/2}} \quad \text{for } 0 < \mu < s < \infty, \text{ and } 0 \text{ otherwise}   (3.5.2)

where \mu represents a shift parameter and \gamma > 0 is a scale parameter.
The Levy flight-based GWO can redistribute the population across the entire landscape to avoid loss of diversity among the wolves and to emphasize exploration when it is required. It can also outperform the original GWO by jumping out of sub-optimal points toward better and more promising regions using Levy-triggered hunting patterns, which results in a much better balance between diversifying and intensifying tendencies. The Levy flight technique also allows the hunters to explore possible locations of the prey more effectively.

Heidari and Pahlavani (2017) also found that combining the modified GWO using Levy flight with a greedy selection strategy can effectively escape local optima on more complicated test functions. With this strategy, better hunter positions are kept in each generation while worse ones are discarded. The algorithm's searching capability is thereby enhanced, because the hunter wolves communicate and share information with the other wolves throughout the search. It was observed that the new principles incorporated into the original GWO improve its searching ability and the quality of its results.
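The text above does not spell out how Levy-stable step lengths are generated; one standard construction is Mantegna's algorithm, sketched below under that assumption.

```python
import math
import random

def mantegna_sigma(beta):
    """Scale used by Mantegna's algorithm; valid for 0 < beta < 2."""
    num = math.gamma(1 + beta) * math.sin(math.pi * beta / 2)
    den = math.gamma((1 + beta) / 2) * beta * 2 ** ((beta - 1) / 2)
    return (num / den) ** (1 / beta)

def levy_step(beta=1.5):
    """One Levy-flight step length s = u / |v|^(1/beta), with
    u ~ N(0, sigma^2) and v ~ N(0, 1) (Mantegna's algorithm)."""
    u = random.gauss(0.0, mantegna_sigma(beta))
    v = random.gauss(0.0, 1.0)
    return u / abs(v) ** (1 / beta)
```

Most draws are small, but the heavy tail occasionally produces the long jumps that let the hunters leave sub-optimal regions.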
IV. Objectives
This research project aims to study the efficiency of using a metaheuristic algorithm in optimizing the size of steel truss structures. Specifically, it aims:
1. To determine suitable parameter values for the algorithm in optimizing unconstrained test functions.
2. To enhance the exploration capability of the Grouped Grey Wolf Optimization (GGWO) algorithm by integrating random walk and particle best memory into the scout group.
3. To test the computational efficiency of the metaheuristic algorithm in optimizing the size of steel truss structures.
V. Significance of the Study

This study would benefit the civil engineering community, which may use digital structural optimization as a tool to provide more economical designs without sacrificing structural integrity. It can also lead to more environmentally sustainable designs, since a properly optimized structure uses fewer materials and therefore fewer resources. Minimizing the use of steel would reduce steel production, and with it the air pollutants, solid by-products and residues, and wastewater sludge that production involves, while also ensuring the prudent use of finite minerals and resources. The study may likewise benefit future researchers interested in structural optimization using metaheuristic algorithms, since the algorithms in the literature can be further improved to solve structural optimization problems.
VI. Scope and Limitations
This research project is limited to:
1. The structural size optimization of steel truss structures.
2. Design constraints for the real-world truss based on the NSCP 2015 provisions on steel tension and compression members.
3. Steel sections for the real-world truss problems taken from the AISC steel manual (AISC Shapes Database version 15).
4. A single-objective algorithm; for the application of the algorithm, the objective will be the minimization of the weight of the truss, which does not include the labor and construction costs of the truss structure.
5. Offline tuning of the parameters of the algorithm.
VII. Proposed Algorithm
In this study, two parameters will be integrated into the position-updating equation of the Random Scout Group wolves of the Grouped Grey Wolf Optimizer, namely random walk and internal memory. Equation 7.1 shows the original equation for updating the positions of the scout group before the addition of the two new parameters:

X(t+1) = X(t) + \vec{r}_{\delta 2}   (7.1)
7.1 Added Parameters
7.1a Theory on Random Walk
Random walk is a type of random sampling process in which the succession of random steps is independent of the previous steps (Cuesta, 2013). The random walk belongs to the family of Monte Carlo methods, which simulate unknown events by substituting values drawn from a probability distribution. This characteristic makes the random walk an ideal parameter to add to the Random Scout group to model a more explorative behavior of the wolves. The random walk is integrated into Eq. 7.1 as shown in Eq. 7.2:

X(t+1) = X(t) + X_N, \quad \text{where } X_N = X_0 + \sigma_1 s_1 + \ldots + \sigma_n s_n   (7.2)

The second term in Eq. 7.2 is the random walk; \sigma is the parameter that controls the step size s_n, which is taken from a Levy distribution (Heidari et al., 2017).
7.1b Theory on Internal Memory
Population-based metaheuristics such as the Firefly Algorithm (Yang, 2010) commonly use internal memory to guide the search for the optimum. The term particle/internal memory was first used in Particle Swarm Optimization (Eberhart & Kennedy, 1995). Internal memory stores the best solutions found so far by a given search agent, and the location of that best solution influences the position of the agent in the next iteration.

The internal memory parameter will be added to the scout group's position-updating equation as shown in Equation 7.3:

X(t+1) = X(t) + X_N + \rho (X_{BEST})   (7.3)

The \rho coefficient serves as the weight of the best position achieved so far by the particle, so that it does not overpower the whole position-updating equation. Eq. 7.3 is the final position-updating equation to be used in the proposed algorithm.
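The proposed scout update can be sketched as follows. The uniform step distribution, the number of steps and the values of sigma and rho used here are illustrative stand-ins (the proposal itself draws the steps from a Levy distribution and tunes sigma and rho experimentally).

```python
import random

def proposed_scout_update(X, X_best, sigma=0.1, rho=0.2, n_steps=5):
    """Sketch of the proposed scout update (Eq. 7.3): current position,
    plus a random walk X_N accumulated over n_steps scaled steps,
    plus a memory term weighted by rho."""
    walk = [0.0] * len(X)
    for _ in range(n_steps):
        # each step is scaled by sigma; uniform steps stand in for Levy steps
        walk = [w + sigma * random.uniform(-1.0, 1.0) for w in walk]
    return [x + w + rho * xb for x, w, xb in zip(X, walk, X_best)]
```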
The pseudocode of the proposed algorithm is as follows:

Start
  Initialize parameters
  Initialize t_max
  Initialize population of wolves
  Calculate fitness for all wolves
  Determine α, β, δ wolves according to fitness
  While (t < t_max)
    Update positions of ω wolves (original GGWO equation)
    Update positions of δ wolves (use Eq. 7.3)
    For each wolf
      Calculate fitness
    End For
    Determine α, β, δ wolves according to fitness
    Update coefficients a, A, C
    t = t + 1
  End While
  Return best solution
End
VIII. Methodology
8.0 General Framework
The general framework of the study consists of two phases:

Phase 1:
1. Implement GGWO in MATLAB and add the two new parameters, random walk and particle best.
2. Define the algorithm coefficients and parameters that are not part of the experiment.
3. Design of experiments for parameters σ and ρ.
4. Run and test the algorithm using benchmark test functions; if the results do not improve, return to step 3.

Phase 2:
5. Code the constraint-handling technique into the algorithm.
6. Model the benchmark truss structures in SAP2000.
7. Run the algorithm to optimize the benchmark trusses.
8. Gather and analyze the statistical data and results.
f_3(x) = \sum_{i=1}^{n} \left( \sum_{j=1}^{i} x_j \right)^2 | 30 | [-100, 100] | 0
f_4(x) = \max_i \{ |x_i|, 1 \le i \le n \} | 30 | [-100, 100] | 0
f_6(x) = \sum_{i=1}^{n} (\lfloor x_i + 0.5 \rfloor)^2 | 30 | [-100, 100] | 0
f_7(x) = \sum_{i=1}^{n} i x_i^4 + random[0, 1) | 30 | [-1.28, 1.28] | 0

Table 8.2.2 Multimodal Test Functions
Function | Dim | Range | Fmin
F_8(x) = \sum_{i=1}^{n} -x_i \sin(\sqrt{|x_i|}) | 30 | [-500, 500] | -418.9829 × 5
F_9(x) = \sum_{i=1}^{n} [x_i^2 - 10 \cos(2\pi x_i) + 10] | 30 | [-5.12, 5.12] | 0
F_{10}(x) = -20 \exp\left(-0.2 \sqrt{\frac{1}{n} \sum_{i=1}^{n} x_i^2}\right) - \exp\left(\frac{1}{n} \sum_{i=1}^{n} \cos(2\pi x_i)\right) + 20 + e | 30 | [-32, 32] | 0
F_{11}(x) = \frac{1}{4000} \sum_{i=1}^{n} x_i^2 - \prod_{i=1}^{n} \cos\left(\frac{x_i}{\sqrt{i}}\right) + 1 | 30 | [-600, 600] | 0
F_{12}(x) = \frac{\pi}{n} \left\{ 10 \sin^2(\pi y_1) + \sum_{i=1}^{n-1} (y_i - 1)^2 [1 + 10 \sin^2(\pi y_{i+1})] + (y_n - 1)^2 \right\} + \sum_{i=1}^{n} u(x_i, 10, 100, 4) | 30 | [-50, 50] | 0
  where y_i = 1 + \frac{x_i + 1}{4} and u(x_i, a, k, m) = \begin{cases} k (x_i - a)^m & x_i > a \\ 0 & -a < x_i < a \\ k (-x_i - a)^m & x_i < -a \end{cases}
F_{13}(x) = 0.1 \left\{ \sin^2(3\pi x_1) + \sum_{i=1}^{n} (x_i - 1)^2 [1 + \sin^2(3\pi x_i + 1)] + (x_n - 1)^2 [1 + \sin^2(2\pi x_n)] \right\} + \sum_{i=1}^{n} u(x_i, 5, 100, 4) | 30 | [-50, 50] | 0
F_{14}(x) = -\sum_{i=1}^{n} \sin(x_i) \cdot \sin^{2m}\left(\frac{i x_i^2}{\pi}\right), \; m = 10 | 30 | [0, \pi] | -4.687
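For reference, three of the multimodal functions in Table 8.2.2 (F9, F10 and F11) can be written directly in Python, following the formulas above:

```python
import math

def rastrigin(x):
    """F9: highly multimodal; global minimum 0 at the origin."""
    return sum(v * v - 10 * math.cos(2 * math.pi * v) + 10 for v in x)

def ackley(x):
    """F10: global minimum 0 at the origin."""
    n = len(x)
    return (-20 * math.exp(-0.2 * math.sqrt(sum(v * v for v in x) / n))
            - math.exp(sum(math.cos(2 * math.pi * v) for v in x) / n)
            + 20 + math.e)

def griewank(x):
    """F11: global minimum 0 at the origin."""
    s = sum(v * v for v in x) / 4000
    p = 1.0
    for i, v in enumerate(x, start=1):
        p *= math.cos(v / math.sqrt(i))
    return s - p + 1
```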
The performance of the algorithm on unconstrained problems will be evaluated using convergence curves, with the number of iterations on the abscissa and the objective value on the ordinate. Efficiency will be based on the minimum number of iterations needed to locate the optimum. The results will be compared to those of the original GWO (Mirjalili et al., 2014), which uses the same benchmark functions; the number of dimensions, particles and iterations will be the same for both algorithms so that the difference in the convergence curves can be quantified.

The exploration capability of the algorithm will be measured using the multimodal benchmark functions, which have several local optima; an algorithm's exploration can be considered effective if it escapes local-optima stagnation. Ten runs will be performed on each multimodal function, and the mean and standard deviation of the runs will be computed and used as the basis for assessing exploration capability in comparison with the GWO.
8.3 Application to truss problems
The algorithm will be tested on real-world constrained optimization problems to evaluate its computational efficiency. It will be applied to three benchmark trusses: a 3-bar planar truss (Kumar and Kumar, 2017), a 15-bar planar truss (Li et al., 2009) and an 18-bar planar truss (Imai and Schmit, 1981). In addition to the benchmark problems from the literature, the algorithm will also be tested on an existing truss structure. For the benchmark problems, computational efficiency will be measured by the number of iterations needed to find the global optimum; for the existing truss, the optimized areas obtained from the algorithm will be compared with the areas used in the actual design. The solutions obtained will also be compared with those of the traditional GWO (Mirjalili et al., 2014).
8.3.1 3-bar Planar Truss Problem

Fig. 8.1 3-bar Planar Truss (L = 100 cm, P = 2 kN/cm²)

This design problem aims to minimize the volume of the three members of the truss subject to stress constraints. The decision variables are the cross-sectional areas A_1 and A_2 shown in the figure. Prayogo et al. (2018) state the problem as:

Minimize f(A_1, A_2) = (2\sqrt{2} A_1 + A_2) \times L   (8.3.1.1)

Subject to:
g_3 = \frac{P}{A_1 + \sqrt{2} A_2} \le \sigma   (8.3.1.2)
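A minimal sketch of the objective and constraint of this problem, assuming an allowable stress of 2 kN/cm² (the value commonly used with this benchmark; it is not stated in the excerpt above):

```python
import math

L = 100.0      # cm, as given in Fig. 8.1
P = 2.0        # kN/cm^2, as given in Fig. 8.1
SIGMA = 2.0    # kN/cm^2 allowable stress (assumed)

def volume(a1, a2):
    """Objective (Eq. 8.3.1.1): material volume of the 3-bar truss."""
    return (2 * math.sqrt(2) * a1 + a2) * L

def g3_ok(a1, a2):
    """Stress constraint (Eq. 8.3.1.2): True when the design is feasible."""
    return P / (a1 + math.sqrt(2) * a2) <= SIGMA
```

An optimizer would minimize `volume` over (A_1, A_2) while rejecting or penalizing designs for which `g3_ok` is False.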
Fig. 8.2 15-bar Planar Truss

The 15-bar planar truss shown has a density of 7800 kg/m³ and a modulus of elasticity of 200 GPa, as formerly studied by Zhang et al. (2003). The discrete variables for the cross-sectional areas of the 15 truss members will be taken from the set D = [113.2, 143.2, 145.9, 174.9, 185.9, 235.9, 265.9, 297.1, 308.6, 334.3, 338.2, 497.8, 507.6, 736.7, 791.2, 1063.7] (mm²). The truss members will be subjected to stress limitations of ±120 MPa and nodal displacement limitations of ±10 mm. Three different load cases will be considered: Case 1: P1 = 35 kN,
Fig. 8.3 18-bar Planar Truss

An 18-bar planar truss is shown in Fig. 8.3 (Imai and Schmit, 1981). The truss members have a Young's modulus of 10,000 ksi, a density of 0.1 lb/in³ and an allowable stress of 20 ksi in both tension and compression. The stress in any compressive member i is not allowed to exceed the Euler critical buckling stress, defined as:

\sigma_{bi} = \frac{K E A_i}{L_i^2}   (8.3.3.1)

where K is the member effective length factor, assumed here as 4; A_i and L_i are the member's cross-sectional area and unsupported length, respectively; and E is the Young's modulus. The truss is subjected to a set of 20-kip point loads acting downward at the upper nodes. Each member must have a cross-sectional area greater than 0.1 in². The members are assigned to four groups: Group 1: 1, 4, 8, 12, 16; Group 2: 2, 6, 10, 14, 18; Group 3: 3, 7, 11, 15; and Group 4: 5, 9, 13, 17. The problem is mathematically stated as:

Minimize Weight = \sum_{i=1}^{n} \rho A_i L_i   (8.3.3.2)

With respect to:   (8.3.3.3)
A_{1i} = (A_1, A_4, A_8, A_{12}, A_{16})   (8.3.3.4)
A_{2i} = (A_2, A_6, A_{10}, A_{14}, A_{18})   (8.3.3.5)
A_{3i} = (A_3, A_7, A_{11}, A_{15})   (8.3.3.6)
A_{4i} = (A_5, A_9, A_{13}, A_{17})   (8.3.3.7)

Subject to:   (8.3.3.8)
A > 0.1 in²   (8.3.3.9)
\sigma_{tensile} \le 20 ksi   (8.3.3.10)
\sigma_{compressive} \le 20 ksi   (8.3.3.11)
\sigma_{compressive} \le \sigma_{bi} = \frac{K E A_i}{L_i^2}   (8.3.3.12)
where K = 4   (8.3.3.13)
8.4 Real-world Truss

Figure 8.4.1 – Real world truss

The real-world truss structure to be optimized was taken from a church building. It has a clear span of 19.15 from support to support, and the truss members were designed using double angle bars. The design sections used are shown in Figure 8.4.1. The structural designer modelled the truss with fixed-ended members at the nodes and uniformly distributed loads along the top chords, as shown in Figures 8.4.1 to 8.4.4b.

In the optimization process, it will be assumed that the connections of the members are pinned and that the loads are concentrated at the nodes along the top chords of the truss. This idealized-truss assumption will be used because ideal truss members carry only axial loads. However, to be consistent with the model assumptions of the structural designer, the member forces due to the uniformly distributed load with fixed ends will be compared with those due to the point loads with pinned connections, both to check the discrepancy between the values and to verify that the idealized-truss assumptions are valid.

The sections to be considered in the optimization process will be taken from the AISC Shapes Database version 15.0 for double angle bars, as shown in Table 8.4.1. Locally supplied sections will also be considered for practicality of the design and to avoid bias relative to the sections that the designer considered in designing the truss.
957Design cases during the optimization of the truss will be considered.
958
959Case 1 will follow the section groupings of the designer: Section A – Top chord, bottom chord,
960king post, Section B- Web members. This is shown on Figure 8.4.1.
961
962 Figure 8.4.1 – Case 1
963Case 2 will have the following sections: Section A- Top chord and king post, Section B- Bottom
964chord, Section C- Web members. This is shown on Figure 8.4.2.
965
966 Figure 8.4.2 – Case 2
967
968Case 3 – Unique sections per member based on the algorithm. This is shown on Figure 8.4.3
969
970 Figure 8.4.3 – Case 3
971
972
973Case 4 – Group section trends based on Case 3, then re-optimize based on section groupings.
974This is shown on Figure 8.4.4.
975
976 Figure 8.4.4 – Case 4
977The results of the optimization process will be compared to the built-in optimization tool of
978SAP2000.
979
The optimization problem statement is as follows:

Minimize Weight = \sum_{i=1}^{n} \rho A_i L_i   (8.3.4.1)

With respect to: A_i = (A_1, A_2, \ldots, A_n)   (8.3.4.2)

Subject to the constraints:   (8.3.4.3)

For tension members:
A_i \ge A_g, \quad \text{where } A_g = \frac{T}{0.60 F_y}   [NSCP 504.2.1]

Slenderness ratio:
\max \frac{L_i}{r_i} \le 240 \text{ for main members}   [NSCP 502.8.2]
\max \frac{L_i}{r_i} \le 300 \text{ for secondary and bracing members}

For compression members:
C_c = \sqrt{\frac{2 \pi^2 E}{F_y}}   [NSCP 505-1a]

Slenderness ratio:
\frac{K L_i}{r_i} < 200 \text{ for all compression members}   [NSCP 502.8.1]
where K L_i = 1.0 L for members pin-connected at both ends

When \frac{K L_i}{r_i} \le C_c:
F_a = \frac{\left[ 1 - \frac{(KL/r)^2}{2 C_c^2} \right] F_y}{FS}   [NSCP 505.3.1]
where FS = \frac{5}{3} + \frac{3 (KL/r)}{8 C_c} - \frac{(KL/r)^3}{8 C_c^3}

When \frac{K L_i}{r_i} > C_c:
F_a = \frac{1.03 \times 10^6}{(KL/r)^2}   [NSCP 505.3.2]
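The allowable-compressive-stress provisions above can be collected into a single helper for the constraint check. The material constants used here (Fy = 248 MPa for A36 steel, E = 200,000 MPa) are illustrative assumptions.

```python
import math

def allowable_compressive_stress(kl_over_r, fy=248.0, e_mod=200000.0):
    """Allowable compressive stress Fa (MPa) per the ASD-style NSCP
    provisions quoted above."""
    cc = math.sqrt(2 * math.pi ** 2 * e_mod / fy)    # NSCP 505-1a
    if kl_over_r <= cc:
        ratio = kl_over_r / cc
        fs = 5 / 3 + 3 * ratio / 8 - ratio ** 3 / 8  # factor of safety
        return (1 - ratio ** 2 / 2) * fy / fs        # NSCP 505.3.1
    # Elastic (Euler) range: 12*pi^2*E / (23*(KL/r)^2), which is about
    # 1.03e6 / (KL/r)^2 for E = 200,000 MPa           (NSCP 505.3.2)
    return 12 * math.pi ** 2 * e_mod / (23 * kl_over_r ** 2)
```

At KL/r = 0 this reduces to 0.6 Fy, and in the elastic range it follows the 1.03 × 10⁶ / (KL/r)² curve quoted above.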
The cross-sectional areas A_i will be obtained from:

Table 8.4.1 AISC Shapes
DOUBLE ANGLE SIZE | AREA (mm²) | DOUBLE ANGLE SIZE | AREA (mm²) | DOUBLE ANGLE SIZE | AREA (mm²)
2L305X305X34.9 40100 2L152X152X14.3 8320 2L89X89X9.5 3230
2L305X305X31.8 36600 2L152X152X12.7 7420 2L89X89X7.9 2710
2L305X305X28.6 33300 2L152X152X11.1 6580 2L89X89X6.4 2190
2L305X305X25.4 29700 2L152X152X9.5 5650 2L76X76X12.7 3560
2L254X254X34.9 33000 2L152X152X7.9 4740 2L76X76X11.1 3140
2L254X254X31.8 30200 2L127X127X22.2 10300 2L76X76X9.5 2720
2L254X254X28.6 27500 2L127X127X19 9030 2L76X76X7.9 2300
2L254X254X25.4 24500 2L127X127X15.9 7610 2L76X76X6.4 1860
2L254X254X22.2 21700 2L127X127X12.7 6180 2L76X76X4.8 1410
2L254X254X19 18700 2L127X127X11.1 5450 2L64X64X12.7 2920
2L203X203X28.6 21700 2L127X127X9.5 4710 2L64X64X9.5 2230
2L203X203X25.4 19500 2L127X127X7.9 3960 2L64X64X7.9 1880
2L203X203X22.2 17200 2L102X102X19 7030 2L64X64X6.4 1540
2L203X203X19 14800 2L102X102X15.9 5950 2L64X64X4.8 1160
2L203X203X15.9 12500 2L102X102X12.7 4840 2L51X51X9.5 1770
2L203X203X14.3 11300 2L102X102X11.1 4260 2L51X51X7.9 1500
2L203X203X12.7 10100 2L102X102X9.5 3690 2L51X51X6.4 1220
2L152X152X25.4 14200 2L102X102X7.9 3100 2L51X51X4.8 929
2L152X152X22.2 12600 2L102X102X6.4 2490 2L51X51X3.2 634
2L152X152X19 10900 2L89X89X12.7 4190 2L89X89X9.5 3230
2L152X152X15.9 9230 2L89X89X11.1 3730 2L89X89X7.9 2710
Table 8.4.2 Local Supply of Double Angle Sizes
25x25x2 | 32x32x2 | 38x38x2 | 50x50x2 | 65x65x2 | 70x70x2
25x25x2.5 | 32x32x2.5 | 38x38x2.5 | 50x50x2.5 | 65x65x2.5 | 70x70x2.5
Figure 8.4.5 - Dead loads on truss

8.4.1b Roof live loads

Figure 8.4.6 - Roof live load on truss

Figure 8.4.7(a) – Wind loads on truss (1)

Figure 8.4.7(b) – Wind loads on truss (1)

Figure 8.4.8(a) – Wind loads on truss (2)

Figure 8.4.8(b) – Wind loads on truss (2)
10561X. References
Agarwal, P., & Mehta, S. (2014). Nature-inspired algorithms: State-of-art, problems and prospects. International Journal of Computer Applications, 100(14), 14-21.
Bai, X., Li, Y., & Yang, K. (2010). Hybrid genetic algorithm and its application in structural optimization design. 2010 2nd IEEE International Conference on Information Management and Engineering.
Beck, A. (2014). Introduction to Nonlinear Optimization: Theory, Algorithms, and Applications with MATLAB (Sec. 8.2.2, Convex Quadratic Problems). Society for Industrial and Applied Mathematics.
Belegundu, A. D., & Chandrupatla, T. R. (2011). Optimization Concepts and Applications in Engineering (2nd ed., Sec. 9.1). Cambridge University Press.
Belegundu, A. D., & Chandrupatla, T. R. (2011). Optimization Concepts and Applications in Engineering (2nd ed., Sec. 1.1). Cambridge University Press. Retrieved from https://app.knovel.com/hotlink/pdf/id:kt008MI3L5/optimization-concepts/introduction
Bhattacharyya, S., & Dutta, P. (2015). Handbook of Research on Swarm Intelligence in Engineering. IGI Global, United States of America.
Briseghella, B., Fenu, L., Lan, C., Mazzarolo, E., & Zordan, T. (2012). An application of topology optimization to bridge design. ASCE Journal of Bridge Engineering.
Bostian, C. W., Kaminski, N. J., & Fayez, A. S. (2016). Cognitive Radio Engineering (Sec. 2.4.2.2, Evolutionary Algorithms, p. 36). Institution of Engineering and Technology. Retrieved from https://app.knovel.com/hotlink/pdf/id:kt0113FGR1/cognitive-radio-engineering/evolutionary-algorithms
Cheung, Y. K., & Chau, K. W. (2005). Tall Buildings: From Engineering to Sustainability (Acknowledgement). World Scientific. Retrieved from https://app.knovel.com/hotlink/pdf/id:kt004O1133/tall-buildings-from-engineering/structural-acknowledgement
Cheung, Y. K., & Chau, K. W. (2005). Tall Buildings: From Engineering to Sustainability (Sec. 78.7.4.1, GA vs. Trial and Error). World Scientific. Retrieved from https://app.knovel.com/hotlink/pdf/id:kt004O18F1/tall-buildings-from-engineering/ga-vs-trial-error
Cormen, T. H., Leiserson, C. E., Rivest, R. L., & Stein, C. (2009). Introduction to Algorithms (3rd ed., Sec. 34.1, Polynomial Time, p. 1053). MIT Press. Retrieved from https://app.knovel.com/hotlink/pdf/id:kt00U0LDO4/introduction-algorithms/polynomial-time
Cuesta, H. (2013). Practical Data Analysis (Sec. 6.2, Random Walk Simulation, p. 106). Packt Publishing. Retrieved from https://app.knovel.com/hotlink/pdf/id:kt00U5TSN1/practical-data-analysis/random-walk-simulation
Cuesta, H. (2013). Practical Data Analysis (Sec. 6.3, Monte Carlo Methods). Packt Publishing. Retrieved from https://app.knovel.com/hotlink/pdf/id:kt00U5TSO7/practical-data-analysis/monte-carlo-methods
Wolpert, D. H., & Macready, W. G. (1997). No free lunch theorems for optimization. IEEE Transactions on Evolutionary Computation, 1(1), 67-82.
Digalakis, J., & Margaritis, K. (2001). On benchmarking functions for genetic algorithms. International Journal of Computer Mathematics, 77, 481-506.
Dincer, I., & Rosen, M. A. (2013). Exergy: Energy, Environment, and Sustainable Development (2nd ed., Sec. 24.3.3.2, ANN, p. 485). Elsevier. Retrieved from https://app.knovel.com/hotlink/pdf/id:kt00C7BJOD/exergy-energy-environment/ann
Engineering Design Handbook: Development Guide for Reliability, Part Three, Reliability Prediction (AMCP 706-197), Sec. 12.3.4, The Kuhn-Tucker Conditions (p. 12). U.S. Army Materiel Command. Retrieved from https://app.knovel.com/hotlink/pdf/id:kt008JHHI4/engineering-design-handbook-21/kuhn-tucker-conditions
Rao, S. S. (2009). Engineering Optimization: Theory and Practice (4th ed., p. 301). John Wiley & Sons.
Rao, S. S. (2009). Engineering Optimization: Theory and Practice (4th ed., p. 8, Fig. 1.4). John Wiley & Sons.
Zhang, G., et al. (2015). Multi-Level Decision Making (Intelligent Systems Reference Library, p. 26). Springer-Verlag Berlin Heidelberg, Heidelberg, Germany.
Górak, A., & Sorensen, E. (2014). Distillation: Fundamentals and Principles (Sec. 5.8.3, Nonlinear Programming Approaches). Elsevier.
Gorse, C., Johnston, D., & Pritchard, M. (2012). Dictionary of Construction, Surveying and Civil Engineering. Oxford University Press. Retrieved from https://app.knovel.com/hotlink/toc/id:kpDCSCE002/dictionary-construction/dictionary-construction
Haklı, H., & Uğuz, H. (2014). A novel particle swarm optimization algorithm with Levy flight. Applied Soft Computing, 23, 333-345.
Hamming, R. (1973). Numerical Methods for Scientists and Engineers (2nd ed., Sec. 43.10, Optimization Subject to Linear Constraints, p. 674). Dover Publications. Retrieved from https://app.knovel.com/hotlink/pdf/id:kt00B4ES4D/numerical-methods-scientists/optimization-subject
Heidari, A. A., & Pahlavani, P. (2017). An efficient modified grey wolf optimizer with Lévy flight for optimization tasks. Applied Soft Computing, 60, 115-134. doi:10.1016/j.asoc.2017.06.044
Imai, K., & Schmit, L. A. (1981). Configuration optimization of trusses. Journal of the Structural Division, 107, 745-756.
Kumar & Kumar. (2017). An astrophysics-inspired grey wolf algorithm for numerical optimization and its application to engineering design problems. Advances in Engineering Software. Elsevier.
Kurowski, P. M. (2004). Finite Element Analysis for Design Engineers (Sec. 8.2.2, Sensitivity Studies). SAE International. Retrieved from https://app.knovel.com/hotlink/pdf/id:kt0082K2H3/finite-element-analysis/sensitivity-studies
Poe, W. A., & Mokhatab, S. (2017). Modeling, Control, and Optimization of Natural Gas Processing Plants (Sec. 4.5.5.1, Mixed Integer Linear Programming). Elsevier.
Eberhart, R., & Kennedy, J. (1995). A new optimizer using particle swarm theory. Proceedings of the Sixth International Symposium on Micro Machine and Human Science (MHS '95), pp. 39-43.
Jensi, R., & Jiji, G. W. (2016). An enhanced particle swarm optimization with Levy flight for global optimization. Applied Soft Computing, 43, 248-261.
Fox, R. L. (1971). Optimization Methods in Engineering Design. Addison Wesley.
Radosavljević, J. (2018). Metaheuristic Optimization in Power Engineering (Sec. 1.1). Institution of Engineering and Technology. Retrieved from https://app.knovel.com/hotlink/pdf/id:kt011MAC21/metaheuristic-optimization/introduction
Rao, S. S. (2009). Engineering Optimization: Theory and Practice (4th ed., p. 52). John Wiley & Sons.
Rao, S. S. (2009). Engineering Optimization: Theory and Practice (4th ed.). John Wiley & Sons.
Soliman, S. A., & Mantawy, A. H. (2012). Modern Optimization Techniques with Applications in Electric Power Systems (Energy Systems, p. 26). Springer Science+Business Media.
Arora, S., & Barak, B. (2007). Computational Complexity: A Modern Approach (p. 2). Cambridge University Press.
Savić, D. A., & Banyard, J. K. (2011). Water Distribution Systems (Sec. 7.3.2, Applications of Multiple-Objective Optimisation to WDSs). ICE Publishing.
Stark, R. M., & Nicholls, R. L. (2005). Mathematical Foundations for Design: Civil Engineering Systems.
Talbi, E.-G. (2009). Metaheuristics: From Design to Implementation. John Wiley & Sons.
Talbi, E.-G. (2009). Metaheuristics: From Design to Implementation (p. 14). John Wiley & Sons.
Talbi, E.-G. (2009). Metaheuristics: From Design to Implementation (pp. 48-53). John Wiley & Sons.
Taniguchi, E., Thompson, R. G., Yamada, T., & van Duin, R. (2001). City Logistics: Network Modelling and Intelligent Transport Systems (Sec. 2.6.1, Genetic Algorithms).
Yang, X. S. (2010). Firefly algorithm, Lévy flights and global optimization. In M. Bramer, R. Ellis, & M. Petridis (Eds.), Research and Development in Intelligent Systems XXVI. Springer, London. Retrieved from https://doi.org/10.1007/978-1-84882-983-1_15
X. Cost Estimates

10.1 Proposed Budget for Materials

Item | Description | Quantity | Cost/Unit | Subtotal
[1] None
June 7-13 | Person(s) Responsible: Bacay, Golo, Noval
Activity/Task | Expected Output
[1] Read more about GWO and decide what to improve about the algorithm | Algorithm improvements and additions
[2] Make mind map | Mind map
[3] Read more about structural optimization | Learn about structural optimization
June 14-20 | Person(s) Responsible: Bacay, Golo, Noval
Activity/Task | Expected Output
[1] Read more about GWO and decide what to improve about the algorithm | Algorithm improvements and additions
[2] Make mind map | Mind map
[3] Read more about structural optimization | Learn about structural optimization
June 21-27 | Person(s) Responsible: Bacay, Golo, Noval
Activity/Task | Expected Output
[1] Thesis update with Engr. Biton | Suggestions and comments to improve our research
[2] Read more about truss size optimization | Things to consider in truss size optimization
[3] Read more about GWO variants | Addition or improvement for GWO algorithm
[4] Make mind map | Mind map
June 28 - July 4 | Person(s) Responsible: Bacay, Golo, Noval
Activity/Task | Expected Output
[1] Read more about truss size optimization | Things to consider in truss size optimization
[2] Read more about GWO variants | Addition or improvement for GWO algorithm
[3] Make mind map | Mind map
July 5-11 | Person(s) Responsible: Bacay, Golo, Noval
Activity/Task | Expected Output
[1] Thesis update with Engr. Biton | Suggestions and comments to improve our research
[2] Read more about truss size optimization | Things to consider in truss size optimization
[3] Read more about Random Walk and Particle Best Memory | Addition or improvement for GWO algorithm
[4] Make mind map | Mind map
[5] Write Introduction and RRL of paper | Written introduction and RRL
July 12-18 | Person(s) Responsible: Bacay, Golo, Noval
Activity/Task | Expected Output
[1] Read more about truss size optimization | Things to consider in truss size optimization
[2] Read more about Random Walk and Particle Best Memory | Addition or improvement for GWO algorithm
[3] Make mind map | Mind map
[4] Write Introduction and RRL of paper | Written introduction and RRL
July 19-25 | Person(s) Responsible: Bacay, Golo, Noval
Activity/Task | Expected Output
[1] Read more about truss size optimization | Things to consider in truss size optimization
[2] Read more about Random Walk and Particle Best Memory | Addition or improvement for GWO algorithm
August 16-22 | Person(s) Responsible: Bacay, Golo, Noval
Activity/Task | Expected Output
[1] Read more about truss size optimization | Things to consider in truss size optimization
[2] Read more about Random Walk and Particle Best Memory | Addition or improvement for GWO algorithm
[3] Make mind map | Mind map
[4] Write Introduction and RRL of paper | Written introduction and RRL
August 23-29 | Person(s) Responsible: Bacay, Golo, Noval
Activity/Task | Expected Output
[1] Pass first draft | Revisions for first draft
[2] Make revisions for first draft | Revised paper
[3] Research about the methodology | Methodology
August 30 - September 5 | Person(s) Responsible: Bacay, Golo, Noval
Activity/Task | Expected Output
[1] Make revisions for first draft | Revised paper
[2] Research about the methodology | Methodology
September 6-12 | Person(s) Responsible: Bacay, Golo, Noval
Activity/Task | Expected Output
[1] Make revisions for first draft | Revised paper
[2] Research about the methodology | Methodology
September 13-19 | Person(s) Responsible: Bacay, Golo, Noval
Activity/Task | Expected Output
[1] Make revisions for first draft | Revised paper
[2] Research about the methodology | Methodology
September 20-26 | Person(s) Responsible: Bacay, Golo, Noval
Activity/Task | Expected Output
[1] Thesis update with Engr. Biton | Suggestions and comments to improve our research
[2] Make revisions for first draft | Revised paper
[3] Research about the methodology | Methodology
September 27 - October 3 | Person(s) Responsible: Bacay, Golo, Noval
Activity/Task | Expected Output
[1] Make revisions for first draft | Revised paper
[2] Research about the methodology | Methodology
October 4-10 | Person(s) Responsible: Bacay, Golo, Noval
Activity/Task | Expected Output
[1] Make revisions for first draft | Revised paper
[2] Research about the methodology | Methodology
October 11-17 | Person(s) Responsible: Bacay, Golo, Noval
Activity/Task | Expected Output
[1] Make revisions for first draft | Revised paper
[2] Research about the methodology | Methodology
October 18-24 | Person(s) Responsible: Bacay, Golo, Noval
Activity/Task | Expected Output
[1] Make revisions for first draft | Revised paper
[2] Research about the methodology | Methodology
October 25-31
December 6-12 | Person(s) Responsible: Bacay, Golo, Noval
Activity/Task | Expected Output
[1] Prepare for proposal defense | PowerPoint presentation for proposal defense
[2] Thesis proposal defense | Necessary revisions for research
[3] Make revisions | Revised paper
December 13-19 | Person(s) Responsible: Bacay, Golo, Noval
Activity/Task | Expected Output
[1] Make revisions | Revised paper
[2] Pass revised paper | Approval of revised paper
December 27 - January 2 | Person(s) Responsible: Bacay, Golo, Noval
Activity/Task | Expected Output
[1] Model GGWO in MATLAB | Algorithm will be coded and functional in MATLAB
January 3-9 | Person(s) Responsible: Bacay, Golo, Noval
Activity/Task | Expected Output
[1] Model GGWO in MATLAB | Algorithm will be coded and functional in MATLAB
January 10-16 | Person(s) Responsible: Bacay, Golo, Noval
Activity/Task | Expected Output
[1] Integrate and model GGWO with Random Walk | Algorithm will be coded and functional in MATLAB
[2] Integrate and model GGWO with Particle Best Memory | Algorithm will be coded and functional in MATLAB
January 17-23 | Person(s) Responsible: Bacay, Golo, Noval
Activity/Task | Expected Output
[3] Gather results and make final paper | Written discussion of results
March 14-20 | Person(s) Responsible: Bacay, Golo, Noval
Activity/Task | Expected Output
[1] Apply algorithm to real-world steel truss
[3] Gather results and make final paper | Written discussion of results
March 21-27 | Person(s) Responsible: Bacay, Golo, Noval
Activity/Task | Expected Output
[1] Gather results and make final paper | Written discussion of results
[2] Pass first draft of final paper | Pass draft to adviser so that he can make necessary corrections
March 28 - April 3 | Person(s) Responsible: Bacay, Golo, Noval
Activity/Task | Expected Output
[1] Make revisions for first draft of final paper | Corrected final paper
[2] Make second draft of final paper | Second draft of final paper
[3] Pass second draft of final paper | Pass draft to adviser so that he can make necessary corrections
April 4-10 | Person(s) Responsible: Bacay, Golo, Noval
Activity/Task | Expected Output
[1] Make revisions for second draft of final paper | Corrected final paper
[2] Pass third draft of final paper | Third draft of final paper
April 11-17 | Person(s) Responsible: Bacay, Golo, Noval
Activity/Task | Expected Output
[1] Schedule final defense | Date of final defense