30th July, 2002

I, Dinar Vivek Deshmukh, hereby submit this as part of the requirements for the degree of:

Master of Science

in:

Department of Mechanical, Industrial and Nuclear Engineering

It is entitled:

Design Optimization of Mechanical Components

Approved by:

Dr. David Thompson
Dr. Sam Anand
Dr. Edward Berger
Design Optimization of Mechanical Components
A Thesis submitted to the
Division of Research and Advanced Studies
of the University of Cincinnati
in partial fulfillment of the
requirements for the degree of
MASTER OF SCIENCE
in the Department of Mechanical, Industrial and Nuclear Engineering
of the College of Engineering
2002
by
Dinar V. Deshmukh
B. E., University of Pune, Pune, 1999
Committee Chair: Dr. David F. Thompson
Abstract
The need for high-performance, low-cost designs in the engineering industry drives designers to search for an ideal design. Design optimization has therefore assumed greater importance in product development than ever before. For a given design problem there are usually multiple parameters to be optimized, and these are often conflicting in nature. The designer must arrive at a suitable design that not only fulfills all the requirements but also meets each optimization objective to the greatest extent possible.
With this goal, an integrated approach to optimization is presented here with the scope of
designing mechanical components in mind. The method presented incorporates a goal
programming approach called the Compromise Decision Support Problem Technique combined
with a Branch and Bound Algorithm to solve multiobjective, constrained nonlinear problems
with mixed variables. The approach is initially presented qualitatively, and later illustrated with
practical design problems.
The approach is well suited to solving real-life engineering problems that are highly constrained, have multiple objectives, and must yield designs that adhere to industry standards. We have applied the approach to the design of a helical compression spring and a three-stage gear train, components that find universal application in mechanical engineering. The nature of Pareto Optimal solutions for multiple objective problems is explored and a tradeoff analysis is proposed. The results obtained are compared with existing solutions in the literature.
Acknowledgement
First and foremost I would like to express my gratitude to Dr. David F. Thompson for
introducing me to ‘optimization’. The search that started as a graduate course has developed into
this thesis. I would like to thank him for his constant encouragement and input that has
contributed substantially to my work. Without his guidance, I would not have been able to
explore the rich and diverse subject of optimal design engineering.
I would also like to thank Dr. Ed Berger and Dr. Sam Anand for serving on my thesis committee. Their appraisal has improved this work beyond what I could have envisioned. I thank them for their support and encouragement.
I express my thanks to my colleagues in the Department of Mechanical, Industrial and Nuclear Engineering, who were always available for valuable discussion and much-needed input. They provided a congenial work environment and the friendly banter that encouraged me throughout my stay at the University of Cincinnati. I also thank the staff of the Engineering Library at the University of Cincinnati for their assistance in procuring many of the references, which would not have been accessible otherwise.
I would like to thank my parents, who have provided me with an excellent opportunity to seek knowledge. Without their blessings I would not have been able to pursue my academic goals. My thanks would be incomplete without mentioning the support and encouragement of Bela, who has been an inspiration throughout.
TABLE OF CONTENTS

A. List of Figures
B. List of Tables
C. List of Symbols
1. The Design Process
2. Single Objective Optimization
3. Multiple Objective Optimization
4. The Compromise Decision Support Problem Technique
5. The Spring Design Problem
6. The Gear Train Design Problem
7. Conclusion
References
Appendix 1
Appendix 2
LIST OF FIGURES

1.1 Conventional Design Process
1.2 Design Optimization Process
3.1 Geometrical Interpretation of the Weighting Objectives Method
3.2 Weighting Objectives Method for a Non-Convex Problem
3.3 Graphical Definition of Pareto Optimal
3.4 (a) Convex Feasible Set
3.4 (b) Non-Convex Feasible Set
4.1 Graphical Representation of the Compromise DSP
4.2 Progress of the Branch and Bound Algorithm
5.1 Helical Compression Spring
6.1 Surface Fatigue Life Factor
6.2 Three Stage Gear Train
6.3 Lewis Geometry Factor
6.4 Pareto Optimal Curves for Constant Torque
6.5 Pareto Optimal Curves for Constant Speed Ratio
LIST OF TABLES

5.1 Allowable Wire Diameters for ASTM A228
5.2 Comparison of Optimal Solutions
5.3 Percentage Improvement
5.4 Comparison of Solution Time
6.1 Input Parameters
6.2 Material Properties and Constants
6.3 Pareto Optimal Solutions for Constant Torque
6.4 Pareto Optimal Solutions for Constant Speed Ratio
LIST OF SYMBOLS

The list of symbols below is used to describe a general optimization problem in this work. Other symbols specific to individual problems are listed as and when they are used.

x_i    Design Variable
X      Design Vector
d_i    Deviation Variable (Compromise Decision Support Problem)
f      Objective Function
A_i    Actual Value of System Goal
G_i    Desired Value of System Goal
g_i    Inequality Constraints
h_i    Equality Constraints
x_lb   Lower Bound on Design Variables
x_ub   Upper Bound on Design Variables
w_i    Weights Attached to Individual Objective Functions
Chapter 1
The Design Process
1.1 Introduction
The process of design may be broadly described as the process of converting information that characterizes the needs and requirements for a product into knowledge about that product. This definition can be applied to products as well as processes. By product we mean a tool necessary to fulfill a specified objective with a given set of parameters. A process, however, can be the method used to arrive at such a product by choosing among the various alternatives available [1].
The job of a designer is not simply limited to computing various parameters associated
with a design. Increasingly, decisions taken during the design process define the primary
responsibility of the designer. Decisions in design are invariably multileveled and
multidimensional in nature. They involve information that may come from different sources and
disciplines. There may not be a single parameter that defines merit and performance. Problems
that have more than one performance parameter are often known as multiobjective optimization
problems. All the information required to arrive at a decision may not be available. Some of the
information used in arriving at a decision may be hard, that is, based on scientific principles and
some information may be soft, based on the designer's judgment and experience.
There are constraints that determine the feasibility of a design or of a decision. In the end there might not even be a feasible solution to the problem. In most real-life scenarios, problems are generally not well defined and there is great ambiguity in determining the design parameters. As such, there may be no benchmarks by which to choose one design over another. This is usually the case in ‘open systems’, systems that cannot be analyzed independently of the environment in which they exist. For a closed-form, well-formulated problem, however, the designer may arrive at a suitable solution or a family of solutions.
In general, design can be classified into the following three families [1]:
• Original Design: An original solution principle is determined for a desired system and used to create the design of a product. The design specification for the system may require the same, a similar, or an entirely new task.
• Adaptive Design: An original design is adapted to different conditions or tasks; the solution principle remains the same, but the product will be sufficiently different that it can meet the changed tasks that have been specified.
• Variant Design: The size and/or arrangement of parts or subsystems of the chosen system are varied. The desired tasks and solution principle are not changed.
In any case, however, the basic structure of the design process remains the same. Conventional design has been based largely on the knowledge gained by the designer over years of experience. The designer can use this knowledge, in the form of heuristics, to arrive at a satisfactory solution to a design problem. The structure of the conventional design process is illustrated in the flowchart below [2].
[Fig. 1.1 Conventional Design Process [2]. Flowchart: collect information to describe the system; estimate an initial design; analyze the system; check the performance criteria. If the design is satisfactory, stop; if not, change the design based on experience/heuristics and repeat the analysis.]
1.2 Motivation
Mechanical design optimization remains an extensively researched topic to date. Over the years, many optimization methods have been proposed. They span a vast range of problem types, such as linear programming, unconstrained nonlinear optimization, constrained nonlinear optimization, and single and multiobjective optimization. Presenting all the well-known optimization methods is beyond the scope of this work; however, relevant topics are addressed in subsequent chapters.
There exist well-documented solution methods for certain standard types of problems. The Simplex Algorithm can solve linear programming problems. Variants of linear programming, such as integer and discrete-valued linear programming, have also been addressed in the literature. There exist methods for unconstrained nonlinear optimization such as the Steepest Descent Method, the Conjugate Gradient Method and quasi-Newton Methods. However, these methods are useful mainly in an academic sense, because almost all engineering problems are constrained nonlinear optimization problems.
Such problems present a challenge to researchers, since solution methods for constrained optimization involve iterative search methods. Usually an initial guess is provided, and a systematic search procedure generates successively better solutions until an ‘optimal solution’ is found that cannot be improved further. Thus the optimization scheme is an iterative one and involves at least some degree of trial and error. As the problem complexity increases, the search procedure becomes tedious and may not guarantee a solution in all cases. Nevertheless, the rigorous formulation of the design problem helps the designer to better understand the overall scope of the problem. The conventional design process is inexact and time-consuming. In contrast, an optimization procedure is fast and accurate, and it does not rely on the designer's experience as much as the conventional design process does. The overall structure of an optimization approach to design is presented in the flowchart below [2].
[Fig. 1.2 Design Optimization Process [2]. Flowchart: identify (i) the design variables, (ii) the objective function to be optimized, and (iii) the constraints to be satisfied; collect data to describe the system; estimate an initial design; analyze the system; check the constraints. If the design satisfies the convergence criteria, stop; if not, update the design using an optimization scheme and repeat the analysis.]
Optimization problems with multiple objectives present an even greater challenge. First, the problem must be reformulated so that it can be solved using existing search methods. The designer has to rank or weight all the objectives in some order of importance. The actual solution procedure involves a higher degree of complexity, and this greatly affects the performance of the solution algorithm. The objectives are usually conflicting in nature, and hence the designer needs to arrive at an optimum tradeoff among all the objectives.
In industry, many mechanical parts have been standardized. Parameters such as bolt sizes, shaft diameters, gear pitches and rivet sizes have standard values specified by a governing organization such as ASTM. Hence, any design on paper has to adhere to these standards before it can be put into practice. A design that does not meet the relevant standards cannot be implemented. The designer therefore faces the added constraint of obtaining the best possible solution while adhering to existing standards.
Thus, the field of engineering optimization is a rich and diverse one. Researchers have devised efficient algorithms that can solve highly complex problems; however, such an algorithm may not adapt well to other applications. Each optimization problem presents the designer with a new set of challenges and hence demands new solution methods. There is therefore a need for a solution method that can be applied to a generic optimization problem and is not restricted to a certain class of problems.
In this work we present an integrated approach to solving constrained nonlinear
engineering optimization problems with multiple objectives. In addition we address the issue of
standardization of mechanical components, which calls for a mixed variable solution algorithm.
Our approach is to use the Compromise Decision Support Problem method to initially solve the
multiobjective nonlinear constrained optimization problem. Then we proceed to use the Branch
and Bound Algorithm to solve the mixed variable problem.
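The branch-and-bound idea, treated in detail in Chapter 4, can be previewed on a toy one-variable problem. The sketch below is purely illustrative (the objective and all numbers are hypothetical, not taken from this work): it solves a continuous relaxation at each node, branches on the fractional variable, and prunes nodes whose relaxed value cannot beat the incumbent.

```python
# Minimal branch-and-bound sketch for one integer design variable.
# Hypothetical toy problem: minimize f(x) = (x - 2.6)**2, x integer, 0 <= x <= 5.
# The continuous relaxation is solved in closed form here; a real solver would
# be invoked at each node instead.

def f(x):
    return (x - 2.6) ** 2

def relaxed_minimizer(lo, hi):
    """Minimizer of f over the continuous interval [lo, hi] (closed form for this f)."""
    return min(max(2.6, lo), hi)

def branch_and_bound(lo=0, hi=5):
    best_x, best_val = None, float("inf")
    stack = [(lo, hi)]                       # live nodes, each a pair of variable bounds
    while stack:
        a, b = stack.pop()
        if a > b:
            continue                         # empty node
        xr = relaxed_minimizer(a, b)         # solve the continuous relaxation
        if f(xr) >= best_val:
            continue                         # bound: this node cannot beat the incumbent
        if xr == int(xr):
            best_x, best_val = int(xr), f(xr)  # integral relaxation becomes the incumbent
        else:
            k = int(xr)                      # branch on the fractional value
            stack.append((a, k))             # left child:  x <= floor(xr)
            stack.append((k + 1, b))         # right child: x >= floor(xr) + 1
    return best_x, best_val

x_star, f_star = branch_and_bound()
```

Here the relaxation at the root gives x = 2.6; branching on x <= 2 and x >= 3 and bounding yields the integer optimum x = 3.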
As mentioned earlier, the motivation for this work is the need for a sound optimization algorithm that is both efficient and widely applicable. We have presented some test problems and their solutions to illustrate our approach. We do not claim that this is an ideal optimization technique; rather, we present a solution methodology for problems that have no existing standard solution procedure. The mixed variable, multiobjective, constrained nonlinear optimization problem is the most widely occurring class of problem in mechanical component design. This work primarily addresses the approach as applied to mechanical component design. However, there exists a large variety of problems with essentially the same numerical complexity, including system design, circuit design, control systems, transportation engineering, chemical engineering and process control.
1.3 Outline
With this introduction to the scope of this work, the remaining document is outlined as follows:
• Chapter 2 presents an introduction to and overview of single objective nonlinear constrained optimization problems. The types of problems and the theory associated with their solution are briefly presented.
• Chapter 3 deals with multiobjective nonlinear constrained optimization. Various existing methods to reformulate the problem as a single objective optimization problem are outlined. The concept of Pareto Optimality is also introduced.
• Chapter 4 presents our solution methodology, the Compromise Decision Support Problem combined with the Branch and Bound Algorithm. A qualitative discussion is presented first, followed by a mathematical formulation.
• With our methodology presented, we proceed to apply it to a ‘Spring Design Problem’ in Chapter 5, a well-discussed problem in the literature. This is a single objective optimization problem with discrete, integer and continuous design variables [3, 4]. Though relatively ‘small’, it effectively illustrates the application of our approach.
• The approach is applied to a multiobjective optimization problem in Chapter 6. The problem selected is the one presented in Thompson et al. [5]. This is a richer problem than the previous one, and validates our approach as applied to the design optimization of mechanical components.
• Finally, we present a discussion of our methodology and the results obtained in Chapter 7. Conclusions and the scope for future work are presented.
Chapter 2
Single Objective Optimization
2.1 Optimization
After our initial analysis, we are in a position to consider various alternatives that are
effective solutions to the design problem at hand. We therefore have some knowledge of both the
performance requirements (demand) and the extent to which the product as designed will satisfy
the requirements (capability). This set of alternatives defines what is called the feasible set of
solutions. The design problem then changes to one of choosing between these alternatives, based
on a set of parameters decided by the designer. The relative merits and shortcomings of these
alternatives can then be analyzed simultaneously and the most suitable alternative can be made
available for selection [6].
The method of ‘fine-tuning’ a feasible solution can be one of trial and error, based solely on the judgment and experience of the designer. The design parameters associated with the product can be varied sequentially and their effect on the performance parameters evaluated. However, this is not the best approach when the number of design alternatives is large, and it can be cumbersome when the feasible design space is large or the product has a large number of design parameters.
Hence it is essential that we clearly define a set of objectives that may be used to compare alternative solutions. It is necessary to obtain a set of values of the design parameters that yields a feasible solution and at the same time optimizes the given set of objectives. It thus becomes important that the optimization problem can be expressed effectively, through hard information, in terms of mathematical equations. In words, the optimization problem can be described as [7]:
Given: An alternative that is to be improved through modification; assumptions used to model the domain of interest; the system parameters; the goals/objectives for the design.
Find: The values of the independent system variables (also known as the design variables).
Satisfy: The system constraints that must be satisfied for the solution to be feasible; the system goals, which must achieve an optimal value.
Bounds: The lower and upper bounds on the system variables.
Minimize: The set of goals and their associated priority levels or relative weights, which is a measure of the system performance.
2.2 Types of Single Objective Optimization Problems

2.2.1 Linear Single Objective Problem
This is the simplest type of optimization problem. The system constraints are written in terms of the system variables; in engineering, system constraints are invariably inequalities. The system constraints and bounds must be satisfied for feasibility. A constraint invariably involves two or more system variables, whereas a bound contains only one system variable and is always parallel (geometrically) to the axis represented by that system variable. Rarely is a constraint specified in terms of a single system variable; in that case the constraint plays the same role as a bound in the design space [8].
The set of all combinations of the system variables that satisfy all constraints and bounds
simultaneously is called the set of feasible solutions and the space consisting of the feasible
solutions is called the feasible design space. A solution that results in the violation of any of
the constraints or bounds is called an infeasible solution. A constraint or bound that does not
border the feasible design space is called a redundant constraint or bound.
For a two-dimensional optimization problem with linear constraints and bounds, the feasible design space is a polygon. It can be shown that the optimal solution to such a problem lies at one of the corner points of this polygon. The most common solution procedure for these problems is the Simplex Method.
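The corner-point property can be checked by brute force on a small example. The sketch below enumerates the pairwise intersections of the boundary lines of a hypothetical two-variable LP (the coefficients are illustrative only, and this enumeration is not the Simplex Method itself, which visits vertices selectively):

```python
from itertools import combinations

# Corner-point property of linear programs, illustrated by brute force.
# Hypothetical LP: maximize 3x + 5y subject to
#   x <= 4,  2y <= 12,  3x + 2y <= 18,  x >= 0,  y >= 0.
# Each constraint/bound is stored as (a1, a2, b), meaning a1*x + a2*y <= b.
halfplanes = [(1, 0, 4), (0, 2, 12), (3, 2, 18), (-1, 0, 0), (0, -1, 0)]

def intersection(h1, h2):
    """Solve the 2x2 system where both boundary lines hold with equality."""
    (a, b, e), (c, d, g) = h1, h2
    det = a * d - b * c
    if abs(det) < 1e-12:
        return None                          # parallel boundary lines
    return ((e * d - b * g) / det, (a * g - e * c) / det)

def feasible(p):
    return all(a * p[0] + b * p[1] <= c + 1e-9 for a, b, c in halfplanes)

# Candidate vertices are pairwise intersections of boundary lines; keep the
# feasible ones, then evaluate the objective at each corner point.
vertices = [p for h1, h2 in combinations(halfplanes, 2)
            if (p := intersection(h1, h2)) is not None and feasible(p)]
best = max(vertices, key=lambda p: 3 * p[0] + 5 * p[1])
```

For this data the feasible polygon has five corners and the maximum, 36, is attained at the corner (2, 6), in line with the corner-point property.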
2.2.2 Linear Single Goal Problem
A linear single objective optimization problem can easily be reformulated as a goal achievement problem. A target value is first assigned to the objective, which can then be written as a goal. Depending on the objective, an appropriate deviation variable is included in the optimization function. A deviation variable measures the difference between a target value, or desired goal, and the achievable value of the objective function [8].

The objective is then to minimize this deviation variable without violating any constraints or bounds. The optimal solution is one that satisfies all the constraints and bounds and achieves the goal as closely as possible. Sometimes the exact goal cannot be achieved because the feasible design space is limited. In this case, the deviation variable is minimized, and the solution to this problem is the solution to the original problem.
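The deviation-variable idea can be sketched on hypothetical numbers. Here a target of 40 is assigned to an assumed objective f = 3x + 5y, and the underachievement and overachievement are computed for a handful of assumed candidate designs:

```python
# Deviation-variable sketch for a single-goal problem (all numbers hypothetical).
# The objective f = 3x + 5y is given the target value T = 40, and a handful of
# candidate designs (e.g. the corners of a small feasible region) are compared.
T = 40.0
designs = [(0, 0), (4, 0), (4, 3), (2, 6), (0, 6)]   # assumed candidate designs

def deviations(x, y):
    """Split the target miss into underachievement d_minus and overachievement d_plus."""
    f = 3 * x + 5 * y
    return max(0.0, T - f), max(0.0, f - T)

# Goal formulation: minimize the total deviation d_minus + d_plus.
best = min(designs, key=lambda p: sum(deviations(*p)))
d_minus, d_plus = deviations(*best)
```

The best achievable objective value here is 36, so the goal of 40 cannot be met exactly; the minimized underachievement d_minus = 4 at the design (2, 6) identifies the solution of the original problem, as described above.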
2.3 Mathematical Introduction to Nonlinear Programming
Whenever a mathematical model is available to simulate a real-life application, a straightforward idea is to apply mathematical optimization algorithms to minimize a so-called cost function subject to constraints. A typical example is the minimization of the weight of a mechanical structure under certain loads and constraints on admissible stresses, displacements, or dynamic responses. Highly complex industrial and academic design problems are solved today by means of nonlinear programming algorithms, with no chance of obtaining equally good results by traditional empirical approaches.
Nonlinear programming is a direct extension of linear programming in which the linear model functions are replaced by nonlinear ones. Numerical algorithms and computer programs are widely applicable and commercially available in the form of black-box software. However, to understand how optimization methods work, how the corresponding programs are organized, how the results are to be interpreted, and, last but not least, what the limitations of this powerful mathematical technology are, it is necessary to understand at least the basic terminology. Thus, we present a brief introduction to optimization theory; in particular we introduce optimality criteria for smooth problems [2].

In this review, we consider only smooth, i.e., differentiable, constrained nonlinear programming problems. A general nonlinear constrained optimization problem can be expressed as:
$$
\begin{aligned}
\min\ & f(x), \quad x \in \mathbb{R}^n \\
& g_i(x) \le 0, \quad i = 1, 2, \ldots, m \\
& h_j(x) = 0, \quad j = 1, 2, \ldots, n \\
& x_l \le x \le x_u
\end{aligned}
\qquad (2.1)
$$
Here, x is an n-dimensional parameter vector, also called the vector of design variables, and f(x) is the objective function, or cost function, to be minimized subject to the nonlinear inequality and equality constraints given by g_i(x) and h_j(x). It is assumed that these functions are continuously differentiable on R^n. The above formulation implies that we do not allow any discrete or integer variables; beyond this, we do not require any further mathematical structure of the model functions.
To simplify the subsequent notation, we assume that the upper and lower bounds x_u and x_l are not handled separately, i.e., they are treated as general inequality constraints.
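As a concrete illustration of a problem of the form (2.1), the sketch below solves a small inequality-constrained problem with an exterior penalty method and finite-difference gradients. This is only one of many possible solution schemes (and not the method used later in this work); the test problem and all parameter values are assumptions for illustration.

```python
# Exterior penalty sketch for a problem of the form (2.1) (illustrative only).
# Hypothetical problem:
#   minimize  f(x) = (x1 - 1)^2 + (x2 - 2)^2
#   subject to g(x) = x1 + x2 - 2 <= 0.
# The known minimizer is x* = (0.5, 1.5), the projection of (1, 2) onto the line.

def f(x):
    return (x[0] - 1) ** 2 + (x[1] - 2) ** 2

def g(x):
    return x[0] + x[1] - 2

def penalized(x, r):
    """Cost plus quadratic penalty on constraint violation."""
    return f(x) + r * max(0.0, g(x)) ** 2

def grad(func, x, h=1e-6):
    """Central finite-difference gradient."""
    out = []
    for i in range(len(x)):
        xp = list(x); xm = list(x)
        xp[i] += h; xm[i] -= h
        out.append((func(xp) - func(xm)) / (2 * h))
    return out

def solve(r_values=(1, 10, 100, 1000), steps=2000):
    x = [0.0, 0.0]
    for r in r_values:                       # tighten the penalty gradually
        lr = 1.0 / (2 + 4 * r)               # step size matched to problem stiffness
        for _ in range(steps):
            gr = grad(lambda z: penalized(z, r), x)
            x = [xi - lr * gi for xi, gi in zip(x, gr)]
    return x

x_star = solve()
```

As the penalty parameter r grows, the unconstrained minimizers approach the constrained solution from outside the feasible region, which is the defining behavior of an exterior penalty scheme.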
We now define notation for the first and second derivatives of a differentiable function [2].

1. The gradient of a real-valued function f(x) is

$$
\nabla f(x) \equiv \left[ \frac{\partial f(x)}{\partial x_1}\;\; \frac{\partial f(x)}{\partial x_2}\;\; \cdots\;\; \frac{\partial f(x)}{\partial x_n} \right]^T
\qquad (2.2)
$$
2. One further differentiation gives the Hessian matrix of f(x):

$$
\nabla^2 f(x) \equiv \left( \frac{\partial^2 f(x)}{\partial x_i\, \partial x_j} \right)_{i, j = 1, 2, \ldots, n}
\qquad (2.3)
$$
3. The Jacobian matrix of a vector-valued function F(x) = (f_1(x), . . . , f_l(x))^T is written in the form

$$
\nabla F(x) = \left( \nabla f_1(x), \ldots, \nabla f_l(x) \right)
\qquad (2.4)
$$
4. The fundamental tool for deriving optimality conditions and optimization algorithms is the so-called Lagrangian function

$$
L(x, u, v) \equiv f(x) + \sum_{i=1}^{m} u_i\, g_i(x) + \sum_{j=1}^{n} v_j\, h_j(x)
\qquad (2.5)
$$

defined for all x ∈ R^n, u = (u_1, . . . , u_m)^T ∈ R^m and v = (v_1, . . . , v_n)^T ∈ R^n. The purpose of L(x, u, v) is to link the objective function f(x) with the constraints g_i(x), i = 1, . . . , m, and h_j(x), j = 1, . . . , n. The variables u_i and v_j are called the Lagrange multipliers of the nonlinear programming problem.
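The gradient (2.2) and Hessian (2.3) defined above can be approximated by central finite differences, which is a convenient check on analytic derivatives. The test function below is hypothetical:

```python
# Finite-difference check of the gradient (2.2) and Hessian (2.3) for a small
# hypothetical test function: f(x) = x1^2 * x2 + 3 * x2.
# Analytic values:  grad f = [2*x1*x2, x1^2 + 3],
#                   Hessian = [[2*x2, 2*x1], [2*x1, 0]].

def f(x):
    return x[0] ** 2 * x[1] + 3 * x[1]

def gradient(func, x, h=1e-5):
    g = []
    for i in range(len(x)):
        xp = list(x); xm = list(x)
        xp[i] += h; xm[i] -= h
        g.append((func(xp) - func(xm)) / (2 * h))   # central difference, O(h^2)
    return g

def hessian(func, x, h=1e-4):
    n = len(x)
    H = [[0.0] * n for _ in range(n)]
    for i in range(n):
        for j in range(n):
            xpp = list(x); xpm = list(x); xmp = list(x); xmm = list(x)
            xpp[i] += h; xpp[j] += h
            xpm[i] += h; xpm[j] -= h
            xmp[i] -= h; xmp[j] += h
            xmm[i] -= h; xmm[j] -= h
            H[i][j] = (func(xpp) - func(xpm) - func(xmp) + func(xmm)) / (4 * h * h)
    return H

x0 = [2.0, 1.0]
grad_f = gradient(f, x0)    # expected near [4.0, 7.0]
hess_f = hessian(f, x0)     # expected near [[2, 4], [4, 0]]
```

The computed Hessian is symmetric up to truncation error, mirroring the symmetry of the analytic matrix (2.3).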
2.4 Convexity
The characterization of minimum and maximum points, whether global or local, is related to the concavity and convexity of functions. A univariate concave function has a negative second derivative everywhere, and any stationary point is a global maximum. A univariate convex function has a positive second derivative everywhere, and any stationary point is a global minimum [2].
In general we can only expect that an optimization algorithm computes a local minimum and not a global one, i.e., a point x* with f(x*) ≤ f(x) for all x ∈ P ∩ U(x*), where U(x*) is a suitable neighborhood of x*. However, each local minimum of a nonlinear programming problem is a global one if the problem is convex, for example, if f is convex, g_i is convex for i = 1,2,…,m and h_j is affine for j = 1,2,…,n. These conditions force the feasible region P to be a convex set.
A function f : R^n → R is called convex if f(λx + (1 - λ)y) ≤ λf(x) + (1 - λ)f(y) for all x, y ∈ R^n and λ ∈ (0, 1), and concave if ‘≤’ is replaced by ‘≥’ in the above inequality.
For a twice differentiable function f(x), convexity is equivalent to the property that ∇²f(x) is positive semidefinite, i.e., z^T ∇²f(x) z ≥ 0 for all z ∈ R^n. Convexity of an optimization problem is important mainly from the theoretical point of view, since many convergence, duality and other theorems can be proved only for this special case. In practical situations, however, we hardly have a chance to test whether a numerical problem is convex or not. A discussion of convexity is also presented in Chapter 3, where Pareto Optimal solutions are under consideration. For our purposes, we presume that the problems are convex and proceed to solve them.
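Although convexity can rarely be verified for a practical problem, it can be checked numerically for simple model functions. The sketch below tests the defining inequality at random point pairs for an assumed quadratic, and confirms that its (constant) Hessian is positive semidefinite:

```python
import random

# Numerical convexity check (illustrative): for f(x) = x1^2 + x1*x2 + x2^2 the
# defining inequality f(l*x + (1-l)*y) <= l*f(x) + (1-l)*f(y) should hold for
# every pair of points, and the constant Hessian [[2, 1], [1, 2]] should be
# positive semidefinite.

def f(x):
    return x[0] ** 2 + x[0] * x[1] + x[1] ** 2

def jensen_holds(x, y, lam):
    z = [lam * xi + (1 - lam) * yi for xi, yi in zip(x, y)]
    return f(z) <= lam * f(x) + (1 - lam) * f(y) + 1e-9   # small rounding slack

random.seed(0)
samples_ok = all(
    jensen_holds([random.uniform(-5, 5) for _ in range(2)],
                 [random.uniform(-5, 5) for _ in range(2)],
                 random.random())
    for _ in range(1000)
)

# For a symmetric 2x2 matrix, nonnegative trace and determinant imply that both
# eigenvalues are nonnegative, i.e. positive semidefiniteness.
H = [[2.0, 1.0], [1.0, 2.0]]
is_psd = (H[0][0] + H[1][1] >= 0) and (H[0][0] * H[1][1] - H[0][1] * H[1][0] >= 0)
```

Such sampling can refute convexity (one violated pair suffices) but can never prove it; the Hessian criterion is the decisive test for twice differentiable functions.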
2.5 Karush-Kuhn-Tucker Conditions

For developing and understanding an optimization method, the following theorems are essential. They characterize optimality and are therefore also important for testing a current iterate with respect to its convergence accuracy.
2.5.1 Theorem 1 (necessary second order optimality conditions)

Let f, g_i and h_j be twice continuously differentiable for i = 1,2,…,m and j = 1,2,…,n, and let x* be a local minimizer of the nonlinear optimization problem. Then there exist multiplier vectors u* ∈ R^m and v* ∈ R^n such that

$$
\begin{aligned}
& u_i^* \ge 0, && i = 1, 2, \ldots, m \\
& g_i(x^*) \le 0, && i = 1, 2, \ldots, m \\
& h_j(x^*) = 0, && j = 1, 2, \ldots, n \\
& \nabla_x L(x^*, u^*, v^*) = 0 \\
& u_i^*\, g_i(x^*) = 0, && i = 1, 2, \ldots, m
\end{aligned}
\qquad \text{(First Order Conditions)} \quad (2.6)
$$

and

$$
s^T \nabla_x^2 L(x^*, u^*, v^*)\, s \ge 0
\qquad \text{(Second Order Conditions)} \quad (2.7)
$$

for all s ∈ R^n with ∇g_i(x*)^T s = 0 for every active constraint g_i and ∇h_j(x*)^T s = 0, j = 1,2,…,n.
The first order conditions are called the Karush-Kuhn-Tucker conditions. They state that at a local solution the gradient of the objective function can be expressed as a linear combination of the gradients of the active constraints. The second order condition implies that the Hessian of the Lagrangian function is positive semidefinite on the tangent space defined by the active constraints.
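The first order conditions can be verified numerically at a known solution. The example below is hypothetical: for min f = x1² + x2² subject to g(x) = 1 - x1 - x2 ≤ 0, the minimizer is x* = (0.5, 0.5) with multiplier u* = 1.

```python
# Numerical check of the first-order KKT conditions at a known solution
# (hypothetical example):  minimize f(x) = x1^2 + x2^2
#                          subject to g(x) = 1 - x1 - x2 <= 0.
# The minimizer is x* = (0.5, 0.5) with multiplier u* = 1.

def f(x):
    return x[0] ** 2 + x[1] ** 2

def g(x):
    return 1 - x[0] - x[1]

def grad(func, x, h=1e-6):
    """Central finite-difference gradient."""
    out = []
    for i in range(len(x)):
        xp = list(x); xm = list(x)
        xp[i] += h; xm[i] -= h
        out.append((func(xp) - func(xm)) / (2 * h))
    return out

x_star, u_star = [0.5, 0.5], 1.0

gf, gg = grad(f, x_star), grad(g, x_star)
stationarity = [gfi + u_star * ggi for gfi, ggi in zip(gf, gg)]  # grad_x L = 0
feasible = g(x_star) <= 1e-9                                     # primal feasibility
dual_feasible = u_star >= 0                                      # dual feasibility
complementary = abs(u_star * g(x_star)) <= 1e-9                  # u* g(x*) = 0
```

The stationarity residual vanishes because the gradient of the objective, (1, 1), is exactly u* times the negative constraint gradient, as the linear-combination interpretation above predicts.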
2.5.2 Theorem 2 (sufficient second order optimality conditions)

Let f, g_i and h_j be twice continuously differentiable for i = 1,2,…,m and j = 1,2,…,n, and let x* ∈ R^n, u* ∈ R^m and v* ∈ R^n be given such that the following conditions are satisfied:

$$
\begin{aligned}
& u_i^* \ge 0, && i = 1, 2, \ldots, m \\
& g_i(x^*) \le 0, && i = 1, 2, \ldots, m \\
& h_j(x^*) = 0, && j = 1, 2, \ldots, n \\
& \nabla_x L(x^*, u^*, v^*) = 0 \\
& u_i^*\, g_i(x^*) = 0, && i = 1, 2, \ldots, m
\end{aligned}
\qquad \text{(First Order Conditions)} \quad (2.8)
$$

and

$$
s^T \nabla_x^2 L(x^*, u^*, v^*)\, s > 0
\qquad \text{(Second Order Conditions)} \quad (2.9)
$$

for all s ∈ R^n, s ≠ 0, with ∇g_i(x*)^T s = 0 for every active constraint g_i with u_i* > 0, and ∇h_j(x*)^T s = 0 for j = 1,2,…,n.

Then x* is an isolated local minimum of f on P.
This introduction to nonlinear optimization does not cover the complete scope of existing optimization techniques, but it serves as a first step toward solving optimization problems. Many optimization software packages available today do not demand a rigorous mathematical formulation of the problem from the user. We have used the MATLAB® Optimization Toolbox to solve the optimization problems in this work.
Chapter 3
Multiple Objective Optimization
3.1 Introduction
Optimal design engineering has been an important research field. Most engineering design problems are characterized by multiple conflicting objectives. We have identified that the designer's job is not only to find a solution to the engineering problem, but also to decide which solution to adopt when multiple solutions are available. The multiobjective problem presents an inherent difficulty in the solution process: unlike a single objective optimization problem, which has a single ‘optimal solution’, the multiobjective problem generally has no single optimal solution. Two approaches broadly illustrate how a designer models a multiobjective optimization problem [9].
3.1.1 A Priori Articulation Of Preference
This method is preferred when adequate knowledge about the system parameters is
available a priori. This information is embedded in a suitable optimization scheme and an
optimal solution is obtained. The approach can be divided into three stages.
a. Conception: In this initial analysis stage, various design concepts are created. Possible solutions to the problem are proposed, and a broad search space in which the solution can be expected to lie is identified.
b. Formulation: At this stage, a performance parameter is formulated. It is important to have adequate information about the system parameters at this stage; otherwise the objective of the optimization process cannot be formulated.
c. Analysis and Optimization: After the problem has been formulated, any suitable optimization scheme can be applied and the solution obtained.
Thus, in this approach, the problem is completely formulated and defined before the designer has any idea of the optimal solution. However, all three stages require the designer to make decisions. If the optimal solution obtained is not suitable, the designer can go back and make suitable changes at any stage.
3.1.2 A Posteriori Articulation Of Preference
This approach to decision making in multiobjective design is applicable when all the
system parameters are not known to the designer. This approach can also be categorized into
three stages.
a. Conception: This stage is very similar to the Conception stage in the earlier approach. As before, design concepts are generated and a search space is defined. However, design objectives are also identified at this stage. Different design concepts warrant different design objectives and vice versa. The design objectives cannot yet be quantified or ranked in any order, since adequate system information may not be available.
b. Analysis and Optimization: With knowledge of the design objectives available, the designer proceeds to the analysis stage. The design concepts are analyzed using a suitable optimization process and a rich set of good designs is obtained. Thus, the designer obtains a large number of solutions to the problem instead of only one solution as in the earlier case.
c. Selection: With a large selection set available to consider, the final task is simply to select the most suitable design. The designs may be evaluated in themselves, or the various objective functions associated with the designs can be evaluated.
If one particular design (Design A) is found to be ‘superior’ to another design (Design B)
in all respects, then the designer can discard Design B in favor of Design A. In this case, Design
A is said to dominate Design B. However, there may be designs which are better than other
designs in some respects but not all. In this case, the designs are called nondominated designs.
‘A set of nondominated designs from the universe of feasible designs is called the Pareto
Optimal Set’. This is a rather crude understanding of Pareto Optimality. The concept of Pareto
Optimal solutions is discussed extensively later in the text [10].
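The dominance test just described is easy to state in code. The following is a minimal sketch (assuming every objective is to be minimized; the function names and the sample objective vectors are ours, not from the thesis):

```python
def dominates(a, b):
    """True if design a dominates design b: a is no worse in every
    objective and strictly better in at least one (all minimized)."""
    return all(x <= y for x, y in zip(a, b)) and \
           any(x < y for x, y in zip(a, b))

def nondominated(designs):
    """Filter a list of objective vectors down to the nondominated
    (Pareto optimal) subset."""
    return [d for i, d in enumerate(designs)
            if not any(dominates(o, d)
                       for j, o in enumerate(designs) if j != i)]

designs = [(1, 2), (2, 1), (2, 2), (3, 3)]
print(nondominated(designs))  # (2, 2) and (3, 3) are dominated
```

Here (1, 2) and (2, 1) both survive: each beats the other on one objective, so neither dominates.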
The ‘A Priori Articulation Of Preference’ method is preferred when all the necessary
system information is available beforehand. This results in a well-defined problem formulation
and a single optimal solution to the problem. However, this is not always the case in practical
engineering problems. It is easier for the designer to select a suitable design when presented
with a set of nondominated solutions obtained by the ‘A Posteriori Articulation Of Preference’
method, as this helps the designer understand the scope of the problem better. However, the
Pareto Optimal set of solutions can be very large for many real-life engineering problems, and
the task of selection may become overwhelming.
3.2 Nonlinear Multi-Goal Optimization Techniques [11]
In a multi-goal problem, we wish to find a set of values for the design variables that
optimizes a set of objective functions. The set of variables that produces the optimal outcome is
designated as the optimal set and denoted by x*. The optimal set is referred to as the Pareto
optimal set and it yields a set of possible answers. A set of points is said to be Pareto optimal if,
in moving from one point (A) to another (B) in the set, any improvement in one of the objective
functions from its current value would cause at least one of the other objective functions to
deteriorate from its current value. The Pareto optimal set yields an infinite set of solutions, from
which the engineer can choose the desired solution. In most cases, the Pareto optimal set is on
the boundary of the feasible region [12, 13].
The majority of engineering design problems are multiobjective, in that there are several
conflicting design aims which need to be achieved simultaneously. If these design aims are
expressed quantitatively as a set of n design objective functions

f_i(x),  i = 1, ..., n    (3.1)

where x denotes the design parameters chosen by the designer, the design problem can be
formulated as a multiobjective optimization problem :

min_{x∈X} f_i(x),  for i = 1, ..., n    (3.2)
where X denotes the set of possible design parameters x. In most cases, the objective functions
are in conflict, so the reduction of one objective function leads to an increase in another.
Consequently, the result of the multiobjective optimization is known as a Pareto-optimal
solution. A Pareto-optimal solution has the property that it is not possible to reduce any of the
objective functions without increasing at least one of the other objective functions. The problem
with multiobjective optimization is that there is generally a very large set of Pareto-optimal
solutions. Consequently, there is difficulty in representing the set of Pareto-optimal solutions
and in choosing the solution that is the best design.
Some of the other techniques used to solve these types of problems are discussed below
[14]. All of them involve converting the goal attainment problem to a single-objective problem
through a suitable transformation of the objective function. However, all these methods differ
from the Compromise Decision Support Problem formulation in that the objective function is
still a nonlinear function of the design variables.
3.3 Method of Weighting Objectives [15]
This method multiplies each objective function by a fraction of one, the “weighting
coefficient”, represented by w_i. The weighted functions are then added together to obtain a
single cost function, which can easily be solved using any single-objective optimization method.
Mathematically, the new function is written as

f(x) = ∑_{i=1}^{k} w_i f_i(x),  where 0 ≤ w_i ≤ 1 and ∑_{i=1}^{k} w_i = 1    (3.3)
If the problem is convex, then a complete set of noninferior or Pareto solutions can be
found. However, if the problem is not convex, then there is no guarantee that this method will
yield the entire Pareto set.
In this method, the weighting coefficients are assumed beforehand. The coefficients are
then varied to yield a set of feasible optima, the Pareto Optimal set. The designer is expected to
pick the values of the variables from this set of solutions. However, as mentioned earlier, the
weight for each of the objectives depends on the judgment of the designer, which can only be
obtained from experience.
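The weight-sweeping procedure just described can be illustrated with two deliberately conflicting one-variable objectives (a toy sketch; the quadratics, the weight values, and the grid search are our assumptions, not from the thesis):

```python
# Weighted-sum scalarization of Eq. (3.3) for two conflicting quadratics,
# minimized by a crude grid search over one design variable.
def weighted_sum(x, weights, objectives):
    return sum(w * f(x) for w, f in zip(weights, objectives))

f1 = lambda x: x ** 2          # prefers x = 0
f2 = lambda x: (x - 2) ** 2    # prefers x = 2

candidates = [i / 100 for i in range(0, 201)]
for w1 in (0.25, 0.5, 0.75):
    w2 = 1.0 - w1              # the weights sum to one, as in Eq. (3.3)
    best = min(candidates,
               key=lambda x: weighted_sum(x, (w1, w2), (f1, f2)))
    print(w1, best)            # each weight pair yields one Pareto point
```

Sweeping w_1 from 0 to 1 traces out different compromise points between the two individual optima, which is exactly the systematic variation of weights discussed above.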
3.4 Geometrical Interpretation of the Weighting Objectives Method [16]
Consider the convex space of objective functions for a two-objective optimization
problem. The problem is as follows :

min f_1,  min f_2    (3.4)

Upon transformation to a weighting objective problem, the above problem becomes

min f = w_1 f_1 + w_2 f_2    (3.5)

This new function f is the equation of line L_1 (Fig. 3.1). Line L_1, with a slope of −w_1/w_2,
is drawn in the space of objective functions between the origin and the feasible space of
objective functions. The optimum point associated with these two weighted objectives is found
by moving the line L_1 away from the origin, toward the feasible region, until it is just tangent
to the boundary of the feasible region. This tangent point is the optimum point, x*, for
prescribed w_1 and w_2.
Fig 3.1 Geometrical Interpretation of the Weighting Objectives Method
3.5 Weighting Objectives Method for a Non-Convex Problem
This method is guaranteed to find the entire set of noninferior solutions when the
problem is convex. For nonconvex functions, it may not be possible to locate the entire Pareto
set. Consider the space of objective functions F in Fig. 3.2. Lines L_1, L_2, and L_3 all have the
same slope; using the method of weighting functions, only point A will be identified as a
minimum. In reality, points A, B, and C are all noninferior solutions of the Pareto set, shown as
a bold line (Fig. 3.2).
Fig 3.2 Weighting Objectives Method for a Non-Convex Problem
3.6 Hierarchical Optimization Method [1]
This method allows the designer to rank the objectives in descending order of
importance. Each objective function is then minimized individually, subject to a constraint that
does not allow the minimum of the new function to exceed a prescribed fraction of the
minimum of the previous function.
Once the k objectives have been ordered from f_1 (most important) to f_k (least important),
the solution procedure goes as follows :
Step 1 : Find the optimum point for f_1 (that is, find x*^(1)), subject to the original set of
constraints, assuming that all other objective functions do not exist.
Step 2 : Find the optimum point of the j-th objective function f_j subject to the additional
constraint :

f_{j−1}(x) ≤ (1 ± ε_{j−1}/100) f_{j−1}(x*^(j−1)),  for j = 2, 3, ..., k    (3.6)

where ε_j is the assumed coefficient of the function increment, expressed in percent. Step 2 is
repeated for j = 2, 3, ..., k.
Note: This method is known as the Lexicographic method when ε_j = 1.
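The two-step procedure above can be sketched by exhaustive search over a one-dimensional grid (a toy illustration; the objective functions, the 25% increment, and the grid are our assumptions, and the example deliberately keeps the first optimum positive so the percentage bound of Eq. (3.6) is meaningful):

```python
# Hierarchical optimization per Eq. (3.6), sketched by grid search
# over one design variable.
def hierarchical(objectives, candidates, eps_percent):
    feasible = list(candidates)
    x_best = None
    for j, f in enumerate(objectives):
        x_best = min(feasible, key=f)
        if j < len(objectives) - 1:
            # keep only designs within (1 + eps/100) of this optimum
            bound = (1 + eps_percent[j] / 100.0) * f(x_best)
            feasible = [x for x in feasible if f(x) <= bound]
    return x_best

f1 = lambda x: (x - 1) ** 2 + 1    # most important objective
f2 = lambda x: (x - 3) ** 2        # least important objective
xs = [i / 100 for i in range(0, 401)]
print(hierarchical([f1, f2], xs, [25.0]))  # pulled toward f2 within the slack
```

With a 25% allowance on f_1 (whose minimum value is 1 at x = 1), the feasible band is |x − 1| ≤ 0.5, and f_2 then pushes the solution to the edge of that band.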
3.7 Global Criterion Method [1]
Here the designer finds the vector of decision variables that minimizes a global
criterion. The global criterion is usually defined as a measure of how close the designer can get
to the ideal vector f^o. The quantity f^o is the ideal solution; it is sometimes replaced with a
so-called demand level f^d specified by the designer (if the ideal solution is unknown). The
scalar objective function for this method is usually written as :

f(x) = ∑_{i=1}^{k} [ (f_i^o − f_i(x)) / f_i^o ]^P    (3.7)

The value of P is set by the designer.
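Eq. (3.7) can be evaluated directly; the sketch below minimizes it by grid search (the objectives, their ideal values of 1, the choice P = 2, and the grid are illustrative assumptions, not from the thesis):

```python
# Global criterion of Eq. (3.7): sum of normalized shortfalls from the
# ideal vector, each raised to the power P, minimized by grid search.
def global_criterion(x, objectives, ideal, P):
    return sum(((fo - f(x)) / fo) ** P
               for f, fo in zip(objectives, ideal))

f1 = lambda x: x ** 2 + 1          # individual minimum 1 at x = 0
f2 = lambda x: (x - 2) ** 2 + 1    # individual minimum 1 at x = 2
ideal = [1.0, 1.0]

xs = [i / 100 for i in range(0, 201)]
best = min(xs, key=lambda x: global_criterion(x, [f1, f2], ideal, P=2))
print(best)  # a compromise between the two individual optima
```

With P = 2 and symmetric objectives, the minimizer lands midway between the two individual optima, illustrating the compromise character of the method.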
3.8 Goal Programming Method (Compromise Decision Support Problem)
This is perhaps the best-known method of solving multi-goal problems. The method
was originally developed by Charnes and Cooper and Ijiri in 1965. For this method, the
designer must construct a set of goals (which may or may not be realistic) that should be
attained (if possible) for the objective functions. The user then assigns weighting factors to rank
the goals in order of importance. Finally, a single objective function is written as the
minimization of the deviations from the above stated goals [17, 18].
A “goal constraint” is slightly different from a “real constraint” in goal programming
problems. A “goal constraint” is a constraint that the designer would like to satisfy, but a
slight deviation above or below it is acceptable. This method is discussed in detail later, in
Chapter 4.
3.9 Tradeoff Method [1]
This method is also known as the constraint method or the ε-constraint method. The
steps in the solution of a problem are as follows.
Step 1 : Convert

min_{x∈X} [ f_1(x), ..., f_k(x) ]    (3.8)

into a single-objective problem: find the minimum of the r-th objective function, i.e., find x*
such that

f_r(x*) = min_{x∈X} f_r(x)    (3.9)

subject to the original set of constraints and the additional set of constraints

f_i(x) ≤ ε_i  for i = 1, ..., k and i ≠ r    (3.10)

where ε_i is an assumed value that the designer would prefer not to exceed.
Step 2 : Step 1 is repeated for various values of ε_i until a set of acceptable solutions is
compiled. This method allows the designer to determine the complete Pareto set of optimal
points, but only if all possible values of ε_i are used.
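The two steps above can be sketched by brute-force grid search (the objectives, the grid, and the three ε values are illustrative assumptions, not from the thesis):

```python
# epsilon-constraint method, Eqs. (3.8)-(3.10): minimize f_r subject to
# f_i(x) <= eps_i for all the other objectives, here by grid search.
def eps_constraint(f_r, others, eps, candidates):
    feasible = [x for x in candidates
                if all(f(x) <= e for f, e in zip(others, eps))]
    return min(feasible, key=f_r) if feasible else None

f1 = lambda x: x ** 2
f2 = lambda x: (x - 2) ** 2
xs = [i / 100 for i in range(0, 201)]

# Step 2: sweep the bound on f2; each solve yields one Pareto point
front = [eps_constraint(f1, [f2], [e], xs) for e in (0.25, 1.0, 2.25)]
print(front)
```

Tightening the bound on f_2 drags the constrained minimizer of f_1 toward the optimum of f_2, which is how the sweep traces out the Pareto set.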
3.10 Method of Distance Functions [1]
Another form of the global criterion is the method of distance functions. Here, the
distance between the ideal solution and the present solution is minimized. A family of functions
is defined as

L_p(f) = [ ∑_{i=1}^{k} |f_i^o − f_i(x)|^p ]^{1/p},  1 ≤ p ≤ ∞    (3.11)

For example :

L_1(f) = ∑_{i=1}^{k} |f_i^o − f_i(x)|

L_2(f) = [ ∑_{i=1}^{k} (f_i^o − f_i(x))^2 ]^{1/2}

L_∞(f) = max_i |f_i^o − f_i(x)|    (3.12)

One can see that if L_2(f) is minimized, then so is the Euclidean distance between the final
solution and the ideal solution. It is usually recommended that the relative deviation be
examined, rather than the actual deviation from the ideal solution. In this case,

L_p(f) = [ ∑_{i=1}^{k} |(f_i^o − f_i(x)) / f_i^o|^p ]^{1/p},  1 ≤ p ≤ ∞    (3.13)
This method will yield one noninferior solution to the problem. If a set of weighting
values is used, then a set of noninferior solutions can be found.
There are two disadvantages to this method :
i. The ideal solution must be known; otherwise a demand level is assumed.
ii. If the wrong demand level is chosen, the result will be non-Pareto solutions.
The two demand levels shown would both lead to a non-Pareto solution using this
method. For this reason it is very important to exercise great care in picking the demand level.
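The relative form of Eq. (3.13) is a one-liner in practice (the ideal and current objective vectors below are illustrative values, not from the thesis):

```python
# Relative L_p distance of the current objective vector from the ideal
# vector, per Eq. (3.13).
def lp_distance(f, f_ideal, p):
    return sum(abs((fo - fi) / fo) ** p
               for fi, fo in zip(f, f_ideal)) ** (1.0 / p)

ideal = [10.0, 5.0]
current = [12.0, 6.0]               # each objective 20% worse than ideal
print(lp_distance(current, ideal, 1))   # ≈ 0.4
print(lp_distance(current, ideal, 2))   # ≈ 0.283
```

Because each term is normalized by f_i^o, objectives measured in different units contribute comparably, which is the point of preferring the relative deviation.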
3.11 Pareto Optimal Solutions
A multiobjective problem is one in which the designer’s goal is to simultaneously
optimize (maximize or minimize) two or more objectives. As in the case of a single-objective
problem, we find the optimal set of design variables that satisfies all the constraints and
optimizes each of the objectives, which are usually in conflict with each other. In this context,
the term optimize refers to a solution that gives values of each of the objectives as desired by
the designer. To express these conflicting objectives, Pareto presented the concept of
noninferior solutions in 1906. This idea of Pareto solutions still remains the most important
concept in multiobjective optimization.
Most real-world search and optimization problems involve multiple objectives (such as
minimizing fabrication cost while maximizing product reliability) and should ideally be
formulated and solved as multiobjective optimization problems. However, the task of
multiobjective optimization differs from that of single-objective optimization in that there is
usually no single solution that is optimum with respect to all objectives. The resulting problem
usually has a set of optimal solutions, known as Pareto-optimal solutions or noninferior
solutions. Since there exists more than one optimal solution, and since without further
information no one solution can be said to be better than any other Pareto-optimal solution, one
of the goals of multiobjective optimization is to find as many Pareto-optimal solutions as
possible [9, 12, 13].
The basic concept is that a design vector x* is considered to be Pareto Optimal if no
objective can be improved without worsening at least one of the other objectives. As such, there
is no single solution (utopia point) for the multiobjective optimization problem. Instead, we
obtain a set of noninferior solutions called the Pareto Optimal Solution Set. This set defines a
curve in the objective space for two objectives (a surface for three objectives, and a
hypersurface for more than three objectives), called the Pareto Optimal Curve [2].
A set of points is said to be Pareto optimal if, in moving from one point (point A) to
another (point B) in the set (Fig. 3.3), any improvement in one of the objective functions from
its current value would cause at least one of the other objective functions to deteriorate from its
current value. Note that, based on this definition, point C is not Pareto Optimal. The Pareto
optimal set yields an infinite set of solutions, from which the engineer can choose the desired
solution.
Fig. 3.3 Graphical Definition of Pareto Optimal
3.12 Pareto Optimality for the Weighted Objective Method [16]
We have used the Weighted Objective Method to solve the multiobjective problems
considered in this work. Hence we present a detailed discussion of Pareto Optimality using this
method. The performance of the Weighted Objective Method for multiobjective optimization
depends on the feasible solution set for the problem. The solution space can be classified as
being convex or nonconvex. The convexity of the solution space is an important parameter
when a selection is to be made from an available set of Pareto Optimal solutions. Consider the
two figures below.
Fig. 3.4 (a) Convex Feasible Set (Top), (b) Non-Convex Feasible Set (Bottom)
In Fig. 3.4 (a) the black circles represent the Pareto Solutions obtained by the Weighted
Objective Method. The lines represent the weighted sums of all the objective functions. Different
slopes of the lines are due to different weights assigned to each of the objective functions. We
can observe that by systematic variation of the weights, we can trace the boundary of the
Feasible Region and obtain all possible Pareto Optimal Solutions. Hence an optimization of the
scalar-valued weighted sum of the objective functions is sufficient for Pareto Optimality of the
multiobjective problem, provided the weighted sum increases monotonically with respect to
each objective function. We also observe that the Pareto Optimal Solutions lie on the boundary
of the Feasible Region. Hence we conclude that solving the problem formulated using the
Weighted Objective Method guarantees Pareto Optimality if the Feasible Region is convex.
Now consider Fig. 3.4 (b), in which case, the Feasible Region is not a convex set. As in
the earlier case we can obtain Pareto Optimal solutions (black circles) by systematically varying
the weights associated with each of the objectives. However, this does not guarantee that all the
Pareto Optimal solutions can be determined by this method. As we obtain the solutions from left
to right in the figure, the slope of the weighted sum of the objective function becomes less and
less negative. However, we are bound to miss some of the solution points, such as the white
circle. Thus, the Weighted Objective method cannot guarantee all Pareto Optimal solutions when
the Feasible Region is nonconvex.
Chapter 4
The Decision Support Problem Technique
4.1 Introduction
With this understanding of the design process in mind, it becomes increasingly important
to develop a tool that assists the designer in the decision making process. The Decision Support
Problem (DSP) Technique is one such tool that provides support for human judgment in the
overall design process. The Decision Support Problem Technique consists of three principal
components: a design philosophy, an approach for identifying and formulating Decision Support
Problems (DSPs), and the software [1, 6, 7, 20].
Formulation and solution of Decision Support Problems provide a means for making the
following types of decisions [1] :
• Selection : The indication of a preference, based on multiple attributes, for one among
several feasible alternatives.
• Compromise : The improvement of a feasible alternative through modification.
• Coupled or hierarchical : Decisions that are linked together; selection/selection,
compromise/compromise and selection/compromise decisions may be coupled.
• Conditional : Decisions in which the risk and uncertainty of the outcome are taken into
account.
• Heuristic : Decisions made on the basis of a knowledge base of facts and rules of thumb;
Decision Support Problems that are solved using reasoning and logic only.
The primary goal of the Decision Support Problem is to assist the designer in arriving at
the best possible solution. This is achieved in three major stages [1] :
• In the first step we use the available soft information to identify the more promising “most
likely to succeed” concepts. This is accomplished by formulating and solving a preliminary
selection Decision Support Problem.
• Next, we establish the functional feasibility of these most-likely-to-succeed concepts and
develop them into candidate alternatives. The process of development includes engineering
analysis and design; it is aimed at increasing the amount of hard information that can be used
to characterize the suitability of the alternative for selection.
• In the third step we select one (or at most two) candidate alternatives for further development.
This is accomplished by formulating and solving a selection Decision Support Problem. The
selection Decision Support Problem has been designed to use both the hard and the soft
information that is available.
4.2 Mathematical Formulation of the Compromise Decision Support Problem [1, 7] :
4.2.1 System Variables :
These are the parameters that completely define the design problem. When represented as
a row vector, the system variables are called the design vector. System variables are independent
of the other parameters and can be modified as required by the designer to alter the state of the
system. System variables that define the physical attributes of an artifact are always nonzero and
positive.
X = (x_1, x_2, ..., x_n),  x_i ≥ 0    (4.1)
4.2.2 System Constraints :
These constraints are a function of one or many system variables. The constraint can be
linear or nonlinear; equality or inequality. Invariably these are rigid constraints and always have
to be satisfied for a feasible solution of the design problem. In the Compromise Decision Support
Problem formulation D(X) is the demand or expectation of the system and C(X) is the capacity or
capability of the system to meet this demand.
C_i(X) ≥ D_i(X) ;  i = 1, 2, 3, ..., m    (4.2)
4.2.3 System Bounds :
For all physical systems, the design variables are nonnegative. In addition to that
condition, some system variables have certain minimum and maximum values that cannot be
violated. This condition is expressed as an inequality, either ≥ or ≤ as the case may be :

x_i ≥ min(x_i)  and/or  x_i ≤ max(x_i)    (4.3)
4.2.4 System Goal :
The objective of the optimization problem is to achieve a certain goal G_i(X), which is a
function of the system variables. This is the value that the designer aspires to. However, the
actual value of the system goal obtained is A_i(X), the actual attainment. These two quantities
can be related in the following ways :
a. A_i(X) ≤ G_i(X) :
In this case, the actual value of the goal is less than or equal to the desired expectation,
e.g., the speed of an electric motor. Overachievement of this system goal may result in unsafe
operation and hence is not desirable.
b. A_i(X) ≥ G_i(X) :
In this case, the actual value of the goal is greater than or equal to the desired expectation,
e.g., the muzzle velocity of a bullet. If the bullet leaves the gun at a lower speed, it may not
accurately hit the target.
c. A_i(X) = G_i(X) :
In this case, the actual value of the goal is exactly equal to the desired expectation.
4.2.5 Deviation Variables :
Mathematically, the deviation variable is defined as :

d_i = G_i(X) − A_i(X)    (4.4)

Since the actual value of the goal A_i can be greater than or less than the desired value G_i, the
deviation variable d_i can be either negative or positive. However, this does not comply with our
earlier statement that all variables should be nonnegative. Hence the deviation variable is
expressed as :

d_i = d_i^− − d_i^+    (4.5)

where

d_i^− · d_i^+ = 0    (4.6)

and

d_i^−, d_i^+ ≥ 0    (4.7)

With this transformation, the system goal can be expressed as :

A_i(X) + d_i^− − d_i^+ = G_i ;  i = 1, 2, ..., m    (4.8)

where

d_i^−, d_i^+ ≥ 0  and  d_i^− · d_i^+ = 0    (4.9)

This condition forces all the deviation variables to be nonnegative. Also, since their product is
zero, at least one of the two deviation variables is always zero. Which of d_i^− and d_i^+ goes
to zero is decided by the following rules :
1. If A_i < G_i (i.e., underachievement of the goal), then d_i^− > 0, d_i^+ = 0.
2. If A_i > G_i (i.e., overachievement of the goal), then d_i^+ > 0, d_i^− = 0.
3. If A_i = G_i (i.e., exact achievement of the goal), then d_i^− = 0, d_i^+ = 0.
The deviation variables can be thought of as the degree of latitude that the designer has in
achieving the system goal G_i. As the attained goal A_i approaches the desired goal G_i, the
deviation variables tend to zero.
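The split of Eq. (4.5) and the three rules above can be computed directly (a minimal sketch; the sample attainment and goal values are illustrative):

```python
# Split the deviation d_i = G_i - A_i into the nonnegative pair
# (d_i-, d_i+) of Eqs. (4.5)-(4.7); at most one of the two is nonzero.
def deviations(A, G):
    d = G - A
    d_minus = max(d, 0.0)    # rule 1: underachievement, A < G
    d_plus = max(-d, 0.0)    # rule 2: overachievement,  A > G
    return d_minus, d_plus

print(deviations(8.0, 10.0))   # underachievement
print(deviations(12.0, 10.0))  # overachievement
print(deviations(10.0, 10.0))  # exact achievement
```

Because one of the pair is always clamped to zero, the product condition d_i^− · d_i^+ = 0 holds automatically.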
Fig. 4.1 Graphical Representation of the Compromise DSP [1]
Fig. 4.1 is a graphical representation of the Compromise Decision Support Problem for a
two-variable problem. The arrows indicate the direction of feasibility for each constraint and
bound; the shaded gray area is the feasible design space. There is a distinct difference between
system variables (x_i) and deviation variables (d_i). As seen in Fig. 4.1, the i-th system variable
is the distance from the origin in the i-th dimension. The deviation variable, however, originates
from the curve (or surface) of the system goal G. Hence it does not have a fixed origin, but is
merely the deviation from the system goal curve (or surface).
Traditional optimization problems have an objective function that is a function of the
design variables under consideration. This function is then either minimized or maximized to
achieve a desired goal. In the Compromise Decision Support Problem setup, the objective
function is a function of the deviation variables only. Hence, to achieve a desired goal G, the
objective is always to minimize the objective function: the smaller the objective function, the
closer the system is to the desired goal and the better the performance.
However, there are numerical issues with the minimization of the objective function that
need to be addressed. Goals are not equally important to a designer; therefore, to solve the
problem given a designer’s preferences, the goals are rank-ordered into priority levels. Within a
priority level it is imperative that the deviation variables are of the same order of magnitude;
otherwise, while optimizing the objective function, the deviation variable with the larger
numerical value will take precedence and dominate the optimization process. Hence
normalization of the system goal is necessary.
The procedure to overcome this issue is to normalize the goal attainment level A_i with
respect to the desired level G_i. The different cases to be considered are :
1. If the objective is to maximize the attainment level A_i so that it reaches the desired goal
G_i from below, consider the ratio A_i/G_i. Since A_i is always less than G_i, this ratio is
always less than 1 :

A_i(X) ≤ G_i  →  A_i(X)/G_i ≤ 1    (4.10)

Hence, the transformed system goal can be expressed as an equality by introducing the
deviation variables at this stage :

A_i(X)/G_i + d_i^− − d_i^+ = 1    (4.11)

With this transformation, the deviation variables vary between 0 and 1.
2. If the objective is to minimize the attainment level A_i so that it reaches the desired goal
G_i from above, consider the ratio G_i/A_i. Since A_i is always greater than G_i, this ratio is
always less than 1 :

A_i(X) ≥ G_i  →  G_i/A_i(X) ≤ 1    (4.12)

Hence, the transformed system goal can be expressed as an equality by introducing the
deviation variables at this stage :

G_i/A_i(X) − d_i^− + d_i^+ = 1    (4.13)

Again, the deviation variables vary between 0 and 1; in this case, however, the signs of the
deviation variables are exchanged to account for the inverse ratio.
3. If the objective is to achieve the system goal G_i exactly, two cases arise.
a. If the system goal is approached from below by A_i, then use

A_i(X)/G_i + d_i^− − d_i^+ = 1    (4.14)

and minimize the sum (d_i^− + d_i^+).
b. If the system goal is approached from above by A_i, then use

A_i(X)/G_i − d_i^− + d_i^+ = 1    (4.15)

and minimize the sum (d_i^− + d_i^+).
4.3 The Objective (Deviation) Function [1] :
In the Compromise Decision Support Problem formulation the aim is to minimize the
difference between that which is desired and that which can be achieved. This is done by
minimizing the deviation function, Z(d^−, d^+), which is always written in terms of the
deviation variables. It may not be feasible to achieve the desired goal in every case; the designer
then has to compromise on the achievement level of the desired goal. This compromise should
be as small as possible, and that becomes the objective of the Compromise Decision Support
Problem. Some linear or nonlinear combination of the deviation variables (d_i^−, d_i^+) can
be used to form the objective function, and the value of this function can then be used as an
indicator of the degree to which the desired goals are achieved.
The deviation variables themselves are deviations from each of the system objectives.
Normalization of the system objectives makes the deviation variables comparable. However all
the system objectives may not be equally important, and hence all the deviation variables are not
to be minimized to the same extent. Hence an appropriate function needs to be defined which
correctly represents the relative importance of all the deviation variables. Two different
approaches are presented here which define the objective function of the Compromise Decision
Support Problem [1].
4.3.1 Archimedean Approach (Weighted Objectives Method) :
In this case, the deviation variables are simply multiplied by weights, and the objective
function is a linear combination of these products. It can be represented mathematically as :

Z(d^−, d^+) = ∑_{i=1}^{m} w_i (d_i^− + d_i^+)    (4.16)

where each weight represents the relative importance of the system objective associated with
that deviation variable. The weights w_i satisfy the following conditions :

∑ w_i = 1,  w_i ≥ 0 for all i    (4.17)

We know that one of the two deviation variables associated with each system objective is
always zero. Hence the weight w_i effectively applies to only one of the deviation variables,
either d_i^− or d_i^+.
Though this seems a relatively simple way to formulate the objective function for the
Compromise Decision Support Problem, determining the actual weights for the deviation
variables is an important task, which can only be achieved through experience.
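Eq. (4.16), combined with the normalization of Eq. (4.11), can be sketched in a few lines (the goal values and weights below are illustrative assumptions, not from the thesis):

```python
# Archimedean (weighted) deviation function of Eq. (4.16), applied to
# goals normalized as in Eq. (4.11): ratio = A_i / G_i.
def archimedean_Z(attained, goals, weights):
    Z = 0.0
    for A, G, w in zip(attained, goals, weights):
        ratio = A / G                     # normalized attainment level
        d_minus = max(1.0 - ratio, 0.0)   # underachievement deviation
        d_plus = max(ratio - 1.0, 0.0)    # overachievement deviation
        Z += w * (d_minus + d_plus)
    return Z

# first goal met exactly, second goal 20% short, equal weights
print(archimedean_Z([100.0, 40.0], [100.0, 50.0], [0.5, 0.5]))  # ≈ 0.1
```

Because the deviations are normalized, a 20% shortfall contributes the same amount whether the goal is 50 or 50,000, which is the point of the normalization discussed above.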
4.3.2 Preemptive Approach (Lexicographic Method) :
In this approach, the weighted sum of the deviation variables is replaced by a rank
ordering of the deviation variables. The highest-ranked objective is minimized first, then the
next, and so on until the last. The preemptive approach can be represented mathematically as :

Z = [ f_1(d_i^−, d_i^+), f_2(d_i^−, d_i^+), ..., f_k(d_i^−, d_i^+) ]    (4.18)
The approach is to achieve the first objective f_1, then the second, and so on down the
ranking. This is a qualitative approach compared to the Archimedean approach; it is preferred
when little quantitative information about the relative importance of the objectives is available.
Although no weighting factors are involved, it serves as a good indicator of goal achievement.
4.4 Branch and Bound Algorithm
Branch and Bound is a general exhaustive search method. Suppose we wish to minimize
a function f(x), where x is restricted to some feasible region defined by explicit mathematical
constraints. To apply branch and bound, we must have a means of computing a lower bound on
an instance of the optimization problem and a means of dividing the feasible region of a
problem to create smaller subproblems. There must also be a way to compute an upper bound
for at least some instances; for practical purposes, it should be possible to compute upper
bounds for some set of nontrivial feasible regions [8, 21].
The method starts by considering the original problem with the complete feasible region,
which is called the root problem. Essentially, the Branch and Bound Algorithm generates a
tree-like structure to identify and solve a set of increasingly constrained subproblems derived
from the root problem. The lower-bounding and upper-bounding procedures are applied to the
root problem. If the bounds match, then an optimal solution has been found and the procedure
terminates. Otherwise, the feasible region is divided into two or more regions, each a strict
subregion of the original, which together cover the whole feasible region; ideally, these
subproblems partition the feasible region. These subproblems become children of the root
search node. The algorithm is applied recursively to the subproblems, generating a tree of
subproblems. If an optimal solution is found to a subproblem, it is a feasible solution to the full
problem, but not necessarily globally optimal. Since it is feasible, it can be used to prune the
rest of the tree: if the lower bound for a node exceeds the best known feasible solution, no
globally optimal solution can exist in the subspace of the feasible region represented by that
node. If we know the solution to the continuous-variable problem, it can serve as a lower bound
for the Branch and Bound Algorithm, because adding constraints (which happens when the
problem is branched into subproblems) can only make the feasible design region smaller, never
larger. Hence the solution found by the Branch and Bound Algorithm can never be better than
that of the continuous-variable problem. The search proceeds until all nodes have been solved
or pruned, or until some specified threshold is met between the best solution found and the
lower bounds on all unsolved subproblems [22].
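The bound-and-prune loop just described can be made concrete on a small discrete problem. The sketch below applies it to a 0/1 knapsack purely as a compact illustration (the thesis applies the same idea to discretized design variables; the instance data and the fractional-relaxation bound are our assumptions):

```python
# A minimal branch-and-bound sketch for 0/1 knapsack (maximization),
# pruning with a fractional-relaxation (optimistic) bound.
def knapsack_bb(values, weights, capacity):
    # sort by value density so the fractional relaxation bound is tight
    items = sorted(zip(values, weights),
                   key=lambda vw: vw[0] / vw[1], reverse=True)
    best = 0.0

    def bound(i, value, room):
        # optimistic upper bound: fill the remaining room fractionally
        for v, w in items[i:]:
            if w <= room:
                value, room = value + v, room - w
            else:
                return value + v * room / w
        return value

    stack = [(0, 0.0, capacity)]   # (next item, value so far, room left)
    while stack:
        i, value, room = stack.pop()
        best = max(best, value)    # every node is a feasible partial packing
        if i == len(items) or bound(i, value, room) <= best:
            continue               # leaf, or pruned: bound cannot beat incumbent
        v, w = items[i]
        if w <= room:
            stack.append((i + 1, value + v, room - w))   # branch: take item i
        stack.append((i + 1, value, room))               # branch: skip item i
    return best

print(knapsack_bb([60, 100, 120], [10, 20, 30], 50))
```

Each popped node is a subproblem; the relaxation bound plays the role of the optimistic bound described above, and a node is expanded only when its bound can still beat the incumbent.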
A large number of real-world planning problems, called combinatorial optimization
problems, share the following properties: they are optimization problems, they are easy to state,
and they have a finite but usually very large number of feasible solutions. These problems
additionally share the property that no polynomial-time method for their solution is known; all
of them are NP-hard.
Branch and Bound is by far the most widely used tool for solving large-scale NP-hard
combinatorial optimization problems. The Branch and Bound Algorithm is, however, an
algorithmic paradigm, which has to be tailored to each specific problem type, and numerous
choices for each of its components exist. Even so, principles for the design of efficient Branch
and Bound algorithms have emerged over the years.
Solving NP-hard discrete optimization problems to optimality is often an immense job requiring very efficient algorithms, and the Branch and Bound paradigm is one of the main tools in the construction of these. The Branch and Bound Algorithm searches the complete
space of solutions for a given problem for the best solution. However, explicit enumeration is
normally impossible due to the exponentially increasing number of potential solutions. The use
of bounds for the function to be optimized combined with the value of the current best solution
enables the algorithm to search parts of the solution space only implicitly. At any point during
the solution process, the status of the search of the solution space is described by a pool of yet unexplored subsets of it and the best solution found so far. Initially only one subset exists, namely the complete solution space, and the best solution found so far is ∞. The unexplored subspaces are represented as nodes in a dynamically generated search tree, which initially contains only the root, and each iteration of a classical Branch and Bound Algorithm processes one such node. The iteration has three main components: selection of the
node to process, bound calculation, and branching. The sequence of these may vary according to
the strategy chosen for selecting the next node to process. If the selection of the next sub problem is based on the bound values of the sub problems, then the first operation of an iteration after choosing the node is branching, i.e. subdivision of the solution space of the node into two or
more subspaces to be investigated in a subsequent iteration. For each of these, it is checked whether the subspace consists of a single solution, in which case it is compared to the current best solution and the better of the two is kept. Otherwise the bounding function for the subspace is calculated and compared to the current best solution. If it can be established that the subspace cannot contain the optimal solution, the whole subspace is discarded; otherwise it is stored in the pool of live nodes together with its bound. The search terminates when there are no unexplored parts
of the solution space left, and the optimal solution is then the one recorded as "current best". This
is known as the eager strategy for node evaluation, since bounds are calculated as soon as nodes
are available. The alternative is to start by calculating the bound of the selected node and then
branch on the node if necessary. The nodes created are then stored together with the bound of the
processed node. This strategy is called lazy and is often used when the next node to be processed
is chosen to be a live node of maximal depth in the search tree [21].
The problem is to minimize a function f(x) of the variables (x_1, …, x_n) over a region of feasible solutions S:

min f(x) subject to x ∈ S (4.19)
The function f(x) is called the objective function and may be of any type. The set of
feasible solutions is usually determined by general conditions on the variables, e.g. that these
must be nonnegative integers or binary, and special constraints determining the structure of the
feasible set.
The term sub problem is used to denote a problem derived from the originally given
problem through addition of new constraints. A sub problem hence corresponds to a subspace of
the original solution space.
The solution of a problem with a Branch and Bound Algorithm is traditionally described
as a search through a search tree, in which the root node corresponds to the original problem to
be solved, and each other node corresponds to a sub problem of the original problem. Given a
node Q of the tree, the children of Q are sub problems derived from Q through imposing
(usually) a single new constraint for each sub problem, and the descendants of Q are those sub problems that satisfy the same constraints as Q and additionally a number of others. The leaves correspond to feasible solutions, and for all NP-hard problems, instances exist with an
exponential number of leaves in the search tree. To each node in the tree a bounding function g
associates a real number called the bound for the node. For leaves the bound equals the value of
the corresponding solution, whereas for internal nodes the value is a lower bound for the value of
any solution in the subspace corresponding to the node. Usually the bounding function g is
required to satisfy the following three conditions:
1. g(P_i) ≤ f(P_i) for all nodes P_i in the tree
2. g(P_i) = f(P_i) for all leaves in the tree
3. g(P_i) ≥ g(P_j) if P_j is the parent of P_i
These state that g is a bounding function, which for any leaf agrees with the objective
function, and which provides closer and closer (or rather not worse) bounds when more
information in terms of extra constraints for a sub problem is added to the problem description
[23].
Fig 4.2 Progress of the Branch and Bound Algorithm
The search tree is developed dynamically during the search and consists initially of only
the root node. For many problems, a feasible solution to the problem is produced in advance
using a heuristic, and the value hereof is used as the current best solution (called the incumbent).
In each iteration of a Branch and Bound Algorithm, a node is selected for exploration from the
pool of live nodes corresponding to unexplored feasible sub problems using some selection
strategy. If the eager strategy is used, a branching is performed: Two or more children of the
node are constructed through the addition of constraints to the sub problem of the node. In this
way the subspace is subdivided into smaller subspaces. For each of these the bound for the node
is calculated, possibly with the result of finding the optimal solution to the sub problem. In case
the node corresponds to a feasible solution or the bound is the value of an optimal solution, the
value hereof is compared to the incumbent, and the best solution and its value are kept. If the bound is no better than the incumbent, the sub problem is discarded (or fathomed), since no feasible solution of the sub problem can be better than the incumbent. In case no feasible solutions to the sub problem exist, the sub problem is also fathomed. Otherwise the possibility of
a better solution in the sub problem cannot be ruled out, and the node (with the bound as part of
the information stored) is then joined to the pool of live sub problems. If the lazy selection
strategy is used, the order of bound calculation and branching is reversed, and the live nodes are
stored with the bound of their parent as part of the information.
The bounding function is the key component of any Branch and Bound Algorithm in the
sense that a low quality bounding function cannot be compensated for through good choices of
branching and selection strategies. Ideally the value of a bounding function for a given sub
problem should equal the value of the best feasible solution to the problem, but since obtaining
this value is usually in itself NPhard, the goal is to come as close as possible using only a
limited amount of computational effort. A bounding function is called strong, if it in general
gives values close to the optimal value for the sub problem bounded, and weak if the values
produced are far from the optimum. One often experiences a trade-off between quality and time
when dealing with bounding functions: The more time spent on calculating the bound, the better
the bound value usually is. It is normally considered beneficial to use as strong a bounding
function as possible in order to keep the size of the search tree as small as possible.
The strategy for selecting the next live sub problem to investigate usually reflects a trade-off between keeping the number of explored nodes in the search tree low, and staying within the
memory capacity of the computer used. If one always selects among the live sub problems one of
those with the lowest bound, called the best first search strategy, BeFS, no superfluous bound
calculations take place after the optimal solution has been found.
A sub problem P is called critical if the given bounding function when applied to P
results in a value strictly less than the optimal solution of the problem in question. Nodes in the
search tree corresponding to critical sub problems have to be partitioned by the Branch and
Bound Algorithm no matter when the optimal solution is identified; they can never be discarded
by means of the bounding function. Since the lower bound of any subspace containing an
optimal solution must be less than or equal to the optimum value, only nodes of the search tree
with lower bound less than or equal to this will be explored. After the optimal value has been
discovered only critical nodes will be processed in order to prove optimality. The preceding
argument for optimality of BeFS with respect to number of nodes processed is valid only if eager
node evaluation is used since the selection of nodes is otherwise based on the bound value of the
parent of each node. BeFS may, however, also be used in combination with lazy node
evaluation.
All branching rules in the context of the Branch and Bound Algorithm can be seen as
subdivision of a part of the search space through the addition of constraints, often in the form of
assigning values to variables. If the subspace in question is subdivided into two, the term
dichotomic branching is used, otherwise it is called polytomic branching. Convergence of the
Branch and Bound Algorithm is ensured if the size of each generated sub problem is smaller than
the original problem, and the number of feasible solutions to the original problem is finite.
Normally, the sub problems generated are disjoint; in this way the problem of the same feasible
solution appearing in different subspaces of the search tree is avoided.
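The eager, best-first (BeFS) scheme described above can be made concrete with a short sketch. The binary covering problem used below (minimize c·x subject to w·x ≥ W over x in {0,1}ⁿ) is an illustrative example chosen here, not a problem from this thesis; its fractional (continuous) relaxation plays the role of the bounding function g, and a heap holds the pool of live nodes.

```python
import heapq
from itertools import count

def branch_and_bound(c, w, W):
    """Minimize sum(c[i]*x[i]) s.t. sum(w[i]*x[i]) >= W, x[i] in {0, 1}.

    Eager best-first search: every child node is bounded as soon as it is
    created, and the live node with the smallest lower bound is explored next.
    """
    n = len(c)
    order = sorted(range(n), key=lambda i: c[i] / w[i])  # cheapest coverage first

    def bound(fixed):
        # Fractional relaxation of the free variables: a valid lower bound,
        # since relaxing integrality can only enlarge the feasible region
        # and hence can only lower the optimum.
        cost = sum(c[i] for i, v in fixed.items() if v == 1)
        need = W - sum(w[i] for i, v in fixed.items() if v == 1)
        for i in order:
            if need <= 0:
                break
            if i not in fixed:
                frac = min(1.0, need / w[i])
                cost += frac * c[i]
                need -= frac * w[i]
        return cost if need <= 0 else float("inf")

    best_val, best_x, tie = float("inf"), None, count()
    live = [(bound({}), next(tie), {})]            # root node: the whole space
    while live:
        lb, _, fixed = heapq.heappop(live)
        if lb >= best_val:                         # fathomed by the incumbent
            continue
        if len(fixed) == n:                        # leaf: bound equals its value
            best_val, best_x = lb, [fixed[i] for i in range(n)]
            continue
        i = min(j for j in range(n) if j not in fixed)
        for v in (0, 1):                           # dichotomic branching on x_i
            child = {**fixed, i: v}
            b = bound(child)                       # eager bound calculation
            if b < best_val:                       # keep only promising children
                heapq.heappush(live, (b, next(tie), child))
    return best_val, best_x
```

Note how the two ingredients discussed above appear: children are bounded as soon as they are created (eager evaluation), and a node whose bound is no better than the incumbent is fathomed without ever being branched.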
Chapter 5
The Spring Design Problem
5.1 Introduction
Design of helical compression springs for various engineering applications has been a
longstanding optimization problem. Springs find applications in almost all machines in some form or another. Springs can be designed for static loading, dynamic loading, or both. Usually there are geometrical constraints that limit the size of the spring. There are also standard wire diameters from which a designer has to make a choice [3, 4].
From the optimization point of view, three main design variables for the spring are
considered. As mentioned earlier, the wire diameter (d) is one important system variable. However, this is not a continuous design variable but a discrete one. Another system variable is the coil diameter (D). This is a continuous variable; however, it does depend upon the wire
diameter selected. The last system variable under consideration is the number of turns (N) of the
spring coil. Again, this is not a continuous variable but can take only integer values. These three
design variables completely define the geometry of the spring. When a suitable material for the
spring is chosen all the other spring characteristics such as spring rate, free length, solid length
etc. are completely defined [3, 4].
Figure 5.1 Helical Compression Spring
Helical compression springs can be found in numerous mechanical devices. They are
used to exert force, to provide flexibility, and to store or absorb energy. To design a helical
spring, design criteria such as fatigue, yielding, surging, buckling, etc., should be taken into
consideration. To obtain a solution that meets the various mechanical requirements, an optimization study should be performed. Thus the spring design problem is a challenging one, considering the three different types of design variables involved in the design process. Using a certain algorithm, a continuous solution to the spring design problem may be obtained. However, this might not be a viable solution because of the discrete nature of the wire diameter and the integer nature of the number of turns.
5.2 Compromise DSP Formulation for the Spring Design Problem
The theoretical Compromise Decision Support Problem Formulation presented in
Chapter 4 is applied to the problem of designing a helical spring. Our main objective is to minimize the weight (i.e. solid volume) of the spring, subject to certain geometrical constraints. Other restrictions on the design arise from material strength and spring loading parameters. We present a problem that is well discussed in the literature and is used to test the efficiency of various optimization algorithms. The spring is a helical compression spring with a circular cross-section of the wire, manufactured from ASTM A228 music wire spring steel. Hence the wire diameter of the spring can only take the discrete values given in Table 5.1 below [3, 4]. The ends of the spring are to be squared and ground.
0.0090 0.0095 0.0104 0.0118 0.0128 0.0132
0.0140 0.0150 0.0162 0.0173 0.0180 0.0200
0.0230 0.0250 0.0280 0.0320 0.0350 0.0410
0.0470 0.0540 0.0630 0.0720 0.0800 0.0920
0.1050 0.1200 0.1350 0.1480 0.1620 0.1770
0.1920 0.2070 0.2250 0.2440 0.2630 0.2830
0.3070 0.3310 0.3620 0.3940 0.4375 0.5000
Table 5.1 : Allowable Wire Diameters for ASTM A228 (in.)
5.3 Design Parameters for the Spring
As mentioned earlier, there are three design variables associated with the spring design problem:
1. The coil diameter (D), which is a continuous variable.
2. The wire diameter (d), which is a discrete variable and can only take the values listed in Table 5.1 above.
3. The number of turns (N), which can take integer values only.
The design limitations or geometric constraints can be listed as follows [3]:
1. The preload (F_p) is 300 lb.
2. The maximum working load (F_max) is 1000 lb.
3. The maximum allowable deflection under preload (δ_pm) is 6 in.
4. The maximum deflection from the preload position to maximum loading (δ_w) is 1.25 in.
5. The maximum free length of the spring (l_max) is 14 in.
6. The maximum outside diameter of the spring (D_max) is 3 in.
7. The minimum wire diameter (d_min) is 0.2 in.
The objective is to minimize the solid volume of the spring, which is given by

f(x) = (π²/4) D d² (N + 2) (5.1)

The material constants associated with the spring are:
1. Allowable shear stress (S) = 189,000 psi.
2. Shear modulus (G) = 1.15 × 10⁸ psi.
Other parameters associated with the spring are:

Spring Index: C = D/d (5.2)

Wahl's Correction Factor: C_f = (4C − 1)/(4C − 4) + 0.615/C (5.3)

Working Load Deflection: δ = F_max/K (in.) (5.4)

Free Length: l_f = δ + 1.05 (N + 2) d (5.5)

Preload Deflection: δ_p = F_p/K (5.6)
Since we have only one objective (i.e. minimization of volume), we define a single deviation variable d_1. Also, we assume the target value for the spring volume, V_TV, to be the optimal value of the spring volume for the continuous variable problem. From a simple nonlinear constrained optimization of this problem, we observe that this goal cannot be achieved (due to the discrete variables); hence the deviation variable shall be a measure of how far we are from this goal [1].
5.4 Constraint and Goal Formulation for the Compromise DSP
The constraints can be expressed in mathematical form as follows [5]:

g_1 = 8 C_f F_max D / (π S d³) ≤ 1 (Shear Stress) (5.7)

g_2 = l_f / l_max ≤ 1 (Free Length) (5.8)

g_3 = d_min / d ≤ 1 (Wire Diameter) (5.9)

g_4 = (D + d) / D_max ≤ 1 (Outside Diameter) (5.10)

g_5 = 3 / C ≤ 1 (Winding Limit) (5.11)

g_6 = δ_p / δ_pm ≤ 1 (Preload Deflection) (5.12)

g_7 = 1.05 (N + 2) d / [l_f − δ_p − (F_max − F_p)/K] ≤ 1 (Working Deflection Consistency) (5.13)

g_8 = (F_max − F_p) / (K δ_w) ≤ 1 (Deflection Requirement) (5.14)
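As a concrete check of Eqs. (5.2)-(5.14), the spring parameters and the normalized constraints can be evaluated in a short Python sketch. One assumption is made here: the spring rate K = G d⁴/(8 D³ N), the standard helical-spring formula, since K is used in this chapter but not restated.

```python
import math

# Problem data from Section 5.3 (English units).
S_ALLOW = 189_000.0                 # allowable shear stress (psi)
G_MOD = 1.15e8                      # shear modulus (psi)
F_P, F_MAX = 300.0, 1000.0          # preload and maximum working load (lb)
DEF_PRELOAD_MAX, DEF_WORK_MAX = 6.0, 1.25   # deflection limits (in.)
L_FREE_MAX, OD_MAX, D_WIRE_MIN = 14.0, 3.0, 0.2

def spring_constraints(D, d, N):
    """Return ([g1..g8], volume) per Eqs. (5.7)-(5.14) and Eq. (5.1).

    Each g_i <= 1 for a feasible design.  The spring rate formula below is
    the standard one for helical springs (assumed, not restated in the text).
    """
    C = D / d                                    # spring index, Eq. (5.2)
    C_f = (4*C - 1) / (4*C - 4) + 0.615 / C      # Wahl factor, Eq. (5.3)
    K = G_MOD * d**4 / (8 * D**3 * N)            # spring rate (lb/in.), assumed
    delta = F_MAX / K                            # working-load deflection, Eq. (5.4)
    l_f = delta + 1.05 * (N + 2) * d             # free length, Eq. (5.5)
    delta_p = F_P / K                            # preload deflection, Eq. (5.6)
    g = [
        8 * C_f * F_MAX * D / (math.pi * S_ALLOW * d**3),    # shear stress
        l_f / L_FREE_MAX,                                    # free length
        D_WIRE_MIN / d,                                      # wire diameter
        (D + d) / OD_MAX,                                    # outside diameter
        3.0 / C,                                             # winding limit
        delta_p / DEF_PRELOAD_MAX,                           # preload deflection
        1.05*(N + 2)*d / (l_f - delta_p - (F_MAX - F_P)/K),  # consistency
        (F_MAX - F_P) / (K * DEF_WORK_MAX),                  # deflection requirement
    ]
    volume = math.pi**2 * D * d**2 * (N + 2) / 4             # objective, Eq. (5.1)
    return g, volume
```

Evaluated at the Compromise DSP design of Table 5.2 (D = 1.0, d = 0.283, N = 3), all eight ratios come out at or below 1 (g_7 equals 1 by construction of l_f) and the volume is approximately 0.988 in.³, in agreement with the table.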
In the Compromise Decision Support Problem formulation, the goal can be expressed as :
F_1 : (π²/4) D d² (N + 2) / V_TV − 1 − d_1 = 0 (5.15)

where V_TV is the target value for the optimization goal and d_1 is the deviation variable.
The upper and lower bounds for the design variables can be defined as :
Coil Diameter : D_LB ≤ D ≤ D_UB ; with D_LB = 1.0, D_UB = 6.0 (5.16)

Wire Diameter : d_LB ≤ d ≤ d_UB ; with d_LB = 0.0, d_UB = 0.5 (5.17)

Number of Turns : N_LB ≤ N ≤ N_UB ; with N_LB = 3.0, N_UB = 30.0 (5.18)
5.5 Branch and Bound Algorithm
The Branch and Bound algorithm is especially useful in solving constrained nonlinear integer programming problems, in spite of being an exhaustive search algorithm. However, it can also be used to solve problems with discrete valued design variables with a suitable modification. This approach is briefly explained here, using the current problem under consideration as an example to illustrate the procedure [21].
We have a discrete valued design variable, the wire diameter (d). The set of values that this variable can take is listed in Table 5.1. We define the wire diameter to be a binary (0-1) combination of this set. The wire diameter can thus be expressed as the following matrix multiplication:

d = [0.0090 0.0095 0.0104 … 0.5000][x_status] (5.19)

where x_status is a column vector with as many rows as there are elements in the discrete valued set. Only one element of this column vector is 1, all the rest being zero. This x_status vector is appended to the existing design vector. All the elements of the x_status vector are integer valued variables.
Two additional linear equality constraints are added to any other existing constraints. The first is the definition of the discrete variable in terms of the elements of the x_status vector. From the above formulation of the wire diameter we have,

d − {0.0090*x_status(1) + 0.0095*x_status(2) + 0.0104*x_status(3) + … + 0.5000*x_status(n)} = 0 (5.20)
where n is the length of the discrete valued set.
The second equality constraint pertains solely to the x_status vector: the sum of all the elements of the x_status vector must be 1. Since all elements of the x_status vector are integer valued variables, this constraint ensures that only one element of the vector shall have a value of 1, whereas the rest shall all be zero at any given iteration stage. Hence the discrete valued variable shall always take a value from the allowable discrete value set.

x_status(1) + x_status(2) + x_status(3) + … + x_status(n) = 1 (5.21)
Thus the Branch and Bound algorithm, which can be effectively used to solve integer
optimization problems, can also be used to solve optimization problems with discrete valued
variables.
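The construction of the two linking constraints (5.20) and (5.21) can be sketched mechanically. In this sketch the augmented design vector is assumed, for illustration only, to be ordered [D, d, N, x_status(1), …, x_status(n)].

```python
# Allowable wire diameters for ASTM A228 music wire (Table 5.1, inches).
WIRE_DIAMETERS = [
    0.0090, 0.0095, 0.0104, 0.0118, 0.0128, 0.0132, 0.0140, 0.0150,
    0.0162, 0.0173, 0.0180, 0.0200, 0.0230, 0.0250, 0.0280, 0.0320,
    0.0350, 0.0410, 0.0470, 0.0540, 0.0630, 0.0720, 0.0800, 0.0920,
    0.1050, 0.1200, 0.1350, 0.1480, 0.1620, 0.1770, 0.1920, 0.2070,
    0.2250, 0.2440, 0.2630, 0.2830, 0.3070, 0.3310, 0.3620, 0.3940,
    0.4375, 0.5000,
]

def linking_constraints(values):
    """Rows (a, rhs) of the equality constraints (5.20) and (5.21).

    Assumed design-vector ordering: [D, d, N, x_status(1..n)].  Each row a
    must satisfy sum(a[j] * x[j]) == rhs for a consistent design.
    """
    n = len(values)
    # Eq. (5.20): d - sum(values[i] * x_status(i)) = 0
    row_link = ([0.0, 1.0, 0.0] + [-v for v in values], 0.0)
    # Eq. (5.21): sum(x_status(i)) = 1, so exactly one diameter is selected
    row_pick = ([0.0, 0.0, 0.0] + [1.0] * n, 1.0)
    return row_link, row_pick

def satisfies(x, rows, tol=1e-12):
    """True if design vector x satisfies every (a, rhs) equality row."""
    return all(abs(sum(aj * xj for aj, xj in zip(a, x)) - rhs) <= tol
               for a, rhs in rows)
```

For example, the design D = 1.0, d = 0.2830, N = 3 with x_status selecting the 0.2830 entry satisfies both rows, while a vector with two entries of x_status set to 1 violates Eq. (5.21).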
However, the procedure becomes extremely sensitive to the starting vector. It is
important to choose a suitable starting vector for the optimization procedure, since the
performance of the Branch and Bound algorithm depends on it. It can be assumed that the
solution to the mixed (continuous, integer, discrete) variable problem will be close to the solution
of the same problem with continuous variables. We initially solve the continuous variable
problem, and use the solution as a starting vector for the Branch and Bound algorithm.
5.6 Results and Discussion
We present results by Sandgren and by Kannan & Kramer, who have solved the same
problem using different optimization techniques [3, 4]. Also the solution obtained using
continuous variables is presented purely for comparison. We have used the Branch and Bound
algorithm in conjunction with the Compromise Decision Support Problem formulation. The
starting vectors used by Sandgren and by Kannan & Kramer differ from ours. As mentioned earlier, we use the solution of the continuous variable problem as the starting vector for the Branch and Bound algorithm.
Design Variables | Starting Values {1} | Continuous Solution {2} | Sandgren Solution {3} | Kannan & Kramer Solution {4} | Compromise DSP Solution
Coil Diameter D (in.) | 2.000 | 1.000 | 1.180 | 1.329 | 1.000
Wire Diameter d (in.) | 0.375 | 0.269 | 0.283 | 0.283 | 0.283
Number of Turns N | 10 | 3 | 10 | 7 | 3
Solid Volume V (in.³) | 8.327 | 0.891 | 2.799 | 2.365 | 0.988
Table 5.2 Comparison of Optimal Solutions
We find that the Compromise Decision Support Problem solution is superior to both the
Sandgren Solution and Kannan & Kramer Solution. It is also only slightly greater than the
Continuous Solution. The reduction in spring volume (objective function) can be expressed in
terms of percentage improvement in Table 5.3 below.
Improvement of Solution using Compromise DSP and Branch & Bound Algorithm over
Sandgren Solution | Kannan & Kramer Solution | Continuous Solution
64.7017 % | 58.2241 % | 10.8866 %
Table 5.3 Percentage Improvement
We also mentioned that the efficiency of the Branch and Bound Algorithm depends on
the initial guess vector. Hence we present results for different starting vectors. A measure of the efficiency of the algorithm is the number of cycles (i.e. iterations) needed to find an optimal solution.
Starting Vector | Initial Guess Values (D, d, N) | Number of Cycles | Execution Time for Branch & Bound Algorithm
{1} | (2.000, 0.375, 10) | 121 | 24.305 s
{2} | (1.000, 0.269, 3) | 19 | 4.236 s
{3} | (1.180, 0.283, 10) | 28 | 6.670 s
{4} | (1.329, 0.283, 7) | 9 | 2.093 s
Table 5.4 Comparison of Solution Time
As expected we find that the starting guess vector does affect the performance of the
Branch and Bound Algorithm. The starting vector {1} is farthest from the actual solution (in
terms of objective function value) and it requires the largest iteration time. Our procedure to use
the continuous solution as a starting vector for the Branch and Bound Algorithm performs better
than all other starting vectors except {4}, which is the solution obtained by Kannan & Kramer. This could be considered an exceptional case, since there are no set rules for choosing a specific starting vector. But we can see the obvious advantage of using {2} (the Continuous Solution) as a starting vector as opposed to {1}.
Our procedure of combining the Compromise Decision Support Problem formulation with the Branch & Bound Algorithm to obtain a solution to a mixed variable problem is shown to be superior to other methods previously published. However, the problem under consideration is relatively small, so no general statements regarding the superiority of this method over other methods can be made at this stage. Its application to mixed variable problems is nevertheless suitably illustrated. In the subsequent chapter we present its application to a fairly large problem, with a larger design vector as well as more constraint equations. Also, the spring design problem is a single goal optimization problem. The 'Gear Train Design' problem, presented in Chapter 6, is a multiple goal optimization problem, which exercises even more aspects of the Compromise Decision Support Problem approach.
Chapter 6
The Gear Train Design Problem
6.1 Introduction
Multistage power transmission units find a variety of applications in today’s world.
Almost all automobile and aerospace applications make use of a multistage gearbox as a
primary power transmission unit. In this regard, the design of a multistage gearbox is of
considerable importance. Usually, the necessary reduction ratio (speed ratio) and transmitted
torque are parameters that control the design of a gearbox.
A lot of research has gone into developing suitable algorithms for nonlinear constrained
optimization. Many researchers [24-28] have presented different approaches to optimizing a Gear Train Design Problem in some form or another. We do not present their approaches here for the sake of brevity; it suffices to say that Gear Train Design is a rich and diverse design problem. Recently, the focus of optimization algorithms has shifted from traditional methods to evolutionary and knowledge-based methods such as expert systems, fuzzy logic, and genetic algorithms.
The focus of gearbox design can be said to be the strength of the mating gears, especially the teeth. Tooth bending failure and surface fatigue failure of the gear teeth account for almost all cases of gearbox failure. It can be assumed that one of these two mechanisms is responsible
for failure of the gear. If failure by either of these mechanisms is equally likely, then such a gear
design can be considered to be an optimal one.
With this idea of an optimal gear design, the goal of the gear train design problem
becomes one of reduction of the weight of the gearbox. This need for weight reduction is even
more prominent in aerospace applications, which can have a large penalty associated with
increased weight. It is also an important aspect in the development of today's fuel-efficient cars,
which cannot do without a power transmission unit such as a gearbox. As mentioned earlier,
usually a multistage gearbox is employed for this purpose. It is intuitive to expect that the weight of the gearbox shall increase with the number of stages (two-stage or three-stage), because the number of mating gear pairs increases, as do supplemental elements such as connecting shafts, bearings, etc. However, for a desired constant reduction ratio, the speed
ratio in every stage becomes smaller as the number of stages increases. If it were desired to have a
very large reduction ratio in a single stage, the volume of the gear set would be very large. A
smaller reduction ratio in each stage would mean that the gears could be made smaller, and
hence result in a more compact gearbox [5].
With this background of the gear train design problem, we can conclude that it is a
multiple objective optimization problem. The primary objective would be to minimize the weight
(i.e. volume) of the complete gear train. It is also desired that tooth bending fatigue failure and
surface fatigue failure of the gear teeth should be equally likely for an optimal design. For a
multistage gearbox it is also desired that the strengths of the mating gears in each pair be comparable, because the weakest gear pair determines the failure criterion for the complete gearbox.
6.2 Tradeoff Analysis
It is to be expected that, in general, stronger gears will result in greater weight of the gears. The surface fatigue life factor (C_Li) of a gear represents the 99% reliability lifetime. A plot of the variation of the surface fatigue life factor with the lifetime, expressed in number of cycles,
is shown in Fig. 6.1. In traditional gear design approaches, a desired lifetime for the gears is chosen and this determines the surface fatigue life factor for the gear. However, in our formulation, the surface fatigue life factor is one of the design parameters. It is desired to maximize the lifetime (and thereby minimize the surface fatigue life factor) for a given loading and geometry of the gearbox.
Fig 6.1 Surface Fatigue Life Factor [29]
Hence a trade-off analysis is proposed between the surface fatigue life factor (C_Li) and the optimal volume of the gearbox. This results in what is commonly referred to as the Pareto Optimal Curve. We expect that the two objectives within the scope of this problem, i.e. minimization of volume and maximization of lifetime (or minimization of the surface fatigue life factor), are conflicting in nature. Hence there exist multiple "optimal designs" for a given loading and geometry. These "optimal" solutions form the trade-off curve, which is the Pareto Optimal Set.
6.3 Problem Formulation
The scope of our study is limited to three-stage gear train design, though the methodology presented here can be applied to any multistage gearbox. The basic problem formulation is obtained from Thompson et al. [5]. However, that paper presents the formulation as a general nonlinear optimization problem. We reformulate the problem to adhere to the Compromise Decision Support Problem method. Also, all variables were considered to be continuous in Thompson's work. In this section we outline the nature of the design variables and present a Compromise Decision Support Problem formulation that is suitable for the Branch and Bound Algorithm.
The nomenclature used in this section is the same as that found in Juvinall and Marshek
[29]. Though an effort is made to define all the parameters used in this formulation, the readers
may refer to the original text for more details. We present the design methodology for a three-stage gear train, with the understanding that this can be extended to any multistage gearbox design. Fig 6.2 illustrates the geometric and design parameters associated with a three-stage gear train.
Fig. 6.2 Three Stage Gear Train
6.4 Input Parameters :
In general, the gear train design depends on the torque transmission capacity of the gearbox and the overall speed reduction ratio desired. The input torque to the gear train (T_in) and the overall speed ratio (e) are the input parameters for our design problem. Keeping these two input parameters constant, we generate the Pareto Optimal Curves. Choosing different values for these parameters can generate different sets of Pareto Optimal Curves. The values we consider in this problem are tabulated below.
Input Torque T_in (lb-in.) | Overall Speed Ratio (e)
80 | 0.15
120 | 0.1
180 | 0.0667
270 | 0.05
Table 6.1 Input Parameters
6.5 Material Parameters :
The following are the material properties and constants for the gear design. These constants are selected considering the scope of the problem we are dealing with. It is necessary to choose appropriate values depending on the nature of the loading, the manufacturing processes used for generating the gear profile, and the material of the gear [5].
Description | Symbol | Value | Units
Bending Reliability Factor (99.00%) | k_r | 0.814 | none
Elastic Coefficient | C_p | 2300 | √psi
Mean Stress Factor | k_ms | 1.4 | none
Mounting Factor | K_m | 1.6 | none
Overload Factor | K_o | 1.0 | none
Pressure Angle | φ | 20 | degree
Shaft Length (2-stage) | L_s | 8.0 | in.
Shaft Length (3-stage) | L_s | 4.0 | in.
Surface Fatigue Strength | S_fe | 190,000 | psi
Surface Reliability Factor (99.00%) | C_r | 1.0 | none
Torsional Stress Limit (Shaft) | τ_max | 25,000 | psi
Velocity Factor | K_v | 2.0 | none
Table 6.2 Material Properties and Constants
6.6 Design Vector :
The overall design of an individual gear is determined by its diameter and diametral pitch (English units). In English units, the diametral pitch (P) is usually taken to be an integer value. This is analogous to the standard module in the SI system. The number of teeth of a gear is also an integer parameter; however, it is not a design variable in our formulation. With this understanding, the design vector is given below.
x = [ d_p1  d_g1  b_1  P_1  H_1  d_p2  d_g2  b_2  P_2  H_2  d_p3  b_3  P_3  H_3  d_s1  d_s2 ] (6.1)
We have a predetermined overall reduction ratio (e) for the gear train. Hence one of the six gear diameters can be expressed as a function of the overall reduction ratio and the remaining five gear diameters. We choose to treat the gear diameter of the third pair (d_g3) as a dependent variable instead of an independent design variable. It is given by

d_g3 = d_p1 d_p2 d_p3 / (e d_g1 d_g2) (6.2)
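Equation (6.2) is a one-line computation. The sketch below also checks that the resulting diameter makes the product of the three stage ratios equal the overall speed ratio e; the function name is introduced here for illustration.

```python
def dependent_gear_diameter(e, d_p1, d_p2, d_p3, d_g1, d_g2):
    """Eq. (6.2): the third gear diameter implied by the overall speed ratio.

    e is the overall speed ratio (output speed over input speed), i.e. the
    product of the three stage ratios d_p_i / d_g_i.
    """
    return d_p1 * d_p2 * d_p3 / (e * d_g1 * d_g2)
```

For example, with 2 in. pinions and 4 in. gears in the first two stages and e = 0.05, the third gear diameter comes out to 10 in., and (2/4)(2/4)(2/10) indeed equals 0.05.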
We observe that all the design variables are continuous variables except the diametral pitch (P), which is an integer variable.
Upper Bound :
The upper bound on the variables is given by the following vector.

x_ub = [10 10 20 50 500 10 10 20 50 500 10 20 50 500 10 10] (6.3)

Since we are using English units, a maximum gear (or pinion) diameter of 10 in. should be a practical upper bound. For the scope of our problem, with the maximum value of input torque at 270 lb-in. and the largest speed reduction (e = 0.05), this is an acceptable limit. The Brinell Hardness value of 500 BHN is a material upper bound, assuming the gear surfaces are hardened to the maximum extent possible. The diametral pitch also has an upper bound of 50 in.⁻¹, which is chosen by trial and error.
Lower Bound :
The lower bound on the variables is given by the following vector.

x_lb = [0 0 0 0 200 0 0 0 0 200 0 0 0 200 0 0] (6.4)

A lower limit of 200 BHN is chosen for the Brinell Hardness value, somewhat arbitrarily. All the remaining variables are simply required to be nonnegative and hence have a lower bound of zero.
6.7 Constraints :
In this design problem, as in the spring design problem, we have both geometric constraints and material constraints. They are categorized below and expressed in terms of the design variables. They are formulated according to the Compromise Decision Support Problem formulation, and are not in their original form.
6.7.1 Bending Fatigue Constraints :
This constraint accounts for the torque acting on the gear tooth. Since ours is a speed-reduction gearbox, the torque acting on the gear tooth increases with each successive stage. The constraints for each of the three stages are given by,
g_1(x) = \frac{k_b P_1}{b_1 J(P_1, d_{p1})\, d_{p1}\, S'_n\, C_s(H_1)} \le 1.0    (Stage 1)    (6.5)

g_2(x) = \frac{k_b P_2 d_{g1}}{b_2 J(P_2, d_{p2})\, d_{p2}\, d_{p1}\, S'_n\, C_s(H_2)} \le 1.0    (Stage 2)    (6.6)

g_3(x) = \frac{k_b P_3 d_{g1} d_{g2}}{b_3 J(P_3, d_{p3})\, d_{p3}\, d_{p2}\, d_{p1}\, S'_n\, C_s(H_3)} \le 1.0    (Stage 3)    (6.7)
where the lumped constant k_b is given by,

k_b = \frac{2 T_{in} K_v K_o K_m}{k_r k_{ms}}    (6.8)
The Lewis geometry factor J(P, d_p) is a function of the pinion tooth number (which is expressed as the product of the diametral pitch P and the pinion diameter d_p). The plot of J versus tooth number N is given below.
Fig. 6.3 Lewis Geometry Factor [5]
The curve is approximated by a 3rd-order polynomial function, given by,

J = (2.175602 \times 10^{-7})N^3 - (7.902098 \times 10^{-5})N^2 + (7.935120 \times 10^{-3})N + 0.223833,
N = P_i d_{pi};  i = 1, 2, 3    (6.9)
The fatigue strength surface factor C_s is a function of the Brinell hardness value, and is given by,

C_s = (8.333333 \times 10^{-4})H + 0.933333    (6.10)
6.7.2 Shaft Torsional Stress Constraints :
The shafts connecting the gears between stages undergo torsional stresses due to the tangential load on the gear teeth. This stress must not exceed the maximum shear stress for the shaft material. The constraints are formulated as,
g_4(x) = \frac{k_\tau d_{g1}}{d_{s1}^3 d_{p1}} \le 1.0    (Connecting Stages 1 & 2)    (6.11)

g_5(x) = \frac{k_\tau d_{g1} d_{g2}}{d_{s2}^3 d_{p1} d_{p2}} \le 1.0    (Connecting Stages 2 & 3)    (6.12)
where the lumped constant k_\tau is given by,

k_\tau = \frac{16 T_{in}}{\pi \tau_{max}}    (6.13)
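The torsional constraint is a direct restatement of the elementary shear-stress formula for a solid circular shaft, 16T/(pi d^3), with the stage torque written in terms of the gear diameters. An illustrative Python sketch (the allowable shear stress tau_max is a placeholder argument, not a value given in the thesis):

```python
import math

def shaft_torsion_constraint(t_in, tau_max, dg1, dp1, ds1):
    """Normalized torsional-stress constraint g4(x) of Eq. (6.11):
    the shaft between stages 1 and 2 carries torque T_in * dg1 / dp1,
    and its shear stress 16*T/(pi*ds^3) must not exceed tau_max."""
    k_tau = 16.0 * t_in / (math.pi * tau_max)  # lumped constant, Eq. (6.13)
    return k_tau * dg1 / (ds1**3 * dp1)
```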
6.7.3 Face Width Constraints :
This is a geometric constraint on the face width of the gears, ensuring that the gears are neither too thin nor too thick. For each of the three stages, the constraint equations are given below.
g_6(x) = \frac{9}{P_1 b_1} < 1.0    (Stage 1 – Minimum Face Width)    (6.14)

g_7(x) = \frac{9}{P_2 b_2} < 1.0    (Stage 2 – Minimum Face Width)    (6.15)

g_8(x) = \frac{9}{P_3 b_3} < 1.0    (Stage 3 – Minimum Face Width)    (6.16)
g_9(x) = \frac{P_1 b_1}{14} < 1.0    (Stage 1 – Maximum Face Width)    (6.17)

g_{10}(x) = \frac{P_2 b_2}{14} < 1.0    (Stage 2 – Maximum Face Width)    (6.18)

g_{11}(x) = \frac{P_3 b_3}{14} < 1.0    (Stage 3 – Maximum Face Width)    (6.19)
6.7.4 Interference Constraints :
With meshing gears it is essential that there is no 'interference' in operation. This requires that the point of tangency between the pinion and gear remain on the involute profile of the gear, outside the base circle. This condition is a function of the center distance between the gears and the pressure angle (φ), and the constraint is expressed as,
g_{12}(x) = \frac{4\,(P_1 d_{g1} + 1)}{P_1^2\, d_{p1} (d_{p1} + 2 d_{g1}) \sin^2\phi} \le 1.0    (Stage 1)    (6.15)

g_{13}(x) = \frac{4\,(P_2 d_{g2} + 1)}{P_2^2\, d_{p2} (d_{p2} + 2 d_{g2}) \sin^2\phi} \le 1.0    (Stage 2)    (6.16)

g_{14}(x) = \frac{4\,(P_3 d_{p1} d_{p2} d_{p3} + e\, d_{g1} d_{g2})}{P_3^2\, d_{p3}^2\, (e\, d_{g1} d_{g2} + 2 d_{p1} d_{p2}) \sin^2\phi} \le 1.0    (Stage 3)    (6.17)
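The normalized form of the interference constraint above follows from the standard geometric condition that the gear addendum circle must cross the line of action inside the pinion base-circle tangent point, with a standard addendum of 1/P. An illustrative Python check (the 20° pressure angle is an assumed default for illustration; the pressure angle value is not stated in this excerpt):

```python
import math

def interference_constraint(p, dp, dg, phi_deg=20.0):
    """Normalized interference constraint for one gear pair:
    g = 4*(P*dg + 1) / (P**2 * dp * (dp + 2*dg) * sin(phi)**2) <= 1.
    Derived from (dg/2 + 1/P)**2 <= (dg*cos(phi)/2)**2 + c**2 * sin(phi)**2,
    where c = (dp + dg)/2 is the center distance."""
    s2 = math.sin(math.radians(phi_deg)) ** 2
    return 4.0 * (p * dg + 1.0) / (p**2 * dp * (dp + 2.0 * dg) * s2)
```

The normalized form and the raw geometric condition agree on feasibility, which is a convenient consistency check.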
6.7.5 Minimum Pinion Tooth Number Constraints :
It is a general design rule that the minimum number of teeth on a pinion should be 16. This constraint is expressed for the pinion in each of the three stages.
g_{15}(x) = \frac{16}{P_1 d_{p1}} \le 1.0    (Pinion for Stage 1)    (6.18)

g_{16}(x) = \frac{16}{P_2 d_{p2}} \le 1.0    (Pinion for Stage 2)    (6.19)

g_{17}(x) = \frac{16}{P_3 d_{p3}} \le 1.0    (Pinion for Stage 3)    (6.20)
6.8 Design Goal (Objective Function)
We have identified two objectives for our gear design problem. The first objective is to minimize the overall volume (weight) of the gearbox. For a given set of input parameters (T_{in}, e) we can obtain an optimal design in terms of the design variables listed earlier, which satisfies all the constraints. This design will result in a specific value of the surface fatigue life factor (C_{Li}). Our second objective is to minimize this factor, since a smaller value results in a longer life (see Fig. 6.1) for the gearbox. However, these are conflicting objectives, and we cannot improve one without worsening the other.

Hence we propose to generate a set of 'optimal' solutions for a given set of input parameters. These optimal solutions define the Pareto Optimal Curve, and a designer can choose any one of the designs on these curves. To generate this curve, we use the Archimedean Approach (Weighted Sum) and define a scalar-valued goal, given by the expression,
f(x) = \alpha_V \frac{f_0(x)}{V_{TV}} + \alpha_C \frac{f_1(x) + f_2(x) + f_3(x)}{C_{TV}}    (6.21)
where f_0 is the total volume of the gearbox, and f_1, f_2, f_3 are the surface fatigue life factors for the gears in the three stages. \alpha_V and \alpha_C are weights for the two objectives, which can be varied to obtain the Pareto Optimal Curve. V_{TV} is a target value for the gearbox volume, as in the spring design problem, and C_{TV} is a target value for the surface fatigue life factor.
We first obtain a solution to the same optimization problem with continuous variables. This serves a twofold purpose. Firstly, the design vector obtained at the end of this optimization can be used as a starting vector for the Branch and Bound Algorithm. As illustrated in Chapter 4, this reduces the iterations required for the Branch and Bound Algorithm. This technique is especially useful in a problem such as this one, which has a large number of design variables and a number of nonlinear constraints. Our formulation of the problem for the Compromise Decision Support Problem solution may also result in some of the constraints becoming ill-conditioned. However, the normalization technique used, wherein all the constraints are expressed as g(x) ≤ 1, attempts to avoid this.

Secondly, the optimal value of the gearbox volume obtained at this stage can be used as a target value for the mixed variable problem. We can expect the mixed variable solution to result in a value greater than the continuous variable solution, with all the constraints remaining the same. Thus a deviation variable d_1 can be defined from this target value (V_{TV}), and our objective is to minimize this deviation. The same argument applies to minimizing the surface fatigue life factor: a second deviation variable d_2 can be defined to quantify the deviation from the value (C_{TV}) obtained from the continuous variable solution.
The volume of the gearbox can be expressed as,

f_0(x) = \frac{\pi}{4} \left[ (d_{p1}^2 + d_{g1}^2) b_1 + (d_{p2}^2 + d_{g2}^2) b_2 + d_{p3}^2 \left( 1 + \left( \frac{d_{p1} d_{p2}}{e\, d_{g1} d_{g2}} \right)^2 \right) b_3 + (d_{s1}^2 + d_{s2}^2) L_s \right]    (6.22)

where L_s is the fixed shaft length.
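Eq. (6.22) treats each gear and shaft as a solid cylinder, pi/4 * d^2 * width, with d_{g3} eliminated through Eq. (6.2). An illustrative Python sketch (the reduced 10-element argument order is ours; P and H are omitted since they do not enter the volume, and the shaft length L_s is assumed given):

```python
import math

def gearbox_volume(x, e, shaft_length):
    """Total gearbox volume f0(x) of Eq. (6.22): the sum of cylinder
    volumes of the five independent gears, the dependent gear dg3,
    and the two connecting shafts."""
    dp1, dg1, b1, dp2, dg2, b2, dp3, b3, ds1, ds2 = x
    dg3 = dp1 * dp2 * dp3 / (e * dg1 * dg2)   # Eq. (6.2)
    return (math.pi / 4.0) * ((dp1**2 + dg1**2) * b1
                              + (dp2**2 + dg2**2) * b2
                              + (dp3**2 + dg3**2) * b3
                              + (ds1**2 + ds2**2) * shaft_length)
```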
The surface fatigue life factors can be expressed in terms of the design variables for each of the three stages as,

f_1(x) = \frac{k_s (d_{p1} + d_{g1})}{b_1 d_{p1}^2 d_{g1}}    (Surface Fatigue Life Factor for Stage 1)    (6.23)

f_2(x) = \frac{k_s d_{g1} (d_{p2} + d_{g2})}{b_2 d_{p2}^2 d_{g2} d_{p1}}    (Surface Fatigue Life Factor for Stage 2)    (6.24)

f_3(x) = \frac{k_s d_{g1} d_{g2} (e\, d_{g1} d_{g2} + d_{p1} d_{p2})}{b_3 d_{p3}^2 (d_{p1} d_{p2})^2}    (Surface Fatigue Life Factor for Stage 3)    (6.25)
where the lumped constant k_s is given by,

k_s = \frac{4 C_p^2 K_v K_o K_m T_{in}}{\sin\phi \cos\phi\; S_{fe}^2 C_R^2}    (6.26)
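For completeness, Eqs. (6.23)-(6.25) can be sketched as one Python helper. Note that the Stage 3 expression is the generic form k_s * (T-ratio) * (dp3 + dg3) / (b3 * dp3^2 * dg3) with dg3 eliminated through Eq. (6.2); the code below uses the eliminated form and the test checks it against the unsubstituted one (argument names and ordering are ours):

```python
def surface_life_factors(x, e, k_s):
    """Surface fatigue life factors f1, f2, f3 of Eqs. (6.23)-(6.25),
    with dg3 already eliminated in f3 via Eq. (6.2)."""
    dp1, dg1, b1, dp2, dg2, b2, dp3, b3 = x
    f1 = k_s * (dp1 + dg1) / (b1 * dp1**2 * dg1)
    f2 = k_s * dg1 * (dp2 + dg2) / (b2 * dp2**2 * dg2 * dp1)
    f3 = (k_s * dg1 * dg2 * (e * dg1 * dg2 + dp1 * dp2)
          / (b3 * dp3**2 * (dp1 * dp2)**2))
    return f1, f2, f3
```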
As mentioned earlier, a smaller surface fatigue life factor (C_{Li}) means a longer life for the gearbox; hence these functions are directly minimized. The weights associated with the two objectives (\alpha_V and \alpha_C) are systematically varied to generate the Pareto Optimal Curves.
Finally, the goal for the Compromise Decision Support Problem solution can be expressed as,

F:\quad \alpha_V \left( \frac{f_0(x)}{V_{TV}} - d_1 \right) + \alpha_C \left( \frac{f_1(x) + f_2(x) + f_3(x)}{C_{TV}} - d_2 \right) = 0    (6.27)
The values of the deviation variables thus obtained give us an estimate of how much the discrete variable solution has deviated from the continuous one. However, in this particular study, our scope is to obtain the Pareto Optimal Curves for the two conflicting objectives, and hence that shall be the main subject of discussion henceforth.
6.9 Branch and Bound Technique
With the above-mentioned problem formulation, we can solve the problem using the Branch and Bound Algorithm. In this case, we have three design variables (the diametral pitches for the three stages) that take integer values. However, there are no discrete-valued variables; hence we do not need to define the x_status vector as in the spring design problem. The original Branch and Bound Algorithm can solve such integer-valued problems.
6.10 Results and Discussion
We present the Pareto Optimal Curves for the Compromise Decision Support Problem. There are two cases under study. In the first case, the input torque (T_{in}) is kept constant at 120 lb-in. We then obtain different Pareto Optimal Curves for varying values of the overall speed reduction ratio (e). The plot below shows the Pareto Optimal Curves for speed reduction ratios of 0.15, 0.1, 0.0667, and 0.05.
[Figure: Trade-off curves for a constant torque (T = 120 lb-in.): Surface Life Factor versus Volume (cubic in.) for e = 0.05, 0.0667, 0.1, and 0.15]

Fig. 6.4 Pareto Optimal Curves for Constant Torque
T = 120 lb-in.

e = 0.05
Vol.   38.1219  38.8847  38.9422  40.9414  41.4795  45.0417  54.3028  56.4096  67.9248
C_Li    1.9032   1.8507   1.7898   1.6938   1.5859   1.4753   1.3156   1.2281   1.1172

e = 0.0667
Vol.   28.1000  28.8442  30.9366  32.7868  33.8369  35.9518  41.4818  44.5735  48.1748
C_Li    1.9542   1.8015   1.7590   1.6103   1.5413   1.4305   1.2599   1.1973   1.1275

e = 0.1
Vol.   18.7143  18.8249  19.8443  21.4528  23.5694  24.2719  25.4360  27.0994  36.7726
C_Li    1.9344   1.9000   1.8398   1.8182   1.7186   1.6673   1.3945   1.3124   1.1700

e = 0.15
Vol.   12.8155  13.4837  14.5645  15.0679  16.4025  17.1901  18.9693  21.0447  22.9703
C_Li    1.8499   1.7604   1.6686   1.6046   1.5149   1.4611   1.2256   1.1327   1.0798

Table 6.3 Pareto Optimal Solutions for Constant Torque
We can draw some conclusions from the Pareto Optimal Curves above. It can be expected that for a greater speed reduction (smaller e), the weight of the gearbox will be greater than that for a lesser speed reduction. Thus we observe that for a given surface fatigue life factor, the volume (and hence weight) of the gearbox increases as the speed ratio e decreases.
Another observation is that for a given speed reduction ratio e, the surface fatigue life factor has an inverse relationship with the overall weight of the gearbox. As seen from Fig. 6.1, a lower value of the surface fatigue life factor is desirable for a longer life of the gearbox. However, we observe that as the surface fatigue life factor decreases, the weight of the gearbox increases. This means that a lighter gearbox has a larger surface fatigue life factor and hence a shorter lifetime. It is intuitive to expect that a heavier gearbox means lower operating stresses in the gears, and hence lower fatigue.
Another comparison can be made between the results obtained by Thompson et al. [5] and those presented here. A general observation is that for a given surface fatigue life factor, the gearbox volume obtained by Thompson et al. is slightly lower than that obtained using the Branch and Bound Algorithm. This result is to be expected, since Thompson's solution is a continuous variable solution while that presented here is a mixed variable solution. However, the increase in volume for a given surface fatigue life factor is not significant, and this supports our expectation that the mixed variable solution lies in the vicinity of the continuous variable solution. This validates the technique of using the design vector obtained in the continuous variable solution as a starting point for the Branch and Bound Algorithm.
For the second case, we keep the overall speed reduction ratio constant at e = 0.1. We then obtain the Pareto Optimal Curves for different values of the input torque, T_{in} = 80, 120, 180, and 270 lb-in. The plots obtained are illustrated in the figure below.
[Figure: Trade-off curves for a constant reduction ratio (e = 0.1): Surface Life Factor versus Volume (cubic in.) for T = 80, 120, 180, and 270 lb-in.]

Fig. 6.5 Pareto Optimal Curves for Constant Speed Ratio
e = 0.1

T = 80 lb-in.
Vol.   12.9733  13.2944  14.4805  15.8081  16.9753  18.6938  20.9040  21.8406  24.5240
C_Li    1.7749   1.7449   1.6944   1.5451   1.3707   1.1919   1.0777   0.9925   0.8990

T = 120 lb-in.
Vol.   18.7143  18.8249  19.8443  21.4528  23.5694  24.2719  25.4360  27.0994  36.7726
C_Li    1.9344   1.9000   1.8398   1.8182   1.7186   1.6673   1.3945   1.3124   1.1700

T = 180 lb-in.
Vol.   27.6574  29.9579  31.6027  33.3165  37.9896  41.9605  47.0838  52.4715  56.4649
C_Li    1.9817   1.8782   1.7409   1.5704   1.3752   1.2478   1.1584   1.0978   1.0221

T = 270 lb-in.
Vol.   40.8690  42.3795  45.6200  53.9178  59.8515  61.3885  67.4679  71.3577  75.4346
C_Li    2.0389   1.9816   1.6219   1.4306   1.3479   1.3024   1.1745   1.0656   1.0221

Table 6.4 Pareto Optimal Solutions for Constant Speed Ratio
Again, we observe similar trends in the Pareto Optimal Curves. It can be expected that a gearbox transmitting a greater torque (T_{in}) will have a greater weight than one transmitting a lower torque, all other parameters held constant. Here too, we observe that the surface fatigue life factor has an inverse relationship with the volume (weight) of the gearbox. A similar reasoning can be applied in this case to explain the relationship.
One important difference between the results found in Thompson et al. [5] and the Pareto Optimal Curves presented here lies in the actual shape of the curve. Thompson et al. obtained fairly smooth curves by systematically varying \alpha_V and \alpha_C. Since those are continuous variable solutions, we can observe a clear inverse quadratic relationship between the surface fatigue life factor and the volume of the gearbox. However, this is not the case in our results. We find that the solutions do not trace a smooth curve; rather, they tend to deviate from an apparent trend line. This brings out the importance of actually solving the mixed variable problem, as opposed to solving the simplified continuous variable problem and then rounding off the design variables to desired values. Such rounding may not always yield the optimal solution; in particular, rounding down could result in some of the constraints being violated, rendering the design infeasible. Due to this effect, the Pareto Optimal Curves obtained here are not smooth, but contain some irregularities.
Chapter 7
Conclusion
7.1 Conclusion
We have illustrated an integrated approach for solving mixed variable, multi-objective, constrained nonlinear optimization problems. Such a problem encompasses the whole gamut of existing optimization problems: linear programming, unconstrained optimization, and single objective optimization are all subsets of the mixed variable, multi-objective, constrained nonlinear optimization problem. The method used combines the Compromise Decision Support Problem Technique and a Branch and Bound Algorithm to arrive at a final optimal solution. The approach can be justified for a variety of reasons. The Compromise Decision Support Problem Technique is a Goal Programming technique, wherein the deviation between a desired objective (target value) and an achievable value is minimized. Thus, our focus shifts from the values of the design variables themselves to the objective function. In many engineering examples this is an advantage, since our ultimate goal is to minimize the objective function as long as the design variables satisfy the given problem constraints.
As discussed earlier, the bounding function is the most important parameter for the efficiency and convergence of the Branch and Bound Algorithm. The lower bound sets a limit for the end solution obtained; hence it becomes important to choose an appropriate bounding value. The solution obtained by solving the same problem with continuous variables serves as an ideal bounding function for the Branch and Bound Algorithm. It is an acceptable criterion, since the solution to the mixed variable minimization problem cannot be lower than that obtained for the continuous variable problem. Hence the deviation from the continuous variable optimum serves as an objective function to be minimized. The Branch and Bound Algorithm is shown to be more efficient when the starting design vector is the optimal solution obtained by solving the continuous variable problem.
The Branch and Bound Algorithm is a systematic search procedure for solving problems with mixed variables, and the approach is adapted to both integer and discrete variable problems. The solution to the mixed variable problem becomes increasingly important when designing mechanical components that must conform to specific standard dimensions. The optimization and standardization processes become integrated, and no further computation is necessary after the optimal solution is obtained.
7.2 Future Work
We have illustrated an integrated optimization approach as applied to the design of mechanical components such as the helical compression spring and the gear train. The approach can be adapted to optimization problems from other fields of engineering, such as control systems, circuit design, and process engineering. Each problem presents the designer with new challenges in formulating the problem and seeking an optimal solution.
The solution to a multi-objective optimization problem is not a single optimal design, but usually a large set of nondominated or Pareto Optimal solutions. It remains the designer's job to choose a solution from this Pareto Optimal Set that best suits the requirements. Such a decision methodology remains an open research area. Though an optimization technique presents the designer with a solution, the ultimate decision of selection still rests with the designer. Thus optimization is merely a tool assisting the designer in the quest for an ideal solution.
The Compromise Decision Support Problem Technique combined with the Branch and Bound Algorithm is one more approach to solving optimization problems. However, the actual optimization process involves more than just the numerical solution of the problem. Problem formulation, the choice of an initial design vector, and the minimization algorithm are some of the factors that affect the end solution. These aspects of the optimization process remain largely a matter of trial and error. Before solving a problem, the designer needs to draw on experience to make smart decisions that simplify the solution process.
References
[1] Mistree, F., J. Allen, H. Karandikar, J. Shupe, and E. Bascaran, Learning how to Design: A Minds-on, Hands-on, Decision Based Approach, 1995 (http://www.srl.gatech.edu/education/ME3110/textbook/textbook.pdf)
[2] Arora, J., Introduction to Optimum Design, New York: McGraw-Hill, 1989
[3] Sandgren, E., "Nonlinear integer and discrete programming in mechanical design," Advances in Design Automation, vol. 14, pp. 95-105, 1988
[4] Kannan, B., and S. Kramer, "Augmented Lagrange multiplier based method for mixed integer discrete continuous optimization and its applications to mechanical design," Journal of Mechanical Design, Transactions of the ASME, vol. 116, no. 2, pp. 405-411, 1994
[5] Thompson, D., S. Gupta, and A. Shukla, "Tradeoff analysis in minimum volume design of multi-stage spur gear reduction units," Mechanism and Machine Theory, vol. 35, no. 5, pp. 609-627, 2000
[6] Marston, M., J. Allen, and F. Mistree, "Decision support problem technique: Integrating descriptive and normative approaches in decision based design," Journal of Engineering Valuation and Cost Analysis, vol. 3, no. 2, pp. 107-129, 2000
[7] Bras, B., and F. Mistree, "Compromise decision support problem for axiomatic and robust design," Journal of Mechanical Design, Transactions of the ASME, vol. 117, no. 1, pp. 10-19, 1995
[8] Sultan, A., Linear Programming: An Introduction with Applications, Boston: Academic Press, pp. 394-403, 1993
[9] Balling, R., "Pareto sets in decision-based design," Journal of Engineering Valuation and Cost Analysis, vol. 3, no. 2, pp. 189-198, 2000
[10] Kurapati, A., and S. Azarm, "Immune network simulation with multiobjective genetic algorithms for multidisciplinary design optimization," Engineering Optimization, vol. 33, no. 2, pp. 245-260, 2000
[11] El-Sayed, M., B. Ridgely, and E. Sandgren, "Nonlinear structural optimization using goal programming," Computers and Structures, vol. 32, no. 1, pp. 69-73, 1989
[12] Kasprzak, E., and K. Lewis, "Approach to facilitate decision tradeoffs in Pareto solution sets," Journal of Engineering Valuation and Cost Analysis, vol. 3, no. 2, pp. 173-187, 2000
[13] Kasprzak, E., and K. Lewis, "Pareto analysis in multiobjective optimization using the collinearity theorem and scaling method," Structural and Multidisciplinary Optimization, vol. 22, no. 3, pp. 208-218, 2001
[14] Gal, T., T. Stewart, and T. Hanne, Multicriteria Decision Making: Advances in MCDM Models, Algorithms, Theory, and Applications, Boston: Kluwer Academic, 1999
[15] Ignizio, J., Goal Programming and Extensions, Lexington, Mass.: Lexington Books, 1976
[16] Athan, T., and P. Papalambros, "Note on weighted criteria methods for compromise solutions in multiobjective optimization," Engineering Optimization, vol. 27, no. 2, pp. 155-176, 1996
[17] Mistree, F., and J. Allen, "Position paper: Optimization in decision-based design," Proceedings of the Conference on Optimization in Industry, pp. 135-142, 1997
[18] Chen, W., K. Lewis, and L. Schmidt, "Open workshop on decision-based design: Origin, status, promise, and future," Journal of Engineering Valuation and Cost Analysis, vol. 3, no. 2, pp. 57-66, 2000
[19] Wierzbicki, A., "Decision support methods and applications: The cross-sections of economics and engineering or environmental issues," Annual Reviews in Control, vol. 24, pp. 9-19, 2000
[20] Chen, W., K. Lewis, and L. Schmidt, "Decision-based design: An emerging design perspective," Journal of Engineering Valuation and Cost Analysis, vol. 3, no. 1, pp. 57-66, 2000
[21] Sierksma, G., Linear and Integer Programming: Theory and Practice, New York: Marcel Dekker, pp. 213-220, 326-329, 2000
[22] Mavrotas, G., and D. Diakoulaki, "Branch and bound algorithm for mixed zero-one multiple objective linear programming," European Journal of Operational Research, vol. 107, no. 3, pp. 530-541, 1998
[23] Kesavan, P., and P. Barton, "Generalized branch-and-cut framework for mixed-integer nonlinear optimization problems," Computers and Chemical Engineering, vol. 24, no. 2, pp. 1361-1366, 2000
[24] Yogota, T., T. Taguchi, and M. Gen, "Solution method for optimal weight design problem of the gear using genetic algorithms," Computers & Industrial Engineering, vol. 35, no. 3-4, pp. 523-526, 1998
[25] Pomrehn, L., and P. Papalambros, "Discrete optimal design formulations with application to gear train design," Journal of Mechanical Design, Transactions of the ASME, vol. 117, no. 3, pp. 419-424, 1995
[26] Savage, M., S. Lattime, J. Kimmel, and H. Coe, "Optimal design of compact spur gear reductions," Journal of Mechanical Design, Transactions of the ASME, vol. 116, no. 3, pp. 690-696, 1994
[27] Wang, H., and H. Wang, "Optimal engineering design of spur gear sets," Mechanism and Machine Theory, vol. 29, no. 7, pp. 1071-1080, 1994
[28] Andrews, G., and J. Argent, "Computer-aided optimal gear design," American Society of Mechanical Engineers, Design Engineering Division (Publication) DE, vol. 43, no. 1, pp. 39139, 1992
[29] Juvinall, R., and K. Marshek, Fundamentals of Machine Component Design, New York: John Wiley, 2000
Appendix 1
The complete solution set for the Gear Train Design Problem presented in Chapter 6 is
listed here. The complete design vector is given as,
x = [ d_{p1}  d_{g1}  b_1  P_1  H_1  d_{p2}  d_{g2}  b_2  P_2  H_2  d_{p3}  b_3  P_3  H_3  d_{s1}  d_{s2} ]
where,
d_p : Pinion Diameter for each stage.
d_g : Gear Diameter for each stage.
b : Face Width of the mating gears in each stage.
P : Diametral Pitch of the mating gears in each stage.
H : Brinell Hardness of the mating gears in each stage.
d_s : Shaft Diameter.
T_in : Input Torque to the gearbox.
e : Overall Speed Reduction Ratio of the gearbox.
N_p : Number of teeth for the pinion in each stage.
C_Li : Surface Fatigue Life Factor.
T = 120 lb-in, e = 0.05
d_p1   0.8331  0.8356  1.1262  1.1568  1.1687  1.0981  1.4152  1.4400  1.4451
d_g1   2.4245  2.4600  3.0434  3.1622  2.9593  2.9731  3.9269  3.9320  4.2646
b_1    0.7258  0.7214  0.4831  0.4732  0.5641  0.5913  0.4127  0.4342  0.4311
P_1    19      19      19      19      25      24      23      25      25
H_1    500     500     500     500     500     500     500     500     500
d_p2   1.3741  1.4758  1.1595  1.4189  1.3462  1.3280  1.5097  1.5881  1.8118
d_g2   3.6792  3.8221  3.2714  3.9249  3.7896  3.9211  4.4139  4.5362  5.3487
b_2    0.7764  0.7416  1.0124  0.7449  0.8268  0.8082  0.7711  0.7719  0.6409
P_2    12      12      14      13      15      13      15      16      14
H_2    500     500     500     500     500     500     500     500     500
d_p3   1.9099  1.6375  1.6558  1.9560  1.6010  2.2526  2.6603  2.6098  3.4319
b_3    1.0760  1.4323  1.4008  1.0405  1.4008  0.9654  0.8614  0.8604  0.7472
P_3    8       10      10      9       10      9       11      11      12
H_3    500     500     500     500     500     500     500     500     500
d_s1   0.4144  0.4160  0.4042  0.4058  0.3956  0.4045  0.4078  0.4056  0.4163
d_s2   0.5754  0.5712  0.5712  0.5696  0.5586  0.5803  0.5832  0.5756  0.5972
Vol.   38.1219 38.9422 38.8847 40.9414 41.4795 45.0417 54.3028 56.4096 67.9248
C_Li   1.9032  1.8507  1.7898  1.6938  1.5859  1.4753  1.3156  1.2281  1.1172
N_p1   16      16      21      22      29      26      32      36      36
N_p2   16      18      16      18      20      17      22      26      26
N_p3   16      16      16      17      16      21      28      28      42
T = 120 lb-in, e = 0.0667
d_p1   0.8331  1.0833  1.0279  1.2716  1.5070  1.3818  1.5328  1.4482  1.4400
d_g1   2.4245  2.3792  2.4488  2.8585  3.0583  3.0521  3.4885  3.4046  3.5062
b_1    0.7258  0.5570  0.5601  0.4409  0.3884  0.4329  0.3832  0.4293  0.4342
P_1    19      21      19      20      23      23      23      25      25
H_1    500     500     500     500     500     500     500     500     500
d_p2   1.3741  1.0819  1.1121  1.1664  1.3501  1.3003  1.6267  1.6106  1.9270
d_g2   3.6792  2.8793  2.9346  3.1285  3.6717  3.4567  4.3073  4.2235  4.9184
b_2    0.7764  0.9452  0.9703  0.9067  0.6588  0.8274  0.6131  0.6810  0.5420
P_2    12      15      14      15      15      17      16      17      17
H_2    500     500     500     500     500     500     500     500     500
d_p3   1.9099  1.4985  2.0220  1.7214  1.5694  2.0137  2.4104  2.6265  2.8181
b_3    1.0760  1.3112  0.9099  1.2042  1.1797  0.9171  0.7793  0.7387  0.7045
P_3    8       11      10      12      11      11      12      12      13
H_3    500     500     500     500     500     500     500     500     500
d_s1   0.4144  0.3773  0.3876  0.3802  0.3674  0.3780  0.3818  0.3859  0.3904
d_s2   0.5754  0.5228  0.5356  0.5282  0.5129  0.5236  0.5281  0.5322  0.5336
Vol.   28.1000 28.8442 30.9366 32.7868 33.8369 35.9518 41.4818 44.5735 48.1748
C_Li   1.9542  1.8015  1.7590  1.6103  1.5413  1.4305  1.2599  1.1973  1.1275
N_p1   16      23      20      26      35      32      36      36      36
N_p2   16      16      16      18      20      22      26      28      32
N_p3   16      16      20      20      17      22      28      32      36
T = 120 lb-in, e = 0.1
d_p1   0.9872  1.0369  1.1549  0.9182  1.0409  0.9984  1.4408  1.4400  1.7600
d_g1   1.7439  1.7914  2.0339  1.7473  2.0817  1.8951  2.4196  2.5330  2.9132
b_1    0.6072  0.5832  0.4748  0.6766  0.5265  0.6353  0.4337  0.4342  0.3600
P_1    20      15      19      21      18      22      25      25      25
H_1    500     306     500     500     500     500     500     500     410
d_p2   1.1650  0.9982  1.1446  1.1498  1.1949  1.1845  1.3087  1.4587  1.4624
d_g2   2.7112  2.3598  2.7403  2.7195  2.8864  2.8697  3.0583  3.4744  3.4830
b_2    0.6556  0.8734  0.6771  0.7250  0.7056  0.7122  0.6991  0.5894  0.6398
P_2    14      16      14      14      13      14      20      18      22
H_2    500     500     500     500     500     500     500     500     500
d_p3   1.4569  1.5407  1.6662  1.6712  1.9855  2.1157  1.7685  2.1396  1.8098
b_3    0.9757  0.8667  0.8333  1.0206  0.9440  0.7324  0.7947  0.6877  0.9045
P_3    11      10      11      13      15      12      12      13      15
H_3    500     500     500     500     500     500     500     500     500
d_s1   0.3508  0.3483  0.3505  0.3597  0.3657  0.3593  0.3450  0.3503  0.3433
d_s2   0.4649  0.4639  0.4689  0.4792  0.4906  0.4826  0.4578  0.4679  0.4585
Vol.   18.7143 18.8249 19.8443 21.4528 23.5694 24.2719 25.4360 27.0994 36.7726
C_Li   1.9344  1.9000  1.8398  1.8182  1.7186  1.6673  1.3945  1.3124  1.1700
N_p1   20      16      22      19      19      22      36      36      44
N_p2   16      16      16      16      16      17      26      26      32
N_p3   16      16      18      22      29      26      22      28      28
T = 120 lb-in, e = 0.15
d_p1   1.2033  1.1496  1.0024  1.0031  1.3272  1.2715  1.7600  1.7600  1.9200
d_g1   1.4692  1.4655  1.3371  1.3745  1.8799  1.8383  2.2520  2.3595  2.4931
b_1    0.5573  0.5891  0.7457  0.7391  0.4266  0.4648  0.3600  0.3600  0.3600
P_1    16      16      16      19      21      22      25      25      25
H_1    258     260     262     285     500     500     410     412     333
d_p2   0.8947  1.1520  1.0610  1.1933  1.3271  1.4951  1.4522  1.4647  1.4466
d_g2   2.0262  2.4730  2.4197  2.3831  2.6714  2.9715  3.1102  3.0875  3.2326
b_2    0.7683  0.5480  0.5968  0.6672  0.5734  0.4860  0.5015  0.5626  0.5586
P_2    18      16      15      21      20      19      22      25      25
H_2    500     500     500     500     500     500     500     500     500
d_p3   1.3528  1.3341  1.4482  1.5323  1.5636  1.5611  1.6029  1.8611  2.2783
b_3    0.7610  0.8095  0.9186  0.6897  0.7856  0.8406  0.8015  0.7045  0.5442
P_3    12      13      15      13      15      17      17      18      18
H_3    500     500     500     500     500     500     500     500     500
d_s1   0.3102  0.3147  0.3195  0.3224  0.3259  0.3282  0.3151  0.3200  0.3166
d_s2   0.4074  0.4060  0.4205  0.4059  0.4115  0.4126  0.4061  0.4103  0.4140
Vol.   12.8155 13.4837 14.5645 15.0679 16.4025 17.1901 18.9693 21.0447 22.9703
C_Li   1.8499  1.7604  1.6686  1.6046  1.5149  1.4611  1.2256  1.1327  1.0798
N_p1   20      18      16      19      28      28      44      44      48
N_p2   16      19      16      25      26      28      32      36      36
N_p3   16      17      22      20      24      26      28      34      40
e = 0.1, T = 80 lb-in
d_p1   1.0096  1.0378  0.9532  0.8342  1.2800  1.4400  1.5200  1.6800  1.8400
d_g1   1.5852  1.6599  1.6249  1.5130  2.1937  2.3382  2.4819  2.5717  2.7760
b_1    0.5199  0.4838  0.5409  0.6662  0.3600  0.3600  0.3600  0.3600  0.3600
P_1    18      19      20      21      25      25      25      25      25
H_1    258     286     289     283     405     354     290     250     250
d_p2   0.9184  0.8672  1.0057  1.0031  1.1508  1.4140  1.5257  1.5923  1.6800
d_g2   2.3176  2.1900  2.5442  2.5405  2.6971  3.3534  3.7232  3.9699  4.3006
b_2    0.6251  0.7143  0.5660  0.6053  0.6151  0.4475  0.4039  0.3771  0.3600
P_2    17      18      16      16      23      23      22      24      25
H_2    500     500     500     500     500     500     500     500     500
d_p3   1.1506  1.1985  1.5586  1.8208  1.5206  1.5035  1.7337  1.6877  1.8854
b_3    1.0050  0.9870  0.7252  0.6585  0.7801  0.8096  0.7305  0.7384  0.6786
P_3    14      14      13      14      16      17      18      19      20
H_3    500     500     500     500     500     500     500     500     500
d_s1   0.2947  0.2965  0.3029  0.3092  0.3034  0.2980  0.2986  0.2922  0.2908
d_s2   0.4012  0.4038  0.4127  0.4215  0.4030  0.3974  0.4019  0.3962  0.3978
Vol.   12.9733 13.2944 14.4805 15.8081 16.9753 18.6938 20.9040 21.8406 24.5240
C_Li   1.7749  1.7449  1.6944  1.5451  1.3707  1.1919  1.0777  0.9925  0.8990
N_p1   18      20      19      17      32      36      38      42      46
N_p2   16      16      16      16      26      32      34      38      42
N_p3   16      17      21      26      24      26      32      32      38
e = 0.1, T = 120 lb-in
d_p1   0.9872  1.0369  1.1549  0.9182  1.0409  0.9984  1.4408  1.4400  1.7600
d_g1   1.7439  1.7914  2.0339  1.7473  2.0817  1.8951  2.4196  2.5330  2.9132
b_1    0.6072  0.5832  0.4748  0.6766  0.5265  0.6353  0.4337  0.4342  0.3600
P_1    20      15      19      21      18      22      25      25      25
H_1    500     306     500     500     500     500     500     500     410
d_p2   1.1650  0.9982  1.1446  1.1498  1.1949  1.1845  1.3087  1.4587  1.4624
d_g2   2.7112  2.3598  2.7403  2.7195  2.8864  2.8697  3.0583  3.4744  3.4830
b_2    0.6556  0.8734  0.6771  0.7250  0.7056  0.7122  0.6991  0.5894  0.6398
P_2    14      16      14      14      13      14      20      18      22
H_2    500     500     500     500     500     500     500     500     500
d_p3   1.4569  1.5407  1.6662  1.6712  1.9855  2.1157  1.7685  2.1396  1.8098
b_3    0.9757  0.8667  0.8333  1.0206  0.9440  0.7324  0.7947  0.6877  0.9045
P_3    11      10      11      13      15      12      12      13      15
H_3    500     500     500     500     500     500     500     500     500
d_s1   0.3508  0.3483  0.3505  0.3597  0.3657  0.3593  0.3450  0.3503  0.3433
d_s2   0.4649  0.4639  0.4689  0.4792  0.4906  0.4826  0.4578  0.4679  0.4585
Vol.   18.7143 18.8249 19.8443 21.4528 23.5694 24.2719 25.4360 27.0994 36.7726
C_Li   1.9344  1.9000  1.8398  1.8182  1.7186  1.6673  1.3945  1.3124  1.1700
N_p1   20      16      22      19      19      22      36      36      44
N_p2   16      16      16      16      16      17      26      26      32
N_p3   16      16      18      22      29      26      22      28      28
e = 0.1, T = 180 lb-in
d_p1  1.0835 1.2179 1.3095 1.5192 1.5321 1.5930 1.5880 1.6805 2.0766
d_g1  1.9737 2.2604 2.3874 2.6472 2.7243 2.8592 2.9489 3.0669 3.6962
b_1   0.7011 0.5769 0.5540 0.4883 0.5282 0.5322 0.5573 0.5366 0.3900
P_1   17 16 17 18 21 23 24 25 23
H_1   500 500 500 500 500 500 500 500 500
d_p2  1.3475 1.1703 1.2450 1.4393 1.8406 1.7219 2.0147 1.8508 2.0092
d_g2  3.0738 2.7346 2.9066 3.3287 4.1515 3.8782 4.6741 4.2418 4.5327
b_2   0.7580 1.0240 0.9681 0.7990 0.5916 0.7506 0.5673 0.7195 0.6682
P_2   12 14 14 15 15 19 16 19 21
H_2   500 500 500 500 500 500 500 500 500
d_p3  1.7735 1.9861 1.7734 1.7224 2.2066 2.0908 2.4641 2.4400 2.4076
b_3   0.9981 0.9408 1.2014 1.2057 0.8324 1.0423 0.9583 0.9488 0.9356
P_3   9 10 11 12 11 13 15 15 15
H_3   500 500 500 500 500 500 500 500 500
d_s1  0.4058 0.4083 0.4059 0.3998 0.4025 0.4037 0.4084 0.4060 0.4026
d_s2  0.5341 0.5418 0.5384 0.5287 0.5279 0.5292 0.5406 0.5353 0.5281
Vol.  27.6574 29.9579 31.6027 33.3165 37.9896 41.9605 47.0838 52.4715 56.4649
C_Li  1.9817 1.8782 1.7409 1.5704 1.3752 1.2478 1.1584 1.0978 1.0221
N_p1  18 19 22 28 32 36 38 42 48
N_p2  16 16 18 22 28 32 32 36 42
N_p3  16 19 20 20 24 28 36 36 36
e = 0.1, T = 270 lb-in
d_p1  1.2616 1.3515 1.7277 1.8065 1.6544 1.8768 2.0201 2.1874 2.0640
d_g1  2.3770 2.4680 2.7888 3.1481 3.0191 3.4645 3.6934 3.8579 3.6813
b_1   0.7121 0.6759 0.6231 0.5699 0.6795 0.5280 0.5166 0.5097 0.5922
P_1   13 13 19 18 19 17 19 21 23
H_1   500 500 500 500 500 500 500 500 500
d_p2  1.3623 1.3353 1.5347 1.8619 1.7961 1.9415 2.3752 2.2911 2.6598
d_g2  3.0386 3.0454 3.5718 4.2519 4.1778 4.3498 5.3703 5.0758 5.9982
b_2   1.1507 1.1608 0.9766 0.8064 0.9074 0.8696 0.6297 0.7637 0.5731
P_2   12 12 14 14 14 15 14 18 16
H_2   500 500 500 500 500 500 500 500 500
d_p3  1.7977 1.9619 1.6946 2.3828 2.4346 2.5521 2.7692 2.5515 2.7884
b_3   1.4739 1.2263 1.4827 0.9988 1.2107 1.0735 1.0025 1.1163 1.0480
P_3   9 8 9 9 12 11 12 13 13
H_3   500 500 500 500 500 500 500 500 500
d_s1  0.4697 0.4648 0.4461 0.4577 0.4647 0.4665 0.4650 0.4595 0.4612
d_s2  0.6137 0.6119 0.5912 0.6027 0.6158 0.6104 0.6104 0.5990 0.6048
Vol.  40.869 42.3795 45.62 53.9178 59.8515 61.3885 67.4679 71.3577 75.4346
C_Li  2.0389 1.9816 1.6219 1.4306 1.3479 1.3024 1.1745 1.0656 1.0221
N_p1  16 18 32 32 32 32 38 46 48
N_p2  16 16 22 26 26 30 34 42 42
N_p3  16 16 16 22 28 28 32 32 36
Appendix 2
This appendix lists all the Matlab® files necessary to run the Compromise Decision
Support Problem Method and the Branch and Bound Algorithm. They are arranged into four
sections.
1. This section contains three files: spring_main.m, spring_fun.m and spring_con.m. These
files solve the general constrained nonlinear optimization problem using fmincon. Run
spring_main.m to obtain the solution. The design vector resulting from this optimization
routine is used as a starting vector for the Branch and Bound Algorithm, and the optimal
objective function value is used as a bounding function.
2. This section contains four files: spring_bnb.m, spring_fun.m, spring_con.m and bnb.m.
These files solve the Compromise Decision Support Problem with the Branch
and Bound Algorithm. Run spring_bnb.m, which calls bnb.m to solve the mixed-variable
optimization problem.
3. This section contains three files: gear_main.m, gear_fun.m and gear_con.m. These files
solve the general constrained nonlinear optimization problem using fmincon. Run
gear_main.m to obtain the solution.
4. This section contains four files: gear_bnb.m, gear_fun.m, gear_con.m and bnb.m. These
files solve the Compromise Decision Support Problem with the Branch and
Bound Algorithm. Run gear_bnb.m, which calls bnb.m to solve the mixed-variable
optimization problem.
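The two-stage workflow described above (solve the continuous relaxation first, then use its optimum to bound a branch and bound search over the integer variables) can be sketched compactly. The following toy Python illustration is not the thesis code: the one-variable objective, its closed-form relaxed minimizer (standing in for fmincon), and the bounds are all made up for demonstration.

```python
# Toy branch and bound over one integer variable, using the continuous
# relaxation as the bounding function (mirrors the spring_main -> spring_bnb
# workflow). The objective and interval below are illustrative only.

def f(x):
    return (x - 3.6) ** 2  # smooth objective; continuous minimum at x = 3.6

def relaxed_min(lo, hi):
    """Closed-form continuous minimizer of f on [lo, hi] (plays the fmincon role)."""
    x = min(max(3.6, lo), hi)
    return x, f(x)

def branch_and_bound(lo, hi):
    best_x, best_z = None, float("inf")
    stack = [(lo, hi)]                 # subproblems as integer intervals
    while stack:
        a, b = stack.pop()
        x, z = relaxed_min(a, b)
        if z >= best_z:                # fathom: relaxation no better than incumbent
            continue
        if abs(x - round(x)) < 1e-9:   # relaxed optimum already integer: new incumbent
            best_x, best_z = int(round(x)), z
            continue
        m = int(x)                     # separate on the fractional value
        stack.append((a, m))
        stack.append((m + 1, b))
    return best_x, best_z

print(branch_and_bound(0, 10))
```

The same three ingredients (a relaxation solver, a fathoming test against the incumbent, and a separation rule) appear in bnb.m, where fmincon solves each relaxed subproblem.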
Section 1
% filename : spring_main.m
% Written by : Dinar Deshmukh
% Date : 11 July 2002
% Comments : This file is the main file for solving the general constrained
% nonlinear optimization spring design problem. The objective is
% to minimize the weight of the spring subject to constraints
% listed in Chapter 5. It calls spring_fun.m and spring_con.m
clc;
clear all;
close all;
global S G Fmax lmax dmin Dmax Fp delta_pm delta_w C K delta delta_p k
S = 189000; % Allowable Shear Stress (psi)
G = 1.15e8; % Shear Modulus
Fmax = 1000; % Maximum Working Load (lb)
lmax = 14; % Maximum Free Length (in)
dmin = 0.2; % Minimum Wire Diameter (in)
Dmax = 3.0; % Maximum Outer Diameter (in)
Fp = 300; % Preload (lb)
delta_pm = 6.0; % Maximum deflection under preload (in)
delta_w = 1.25; % Maximum deflection for working load (in)
x0 = [2.0 0.375 10.0]; % Initial Guess Vector
% x0(1) is the Coil Diameter (in)
% x0(2) is the Wire Diameter (in)
% x0(3) is the number of Turns
disp('The starting design vector is :');
disp(x0);
f_ini = spring_fun(x0); % Evaluate the Objective Function at the beginning
disp('The starting objective function is :');
disp(f_ini);
x = x0;
C = x(1)/x(2); % Spring Index
K = (4*C - 1)/(4*C - 4) + 0.615/C; % Curvature Factor
k = (x(2)^4)*G/(8*(x(1)^3)*x(3)); % Spring Rate (lb/in)
delta = Fmax/k; % Deflection under maximum working load (in)
lf = delta + 1.05*(x(3) + 2)*x(2); % Free Length (in)
delta_p = Fp/k; % Deflection under preload (in)
A = [];
b = [];
% No linear inequality constraints
Aeq = [];
beq = [];
% No linear equality constraints
lb = [1 0.2 3];
% Lower bound on design variables
ub = [3 0.5 20];
% Upper bound on design variables
options = optimset('Display','iter'); % Show progress after each iteration
x = fmincon('spring_fun', x0, A, b, Aeq, beq, lb, ub, 'spring_con', options);
disp('The optimized design vector is :');
disp(x);
f_fin = spring_fun(x); % Evaluate the Objective Function at the end
disp('The optimal objective function value is :');
disp(f_fin);
% End
% filename : spring_fun.m
% Written by : Dinar Deshmukh
% Date : 11 July 2002
% Comments : This file calculates the objective function
function f = spring_fun(x)
f = pi^2*x(1)*(x(2)^2)*(x(3) + 2)/4;
% End
% filename : spring_con.m
% Written by : Dinar Deshmukh
% Date : 11 July 2002
% Comments : This file contains the nonlinear constraints
function [g, h] = spring_con(x)
global S G Fmax lmax dmin Dmax Fp delta_pm delta_w C K delta lf delta_p k
C = x(1)/x(2);
K = (4*C - 1)/(4*C - 4) + 0.615/C;
k = (x(2)^4)*G/(8*(x(1)^3)*x(3));
delta = Fmax/k;
lf = delta + 1.05*(x(3) + 2)*x(2);
delta_p = Fp/k;
% Inequality Constraints
g(1) = 8*K*Fmax*x(1)/(S*3.142*((x(2))^3)) - 1;
g(2) = lf/lmax - 1;
g(3) = (x(1) + x(2))/Dmax - 1;
g(4) = 1 - C/3;
g(5) = delta_p/delta_pm - 1;
g(6) = (delta_p - (Fmax - Fp)/k - 1.05*(x(3) + 2)*x(2))/lf - 1;
g(7) = (Fmax - Fp)/(k*delta_w) - 1;
% Equality Constraints
h = [] ;
% End
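Most of the inequalities in spring_con.m follow a normalized-constraint convention: a requirement "quantity <= limit" is coded as g = quantity/limit - 1 <= 0, so every constraint is dimensionless and comparably scaled for the solver. A minimal Python sketch of the convention (the numeric values here are made up, not taken from the spring problem):

```python
# Normalized-constraint convention: "quantity <= limit" becomes
# g = quantity/limit - 1 <= 0, so all g values are dimensionless.

def normalized(quantity, limit):
    return quantity / limit - 1.0

g = [
    normalized(12.0, 14.0),  # e.g. a free-length check, lf <= lmax
    normalized(2.8, 3.0),    # e.g. an outer-diameter check, D + d <= Dmax
]
feasible = all(gi <= 0 for gi in g)
print(feasible)
```

Scaling every constraint to the same order of magnitude helps gradient-based solvers such as fmincon treat them evenly.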
Section 2
% filename : spring_bnb.m
% Written by : Dinar Deshmukh
% Date : 11 July 2002
% Comments : This file is the main file for solving the Compromise DSP and
% Branch and Bound Formulation of the spring design problem. It
% calls bnb.m which is the Branch and Bound program, subject to
% constraints specified in spring_con.m and optimizes the
% objective function in spring_fun.m
clc;
clear all;
close all;
global S G Fmax lmax dmin Dmax Fp delta_pm delta_w C K delta delta_p k
S = 189000; % Allowable Shear Stress (psi)
G = 1.15e8; % Shear Modulus
Fmax = 1000; % Maximum Working Load (lb)
lmax = 14; % Maximum Free Length (in)
dmin = 0.2; % Minimum Wire Diameter (in)
Dmax = 3.0; % Maximum Outer Diameter (in)
Fp = 300; % Preload (lb)
delta_pm = 6.0; % Maximum deflection under preload (in)
delta_w = 1.25; % Maximum deflection for working load (in)
% Allowable wire diameters :
wire_diameters = [ 0.207 0.225 0.244 0.263 0.283 0.307 0.331 0.362 0.394 0.437 0.500 ];
n = length(wire_diameters);
y0 = [0; 0; 0; 1; 0; 0; 0; 0; 0; 0; 0];
x0 = [2.0000; 0.375; 10.0000; y0]; % Initial Guess Vector
% x0(1) is the Coil Diameter (in)
% x0(2) is the Wire Diameter (in)
% x0(3) is the number of Turns
% x_status should be a column vector such that:
% x_status(i) = 0 if x(i) is continuous,
% x_status(i) = 1 if x(i) is integer,
x_status = [ 0; 0; 1; ones(n,1) ];
disp('The starting design vector is :');
disp(x0(1:3));
f_ini = spring_fun(x0); % Evaluate the Objective Function at the beginning
disp('The starting objective function is :');
disp(f_ini);
x = x0;
C = x(1)/x(2); % Spring Index
K = (4*C - 1)/(4*C - 4) + 0.615/C; % Curvature Factor
k = (x(2)^4)*G/(8*(x(1)^3)*x(3)); % Spring Rate (lb/in)
delta = Fmax/k; % Deflection under maximum working load (in)
lf = delta + 1.05*(x(3) + 2)*x(2); % Free Length (in)
delta_p = Fp/k; % Deflection under preload (in)
A = [];
b = [];
% Linear inequality constraints
Aeq = [ 0, -1, 0, wire_diameters; 0, 0, 0, ones(1,n) ];
beq = [ 0; 1 ];
% Linear equality constraints
ymin = zeros(n,1);
lb = [1; 0.2; 3; ymin];
% Lower bound on design variables
ymax = ones(n,1);
ub = [3; 0.5; 20; ymax];
% Upper bound on design variables
options.MaxFunEvals=1e6;
[errmsg,Z,X,t,c,fail] = bnb('spring_fun',x0,x_status,lb,ub,A,b,Aeq,beq,'spring_con');
disp('The optimized design vector is :');
disp(X(1:3));
f_fin = spring_fun(X); % Evaluate the Objective Function at the end
disp('The optimal objective function value is :');
disp(f_fin);
disp('The number of Branch and Bound Cycles are :');
disp(c);
disp('The time Branch and Bound Algorithm ran is :');
disp(t);
% End
% filename : spring_fun.m
% Written by : Dinar Deshmukh
% Date : 11 July 2002
% Comments : This file calculates the objective function
function f = spring_fun(x)
f = pi^2*x(1)*(x(2)^2)*(x(3) + 2)/4;
% End
% filename : spring_con.m
% Written by : Dinar Deshmukh
% Date : 11 July 2002
% Comments : This file contains the nonlinear constraints
function [g, h] = spring_con(x)
global S G Fmax lmax dmin Dmax Fp delta_pm delta_w C K delta lf delta_p k
C = x(1)/x(2);
K = (4*C - 1)/(4*C - 4) + 0.615/C;
k = (x(2)^4)*G/(8*(x(1)^3)*x(3));
delta = Fmax/k;
lf = delta + 1.05*(x(3) + 2)*x(2);
delta_p = Fp/k;
% Inequality Constraints
g(1,1) = 8*K*Fmax*x(1)/(S*3.142*((x(2))^3)) - 1;
g(2,1) = lf/lmax - 1;
g(3,1) = (x(1) + x(2))/Dmax - 1;
g(4,1) = 1 - C/3;
g(5,1) = delta_p/delta_pm - 1;
g(6,1) = (delta_p - (Fmax - Fp)/k - 1.05*(x(3) + 2)*x(2))/lf - 1;
g(7,1) = (Fmax - Fp)/(k*delta_w) - 1;
% Equality Constraints
h = [] ;
% End
function [errmsg,Z,X,t,c,fail] = bnb(fun,x0,xstat,xl,xu,a,b,aeq,beq,nonlc,setts,opts,varargin);
% BNB20 Finds the constrained minimum of a function of several possibly integer variables.
% Usage: [errmsg,Z,X,t,c,fail] =
% bnb(fun,x0,x_status,lb,ub,A,B,Aeq,Beq,nonlcon,settings,options,P1,P2,...)
%
% BNB solves problems of the form:
% Minimize F(x) subject to: lb <= x0 <=ub
% A*x <= B Aeq*x=Beq
% C(x)<=0 Ceq(x)=0
% x(i) is continuous for xstatus(i)=0
% x(i) integer for xstatus(i)= 1
% x(i) fixed for xstatus(i)=2
%
%
% fun is the function to be minimized and should return a scalar. F(x)=feval(fun,x).
% x0 is the starting point for x. x0 should be a column vector.
% x_status is a column vector describing the status of every variable x(i).
% lb and ub are column vectors with lower and upper bounds for x.
% A and Aeq are matrices for the linear constraints.
% B and Beq are column vectors for the linear constraints.
% nonlcon is the function for the nonlinear constraints.
% [C(x);Ceq(x)]=feval(nonlcon,x). Both C(x) and Ceq(x) should be column vectors.
%
% errmsg is a string containing an error message if BNB found an error in the input.
% Z is the scalar result of the minimization,
% X the values of the accompanying variables.
% t is the time elapsed while the algorithm BNB has run and
% c is the number of BNB cycles.
% fail is the number of nonconvergent leaf subproblems.
%
% settings is a row vector with settings for BNB:
% settings(1) (standard 0) if 1: if the subproblem does not converge do not branch it and
% raise fail by one. Normally BNB will always branch a nonconvergent subproblem so
% it can try again to find a solution.
% A subproblem that is a leaf of the branch-and-bound tree cannot be branched. If such
% a problem does not converge it will be considered infeasible and fail will be raised by
% one.
global maxSQPiter;
% STEP 0 CHECKING INPUT
Z=[]; X=[]; t=0; c=0; fail=0;
if nargin<2, errmsg='BNB needs at least 2 input arguments.'; return; end;
if isempty(fun), errmsg='No fun found.'; return;
elseif ~ischar(fun), errmsg='fun must be a string.'; return; end;
if isempty(x0), errmsg='No x0 found.'; return;
elseif ~isnumeric(x0) | ~isreal(x0) | size(x0,2)>1
errmsg='x0 must be a real column vector.'; return;
end;
xstatus=zeros(size(x0));
if nargin>2 & ~isempty(xstat)
if isnumeric(xstat) & isreal(xstat) & all(size(xstat)<=size(x0))
if all(xstat==round(xstat) & 0<=xstat & xstat<=2)
xstatus(1:size(xstat))=xstat;
else errmsg='xstatus must consist of the integers 0, 1 and 2.'; return; end;
else errmsg='xstatus must be a real column vector the same size as x0.'; return; end;
end;
lb=zeros(size(x0));
lb(find(xstatus==0))=-inf;
if nargin>3 & ~isempty(xl)
if isnumeric(xl) & isreal(xl) & all(size(xl)<=size(x0))
lb(1:size(xl,1))=xl;
else errmsg='lb must be a real column vector the same size as x0.'; return; end;
end;
if any(x0<lb), errmsg='x0 must be in the range lb <= x0.'; return;
elseif any(xstatus==1 & (~isfinite(lb) | lb~=round(lb)))
errmsg='lb(i) must be an integer if x(i) is an integer variable.'; return;
end;
lb(find(xstatus==2))=x0(find(xstatus==2));
ub=ones(size(x0));
ub(find(xstatus==0))=inf;
if nargin>4 & ~isempty(xu)
if isnumeric(xu) & isreal(xu) & all(size(xu)<=size(x0))
ub(1:size(xu,1))=xu;
else errmsg='ub must be a real column vector the same size as x0.'; return; end;
end;
if any(x0>ub), errmsg='x0 must be in the range x0 <=ub.'; return;
elseif any(xstatus==1 & (~isfinite(ub) | ub~=round(ub)))
errmsg='ub(i) must be an integer if x(i) is an integer variable.'; return;
end;
ub(find(xstatus==2))=x0(find(xstatus==2));
A=[];
if nargin>5 & ~isempty(a)
if isnumeric(a) & isreal(a) & size(a,2)==size(x0,1), A=a;
else errmsg='Matrix A not correct.'; return; end;
end;
B=[];
if nargin>6 & ~isempty(b)
if isnumeric(b) & isreal(b) & all(size(b)==[size(A,1) 1]), B=b;
else errmsg='Column vector B not correct.'; return; end;
end;
if isempty(B) & ~isempty(A), B=zeros(size(A,1),1); end;
Aeq=[];
if nargin>7 & ~isempty(aeq)
if isnumeric(aeq) & isreal(aeq) & size(aeq,2)==size(x0,1), Aeq=aeq;
else errmsg='Matrix Aeq not correct.'; return; end;
end;
Beq=[];
if nargin>8 & ~isempty(beq)
if isnumeric(beq) & isreal(beq) & all(size(beq)==[size(Aeq,1) 1]), Beq=beq;
else errmsg='Column vector Beq not correct.'; return; end;
end;
if isempty(Beq) & ~isempty(Aeq), Beq=zeros(size(Aeq,1),1); end;
nonlcon='';
if nargin>9 & ~isempty(nonlc)
if ischar(nonlc), nonlcon=nonlc;
else errmsg='nonlcon must be a string.'; return; end;
end;
settings = [0 0];
if nargin>10 & ~isempty(setts)
if isnumeric(setts) & isreal(setts) & all(size(setts)<=size(settings))
settings(setts~=0)=setts(setts~=0);
else errmsg='settings should be a row vector of length 1 or 2.'; return; end;
end;
maxSQPiter=1000;
%
% 10/17/01 KMS: BEGIN
%
% options=optimset('fmincon');
options = optimset( optimset('fmincon'), 'MaxSQPIter', 1000);
%
% 10/17/01 KMS: END
%
if nargin>11 & ~isempty(opts)
if isstruct(opts)
if isfield(opts,'MaxSQPIter')
if isnumeric(opts.MaxSQPIter) & isreal(opts.MaxSQPIter) & ...
all(size(opts.MaxSQPIter)==1) & opts.MaxSQPIter>0 & ...
round(opts.MaxSQPIter)==opts.MaxSQPIter
maxSQPiter=opts.MaxSQPIter;
opts=rmfield(opts,'MaxSQPIter');
else errmsg='options.maxSQPiter must be an integer >0.'; return; end;
end;
options=optimset(options,opts);
else errmsg='options must be a structure.'; return; end;
end;
evalreturn=0;
eval(['z=',fun,'(x0,varargin{:});'],'errmsg=''fun caused error.''; evalreturn=1;');
if evalreturn==1, return; end;
if ~isempty(nonlcon)
eval(['[C, Ceq]=',nonlcon,'(x0,varargin{:});'],'errmsg=''nonlcon caused error.''; evalreturn=1;');
if evalreturn==1, return; end;
if size(C,2)>1 | size(Ceq,2)>1, errmsg='C and Ceq must be column vectors.'; return; end;
end;
% STEP 1 INITIALIZATION
currentwarningstate=warning;
warning off;
tic;
lx = size(x0,1);
z_incumbent=inf;
x_incumbent=inf*ones(size(x0));
I = ceil(sum(log2(ub(find(xstatus==1))-lb(find(xstatus==1))+1))+size(find(xstatus==1),1)+1);
stackx0=zeros(lx,I);
stackx0(:,1)=x0;
stacklb=zeros(lx,I);
stacklb(:,1)=lb;
stackub=zeros(lx,I);
stackub(:,1)=ub;
stackdepth=zeros(1,I);
stackdepth(1,1)=1;
stacksize=1;
xchoice=zeros(size(x0));
if ~isempty(Aeq)
j=0;
for i=1:size(Aeq,1)
if Beq(i)==1 & all(Aeq(i,:)==0 | Aeq(i,:)==1)
J=find(Aeq(i,:)==1);
if all(xstatus(J)~=0 & xchoice(J)==0 & lb(J)==0 & ub(J)==1)
if all(xstatus(J)~=2) | all(x0(J(find(xstatus(J)==2)))==0)
j=j+1;
xchoice(J)=j;
if sum(x0(J))==0, errmsg='x0 not correct.'; return; end;
end;
end;
end;
end;
end;
errx=optimget(options,'TolX');
handleupdate=[];
if ishandle(settings(2))
taghandlemain=get(settings(2),'Tag');
if strcmp(taghandlemain,'main BNB GUI')
handleupdate=guiupd;
handleupdatemsg=findobj(handleupdate,'Tag','updatemessage');
bnbguicb('hide main');
drawnow;
end;
end;
optionsdisplay=getfield(options,'Display');
if strcmp(optionsdisplay,'iter') | strcmp(optionsdisplay,'final')
show=1;
else show=0; end;
% STEP 2 TERMINATION
while stacksize>0
c=c+1;
% STEP 3 LOADING OF CSP
x0=stackx0(:,stacksize);
lb=stacklb(:,stacksize);
ub=stackub(:,stacksize);
x0(find(x0<lb))=lb(find(x0<lb));
x0(find(x0>ub))=ub(find(x0>ub));
depth=stackdepth(1,stacksize);
stacksize=stacksize-1;
percdone=round(100*(1-sum(0.5.^(stackdepth(1:(stacksize+1))-1))));
% UPDATE FOR USER
if ishandle(handleupdate) & strcmp(get(handleupdate,'Tag'),'update BNB GUI')
t=toc;
updatemsg={ ...
sprintf('searched %3d %% of tree',percdone) ...
sprintf('Z : %12.4e',z_incumbent) ...
sprintf('t : %12.1f secs',t) ...
sprintf('c : %12d cycles',c-1) ...
sprintf('fail : %12d cycles',fail)};
set(handleupdatemsg,'String',updatemsg);
drawnow;
else
disp(sprintf('*** searched %3d %% of tree',percdone));
disp(sprintf('*** Z : %12.4e',z_incumbent));
disp(sprintf('*** t : %12.1f secs',t));
disp(sprintf('*** c : %12d cycles',c-1));
disp(sprintf('*** fail : %12d cycles',fail));
end;
% STEP 4 RELAXATION
[x z convflag]=fmincon(fun,x0,A,B,Aeq,Beq,lb,ub,nonlcon,options,varargin{:});
% STEP 5 FATHOMING
K = find(xstatus==1 & lb~=ub);
separation=1;
if convflag<0 | (convflag==0 & settings(1))
% FC 1
separation=0;
if show, disp('*** branch pruned'); end;
if convflag==0,
fail=fail+1;
if show, disp('*** not convergent'); end;
elseif show, disp('*** not feasible');
end;
elseif z>=z_incumbent & convflag>0
% FC 2
separation=0;
if show
disp('*** branch pruned');
disp('*** ghosted');
end;
elseif all(abs(round(x(K))-x(K))<errx) & convflag>0
% FC 3
z_incumbent = z;
x_incumbent = x;
separation = 0;
if show
disp('*** branch pruned');
disp('*** new best solution found');
end;
end;
% STEP 6 SELECTION
if separation == 1 & ~isempty(K)
dzsep=1;
for i=1:size(K,1)
dxsepc = abs(round(x(K(i)))-x(K(i)));
if dxsepc>=errx | convflag==0
xsepc = x; xsepc(K(i))=round(x(K(i)));
dzsepc = abs(feval(fun,xsepc,varargin{:})-z);
if dzsepc>dzsep
dzsep=dzsepc;
ixsep=K(i);
end;
end;
end;
% STEP 7 SEPARATION
if xchoice(ixsep)==0
% XCHOICE==0
branch=1;
domain=[lb(ixsep) ub(ixsep)];
sepdepth=depth;
while branch==1
xboundary=(domain(1)+domain(2))/2;
if x(ixsep)<xboundary
domainA=[domain(1) floor(xboundary)];
domainB=[floor(xboundary+1) domain(2)];
else
domainA=[floor(xboundary+1) domain(2)];
domainB=[domain(1) floor(xboundary)];
end;
sepdepth=sepdepth+1;
stacksize=stacksize+1;
stackx0(:,stacksize)=x;
stacklb(:,stacksize)=lb;
stacklb(ixsep,stacksize)=domainB(1);
stackub(:,stacksize)=ub;
stackub(ixsep,stacksize)=domainB(2);
stackdepth(1,stacksize)=sepdepth;
if domainA(1)==domainA(2)
stacksize=stacksize+1;
stackx0(:,stacksize)=x;
stacklb(:,stacksize)=lb;
stacklb(ixsep,stacksize)=domainA(1);
stackub(:,stacksize)=ub;
stackub(ixsep,stacksize)=domainA(2);
stackdepth(1,stacksize)=sepdepth;
branch=0;
else
domain=domainA;
branch=1;
end;
end;
else
% XCHOICE~=0
L=find(xchoice==xchoice(ixsep));
M=intersect(K,L);
[dummy,N]=sort(x(M));
part1=M(N(1:floor(size(N)/2))); part2=M(N(floor(size(N)/2)+1:size(N)));
sepdepth=depth+1;
stacksize=stacksize+1;
stackx0(:,stacksize)=x;
O = (1-sum(stackx0(part1,stacksize)))/size(part1,1);
stackx0(part1,stacksize)=stackx0(part1,stacksize)+O;
stacklb(:,stacksize)=lb;
stackub(:,stacksize)=ub;
stackub(part2,stacksize)=0;
stackdepth(1,stacksize)=sepdepth;
stacksize=stacksize+1;
stackx0(:,stacksize)=x;
O = (1-sum(stackx0(part2,stacksize)))/size(part2,1);
stackx0(part2,stacksize)=stackx0(part2,stacksize)+O;
stacklb(:,stacksize)=lb;
stackub(:,stacksize)=ub;
stackub(part1,stacksize)=0;
stackdepth(1,stacksize)=sepdepth;
end;
elseif separation==1 & isempty(K)
fail=fail+1;
if show
disp('*** branch pruned');
disp('*** leaf not convergent');
end;
end;
end;
% STEP 8 OUTPUT
t=toc;
Z = z_incumbent;
X = x_incumbent;
errmsg='';
if ishandle(handleupdate)
taghandleupdate=get(handleupdate,'Tag');
if strcmp(taghandleupdate,'update BNB GUI')
close(handleupdate);
end;
end;
eval(['warning ',currentwarningstate]);
Section 3
% filename : gear_main.m
% Written by : Dinar Deshmukh
% Date : 11 July 2002
% Comments : This file is the main file for solving the general constrained
% nonlinear optimization gear train design problem. The objective
% is to minimize the weight of the gearbox subject to constraints
% listed in Chapter 6. It calls gear_fun.m and gear_con.m
clc;
clear all;
close all;
global kr Cp kms Km Ko phi Ls2 Ls3 Sfe Cr tau_max Kv e Tin dp1 dg1 b1 P1 H1 dp2 dg2 b2 ...
    P2 H2 dp3 b3 P3 H3 ds1 ds2 ks kb kt J1 J2 J3 Cs1 Cs2 Cs3 N1 N2 N3
kr = 0.814; %Bending reliability factor (99.0%)
Cp = 2300; %Elastic coefficient
kms = 1.4; %Mean stress factor
Km = 1.6; %Mounting factor
Ko = 1.0; %Overload factor
phi = pi*20/180; %Pressure angle
Ls2 = 8.0; %Shaft length (N = 2)
Ls3 = 4.0; %Shaft length (N = 3)
Sfe = 190000; %Surface fatigue strength
Cr = 1.0; %Surface reliability factor (99.0%)
tau_max = 25000; %Torsional stress limit - shaft
Kv = 2.0; %Velocity factor
e = 0.1; %Overall speed ratio
Tin = 120; %Input torque
% Lumped Parameters
ks = (4*Cp^2*Kv*Ko*Km*Tin)/(cos(phi)*sin(phi)*Sfe^2*Cr^2);
kb = (2*Tin*Kv*Ko*Km)/(kr*kms);
kt = (16*Tin)/(pi*tau_max);
%x = [dp1 dg1 b1 P1 H1 dp2 dg2 b2 P2 H2 dp3 b3 P3 H3 ds1 ds2] %Design Vector
% x(1)=dp1 %Diameter of pinion of set 1
% x(2)=dg1 %Diameter of gear of set 1
% x(3)=b1 %Width of set 1
% x(4)=P1 %Diametral pitch of set 1
% x(5)=H1 %Brinell Hardness #
% x(6)=dp2 %Diameter of pinion of set 2
% x(7)=dg2 %Diameter of gear of set 2
% x(8)=b2 %Width of set 2
% x(9)=P2 %Diametral pitch of set 2
% x(10)=H2 %Brinell Hardness #
% x(11)=dp3 %Diameter of pinion of set 3
% x(12)=b3 %Width of set 3
% x(13)=P3 %Diametral pitch of set 3
% x(14)=H3 %Brinell Hardness #
% x(15)=ds1 %Shaft Diameter
% x(16)=ds2 %Shaft Diameter
% Initial Guess Vector
x0 = [0.8193 1.6 0.71 19.53 250 1.1404 2.25 0.73 14.03 250 1.3548 1.13 11.81 250 0.5 0.5];
disp('The starting design vector is :');
disp(x0);
% Evaluate the Objective Function at the beginning
f_ini = gear_fun(x0);
disp('The starting objective function is :');
disp(f_ini);
x = x0;
dg3 = x(1)*x(6)*x(11)/(x(2)*x(7)*e);
Ft1 = 2*Tin/x(1);
Ft2 = 2*Tin*x(2)/(x(6)*x(1));
Ft3 = (2*Tin*x(2)*x(7))/(x(11)*x(6)*x(1));
A = [];
b = [];
% No linear inequality constraints
Aeq = [];
beq = [];
% No linear equality constraints
lb = [0 0 0 0 200 0 0 0 0 200 0 0 0 200 0 0];
% Lower bound on design variables
ub = [10 10 20 25 500 10 10 20 25 500 10 20 25 500 10 10];
% Upper bound on design variables
options = optimset('Display','iter'); % Show progress after each iteration
options.MaxFunEvals=1e6;
options.MaxIter=1e3;
x = fmincon('gear_fun', x0, A, b, Aeq, beq, lb, ub, 'gear_con', options);
disp('The optimized design vector is :');
disp(x);
f_fin = gear_fun(x); % Evaluate the Objective Function at the end
disp('The optimal objective function value is :');
disp(f_fin);
f0 = pi/4*((x(1)^2 + x(2)^2)*x(3) + (x(6)^2 + x(7)^2)*x(8) + (1 + ...
    ((x(1)*x(6))/(e*x(2)*x(7)))^2)*x(11)^2*x(12) + (x(15)^2 + x(16)^2)*Ls3);
f1 = ks*(x(1) + x(2))/(x(3)*x(1)^2*x(2));
f2 = ks*((x(6) + x(7))*x(2))/(x(8)*x(6)^2*x(7)*x(1));
f3 = ks*(((x(6)*x(1) + e*x(7)*x(2)))*x(7)*x(2))/(x(12)*(x(11)*x(6)*x(1))^2);
cli = (f1+f2+f3)/3;
disp('The optimal gearbox weight is :');
disp(f0);
disp('The optimal Surface Fatigue Life Factor is :');
disp(cli);
N1 = x(4)*x(1);
N2 = x(9)*x(6);
N3 = x(13)*x(11);
disp('The no. of teeth on pinion for stage 1 are :');
disp(N1);
disp('The no. of teeth on pinion for stage 2 are :');
disp(N2);
disp('The no. of teeth on pinion for stage 3 are :');
disp(N3);
% End
% filename : gear_fun.m
% Written by : Dinar Deshmukh
% Date : 11 July 2002
% Comments : This file calculates the objective function
function f = gear_fun(x)
global e Ls2 Ls3 ks
% Weighing factors for the two objectives
a0 = 1; %Gearbox Volume weighing factor
acli = 1; %Surface Fatigue Life weighing factor
% f0 is the volume of the gearbox
f0 = pi/4*((x(1)^2 + x(2)^2)*x(3) + (x(6)^2 + x(7)^2)*x(8) + (1 + ...
    ((x(1)*x(6))/(e*x(2)*x(7)))^2)*x(11)^2*x(12) + (x(15)^2 + x(16)^2)*Ls3);
% f1, f2, f3 are Surface Fatigue Life Factors
f1 = ks*(x(1) + x(2))/(x(3)*x(1)^2*x(2));
f2 = ks*((x(6) + x(7))*x(2))/(x(8)*x(6)^2*x(7)*x(1));
f3 = ks*(((x(6)*x(1) + e*x(7)*x(2)))*x(7)*x(2))/(x(12)*(x(11)*x(6)*x(1))^2);
% Weighted sum of the objective functions
f = a0*f0 + acli*(f1 + f2 + f3);
% End
% filename : gear_con.m
% Written by : Dinar Deshmukh
% Date : 11 July 2002
% Comments : This file lists the nonlinear constraints
function [g, h] = gear_con(x)
global kb kt e J1 J2 J3 Cs1 Cs2 Cs3 N1 N2 N3 phi
% Number of teeth on pinion at each stage
N1 = x(4)*x(1);
N2 = x(9)*x(6);
N3 = x(13)*x(11);
% Lewis Geometry Factor for Each Stage
J1 = 0.00000021756022*N1^3 - 0.00007902097902*N1^2 ...
    + 0.00793512043512*N1 + 0.22383333333333;
J2 = 0.00000021756022*N2^3 - 0.00007902097902*N2^2 ...
    + 0.00793512043512*N2 + 0.22383333333333;
J3 = 0.00000021756022*N3^3 - 0.00007902097902*N3^2 ...
    + 0.00793512043512*N3 + 0.22383333333333;
% Fatigue Strength Surface Factor
Cs1=0.00083333333333*x(5)+0.93333333333333;
Cs2=0.00083333333333*x(10)+0.93333333333333;
Cs3=0.00083333333333*x(14)+0.93333333333333;
% Inequality Constraints
g(1) = kb*x(4) - x(3)*J1*x(1)*250*x(5)*Cs1;
g(2) = kb*x(2)*x(9) - x(8)*J2*x(6)*x(1)*250*x(10)*Cs2;
g(3) = kb*x(13)*x(7)*x(2) - x(12)*J3*x(11)*x(6)*x(1)*250*x(14)*Cs3;
g(4) = kt*x(2) - x(15)^3*x(1);
g(5) = kt*x(2)*x(7) - x(16)^3*x(1)*x(6);
g(6) = -x(3)*x(4) + 9;
g(7) = -x(8)*x(9) + 9;
g(8) = -x(12)*x(13) + 9;
g(9) = x(3)*x(4) - 14;
g(10) = x(8)*x(9) - 14;
g(11) = x(12)*x(13) - 14;
g(12) = x(4)*x(2) - (sin(phi))^2*x(4)^2*x(1)*(2*x(2) + x(1))/4 + 1;
g(13) = x(9)*x(7) - (sin(phi))^2*x(9)^2*x(6)*(2*x(7) + x(6))/4 + 1;
g(14) = x(13)*x(11)*x(6)*x(1) - (sin(phi))^2*x(13)^2*x(11)^2*(2*x(6)*x(1) + e*x(7)*x(2)) + ...
    e*x(7)*x(2);
g(15) = -x(4)*x(1) + 16;
g(16) = -x(9)*x(6) + 16;
g(17) = -x(13)*x(11) + 16;
% Equality Constraints
h = [] ;
% End
Section 4
% filename : gear_bnb.m
% Written by : Dinar Deshmukh
% Date : 11 July 2002
% Comments : This file is the main file for solving the Compromise DSP and
% Branch and Bound Formulation of the gear design problem. It
% calls bnb.m which is the Branch and Bound program, subject to
% constraints specified in gear_con.m and optimizes the
% objective function in gear_fun.m
clc;
clear all;
close all;
global kr Cp kms Km Ko phi Ls2 Ls3 Sfe Cr tau_max Kv e Tin dp1 dg1 b1 P1 H1 dp2 dg2 b2 ...
    P2 H2 dp3 b3 P3 H3 ds1 ds2 ks kb kt J1 J2 J3 Cs1 Cs2 Cs3 N1 N2 N3
kr = 0.814; %Bending reliability factor (99.0%)
Cp = 2300; %Elastic coefficient
kms = 1.4; %Mean stress factor
Km = 1.6; %Mounting factor
Ko = 1.0; %Overload factor
phi = pi*20/180; %Pressure angle
Ls2 = 8.0; %Shaft length (N = 2)
Ls3 = 4.0; %Shaft length (N = 3)
Sfe = 190000; %Surface fatigue strength
Cr = 1.0; %Surface reliability factor (99.0%)
tau_max = 25000; %Torsional stress limit - shaft
Kv = 2.0; %Velocity factor
e = 0.1; %Overall speed ratio
Tin = 120; %Input torque
% Lumped Parameters
ks = (4*Cp^2*Kv*Ko*Km*Tin)/(cos(phi)*sin(phi)*Sfe^2*Cr^2);
kb = (2*Tin*Kv*Ko*Km)/(kr*kms);
kt = (16*Tin)/(pi*tau_max);
%x = [dp1 dg1 b1 P1 H1 dp2 dg2 b2 P2 H2 dp3 b3 P3 H3 ds1 ds2] %Design Vector
% x(1)=dp1 %Diameter of pinion of set 1
% x(2)=dg1 %Diameter of gear of set 1
% x(3)=b1 %Width of set 1
% x(4)=P1 %Diametral pitch of set 1
% x(5)=H1 %Brinell Hardness #
% x(6)=dp2 %Diameter of pinion of set 2
% x(7)=dg2 %Diameter of gear of set 2
% x(8)=b2 %Width of set 2
% x(9)=P2 %Diametral pitch of set 2
% x(10)=H2 %Brinell Hardness #
% x(11)=dp3 %Diameter of pinion of set 3
% x(12)=b3 %Width of set 3
% x(13)=P3 %Diametral pitch of set 3
% x(14)=H3 %Brinell Hardness #
% x(15)=ds1 %Shaft Diameter
% x(16)=ds2 %Shaft Diameter
x0 = [1.0369; 1.7914; 0.5832; 15.4311; 305.8953; 0.9982; 2.3598; 0.8734; 16.0287; 500.0000; ...
    1.5407; 0.8667; 10.3846; 500.0000; 0.3483; 0.4639]; % Initial Guess Vector
% x_status should be a column vector such that:
% x_status(i) = 0 if x(i) is continuous,
% x_status(i) = 1 if x(i) is integer,
x_status = [ 0; 0; 0; 1; 0; 0; 0; 0; 1; 0; 0; 0; 1; 0; 0; 0 ];
disp('The starting design vector is :');
disp(x0);
f_ini = gear_fun(x0); % Evaluate the Objective Function at the beginning
disp('The starting objective function is :');
disp(f_ini);
x = x0;
dg3 = x(1)*x(6)*x(11)/(x(2)*x(7)*e);
Ft1 = 2*Tin/x(1);
Ft2 = 2*Tin*x(2)/(x(6)*x(1));
Ft3 = (2*Tin*x(2)*x(7))/(x(11)*x(6)*x(1));
A = [];
b = [];
% No linear inequality constraints
Aeq = [];
beq = [];
% No linear equality constraints
lb = [0; 0; 0; 0; 200; 0; 0; 0; 0; 200; 0; 0; 0; 200; 0; 0];
% Lower bound on design variables
ub = [10; 10; 20; 25; 500; 10; 10; 20; 25; 500; 10; 20; 25; 500; 10; 10];
% Upper bound on design variables
options.MaxFunEvals=1e6;
[errmsg,Z,X,t,c,fail] = bnb('gear_fun',x0,x_status,lb,ub,A,b,Aeq,beq,'gear_con');
disp(errmsg);
disp('The optimized design vector is :');
disp(X);
f_fin = gear_fun(X); % Evaluate the Objective Function at the end
disp('The optimal objective function value is :');
disp(f_fin);
disp('The number of Branch and Bound Cycles are :');
disp(c);
disp('The time Branch and Bound Algorithm ran is :');
disp(t);
f0 = pi/4*((X(1)^2 + X(2)^2)*X(3) + (X(6)^2 + X(7)^2)*X(8) + (1 + ...
    ((X(1)*X(6))/(e*X(2)*X(7)))^2)*X(11)^2*X(12) + (X(15)^2 + X(16)^2)*Ls3);
f1 = ks*(X(1) + X(2))/(X(3)*X(1)^2*X(2));
f2 = ks*((X(6) + X(7))*X(2))/(X(8)*X(6)^2*X(7)*X(1));
f3 = ks*(((X(6)*X(1) + e*X(7)*X(2)))*X(7)*X(2))/(X(12)*(X(11)*X(6)*X(1))^2);
cli = (f1+f2+f3)/3;
disp('The optimal gearbox weight is :');
disp(f0);
disp('The optimal Surface Fatigue Life Factor is :');
disp(cli);
N1 = X(4)*X(1);
N2 = X(9)*X(6);
N3 = X(13)*X(11);
disp('The no. of teeth on pinion for stage 1 are :');
disp(N1);
disp('The no. of teeth on pinion for stage 2 are :');
disp(N2);
disp('The no. of teeth on pinion for stage 3 are :');
disp(N3);
% End
% filename : gear_fun.m
% Written by : Dinar Deshmukh
% Date : 11 July 2002
% Comments : This file calculates the objective function
function f = gear_fun(x)
global e Ls2 Ls3 ks
% Weighing factors for the two objectives
a0 = 1; %Gearbox Volume weighing factor
acli = 1; %Surface Fatigue Life weighing factor
% f0 is the volume of the gearbox
f0 = pi/4*((x(1)^2 + x(2)^2)*x(3) + (x(6)^2 + x(7)^2)*x(8) + (1 + ...
    ((x(1)*x(6))/(e*x(2)*x(7)))^2)*x(11)^2*x(12) + (x(15)^2 + x(16)^2)*Ls3);
% f1, f2, f3 are Surface Fatigue Life Factors
f1 = ks*(x(1) + x(2))/(x(3)*x(1)^2*x(2));
f2 = ks*((x(6) + x(7))*x(2))/(x(8)*x(6)^2*x(7)*x(1));
f3 = ks*(((x(6)*x(1) + e*x(7)*x(2)))*x(7)*x(2))/(x(12)*(x(11)*x(6)*x(1))^2);
% Weighted sum of the objective functions
f = a0*f0 + acli*(f1 + f2 + f3);
% End
% filename : gear_con.m
% Written by : Dinar Deshmukh
% Date : 11 July 2002
% Comments : This file lists the nonlinear constraints
function [g, h] = gear_con(x)
global kb kt e J1 J2 J3 Cs1 Cs2 Cs3 N1 N2 N3 phi
% Number of teeth on pinion at each stage
N1 = x(4)*x(1);
N2 = x(9)*x(6);
N3 = x(13)*x(11);
% Lewis Geometry Factor for Each Stage
J1 = 0.00000021756022*N1^3 - 0.00007902097902*N1^2 ...
    + 0.00793512043512*N1 + 0.22383333333333;
J2 = 0.00000021756022*N2^3 - 0.00007902097902*N2^2 ...
    + 0.00793512043512*N2 + 0.22383333333333;
J3 = 0.00000021756022*N3^3 - 0.00007902097902*N3^2 ...
    + 0.00793512043512*N3 + 0.22383333333333;
% Fatigue Strength Surface Factor
Cs1=0.00083333333333*x(5)+0.93333333333333;
Cs2=0.00083333333333*x(10)+0.93333333333333;
Cs3=0.00083333333333*x(14)+0.93333333333333;
% Inequality Constraints
g(1,1) = kb*x(4)/(x(3)*J1*x(1)*250*x(5)*Cs1) - 1;
g(2,1) = kb*x(2)*x(9)/(x(8)*J2*x(6)*x(1)*250*x(10)*Cs2) - 1;
g(3,1) = kb*x(13)*x(7)*x(2)/(x(12)*J3*x(11)*x(6)*x(1)*250*x(14)*Cs3) - 1;
g(4,1) = kt*x(2)/(x(15)^3*x(1)) - 1;
g(5,1) = kt*x(2)*x(7)/(x(16)^3*x(1)*x(6)) - 1;
g(6,1) = 9/(x(3)*x(4)) - 1;
g(7,1) = 9/(x(8)*x(9)) - 1;
g(8,1) = 9/(x(12)*x(13)) - 1;
g(9,1) = x(3)*x(4)/14 - 1;
g(10,1) = x(8)*x(9)/14 - 1;
g(11,1) = x(12)*x(13)/14 - 1;
g(12,1) = x(4)*x(2)/((sin(phi))^2*x(4)^2*x(1)*(2*x(2) + x(1))/4 - 1) - 1;
g(13,1) = x(9)*x(7)/((sin(phi))^2*x(9)^2*x(6)*(2*x(7) + x(6))/4 - 1) - 1;
g(14,1) = x(13)*x(11)*x(6)*x(1)/((sin(phi))^2*x(13)^2*x(11)^2*(2*x(6)*x(1) + e*x(7)*x(2)) - ...
    e*x(7)*x(2)) - 1;
g(15,1) = 16/(x(4)*x(1)) - 1;
g(16,1) = 16/(x(9)*x(6)) - 1;
g(17,1) = 16/(x(13)*x(11)) - 1;
% Equality Constraints
h = [] ;
% End
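The objective file gear_fun.m and the constraint file gear_con.m above are meant to be handed to MATLAB's constrained minimizer. The driver below is a hypothetical sketch only: the global constants, bounds, and 16-element starting point are placeholder values, not the ones used in this thesis.

```matlab
% Hypothetical driver sketch -- all numeric values here are placeholders.
global e Ls2 Ls3 ks kb kt phi
e = 1; Ls2 = 1; Ls3 = 1; ks = 1; kb = 1; kt = 1; phi = 20*pi/180;
x0 = ones(16,1);                        % placeholder starting design (16 variables)
lb = 0.1*ones(16,1); ub = 10*ones(16,1);% placeholder bounds
opts = optimset('Display','final');
% gear_fun returns the weighted-sum objective; gear_con returns [g,h]
[xopt,fopt] = fmincon('gear_fun',x0,[],[],[],[],lb,ub,'gear_con',opts);
```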
function [errmsg,Z,X,t,c,fail] = bnb(fun,x0,xstat,xl,xu,a,b,aeq,beq,nonlc,setts,opts,varargin);
% bnb Finds the constrained minimum of a function of several possibly integer variables.
% Usage: [errmsg,Z,X,t,c,fail] =
% bnb(fun,x0,x_status,lb,ub,A,B,Aeq,Beq,nonlcon,settings,options,P1,P2,...)
%
% BNB solves problems of the form:
% Minimize F(x) subject to: lb <= x <= ub
% A*x <= B Aeq*x=Beq
% C(x)<=0 Ceq(x)=0
% x(i) is continuous for xstatus(i)=0
% x(i) integer for xstatus(i)= 1
% x(i) fixed for xstatus(i)=2
%
%
% fun is the function to be minimized and should return a scalar. F(x)=feval(fun,x).
% x0 is the starting point for x. x0 should be a column vector.
% x_status is a column vector describing the status of every variable x(i).
% lb and ub are column vectors with lower and upper bounds for x.
% A and Aeq are matrices for the linear constraints.
% B and Beq are column vectors for the linear constraints.
% nonlcon is the function for the nonlinear constraints.
% [C(x);Ceq(x)]=feval(nonlcon,x). Both C(x) and Ceq(x) should be column vectors.
%
% errmsg is a string containing an error message if BNB found an error in the input.
% Z is the scalar result of the minimization, X the values of the accompanying variables.
% t is the time elapsed while the algorithm BNB has run and
% c is the number of BNB cycles.
% fail is the number of nonconvergent leaf subproblems.
%
% settings is a row vector with settings for BNB:
% settings(1) (standard 0) if 1: if the subproblem does not converge do not branch it and
% raise fail by one. Normally BNB will always branch a nonconvergent subproblem
% so it can try again to find a solution.
% A subproblem that is a leaf of the branch-and-bound tree cannot be branched. If such
% a problem does not converge it will be considered infeasible and fail will be raised by one.
global maxSQPiter;
% STEP 0 CHECKING INPUT
Z=[]; X=[]; t=0; c=0; fail=0;
if nargin<2, errmsg='BNB needs at least 2 input arguments.'; return; end;
if isempty(fun), errmsg='No fun found.'; return;
elseif ~ischar(fun), errmsg='fun must be a string.'; return; end;
if isempty(x0), errmsg='No x0 found.'; return;
elseif ~isnumeric(x0) | ~isreal(x0) | size(x0,2)>1
errmsg='x0 must be a real column vector.'; return;
end;
xstatus=zeros(size(x0));
if nargin>2 & ~isempty(xstat)
if isnumeric(xstat) & isreal(xstat) & all(size(xstat)<=size(x0))
if all(xstat==round(xstat) & 0<=xstat & xstat<=2)
xstatus(1:size(xstat))=xstat;
else errmsg='xstatus must consist of the integers 0, 1 and 2.'; return; end;
else errmsg='xstatus must be a real column vector the same size as x0.'; return; end;
end;
lb=zeros(size(x0));
lb(find(xstatus==0))=-inf;
if nargin>3 & ~isempty(xl)
if isnumeric(xl) & isreal(xl) & all(size(xl)<=size(x0))
lb(1:size(xl,1))=xl;
else errmsg='lb must be a real column vector the same size as x0.'; return; end;
end;
if any(x0<lb), errmsg='x0 must be in the range lb <= x0.'; return;
elseif any(xstatus==1 & (~isfinite(lb) | lb~=round(lb)))
errmsg='lb(i) must be an integer if x(i) is an integer variable.'; return;
end;
lb(find(xstatus==2))=x0(find(xstatus==2));
ub=ones(size(x0));
ub(find(xstatus==0))=inf;
if nargin>4 & ~isempty(xu)
if isnumeric(xu) & isreal(xu) & all(size(xu)<=size(x0))
ub(1:size(xu,1))=xu;
else errmsg='ub must be a real column vector the same size as x0.'; return; end;
end;
if any(x0>ub), errmsg='x0 must be in the range x0 <=ub.'; return;
elseif any(xstatus==1 & (~isfinite(ub) | ub~=round(ub)))
errmsg='ub(i) must be an integer if x(i) is an integer variable.'; return;
end;
ub(find(xstatus==2))=x0(find(xstatus==2));
A=[];
if nargin>5 & ~isempty(a)
if isnumeric(a) & isreal(a) & size(a,2)==size(x0,1), A=a;
else errmsg='Matrix A not correct.'; return; end;
end;
B=[];
if nargin>6 & ~isempty(b)
if isnumeric(b) & isreal(b) & all(size(b)==[size(A,1) 1]), B=b;
else errmsg='Column vector B not correct.'; return; end;
end;
if isempty(B) & ~isempty(A), B=zeros(size(A,1),1); end;
Aeq=[];
if nargin>7 & ~isempty(aeq)
if isnumeric(aeq) & isreal(aeq) & size(aeq,2)==size(x0,1), Aeq=aeq;
else errmsg='Matrix Aeq not correct.'; return; end;
end;
Beq=[];
if nargin>8 & ~isempty(beq)
if isnumeric(beq) & isreal(beq) & all(size(beq)==[size(Aeq,1) 1]), Beq=beq;
else errmsg='Column vector Beq not correct.'; return; end;
end;
if isempty(Beq) & ~isempty(Aeq), Beq=zeros(size(Aeq,1),1); end;
nonlcon='';
if nargin>9 & ~isempty(nonlc)
if ischar(nonlc), nonlcon=nonlc;
else errmsg='nonlcon must be a string.'; return; end;
end;
settings = [0 0];
if nargin>10 & ~isempty(setts)
if isnumeric(setts) & isreal(setts) & all(size(setts)<=size(settings))
settings(setts~=0)=setts(setts~=0);
else errmsg='settings should be a row vector of length 1 or 2.'; return; end;
end;
maxSQPiter=1000;
%
% 10/17/01 KMS: BEGIN
%
% options=optimset('fmincon');
options = optimset( optimset('fmincon'), 'MaxSQPIter', 1000);
%
% 10/17/01 KMS: END
%
if nargin>11 & ~isempty(opts)
if isstruct(opts)
if isfield(opts,'MaxSQPIter')
if isnumeric(opts.MaxSQPIter) & isreal(opts.MaxSQPIter) & ...
all(size(opts.MaxSQPIter)==1) & opts.MaxSQPIter>0 & ...
round(opts.MaxSQPIter)==opts.MaxSQPIter
maxSQPiter=opts.MaxSQPIter;
opts=rmfield(opts,'MaxSQPIter');
else errmsg='options.maxSQPiter must be an integer >0.'; return; end;
end;
options=optimset(options,opts);
else errmsg='options must be a structure.'; return; end;
end;
evalreturn=0;
eval(['z=',fun,'(x0,varargin{:});'],'errmsg=''fun caused error.''; evalreturn=1;');
if evalreturn==1, return; end;
if ~isempty(nonlcon)
eval(['[C, Ceq]=',nonlcon,'(x0,varargin{:});'],'errmsg=''nonlcon caused error.''; evalreturn=1;');
if evalreturn==1, return; end;
if size(C,2)>1 | size(Ceq,2)>1, errmsg='C and Ceq must be column vectors.'; return; end;
end;
% STEP 1 INITIALIZATION
currentwarningstate=warning;
warning off;
tic;
lx = size(x0,1);
z_incumbent=inf;
x_incumbent=inf*ones(size(x0));
I = ceil(sum(log2(ub(find(xstatus==1))-lb(find(xstatus==1))+1))+size(find(xstatus==1),1)+1);
stackx0=zeros(lx,I);
stackx0(:,1)=x0;
stacklb=zeros(lx,I);
stacklb(:,1)=lb;
stackub=zeros(lx,I);
stackub(:,1)=ub;
stackdepth=zeros(1,I);
stackdepth(1,1)=1;
stacksize=1;
xchoice=zeros(size(x0));
if ~isempty(Aeq)
j=0;
for i=1:size(Aeq,1)
if Beq(i)==1 & all(Aeq(i,:)==0 | Aeq(i,:)==1)
J=find(Aeq(i,:)==1);
if all(xstatus(J)~=0 & xchoice(J)==0 & lb(J)==0 & ub(J)==1)
if all(xstatus(J)~=2) | all(x0(J(find(xstatus(J)==2)))==0)
j=j+1;
xchoice(J)=j;
if sum(x0(J))==0, errmsg='x0 not correct.'; return; end;
end;
end;
end;
end;
end;
errx=optimget(options,'TolX');
handleupdate=[];
if ishandle(settings(2))
taghandlemain=get(settings(2),'Tag');
if strcmp(taghandlemain,'main BNB GUI')
handleupdate=guiupd;
handleupdatemsg=findobj(handleupdate,'Tag','updatemessage');
bnbguicb('hide main');
drawnow;
end;
end;
optionsdisplay=getfield(options,'Display');
if strcmp(optionsdisplay,'iter') | strcmp(optionsdisplay,'final')
show=1;
else show=0; end;
% STEP 2 TERMINATION
while stacksize>0
c=c+1;
% STEP 3 LOADING OF CSP
x0=stackx0(:,stacksize);
lb=stacklb(:,stacksize);
ub=stackub(:,stacksize);
x0(find(x0<lb))=lb(find(x0<lb));
x0(find(x0>ub))=ub(find(x0>ub));
depth=stackdepth(1,stacksize);
stacksize=stacksize-1;
percdone=round(100*(1-sum(0.5.^(stackdepth(1:(stacksize+1))-1))));
% UPDATE FOR USER
if ishandle(handleupdate) & strcmp(get(handleupdate,'Tag'),'update BNB GUI')
t=toc;
updatemsg={ ...
sprintf('searched %3d %% of tree',percdone) ...
sprintf('Z : %12.4e',z_incumbent) ...
sprintf('t : %12.1f secs',t) ...
sprintf('c : %12d cycles',c-1) ...
sprintf('fail : %12d cycles',fail)};
set(handleupdatemsg,'String',updatemsg);
drawnow;
else
disp(sprintf('*** searched %3d %% of tree',percdone));
disp(sprintf('*** Z : %12.4e',z_incumbent));
disp(sprintf('*** t : %12.1f secs',t));
disp(sprintf('*** c : %12d cycles',c-1));
disp(sprintf('*** fail : %12d cycles',fail));
end;
% STEP 4 RELAXATION
[x z convflag]=fmincon(fun,x0,A,B,Aeq,Beq,lb,ub,nonlcon,options,varargin{:});
% STEP 5 FATHOMING
K = find(xstatus==1 & lb~=ub);
separation=1;
if convflag<0 | (convflag==0 & settings(1))
% FC 1
separation=0;
if show, disp('*** branch pruned'); end;
if convflag==0,
fail=fail+1;
if show, disp('*** not convergent'); end;
elseif show, disp('*** not feasible');
end;
elseif z>=z_incumbent & convflag>0
% FC 2
separation=0;
if show
disp('*** branch pruned');
disp('*** ghosted');
end;
elseif all(abs(round(x(K))-x(K))<errx) & convflag>0
% FC 3
z_incumbent = z;
x_incumbent = x;
separation = 0;
if show
disp('*** branch pruned');
disp('*** new best solution found');
end;
end;
% STEP 6 SELECTION
if separation == 1 & ~isempty(K)
dzsep=-1;
for i=1:size(K,1)
dxsepc = abs(round(x(K(i)))-x(K(i)));
if dxsepc>=errx | convflag==0
xsepc = x; xsepc(K(i))=round(x(K(i)));
dzsepc = abs(feval(fun,xsepc,varargin{:})-z);
if dzsepc>dzsep
dzsep=dzsepc;
ixsep=K(i);
end;
end;
end;
% STEP 7 SEPARATION
if xchoice(ixsep)==0
% XCHOICE==0
branch=1;
domain=[lb(ixsep) ub(ixsep)];
sepdepth=depth;
while branch==1
xboundary=(domain(1)+domain(2))/2;
if x(ixsep)<xboundary
domainA=[domain(1) floor(xboundary)];
domainB=[floor(xboundary+1) domain(2)];
else
domainA=[floor(xboundary+1) domain(2)];
domainB=[domain(1) floor(xboundary)];
end;
sepdepth=sepdepth+1;
stacksize=stacksize+1;
stackx0(:,stacksize)=x;
stacklb(:,stacksize)=lb;
stacklb(ixsep,stacksize)=domainB(1);
stackub(:,stacksize)=ub;
stackub(ixsep,stacksize)=domainB(2);
stackdepth(1,stacksize)=sepdepth;
if domainA(1)==domainA(2)
stacksize=stacksize+1;
stackx0(:,stacksize)=x;
stacklb(:,stacksize)=lb;
stacklb(ixsep,stacksize)=domainA(1);
stackub(:,stacksize)=ub;
stackub(ixsep,stacksize)=domainA(2);
stackdepth(1,stacksize)=sepdepth;
branch=0;
else
domain=domainA;
branch=1;
end;
end;
else
% XCHOICE~=0
L=find(xchoice==xchoice(ixsep));
M=intersect(K,L);
[dummy,N]=sort(x(M));
part1=M(N(1:floor(size(N)/2))); part2=M(N(floor(size(N)/2)+1:size(N)));
sepdepth=depth+1;
stacksize=stacksize+1;
stackx0(:,stacksize)=x;
O = (1-sum(stackx0(part1,stacksize)))/size(part1,1);
stackx0(part1,stacksize)=stackx0(part1,stacksize)+O;
stacklb(:,stacksize)=lb;
stackub(:,stacksize)=ub;
stackub(part2,stacksize)=0;
stackdepth(1,stacksize)=sepdepth;
stacksize=stacksize+1;
stackx0(:,stacksize)=x;
O = (1-sum(stackx0(part2,stacksize)))/size(part2,1);
stackx0(part2,stacksize)=stackx0(part2,stacksize)+O;
stacklb(:,stacksize)=lb;
stackub(:,stacksize)=ub;
stackub(part1,stacksize)=0;
stackdepth(1,stacksize)=sepdepth;
end;
elseif separation==1 & isempty(K)
fail=fail+1;
if show
disp('*** branch pruned');
disp('*** leaf not convergent');
end;
end;
end;
% STEP 8 OUTPUT
t=toc;
Z = z_incumbent;
X = x_incumbent;
errmsg='';
if ishandle(handleupdate)
taghandleupdate=get(handleupdate,'Tag');
if strcmp(taghandleupdate,'update BNB GUI')
close(handleupdate);
end;
end;
eval(['warning ',currentwarningstate]);
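Based on the usage described in bnb's header comment, a call mixing continuous and integer variables might look like the sketch below. The objective name 'myfun', the status vector, and the bounds are illustrative placeholders, not values from the thesis.

```matlab
% Illustrative call to bnb -- 'myfun' is a hypothetical user-supplied
% objective returning a scalar; starting point and bounds are placeholders.
x0    = [0.5; 2];        % starting point (column vector)
xstat = [0; 1];          % x(1) continuous, x(2) integer
lb    = [0; 0];
ub    = [10; 10];
[errmsg,Z,X,t,c,fail] = bnb('myfun',x0,xstat,lb,ub);
if isempty(errmsg)
    disp(Z);             % best objective value found
    disp(X);             % accompanying variable values
else
    disp(errmsg);        % input error reported by bnb
end
```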