Short Term Course on
"Advanced Optimization Techniques (AOT-15)"
May 18-22, 2015
MNIT, Jaipur

Day 1 : (18-May-2015)

Short Term Course on Advanced Optimization Techniques
What optimization is all about
• OPTIMIZATION is the use of specific methods to determine the most cost-effective and efficient solution to a problem or design for a process.
• This technique is one of the major quantitative tools in industrial decision making.
Why optimize?
• To improve the initial design of equipment
• To enhance the operation of that equipment
• To realize the largest production
• The greatest profit
• The least energy usage, and so on
Types of optimization problems
Optimization problems can be classified as:
• Linear problems vs. non-linear problems
• Problems without constraints vs. problems with constraints

Can be solved by:
• Graphical method
• Simplex method
• Steepest descent
LP MODEL COMPONENTS AND
FORMULATION
• Decision variables – variables representing levels of activity of a firm.
• Objective function – a linear mathematical relationship describing an objective of the firm, in terms of the decision variables, that is to be maximized or minimized.
LP has been applied extensively to problem areas such as medical, transportation, operations, financial, marketing, accounting, human resources, and agriculture.

Development and solution of all LP models can be examined in a four-step process, ending with:
(3) solution
(4) interpretation
BASIC STEPS OF DEVELOPING A LP MODEL
Formulation
– The process of translating the problem scenario into a simple LP model framework with a set of mathematical relationships.

Solution
– The mathematical relationships resulting from the formulation process are solved to identify the optimal solution.
PROBLEM DEFINITION:
FERTILIZER MIX EXAMPLE (1 of 7)
Two brands of fertilizer are available – Super-Gro and Crop-Quick.
The field requires at least 16 pounds of nitrogen and 24 pounds of phosphate.
Super-Gro costs $6 per bag, Crop-Quick $3 per bag.
Problem: How much of each brand to purchase to minimize the total cost of fertilizer, given the following data?

Chemical Contribution
Brand        Nitrogen (lb/bag)   Phosphate (lb/bag)
Super-Gro           2                    4
Crop-Quick          4                    3
PROBLEM DEFINITION:
FERTILIZER MIX EXAMPLE (2 of 7)
Decision Variables:
x1 = bags of Super-Gro
x2 = bags of Crop-Quick

The Objective Function:
Minimize Z = $6x1 + $3x2

Model Constraints:
FERTILIZER MIX EXAMPLE (3 of 7)

Minimize Z = $6x1 + $3x2
subject to:  2x1 + 4x2 ≥ 16   (nitrogen)
             4x1 + 3x2 ≥ 24   (phosphate)
             x1, x2 ≥ 0
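As a cross-check on the graphical solution, the corner points of this small feasible region can be enumerated numerically. The pure-Python sketch below is my own illustration (not part of the course material); it intersects constraint boundaries pairwise and keeps the cheapest feasible point:

```python
from itertools import combinations

# Minimize Z = 6*x1 + 3*x2 subject to
#   2*x1 + 4*x2 >= 16   (nitrogen)
#   4*x1 + 3*x2 >= 24   (phosphate)
# Constraints stored as (a, b, r) meaning a*x1 + b*x2 >= r,
# with non-negativity included as (1,0,0) and (0,1,0).
cons = [(2, 4, 16), (4, 3, 24), (1, 0, 0), (0, 1, 0)]

def intersect(c1, c2):
    """Corner candidate: intersection of two constraint boundary lines."""
    a1, b1, r1 = c1
    a2, b2, r2 = c2
    det = a1 * b2 - a2 * b1
    if det == 0:
        return None          # parallel boundaries: no corner
    return ((r1 * b2 - r2 * b1) / det, (a1 * r2 - a2 * r1) / det)

def feasible(pt):
    return all(a * pt[0] + b * pt[1] >= r - 1e-9 for a, b, r in cons)

vertices = [p for c1, c2 in combinations(cons, 2)
            if (p := intersect(c1, c2)) and feasible(p)]
best = min(vertices, key=lambda p: 6 * p[0] + 3 * p[1])
print(best, 6 * best[0] + 3 * best[1])  # optimal corner: x1 = 0, x2 = 8, Z = $24
```

Because an LP optimum always lies at a corner of the feasible region, checking every corner is a valid (if brute-force) solution method for two-variable problems.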
• For some linear programming models, the general rules do not apply.
• Special types of problems include those with:
  – Redundancy
  – Infeasible solutions
  – Unbounded solutions
Example (redundant constraint):
Maximize Profit = 2X + 3Y
subject to:
X + Y ≤ 20
2X + Y ≤ 30
X ≤ 25
X, Y ≥ 0
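Redundancy can be confirmed numerically: dropping X ≤ 25 leaves the optimum unchanged. The vertex-enumeration sketch below is my own illustration, not from the slides:

```python
from itertools import combinations

# Maximize 2X + 3Y s.t. X+Y <= 20, 2X+Y <= 30, X <= 25, X,Y >= 0.
# Constraints stored as (a, b, r) meaning a*X + b*Y <= r;
# non-negativity written as -X <= 0 and -Y <= 0.
def solve(cons):
    """Enumerate boundary intersections, keep feasible ones, maximize 2X+3Y."""
    def meet(c1, c2):
        (a1, b1, r1), (a2, b2, r2) = c1, c2
        det = a1 * b2 - a2 * b1
        if det == 0:
            return None
        return ((r1 * b2 - r2 * b1) / det, (a1 * r2 - a2 * r1) / det)
    pts = [p for c1, c2 in combinations(cons, 2)
           if (p := meet(c1, c2)) is not None
           and all(a * p[0] + b * p[1] <= r + 1e-9 for a, b, r in cons)]
    return max(2 * p[0] + 3 * p[1] for p in pts)

full = [(1, 1, 20), (2, 1, 30), (1, 0, 25), (-1, 0, 0), (0, -1, 0)]
trimmed = [c for c in full if c != (1, 0, 25)]   # drop X <= 25
print(solve(full), solve(trimmed))  # → 60.0 60.0: the constraint is redundant
```

The constraint X ≤ 25 never binds because X + Y ≤ 20 already forces X ≤ 20, so the feasible region (and hence the optimum at X = 0, Y = 20) is identical with or without it.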
Infeasibility: A condition that arises when an LP
problem has no solution that satisfies all of its
constraints.
X + 2Y ≤ 6
2X + Y ≤ 8
X ≥ 7
Unboundedness: Sometimes an LP model
will not have a finite solution
Maximize profit = $3X + $5Y
subject to:
X ≥ 5
Y ≤ 10
X + 2Y ≥ 10
X, Y ≥ 0
MULTIPLE OPTIMAL SOLUTIONS
• An LP problem may have more than one optimal solution.
  – Graphically, this occurs when the isoprofit (or isocost) line runs parallel to a constraint in the problem.
Maximize profit = $3X + $2Y
subject to:
6X + 4Y ≤ 24
X ≤ 3
X, Y ≥ 0
EXAMPLE: MULTIPLE OPTIMAL
SOLUTIONS
• Starting point: a set of equations (the objective function along with the equality constraints of the problem in canonical form).
• Convert to a minimization problem.
• Convert the signs of the inequalities to get nonnegative values of bi.
Example: Maximize F = x1 + 2x2 + x3
subject to
2x1 + x2 − x3 ≤ 2
−2x1 + x2 − 5x3 ≥ −6
4x1 + x2 + x3 ≤ 6
xi ≥ 0, i = 1, 2, 3

Solution
Change the sign of the objective function to convert it to a minimization problem, and the signs of the inequalities (where necessary) so as to obtain nonnegative values of bi:

Minimize f = −x1 − 2x2 − x3
subject to
2x1 + x2 − x3 ≤ 2
2x1 − x2 + 5x3 ≤ 6
4x1 + x2 + x3 ≤ 6
xi ≥ 0, i = 1 to 3
Introduce slack variables x4 ≥ 0, x5 ≥ 0, and x6 ≥ 0.
Canonical form:
2x1 + x2 − x3 + x4 = 2
2x1 − x2 + 5x3 + x5 = 6
4x1 + x2 + x3 + x6 = 6
−x1 − 2x2 − x3 − f = 0
with x4, x5, x6, and −f treated as basic variables.

Basic solution:
x4 = 2, x5 = 6, x6 = 6 (basic variables)
x1 = x2 = x3 = 0 (nonbasic variables)
f = 0

Since c″1 = −1, c″2 = −2, c″3 = −1, this is not optimum. Thus x2 (the most negative c″) enters the next basic set; a pivot operation gives the new canonical form.
Basic                                                       bi″/ais″
variables    x1    x2    x3    x4    x5    x6   −f    bi″   for ais″>0
x4            2     1    −1     1     0     0    0     2      2 ← smaller
x5            2    −1     5     0     1     0    0     6
x6            4     1     1     0     0     1    0     6      6
−f           −1    −2    −1     0     0     0    1     0
                    ↑ most negative c″ (x2 enters the basis, x4 leaves)

Result of pivoting:
x2            2     1    −1     1     0     0    0     2
x5            4     0     4     1     1     0    0     8      2
x6            2     0     2    −1     0     1    0     4      2
−f            3     0    −3     2     0     0    1     4
                          ↑ most negative c″ (x3 enters the next basis, x5 leaves)

Result of pivoting:
x2            3     1     0    5/4   1/4    0    0     4
x3            1     0     1    1/4   1/4    0    0     2
x6            0     0     0   −3/2  −1/2    1    0     0
−f            6     0     0   11/4   3/4    0    1    10

Since all ci″ ≥ 0, the present solution is optimum: x1 = 0, x2 = 4, x3 = 2, with fmin = −10 (that is, Fmax = 10).
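The pivoting procedure above can be mechanized. The compact tableau implementation below is my own sketch (not course code) for problems of the form min c·x subject to Ax ≤ b, x ≥ 0 with b ≥ 0; it reproduces the optimum of this example:

```python
def simplex(c, A, b):
    """Minimize c·x subject to A x <= b, x >= 0 (all b_i >= 0).
    Returns (fmin, x) at the optimum, or None if the problem is unbounded."""
    m, n = len(A), len(c)
    # Tableau rows: [A | I | b]; the slack variables form the starting basis.
    T = [list(map(float, A[i])) + [1.0 if j == i else 0.0 for j in range(m)]
         + [float(b[i])] for i in range(m)]
    cost = list(map(float, c)) + [0.0] * m + [0.0]   # relative costs c'' and -f
    basis = list(range(n, n + m))
    while True:
        s = min(range(n + m), key=lambda j: cost[j])  # most negative c''
        if cost[s] >= -1e-9:                          # all c'' >= 0: optimum
            x = [0.0] * (n + m)
            for i, bv in enumerate(basis):
                x[bv] = T[i][-1]
            return -cost[-1], x[:n]                   # cost[-1] equals -f here
        rows = [i for i in range(m) if T[i][s] > 1e-9]
        if not rows:                                  # entering column <= 0
            return None                               # => unbounded problem
        r = min(rows, key=lambda i: T[i][-1] / T[i][s])   # min-ratio rule
        piv = T[r][s]
        T[r] = [v / piv for v in T[r]]                # normalize pivot row
        for i in range(m):
            if i != r:
                T[i] = [v - T[i][s] * w for v, w in zip(T[i], T[r])]
        cost = [v - cost[s] * w for v, w in zip(cost, T[r])]
        basis[r] = s

# The example from these slides: min f = -x1 - 2x2 - x3.
f, x = simplex([-1.0, -2.0, -1.0],
               [[2, 1, -1], [2, -1, 5], [4, 1, 1]],
               [2, 6, 6])
print(f, x)  # → -10.0 [0.0, 4.0, 2.0]
```

The same routine signals unboundedness by returning None whenever an entering column has no positive entry, which is exactly the situation in the unbounded example that follows.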
Unbounded Solution

Example: Minimize f = −3x1 − 2x2
subject to
x1 − x2 ≤ 1
3x1 − 2x2 ≤ 6
x1 ≥ 0, x2 ≥ 0

Solution: Introducing slack variables x3 ≥ 0 and x4 ≥ 0,
x1 − x2 + x3 = 1
3x1 − 2x2 + x4 = 6
−3x1 − 2x2 − f = 0

x3 = 1, x4 = 6 (basic variables)
x1 = x2 = 0 (nonbasic variables)
f = 0
Basic                                     bi″/ais″
variables    x1    x2    x3    x4   −f    bi″   for ais″>0
x3            1    −1     1     0    0     1      1 ← smallest
x4            3    −2     0     1    0     6      2
−f           −3    −2     0     0    1     0
              ↑ most negative

Result of pivoting (x1 enters, x3 leaves the basis):
x1            1    −1     1     0    0     1
x4            0     1    −3     1    0     3      3 ← smallest
−f            0    −5     3     0    1     3
                    ↑ most negative

Result of pivoting (x2 enters, x4 leaves the basis):
x1            1     0    −2     1    0     4
x2            0     1    −3     1    0     3
−f            0     0   −12     5    1    18

Now c″3 = −12 is negative, so x3 should enter the basis, but every coefficient in the x3 column is negative. The value of f can be decreased indefinitely without violating any of the constraints if we bring x3 into the basis:
NO BOUNDED SOLUTION.
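The unboundedness can also be seen directly: the ray x = t·(2, 3) stays feasible for every t ≥ 0 while f decreases without limit. A small numerical check (my own illustration, not from the slides):

```python
# Feasible ray for the example: x(t) = (2t, 3t), t >= 0.
# Both constraints stay satisfied while f = -3*x1 - 2*x2 -> -infinity.
def f(x1, x2):
    return -3 * x1 - 2 * x2

def feasible(x1, x2):
    return x1 - x2 <= 1 and 3 * x1 - 2 * x2 <= 6 and x1 >= 0 and x2 >= 0

vals = []
for t in [0, 10, 100, 1000]:
    x1, x2 = 2 * t, 3 * t
    assert feasible(x1, x2)       # the whole ray lies in the feasible region
    vals.append(f(x1, x2))
print(vals)  # → [0, -120, -1200, -12000]: f decreases without bound
```

Along this ray x1 − x2 = −t ≤ 1 and 3x1 − 2x2 = 0 ≤ 6 for all t, which is precisely what "no bounded solution" means geometrically.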
Infinite Number of Solutions

Example: Minimize f = −40x1 − 100x2
subject to
10x1 + 5x2 ≤ 2500
4x1 + 10x2 ≤ 2000
2x1 + 3x2 ≤ 900
x1 ≥ 0, x2 ≥ 0

Solution: Introducing slack variables x3 ≥ 0, x4 ≥ 0 and x5 ≥ 0,
10x1 + 5x2 + x3 = 2500
4x1 + 10x2 + x4 = 2000
2x1 + 3x2 + x5 = 900
−40x1 − 100x2 − f = 0
Basic                                             bi″/ais″
variables    x1     x2    x3    x4    x5   −f    bi″     for ais″>0
x3           10      5     1     0     0    0    2,500     500
x4            4     10     0     1     0    0    2,000     200 ← smallest
x5            2      3     0     0     1    0      900     300
−f          −40   −100     0     0     0    1        0
                    ↑ most negative

Result of pivoting (x2 enters, x4 leaves the basis):
x3            8      0     1   −1/2    0    0    1,500
x2          2/5      1     0   1/10    0    0      200
x5          4/5      0     0  −3/10    1    0      300
−f            0      0     0    10     0    1   20,000

Since all ci″ ≥ 0, this solution is optimum with fmin = −20,000. However, c1″ = 0 for the nonbasic variable x1, so an alternative optimum exists; bringing x1 into the basis gives:

x1            1      0    1/8  −1/16   0    0   1500/8
x2            0      1  −1/20   1/8    0    0      125
x5            0      0  −1/10  −1/4    1    0      150
−f            0      0     0    10     0    1   20,000

The solution corresponding to this canonical form is x1 = 1500/8, x2 = 125, x5 = 150, with fmin = −20,000. Thus the value of f has not changed compared to the preceding value. The two optimal basic solutions are

X1 = (0, 200, 1500, 0, 300)ᵀ  and  X2 = (1500/8, 125, 0, 0, 150)ᵀ

and every point on the line segment joining them is also optimal:

X* = λX1 + (1 − λ)X2, 0 ≤ λ ≤ 1, i.e.
x1* = (1500/8)(1 − λ)
x2* = 200λ + 125(1 − λ) = 125 + 75λ
x3* = 1500λ
x4* = 0
x5* = 150 + 150λ
TWO PHASES OF THE SIMPLEX METHOD
Used when an initial feasible canonical form is not readily available (the LP problem does not have slack variables for some of the equations, or the slack variables have negative coefficients).

Phase I: the simplex algorithm is used to find whether a feasible solution exists.
Phase II: starting from the result of Phase I, the simplex algorithm is used to find the feasible solution that is optimal.
Example: Minimize f = 2x1 + 3x2 + 2x3 − x4 + x5
subject to the constraints
3x1 − 3x2 + 4x3 + 2x4 − x5 = 0
x1 + x2 + x3 + 3x4 + x5 = 2
xi ≥ 0, i = 1 to 5

Solution: Introducing the artificial variables y1 ≥ 0 and y2 ≥ 0, the equations can be written as follows:
3x1 − 3x2 + 4x3 + 2x4 − x5 + y1 = 0
x1 + x2 + x3 + 3x4 + x5 + y2 = 2
2x1 + 3x2 + 2x3 − x4 + x5 − f = 0

The infeasibility form w is defined as
w = y1 + y2
Phase I:

Basic       Artificial                                         bi″/ais″
variables    x1    x2    x3    x4    x5    y1    y2    bi″     for ais″>0
y1            3    −3     4     2    −1     1     0     0        0 ← smallest
y2            1     1     1     3     1     0     1     2        2/3
−f            2     3     2    −1     1     0     0     0
−w           −4     2    −5    −5     0     0     0    −2
                          ↑ most negative ↑

Since there is a tie between d″3 and d″4, d″4 is selected arbitrarily as the most negative: x4 enters the basis and y1 (smallest ratio) leaves.

Result of pivoting:
x4          3/2  −3/2     2     1   −1/2   1/2     0     0
y2         −7/2  11/2    −5     0    5/2  −3/2     1     2       4/11
−f          7/2   3/2     4     0    1/2   1/2     0     0
−w          7/2 −11/2     5     0   −5/2   5/2     0    −2
                  ↑ most negative (x2 enters, y2 leaves)

Result of pivoting (since y1 and y2 are dropped from the basis, the columns corresponding to them need not be filled):
x4         6/11     0   7/11    1   2/11   (dropped)    6/11
x2        −7/11     1 −10/11    0   5/11                4/11

The present basic feasible solution does not contain any of the artificial variables y1 and y2, and the value of w is reduced to 0: Phase I is completed.

Phase II: dropping the −w row,

Basic                                             bi″/ais″
variables    x1     x2     x3    x4    x5    bi″    for ais″>0
x4          6/11     0    7/11    1   2/11   6/11     3
x2         −7/11     1  −10/11    0   5/11   4/11     4/5 ← smallest
−f         98/22     0  118/22    0  −4/22  −6/11
                                      ↑ most negative (x5 enters, x2 leaves)

Result of pivoting:
x4          4/5   −2/5     1      1    0     2/5
x5         −7/5   11/5    −2      0    1     4/5
−f         21/5    2/5     5      0    0    −2/5

Now, since all ci″ are nonnegative, Phase II is completed. The (unique) optimal solution is
x1 = x2 = x3 = 0, x4 = 2/5, x5 = 4/5, with fmin = 2/5.
THE BIG M METHOD

• The artificial variables are assigned so large a penalty in the objective function that keeping them in the solution is so unprofitable that any feasible solution to the real problem would be preferred... unless the original instance possessed no feasible solutions at all.
• We need to assign, in the objective function, coefficients to the artificial variables that are either very small (maximization problem) or very large (minimization problem); whatever this value, let us call it Big M.
• Indeed, the penalty is so costly that unless any of the respective variables' inclusion is warranted algorithmically, such variables will never appear in the final optimal solution.
Example: Minimize Z = 600X1 + 500X2
subject to
2X1 + X2 ≥ 80
X1 + 2X2 ≥ 60
X1, X2 ≥ 0

Step 1: Convert the LP problem into a system of linear equations.
We do this by rewriting the constraint inequalities as equations: we subtract new surplus variables, add artificial variables, and assign them 0 and +M coefficients respectively in the objective function, as shown below.

So the objective function would be:
Z = 600X1 + 500X2 + 0·S1 + 0·S2 + M·A1 + M·A2
subject to the constraints
2X1 + X2 − S1 + A1 = 80
X1 + 2X2 − S2 + A2 = 60
X1, X2, S1, S2, A1, A2 ≥ 0
Step 2: Obtain a basic solution to the problem.
We do this by putting the decision variables X1 = X2 = S1 = S2 = 0, so that A1 = 80 and A2 = 60. These are the initial values of the artificial variables.
Step 3: Form the initial tableau as shown.

             Cj →       600    500     0      0      M      M
CB    Basic   Basic
      variable Sol'n     X1     X2     S1     S2     A1     A2    Ratio
M     A1       80         2      1    −1      0      1      0      80
M     A2       60         1      2     0     −1      0      1      30

      Zj                 3M     3M    −M     −M      M      M
      Cj − Zj        600−3M 500−3M     M      M      0      0

It is clear from the tableau that X2 will enter and A2 will leave the basis. Hence 2 is the key element in the pivotal column.
Now, the new row operations are as follows:
R2(New) = R2(Old)/2
R1(New) = R1(Old) − 1 × R2(New)

             Cj →       600    500     0      0      M
CB    Basic   Basic
      variable Sol'n     X1     X2     S1     S2     A1    Ratio
M     A1       50        3/2     0    −1     1/2     1     100/3
500   X2       30        1/2     1     0    −1/2     0      60

Now X1 will enter and A1 will leave the basis; hence 3/2 is the key element in the pivotal column.
Now, the new row operations are as follows:
R1(New) = R1(Old) × 2/3
R2(New) = R2(Old) − (1/2) × R1(New)

             Cj →       600    500     0      0
CB    Basic   Basic
      variable Sol'n     X1     X2     S1     S2
600   X1      100/3       1      0    −2/3    1/3
500   X2       40/3       0      1     1/3   −2/3

Since all Cj − Zj ≥ 0 and both the artificial variables have been removed, an optimum solution has been arrived at, with X1 = 100/3, X2 = 40/3 and Z = 80,000/3.
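The optimum reported above can be verified independently of the Big M tableau by comparing the corner points of the feasible region exactly. This cross-check is my own sketch, not part of the slides:

```python
from fractions import Fraction as F

# Minimize 600*X1 + 500*X2 s.t. 2X1 + X2 >= 80, X1 + 2X2 >= 60, X >= 0.
def cost(x1, x2):
    return 600 * x1 + 500 * x2

# Intersection of 2X1 + X2 = 80 and X1 + 2X2 = 60, solved exactly:
# doubling the first and subtracting the second gives 3X1 = 100, and
# doubling the second and subtracting the first gives 3X2 = 40.
x1 = F(2 * 80 - 60, 3)   # = 100/3
x2 = F(2 * 60 - 80, 3)   # = 40/3

# Axis corners: with X2 = 0 feasibility needs X1 >= 60;
# with X1 = 0 it needs X2 >= 80.
corners = [(x1, x2), (F(60), F(0)), (F(0), F(80))]
best = min(corners, key=lambda p: cost(*p))
print(best, cost(*best))  # minimum at X1 = 100/3, X2 = 40/3 with Z = 80000/3
```

Exact rational arithmetic (`fractions.Fraction`) reproduces Z = 80,000/3 with no rounding, matching the final Big M tableau.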
DUAL, DUALITY & PRIMAL
Duality
Primal                           Dual
Decision variables: x            Decision variables: y
min Z = cx                       max W = yb
s.t. Ax ≥ b                      s.t. yA ≤ c
     x ≥ 0                            y ≥ 0
Primal Problem

a11 x1 + a12 x2 + · · · + a1n xn ≥ b1
a21 x1 + a22 x2 + · · · + a2n xn ≥ b2
        · · ·
am1 x1 + am2 x2 + · · · + amn xn ≥ bm

Minimize f = c1 x1 + c2 x2 + · · · + cn xn
xi ≥ 0, i = 1 to n
Dual Problem

a11 y1 + a21 y2 + · · · + am1 ym ≤ c1
a12 y1 + a22 y2 + · · · + am2 ym ≤ c2
        · · ·
a1n y1 + a2n y2 + · · · + amn ym ≤ cn

Maximize v = b1 y1 + b2 y2 + · · · + bm ym
yi ≥ 0, i = 1 to m
Primal                                        Dual
• Objective: Minimize cᵀX                     Maximize Yᵀb
• Variable xi ≥ 0                             i-th constraint YᵀAi ≤ ci (inequality)
• Variable xi unrestricted in sign            i-th constraint YᵀAi = ci (equality)
• j-th constraint AjX = bj (equality)         j-th variable yj unrestricted in sign
• j-th constraint AjX ≥ bj (inequality)       j-th variable yj ≥ 0
• Coefficient matrix A ≡ [A1 ... Am]          Coefficient matrix Aᵀ ≡ [A1, ..., Am]ᵀ
Primal                               Dual
Min Z = 3x1 + 5x2                    Max W = 4y1 + 12y2 + 18y3
s.t.  x1         ≥ 4                 s.t.  y1 + 3y3 ≤ 3
           2x2   ≥ 12                      2y2 + 2y3 = 5
      3x1 + 2x2  = 18
x1 ≥ 0, x2 unrestricted in sign      y1, y2 ≥ 0, y3 unrestricted in sign
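For the symmetric-form pair shown earlier (min c·x, Ax ≥ b, x ≥ 0), forming the dual is purely mechanical: transpose A and swap the roles of b and c. A minimal sketch (my own illustration, applied here to the fertilizer-mix LP from the earlier slides):

```python
def dual_of(c, A, b):
    """Dual of the symmetric-form primal  min c.x s.t. A x >= b, x >= 0:
       max b.y s.t. A^T y <= c, y >= 0.  Returns (dual obj, A^T, dual rhs)."""
    At = [list(col) for col in zip(*A)]   # transpose the coefficient matrix
    return b, At, c

# Fertilizer-mix primal: min 6x1 + 3x2
# s.t. 2x1 + 4x2 >= 16, 4x1 + 3x2 >= 24, x >= 0.
obj, At, rhs = dual_of([6, 3], [[2, 4], [4, 3]], [16, 24])
print(obj, At, rhs)  # dual: max 16y1 + 24y2 s.t. 2y1+4y2 <= 6, 4y1+3y2 <= 3
```

For this pair, y = (0, 1) is dual feasible with objective 24, which equals the primal optimum (x1 = 0, x2 = 8, Z = 24) — an instance of strong duality.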
Dual Simplex Method
Steps:
1. Select row r as the pivot row such that br = min bi < 0.
2. Select column s as the pivot column such that
   cs / |ars| = min { ci / |ari| : ari < 0 }.
   If all arj ≥ 0, the primal will not have any feasible (optimal) solution.
3. Carry out a pivot operation on ars. If any bi < 0 remains, go to step 1.
Example
Min f = x1 + 2x2
s.t.  2x1 + x2 ≥ 4
      x1 + 2x2 ≥ 7
      x1 ≥ 0, x2 ≥ 0

Solution
Introducing surplus variables x3 and x4 (and multiplying the constraints by −1):
−2x1 − x2 + x3 = −4
−x1 − 2x2 + x4 = −7
x1, x2, x3, x4 ≥ 0

No. of equations = 2
Hence, 2 non-basic variables, i.e. x1 and x2.
Basic
variables    x1    x2    x3    x4   −f     b
x3           −2    −1     1     0    0    −4
x4           −1    −2     0     1    0    −7   ← min bi (pivot row)
−f            1     2     0     0    1     0
|ci/ari|:     1     1
              ↑ pivot column (x1 selected)
Basic
variables    x1    x2    x3    x4   −f     b
x3            0     3     1    −2    0    10
x1            1     2     0    −1    0     7
−f            0     0     0     1    1    −7

All values of bi ≥ 0; therefore the optimal solution is obtained:
x1 = 7, x2 = 0, with fmin = 7.
Mathematical Programming

Dr. Parul Mathuria
MNIT, Jaipur
Introduction
• The General Algebraic Modeling System (GAMS) is a high-level modeling system for mathematical optimization.
• Originally developed by a group of economists from the World Bank in order to facilitate the resolution of large and complex non-linear models on personal computers.
• Used for modeling problems of the following types:
  – Linear optimization
  – Non-linear optimization
  – Mixed-integer optimization
• Advantages of GAMS:
  (i) Simplicity of implementation
  (ii) Portability and transferability between users and systems
  (iii) Ease of technical update because of the constant inclusion of new algorithms
• The separation of logic and data allows a problem to be stated independently of the method used to solve it.
• The general structure of the GAMS program
• Detailed illustration
• Description of the output file

Architecture of GAMS programming:
Input file (MODEL) → GAMS (compilation of the model; optimization SOLVER) → Output file (RESULTS)
• The structure of the input GAMS code is decomposed into three modules corresponding respectively to:
  – data entry
  – model specification
  – solve procedure
• General rules:
  – It is necessary to declare any element before using it. In particular, sets must be declared at the very beginning of the program.
  – End every statement with a semicolon in order to avoid unexpected compilation errors.
  – Several statements can be placed on the same line. This property can help to reduce the length of the code or facilitate printing.
Stage 1: Data / Sets / Variables / Equations Entry
• Specification of indices and data (parameters, sets).
• Listing of names and types of variables and equations (constraints and objective function).
• Entering Data
  1. Sets
  2. Scalars
  3. Parameters
• Entering Variables
  1. Variable types
  2. Variable attributes
GAMS Model Structure
• Entering Equations
  1. Types of equations
  2. Equation values

Stage 2: Defining Model
• VARIABLES declaration
• Definition of the equations (constraints and objective function)
  1. Types of problems
  2. Model definition
GAMS Model Structure
Stage 3: Resolution
• SOLVE statement
• Call to the optimization solver
• Results display

Code must be saved with the extension .gms
Conventions for naming the extensions of the files are as follows:
Example 1

minimize Z = X1² + X2² + X3²

s.t.  1 − X3/X2 ≥ 0
      X1 − X3 ≥ 0
      X1 − X2² + X1·X2 − 4 = 0
      0 ≤ X1 ≤ 5
      0 ≤ X2 ≤ 3
      0 ≤ X3 ≤ 3
Example 1
Initial starting point:
X1 = 4, X2 = 2, X3 = 2
Example 1
First we should note that the inequality
1 − X3/X2 ≥ 0        (2)
can actually be rearranged in linear form as:
X2 − X3 ≥ 0          (3)
[Input file screenshot: comments section, then main code]
Example 1 - Dollar Control Options
$title: This option sets the title in the page header of the listing file.

$onsymxref [On/Off Symbol Reference]: This option controls the following:
• Collection of cross references for identifiers like sets, parameters, and variables.
• Cross-reference report of all collected symbols in the listing file.
• Listing of all referenced symbols and their explanatory text by symbol type in the listing file.

$onsymlist: Controls the complete listing of all symbols that have been defined and their text.
• The keyword VARIABLES is used to list our variables X1, X2 and X3. Note that Z, the objective function value, must also be included.
• The keyword POSITIVE VARIABLES is used to specify the non-negativity of X1, X2, X3.
• Finally, note that the semicolon ; must be included to specify the end of the lists.
Example 1 - Input file
The keyword EQUATIONS is for listing the names of the constraints and the objective function.
• Constraints: CON1, CON2, CON3
Note the semi-colon ; is needed at the end.

Syntax   Meaning
=E=      =
=G=      ≥
=L=      ≤
-        Subtraction
*        Multiplication
/        Division
**       Exponent
Example 1 - Input file
Next in the input file we specify the upper bounds and initial values.
The keyword MODEL is used to name our model and to specify which equations should be used. In this case we name our model as TEST and specify that all equations be used.

The OPTION statements are used to suppress output for debugging (here LIMROW and LIMCOL are set to 0).
Finally, we invoke the optimization algorithm with the SOLVE statement. Here the format is as follows:
SOLVE <model name> USING <problem type> MINIMIZING <objective variable> ;
or
SOLVE <model name> USING <problem type> MAXIMIZING <objective variable> ;
GAMS 2.25 listing (condensed; the same fragments appeared several times in the original slides):

Echo print of the input file:

  5  * Example from Problem 8.26 in "Engineering Optimization"
  6  * by Reklaitis, Ravindran and Ragsdell (1983)
  9  VARIABLES X1, X2, X3, Z ;
 10  POSITIVE VARIABLES X1, X2, X3 ;
 12  EQUATIONS CON1, CON2, CON3, OBJ ;
 14  CON1.. X2 - X3 =G= 0 ;
 15  CON2.. X1 - X3 =G= 0 ;
 16  CON3.. X1 - X2**2 + X1*X2 - 4 =E= 0 ;
 17  OBJ..  Z =E= SQR(X1) + SQR(X2) + SQR(X3) ;
 19  * Upper bounds
 20  X1.UP = 5 ;
 21  X2.UP = 3 ;
 22  X3.UP = 3 ;
 24  * Initial point
 25  X1.L = 4 ;
 26  X2.L = 2 ;
 27  X3.L = 2 ;
 29  MODEL TEST / ALL / ;
 31  OPTION LIMROW = 0;
 32  OPTION LIMCOL = 0;
 34  SOLVE TEST USING NLP MINIMIZING Z;

Model statistics (SOLVE TEST USING NLP FROM LINE 34):

MODEL STATISTICS
BLOCKS OF EQUATIONS  4    SINGLE EQUATIONS  4
BLOCKS OF VARIABLES  4    SINGLE VARIABLES  4
NON ZERO ELEMENTS   10    NON LINEAR N-Z    5
DERIVATIVE POOL      6    CONSTANT POOL     2
CODE LENGTH         57

The derivative pool refers to the fact that analytical gradients for the nonlinear model have been generated by GAMS.

Solve summary:

SOLVE SUMMARY
MODEL   TEST      OBJECTIVE  Z
TYPE    NLP       DIRECTION  MINIMIZE
SOLVER  MINOS5    FROM LINE  34
**** SOLVER STATUS    1 NORMAL COMPLETION
**** MODEL STATUS     2 LOCALLY OPTIMAL
**** OBJECTIVE VALUE  7.2177
RESOURCE USAGE, LIMIT    0.336   1000.000
ITERATION COUNT, LIMIT  15       1000
EVALUATION ERRORS        0       0

M I N O S 5.3 (Nov 1990) — B. A. Murtagh, University of New South Wales, and P. E. Gill, W. Murray, M. A. Saunders and M. H. Wright, Systems Optimization Laboratory, Stanford University.

**** FILE SUMMARY
INPUT  C:\GAMS\TEST.GMS
OUTPUT C:\GAMS\TEST.LST

Solution report (SOLVE TEST USING NLP FROM LINE 34):

                LOWER   LEVEL   UPPER   MARGINAL
---- EQU CON1     .     0.916   +INF      .
---- EQU CON2     .     2.526   +INF      .
---- EQU CON3   4.000   4.000   4.000   2.637
---- EQU OBJ      .       .       .     1.000
---- VAR X1       .     2.526   5.000     .
---- VAR X2       .     0.916   3.000    EPS
---- VAR X3       .       .     3.000    EPS
---- VAR Z      -INF    7.218   +INF      .

**** REPORT SUMMARY : 0 NONOPT, 0 INFEASIBLE, 0 UNBOUNDED, 0 ERRORS

The column MARGINAL gives the dual values (Lagrange multipliers) of the equations and variables at the solution.
Working with GAMS - Basics
• Running the model
• Finding compilation errors
• Finding feasibility of the model
• Result interpretation
Working with GAMS – Flow control
• Loop statement
• If-else statement
• While statement
• For statement
Working with GAMS - Interfacing
• Excel interfacing
  1. Syntax
  2. Reading Excel files
  3. Unloading results into Excel files
• MATLAB interfacing
  1. Syntax
• Transportation Problem
  1. Problem definition
     To allocate ships to transport personnel from different ports to a training center.
  2. Objective
Sets
• Fundamental building block.
• A set contains elements which may be character or numeric.
1. Sets set_name "Label" /a, b, c/ ;                ("declaration")
   set_name is alphanumeric, starting with a letter. All identifiers in the model follow the same rule.
2. Set t "time" /1991 * 2000/ ;                     ("sequence")
3. Sets                                             ("multiple sets")
   s "Sector"  / sector1
                 sector2 /
   r "regions" / northern
                 eastern / ;
4. Set sector_region(s,r)                           ("one-to-one mapping")
Sets
• In the given example:
  Sets i ports / Chennai, Surat, Kandla, Mumbai /
       j voyages / v-01 * v-15 /
       k ship classes / small, medium, large /
       sc(i,k) ship capability / Chennai.(small, medium, large)
                                 Mumbai.large / ;
• Scalar is the data type for a single data entry.
  1. Scalar scalar_name "label" /20/ ;
  2. Scalar rate "rate of interest" ;
     rate = 0.05 ;
  In the given example:
  Scalar w1 ship assignment weight / 1.00 /
         w2 ship distance traveled weight / .01 /
         w3 personnel distance travel weight / .0001 / ;
• Parameter is the data type for list-oriented data.
  1. Parameter
     /Rajasthan 32,
      M.P. 51/ ;
  The default value of a parameter is zero.
Parameters
2. Parameter salaries(employee, manager, department)
   /Kamal .Sundar .toy = 6000
    Kunal .Smita .Furniture = 9000
    Komal .Gaurav .cosmetics = 8000 / ;

In the given example:
Parameter d(i) number of men at port i needing transport
          / Chennai = 475, Surat = 659,
            Kandla = 672, Mumbai = 1123 /
          shipcap(k) ship capacity in men
          / small 100
            medium 200
            large 600 / ;
Table
• Tabular data can be declared and initialized using the "Table" statement.
• The table statement is the only statement in the GAMS language that is not free format:
  – The character positions of the numeric table entries must overlap the character positions of the column headings.
  – The sequence of signed numbers forming a row must be on the same line.
1. Sets m subjects /Maths, Physics, Chemistry, Biology/
        i students / Karan, Radha, Milan / ;

   Table marks(m,i) marks obtained out of 100
                Karan   Radha   Milan
   Maths          75      89      91
   Physics        67      72      82
   Chemistry      77      89      90
   Biology        91      90      90 ;
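For readers prototyping outside GAMS: a two-dimensional table like marks(m,i) maps naturally onto a dict of dicts in Python. This analogy (including the hypothetical student "Mohan") is my own, not from the slides:

```python
# Python stand-in for the GAMS table marks(m,i): a dict of dicts,
# keyed first by subject (set m), then by student (set i).
marks = {
    "Maths":     {"Karan": 75, "Radha": 89, "Milan": 91},
    "Physics":   {"Karan": 67, "Radha": 72, "Milan": 82},
    "Chemistry": {"Karan": 77, "Radha": 89, "Milan": 90},
    "Biology":   {"Karan": 91, "Radha": 90, "Milan": 90},
}
# Lookup mirrors marks("Physics","Milan") in GAMS:
print(marks["Physics"]["Milan"])        # 82
# In GAMS a missing entry defaults to 0; emulate with dict.get()
# ("Mohan" is a hypothetical student not in the table):
print(marks["Maths"].get("Mohan", 0))   # 0
```

Unlike GAMS tables, a Python dict is free-format: entry positions carry no meaning, only the keys do.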
Table
• In the given example:
  Table a(j,*) assignment of ports to voyages
        dist  Chennai  Surat  Kandla  Mumbai
  v-01  370   1
  v-02  460   1
  v-03  600   1
  v-04  750   1
  v-05  515   1 1
  v-06  640   1 1
  v-07  810   1 1
  v-08  665   1 1
  v-09  665   1 1
  v-10  800   1 1
  v-11  720   1 1 1
  v-12  860   1 1 1
  v-13  840   1 1 1
  v-14  865   1 1 1
  v-15  920   1 1 1 1
Variables
• Entities whose values become known after the model is solved.
• There are five types of variables:
  1. Free (default) – no bounds on the variable.
  2. Positive variable – only positive values are allowed.
  3. Negative variable – only negative values are allowed.
  4. Binary variable – can take the values 0 or 1 only.
  5. Integer variable – a discrete variable that can take integer values.
Variables
• Variable a ;                  (a is unknown)
• Positive variable u(i,j) ;    (i, j are elements of sets)
• All default bounds set at declaration time can be changed using assignment statements:
  u.up(i,j) = 1000 ;
  a.lo = -inf ;
• The .fx attribute fixes a variable: its lower and upper bounds are equal.
Variable Attributes
• .lo – the lower bound for the variable.
• .up – the upper bound for the variable.
• .fx – the fixed value for the variable.
• .l – the activity level for the variable; this is also equivalent to the current value of the variable.
• .m – the marginal value (also called dual value) for the variable.
• In the given problem:
Variables z(j,k) number of times voyage jk is used
          y(j,k,i) number of men transported from port i via voyage jk
          obj ;
Integer variables z ;
Positive variables y ;
Equations
• Equations are the GAMS names for the symbolic algebraic relationships that are used to generate the constraints in the model.
• Declaration of equations:
  equations
     cost 'total cost definition'
     invb(q) 'inventory balance' ;
• Equation definition:
  cost .. a =e= b + c ;
Equations
• =e= Equality – RHS must equal LHS.
• =g= Greater than – LHS must be greater than or equal to RHS.
• =l= Less than – LHS must be less than or equal to RHS.
• =n= No relationship enforced between LHS and RHS.
• =x= External equation – only supported by selected solvers.
• =c= Conic constraint – only supported by selected solvers.
Equations
• Scalar equations:
  total .. c =e= sum(i, y(i)) ;
• Indexed equations:
  The index sets to the left of the '..' are called the domain of definition of the equation.
  Domain checking ensures that the domain over which an equation is defined is the set, or a subset of the set, over which it was declared.
• The dollar operator operates with a logical condition. The term $(condition) can be read as 'such that condition is valid', where condition is a logical condition.
• The dollar logical conditions cannot contain variables. Variable attributes (like .l and .m) are permitted, however.
• if (b > 1.5), then a = 2 can be written as
  a$(b > 1.5) = 2 ;
• d(i)$( w(i)$q(i) ) = v(i) ; is a nested dollar condition.
• Dollar on the right:
  1. For an assignment statement with a dollar condition on the RHS, an assignment is always made.
  2. For example, x = 2$(y > 1.5) ; in words means
     if (y > 1.5) then (x = 2), else (x = 0)
• Dollar on the left:
  1. For an assignment statement with a dollar condition on the LHS, no assignment is made unless the logical condition is satisfied.
  2. For example, x$(y > 1.5) = 2 ; in words means
     if (y > 1.5) then (x = 2), else x keeps its previous value
• A dollar operator within an equation is analogous to the dollar control on the right of assignments:
  mb(i) .. x(i) =g= y(i) + (e(i) - m(i))$t(i) ;
  means that the term is added to the RHS of the equation only for those elements of i that belong to t(i).
• Dollar control over the domain of definition is analogous to the dollar control on the left of assignments: unless the logical condition is satisfied, no assignment is made (the equation is not generated).
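The dollar-on-the-right versus dollar-on-the-left distinction can be mirrored in ordinary Python conditionals. The analogy below is my own (the values of y and b are arbitrary, chosen only to make the two behaviors visible):

```python
# GAMS dollar conditions vs. plain Python conditionals (my analogy, not GAMS code).
y, b = 2.0, 1.0

# Dollar on the right:  x = 2$(y > 1.5) ;
# an assignment is ALWAYS made (0 when the condition fails):
x = 2 if y > 1.5 else 0
print(x)  # 2, because y = 2.0 satisfies the condition

# Dollar on the left:  a$(b > 1.5) = 2 ;
# NO assignment is made unless the condition holds (a keeps its old value):
a = 7
if b > 1.5:
    a = 2
print(a)  # 7, because b = 1.0 fails the condition
```

The key difference: the right-hand form always overwrites the target (possibly with 0), while the left-hand form leaves the target untouched when the condition is false.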
Conditional Equation
• In the given example, equations are declared and defined as
Equations
   objdef
   demand(i) pick up all the men at port i
   voycap(j,k) observe variable capacity of voyage jk
   shiplim(k) observe limit of class k ;

demand(i)..  sum((j,k)$(a(j,i)$vc(j,k)), y(j,k,i)) =g= d(i) ;
voycap(j,k)$vc(j,k)..  sum(i$a(j,i), y(j,k,i)) =l= shipcap(k)*z(j,k) ;
shiplim(k)..  sum(j$vc(j,k), z(j,k)) =l= n(k) ;
objdef (in part)..
          w2*sum((j,k)$vc(j,k), a(j,"dist")*z(j,k)) +
          w3*sum((j,k,i)$(a(j,i)$vc(j,k)), a(j,"dist")*y(j,k,i)) ;
Model Statement
• The model statement is used to collect equations into groups and to label them so that they can be solved.
• Model transport "a transportation model" / all / ;
• The model is called transport, and the keyword all is shorthand for all known (declared) equations.
• In the given example, if the model name is navy then
  Model navy /all/ ;
Solve Statement
• There are different types of problems, each with its own identifier:
• LP – Linear programming.
• QCP – Quadratic constraint programming.
• NLP – Nonlinear programming.
• DNLP – Nonlinear programming with discontinuous derivatives.
Solve Statement
• MIP – Mixed integer programming.
• RMIQCP – Relaxed mixed integer quadratic constraint programming.
• RMINLP – Relaxed mixed integer nonlinear programming.
• MIQCP – Mixed integer quadratic constraint programming.
• MINLP – Mixed integer nonlinear programming.
• RMPEC – Relaxed mathematical programs with equilibrium constraints.
• MPEC – Mathematical programs with equilibrium constraints.
• GAMS itself does not solve the problem, but passes the problem
20
definition to one of a number of separate solver programs.
ay
• The objective variable must be scalar and type
M
2
free, and must appear in the least one of the
-2
18
equations in the model.
• For example
r,
pu
Solve transport using lp minimizing cost ;
ai
• In the given example ,J
IT
variables.
Example in GAMS
GAMS Outputs

• Echo print of input file

It is always the first part of the output file. It is simply a listing of the input with line numbers added.
GAMS Outputs

• Equation listing

The equation listing is an extremely useful debugging aid. It shows the variables that appear in each constraint, and what the individual coefficients and RHS values evaluate to after the data manipulations have been done.
GAMS Outputs

• Column listing

This is a list of the individual coefficients sorted by column rather than by row.
GAMS Outputs

• Model statistics

Provides details on the size and nonlinearity of the model. The BLOCK counts refer to GAMS equations and variables; the SINGLE counts refer to individual rows and columns in the problem generated.
GAMS Outputs

• Solve summary

It summarizes the model solution: the type of the model being solved, the name of the solver used, the name of the objective variable, the direction of optimization being performed, etc.
GAMS Outputs

• Solution listing

A row-by-row, then column-by-column, listing of the solution returned to GAMS by the solver program.
GAMS Outputs

• Display
Dollar Control Options

• The dollar control options are used to indicate compiler directives and options.

• Dollar control options are not part of the GAMS language, so they do not appear on the compiler listing unless an error has been detected.

• For example, $call 'dir' makes a directory listing on a PC.
Installation of GAMS

• Installation notes available for Windows, Mac, and UNIX
• Manuals are also available online
• Without a valid GAMS license the system will operate as a free demo system:
– Model limits:
  • Number of constraints and variables: 300
  • Number of nonzero elements: 2000 (of which 1000 nonlinear)
  • Number of discrete variables: 50 (including semicontinuous, semi-integer and members of SOS sets)
– Additional global solver limits: number of constraints and variables: 10
• The GAMS log will indicate that your system runs in demo mode:
GAMS 24.3.3 Copyright (C) 1987-2014 GAMS Development. All rights reserved Licensee:
• GAMS will terminate with a licensing error if you hit one of the limits above:
*** Status: Terminated due to a licensing error *** Inspect listing file for more information
• http://www.gams.com/download/
References

1. A GAMS Tutorial. www.gams.com/dd/docs/gams/Tutorial.pdf
2. GAMS - A User's Guide. pages.cs.wisc.edu/~swright/635/docs/GAMSUsersGuide.pdf
• OPTIMIZATION
Types of optimization problems:

• LINEAR PROBLEMS vs NON-LINEAR PROBLEMS
• PROBLEMS WITHOUT CONSTRAINTS vs PROBLEMS WITH CONSTRAINTS

Can be solved by:
• Graphical method
• Simplex
• Steepest descent
• Newton's method
• Lagrangian method
S_i = -∇f_i + (‖∇f_i‖² / ‖∇f_{i-1}‖²) S_{i-1}

X_{i+1} = X_i + λ*_i S_i
S_i = -∇f_i + (‖∇f_i‖² / ‖∇f_{i-1}‖²) S_{i-1}

For i = 2:

S_2 = -∇f_2 + (‖∇f_2‖² / ‖∇f_1‖²) S_1
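This is the Fletcher-Reeves conjugate-gradient update. As an illustration (a sketch, not course code), here it is in Python on an assumed 2-D quadratic, f(x1, x2) = x1 - x2 + 2x1^2 + 2x1x2 + x2^2, with the optimal step λ* computed by an exact line search:

```python
# Fletcher-Reeves conjugate gradient:
#   S_i = -grad_i + (|grad_i|^2 / |grad_{i-1}|^2) * S_{i-1},  X_{i+1} = X_i + lam* S_i

def grad(x):
    # gradient of f(x1, x2) = x1 - x2 + 2*x1**2 + 2*x1*x2 + x2**2
    return [1 + 4 * x[0] + 2 * x[1], -1 + 2 * x[0] + 2 * x[1]]

A = [[4, 2], [2, 2]]  # constant Hessian of the quadratic above

def step_len(x, s):
    # exact line search for a quadratic: lam* = -(g . s) / (s . A . s)
    g = grad(x)
    gs = g[0] * s[0] + g[1] * s[1]
    As = [A[0][0] * s[0] + A[0][1] * s[1], A[1][0] * s[0] + A[1][1] * s[1]]
    return -gs / (s[0] * As[0] + s[1] * As[1])

x = [0.0, 0.0]
g = grad(x)
s = [-g[0], -g[1]]                 # S_1 = -grad f_1
for _ in range(2):                 # two steps solve a 2-D quadratic exactly
    lam = step_len(x, s)
    x = [x[0] + lam * s[0], x[1] + lam * s[1]]
    g_new = grad(x)
    beta = (g_new[0] ** 2 + g_new[1] ** 2) / (g[0] ** 2 + g[1] ** 2)
    s = [-g_new[0] + beta * s[0], -g_new[1] + beta * s[1]]
    g = g_new
```

After two iterations x reaches the minimizer (-1.0, 1.5), illustrating the finite termination of conjugate gradients on quadratics.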
Hooke and Jeeves’Method
20
1 5
Hooke and Jeeves’ Method
15
20
ay
M
2
-2
18
r,
pu
ai
,J
x1
IT
x
N
X1 2
,M
x n
15
T'
AO
Hooke and Jeeve’s method
1 5
20
ay
M
2
-2
18
r,
pu
ai
Yk ,i 1 xi u i if f
f (Yk ,i 1 xi u i ) f f (Yk ,i 1 )
,J
Yk ,i Yk ,i 1 xi u i if f f (Yk ,i 1 xi u i ) f f (Yk ,i 1 ) f f (Yk ,i 1 xi u i )
IT
N
Yk ,i 1 if f f (Yk ,i 1 ) min( f , f )
,M
15
T'
AO
Hooke and Jeeves’ Method
15
20
ay
M
X k 1 Yk ,n
2
-2
18
r,
pu
S X k 1 - X k
ai
,J
and find a point Yk 1, 0 as
IT
Yk 1, 0 X k 1 S
N
,M
15
T'
AO
Yk 1, 0 X k 1 S
Hooke and Jeeves’ Method
51
20
ay
M
2
-2
18
r,
pu
ai
max (x )
,J
i i
IT
N
,M
15
T'
AO
Example

Minimize f(x1, x2) = x1 - x2 + 2x1² + 2x1x2 + x2²

starting from the base point X1 = (0, 0)ᵀ.
Example

Y10 = X1 = (0, 0)ᵀ, with step lengths Δx1 = Δx2 = 0.8:

f+ = f(Y10 + Δx1·u1) = f(0.8, 0.0) = 2.08
f- = f(Y10 - Δx1·u1) = f(-0.8, 0.0) = 0.48

Both exceed f(Y10) = 0, so Y11 = Y10. Then, for i = 2:

f+ = f(Y11 + Δx2·u2) = f(0.0, 0.8) = -0.16

which improves on f(Y11) = 0, so Y12 = (0.0, 0.8)ᵀ.
Example

Step 4: As Y12 is different from X1, the new base point is taken as X2 = (0.0, 0.8)ᵀ.

Step 5: A pattern direction is established as:

S = X2 - X1 = (0.0, 0.8)ᵀ - (0.0, 0.0)ᵀ = (0.0, 0.8)ᵀ

The optimal step length λ* is found by minimizing

f(X2 + λS) = f(0.0, 0.8 + 0.8λ) = 0.64λ² + 0.48λ - 0.16

which gives λ* = -0.375, so Y20 = X2 + λ*S = (0.0, 0.5)ᵀ.
Step 6: Set k = 2, f = f2 = f(Y20) = -0.25, and repeat step 3. Thus, with i = 1, we evaluate

f+ = f(Y20 + Δx1·u1) = f(0.8, 0.5) = 2.63
f- = f(Y20 - Δx1·u1) = f(-0.8, 0.5) = -0.57

Since f- < f < f+, we take Y21 = (-0.8, 0.5)ᵀ.

Next, we set i = 2 and evaluate f = f(Y21) = -0.57 and

f+ = f(Y21 + Δx2·u2) = f(-0.8, 1.3) = -1.21

As f+ < f, we take Y22 = (-0.8, 1.3)ᵀ.

Since f(Y22) = -1.21 < f(X2) = -0.25, we take the new base point as X3 = Y22 = (-0.8, 1.3)ᵀ.
Example

Step 6 continued: After selection of the new base point, we go to step 5. This procedure has to be continued until the optimum point

X_opt = (-1.0, 1.5)ᵀ

is found.
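The whole procedure can be sketched in Python (an illustration, not course code; the exploratory and pattern moves follow the scheme above, and the step length is halved whenever no move improves f, an assumed reduction factor):

```python
def f(x):
    # objective from the example: f(x1, x2) = x1 - x2 + 2*x1^2 + 2*x1*x2 + x2^2
    x1, x2 = x
    return x1 - x2 + 2 * x1 ** 2 + 2 * x1 * x2 + x2 ** 2

def explore(f, y, step):
    # exploratory move: perturb each coordinate by +/- step, keeping improvements
    y = list(y)
    for i in range(len(y)):
        for delta in (step, -step):
            trial = list(y)
            trial[i] += delta
            if f(trial) < f(y):
                y = trial
                break
    return y

def hooke_jeeves(f, x0, step=0.8, shrink=0.5, tol=1e-6):
    base = list(x0)
    while step > tol:
        y = explore(f, base, step)
        if f(y) < f(base):
            # pattern move: jump through the improved point, then explore there
            jump = [2 * yi - bi for yi, bi in zip(y, base)]
            cand = explore(f, jump, step)
            base = cand if f(cand) < f(y) else y
        else:
            step *= shrink   # no improvement in any direction: shrink the step
    return base

x_opt = hooke_jeeves(f, [0.0, 0.0])
```

Starting from (0, 0) with step 0.8, the iterates approach the optimum (-1.0, 1.5) found in the example.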
Let the function be f(x) = f(x1, x2, ..., xn).

Local approximation: 2nd-order Taylor series:

f(x + Δx) ≈ f(x) + ∇f(x)ᵀΔx + ½ΔxᵀHΔx

The first-order necessary condition applied to the approximation (the derivative with respect to Δx set to zero) gives:

∇f(x) + HΔx = 0  ⇒  Δx = -H⁻¹∇f

so the search direction is s = -H⁻¹∇f, where ∇f = (∂f/∂x, ∂f/∂y, ∂f/∂z)ᵀ for f(x, y, z) and H is the Hessian matrix of second partial derivatives:

H = [ ∂²f/∂x²   ∂²f/∂x∂y  ∂²f/∂x∂z
      ∂²f/∂y∂x  ∂²f/∂y²   ∂²f/∂y∂z
      ∂²f/∂z∂x  ∂²f/∂z∂y  ∂²f/∂z² ]
● Update: x_{k+1} = x_k + s_k, with s_k = -H⁻¹∇f evaluated at x = x_k (x_k is the kth iterate; initially k = 0).

● Newton's method has quadratic convergence near the optimum x*:

‖x_{k+1} - x*‖ ≤ c·‖x_k - x*‖²
For the example, the gradient is

∇f(x) = (2x1 - x2 - 1, -x1 + 2x2 - x3, -x2 + 2x3 - 1)ᵀ
The Hessian is

H(x) = [  2 -1  0
         -1  2 -1
          0 -1  2 ]

And we will need the inverse of the Hessian:

H(x)⁻¹ = (1/4)·[ 3 2 1
                 2 4 2
                 1 2 3 ]
• Using the formula H⁻¹ = (1/|H|)·(matrix of cofactors)ᵀ, with |H| = 4:

H⁻¹ = (1/4)·[ 3 2 1
              2 4 2
              1 2 3 ]

(the cofactor matrix is symmetric here, so the transpose changes nothing).
Start from x⁰ = (0, 0, 0)ᵀ.

Calculate the gradient for the 1st iteration:

∇f(x⁰) = (-1, 0, -1)ᵀ
x¹ = x⁰ - H⁻¹∇f(x⁰)

   = (0, 0, 0)ᵀ - (1/4)·[ 3 2 1 ; 2 4 2 ; 1 2 3 ]·(-1, 0, -1)ᵀ

   = (1, 1, 1)ᵀ
Checking the gradient at x¹ = (1, 1, 1)ᵀ:

∇f(x¹) = (2 - 1 - 1, -1 + 2 - 1, -1 + 2 - 1)ᵀ = (0, 0, 0)ᵀ

Since the gradient is zero, the method has converged (in a single step, as expected for a quadratic function).
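The computation above can be reproduced in a few lines of Python (an illustrative sketch; grad and the inverse Hessian are exactly those of the example):

```python
def grad(x):
    # gradient of the quadratic example
    x1, x2, x3 = x
    return [2 * x1 - x2 - 1, -x1 + 2 * x2 - x3, -x2 + 2 * x3 - 1]

# inverse Hessian from the example: H^-1 = (1/4) * [[3,2,1],[2,4,2],[1,2,3]]
H_INV = [[0.75, 0.5, 0.25], [0.5, 1.0, 0.5], [0.25, 0.5, 0.75]]

def newton_step(x):
    # x_{k+1} = x_k - H^{-1} grad f(x_k)
    g = grad(x)
    return [xi - sum(H_INV[i][j] * g[j] for j in range(3))
            for i, xi in enumerate(x)]

x1 = newton_step([0.0, 0.0, 0.0])   # one step from the origin
```

One step lands on (1, 1, 1), where the gradient vanishes.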
r        x1      x2      f(x)
0        3       2       0
10       -.415   -1.415  6.832
100      -.491   -1.491  9.241
1000     -.499   -1.499  12.243
10000    -.5     -1.5    12.250
100000   -.5     -1.5    12.250
r      x1   x2      f(x)
1000   .5   31.646  1001.719
100    .5   10.075  101.756
10     .5   3.391   11.749
1      .5   1.225   1.752
.001   .5   1.225   1.752
GRG METHOD
Generalized Reduced Gradient Method

One of the popular search methods for optimizing constrained functions is the Generalized Reduced Gradient (GRG) method.

Cost function: y = y(x1, x2, ..., xn)

Constraints:
φ1(x1, x2, ..., xn) = 0
φ2(x1, x2, ..., xn) = 0
...
φm(x1, x2, ..., xn) = 0

n : number of variables
n - m : number of decision variables
m : number of constraints
Assume n = 4, m = 2. We want to optimize the cost function

y = y(x1, x2, x3, x4)

subject to

φ1(x1, x2, x3, x4) = 0
φ2(x1, x2, x3, x4) = 0
Linearizing the constraints, and writing φij = ∂φi/∂xj:

[ φ11 φ12 ] [Δx1]     [ φ13 φ14 ] [Δx3]
[ φ21 φ22 ] [Δx2] = - [ φ23 φ24 ] [Δx4]

Thus

[Δx1; Δx2] = -J⁻¹(k3·Δx3 + k4·Δx4)

where J is the m×m matrix [φ11 φ12; φ21 φ22], k3 = [φ13; φ23] and k4 = [φ14; φ24].

A first-order change in the cost function is

y_new = y_old + Σ_{i=1}^{n} (∂y/∂xi)|_{x_old} · Δxi
Writing yi = ∂y/∂xi,

y_new = y_old + [y1 y2][Δx1; Δx2] + [y3 y4][Δx3; Δx4]

Substituting for Δx1 and Δx2 from the previous part, we have

y_new = y_old + (y3 - [y1 y2]·J⁻¹·k3)·Δx3 + (y4 - [y1 y2]·J⁻¹·k4)·Δx4
Therefore we determine the Generalized Reduced Gradient:

GRGi = yi - [y1 ... ym]·J⁻¹·ki,   for i = n-m, ..., n

and

y_new = y_old + Σ_{i=n-m}^{n} GRGi·Δxi
Iteration scheme:

1. Move towards the constraints.
2. Calculate GRGi.
3. Depending on the sign of GRGi, increase or decrease xi: xi+1 = xi ± Δxi.
4. Update yi+1 = yi + Σ GRGi·Δxi and move towards the constraints again.
5. Repeat until y stops improving; then xOptimum = xi+1.
Example

Initial step size: Δx1 = 0.1. Last step size: Δx55 = 9.5367 × 10⁻⁸.
Consider the following problem:

Minimize: 2x1² + 2x2² - 2x1x2 - 4x1 - 6x2

Subject to: x1 + x2 + x3 = 2
            x1 + 5x2 + x4 = 5
            x1, x2, x3, x4 ≥ 0

We set our starting point as:

x1 = (0,0,2,5)ᵀ

▽f(x) = (4x1 - 2x2 - 4, 4x2 - 2x1 - 6, 0, 0)ᵀ
Step 1:
At x1 = (0,0,2,5)ᵀ,
▽f(x1) = (-4, -6, 0, 0)ᵀ
I1 = {3,4}
so that B = [a3,a4] and N = [a1,a2]
Step 2:
x1 = (0,0,2,5)ᵀ and d1 = (4,6,-10,-34)ᵀ

λmax = min{-xjk/djk} if djk < 0
     = ∞ if dk ≥ 0 ……………..(6)

λmax = min{-x31/d31, -x41/d41} = min{2/10, 5/34} = 5/34

Then we solve the following problem:

Minimize: f(x1 + λ1d1) = 56λ1² - 52λ1,  0 ≤ λ1 ≤ 5/34

Clearly, λ1 = 5/34, and

x2 = x1 + λ1d1 = (10/17, 15/17, 9/17, 0)ᵀ
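The ratio test and capped line search of this step can be checked numerically; here is a Python sketch using exact rational arithmetic (an illustration of the computation, not course code):

```python
from fractions import Fraction as F

# Reduced-gradient line-search step from the example: x2 = x1 + lam * d1,
# where lam is capped by the ratio test lam_max = min(-x_j / d_j for d_j < 0).
x1 = [F(0), F(0), F(2), F(5)]
d1 = [F(4), F(6), F(-10), F(-34)]

lam_max = min(-xj / dj for xj, dj in zip(x1, d1) if dj < 0)

# f(x1 + lam*d1) = 56*lam^2 - 52*lam; its unconstrained minimizer 52/112
# lies beyond lam_max, so the step length is lam_max itself.
lam = min(lam_max, F(52, 112))
x2 = [xj + lam * dj for xj, dj in zip(x1, d1)]
```

This reproduces λ1 = 5/34 and x2 = (10/17, 15/17, 9/17, 0)ᵀ exactly.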
Step 2 (second iteration):
x2 = (10/17,15/17,9/17,0)ᵀ and
d2 = (2565/1156, -513/1156, -513/289, 0)ᵀ

λmax = min{-x22/d22, -x32/d32}
     = min{(15/17)/(513/1156), (9/17)/(513/289)}
     = 17/57

Then we solve the following problem:

Minimize: f(x2 + λ2d2) = 12.21λ2² - 5.95λ2 - 6.436
λ2 = 68/279 minimizes the function

f(x2 + λ2d2) = 12.21λ2² - 5.95λ2 - 6.436

Thus,

x3 = x2 + λ2d2 = (35/31, 24/31, 3/31, 0)ᵀ
QUADRATIC PROGRAMMING

A linearly constrained optimization problem with a quadratic objective function is called "quadratic programming".

It forms the basis for several general nonlinear programming algorithms.

It is a procedure that minimizes a quadratic function of n variables subject to m linear equality or inequality constraints (or both).
Applying the modified simplex technique to the example yields the sequence of iterations given in the table. The optimal solution to the original problem is (x1*, x2*) = (3, 2).
Short Term Course
Advanced Optimization Techniques

Day 2 : (19-May-2015)
Artificial Neural Networks

Assistant Professor,
Department of Electrical Engineering, MNIT
What are ANNs?

Models of the brain and nervous system
Highly parallel: process information like the brain
Learning
Very simple principles
Very complex behaviours
Properties of Artificial Neural Networks

Learning from experience: able to solve complex, difficult problems given plenty of data that describe the problem
Generalizing from examples: can interpolate from previous learning and give the correct response to unseen data
Rapid application development: ANNs are generic machines, quite independent of knowledge about the problem domain
Adaptability: adapts to a changing environment, if properly designed
Types of Problems

Mathematical Modeling (Function Approximation)
Classification
Clustering
Forecasting
Vector Quantization
Pattern Association
Control
Optimization
Mathematical Modeling (Function Approximation)

Modeling: the mathematical relationship between two sets of data is often unknown analytically
No closed-form expression available
Empirical data available defining output parameters for each input
Data is often noisy
Classification

Assignment of objects to a specific class, given a database of objects and the classes of those objects

Example: in marketing, consumer spending pattern classification
Clustering

Grouping together objects similar to one another
Usually based on some "distance" measurement in object parameter space
Objects and distance relationships available
No prior info on classes or groupings
Forecasting

Prediction of future events based on history
Laws underlying the behavior of a system are sometimes hidden; too many related variables to handle
Trends and regularities often masked by noise
Time series forecasting
Vector Quantization

Object space divided into several connected regions
Objects classified based on proximity to regions
Closest region or node is the "winner"
A form of compression of a high-dimensional input space
Successfully used in many geological and environmental applications
Pattern Association

An associative network is a neural network with essentially a single functional layer that associates one set of vectors with another set of vectors

Auto-associative systems are useful when incoming data is a corrupted version of an actual object, e.g. face detection, handwriting detection
Control

Manufacturing, robotic and industrial machines have complex relationships between input and output variables
Output variables define the state of the machine
Input variables define machine parameters determined by operating conditions, time and human input during operation
Optimization

Requirement to improve system performance or costs subject to constraints
Maximize or minimize
Large number of variables affecting the objective function (high dimensionality of the problem)
Design variables often subject to constraints
Lots of local minima
An ANN is composed of processing elements called neurons or perceptrons, organized in different ways to form the network's structure. Each of the perceptrons receives inputs, processes the inputs and delivers a single output.

Note the three layers: input, intermediate (called the hidden layer) and output. Several hidden layers can be placed between the input and output layers.
Each ANN is composed of a collection of perceptrons grouped in layers.

Inputs xi arrive through pre-synaptic connections
Synaptic efficacy is modeled using real weights wi
The response of the neuron is a nonlinear function f of its weighted inputs
The Neuron

The neuron is the basic information processing unit of a NN. It consists of:

1. A set of synapses or connecting links, each link characterized by a weight: W1, W2, ..., WN

2. An adder function (linear combiner) which computes the weighted sum of the inputs:

u = Σ_{k=1}^{N} wk·xk

3. An activation function that limits the amplitude of the neuron's output.
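As a small illustration (not from the slides), the adder and a sigmoid activation can be written as:

```python
import math

# Weighted sum u = sum_k w_k * x_k (the linear combiner), followed by a
# sigmoid activation that limits the output to (0, 1).
def neuron(x, w, b=0.0):
    u = sum(wk * xk for wk, xk in zip(w, x))
    return 1.0 / (1.0 + math.exp(-(u + b)))

out = neuron([1.0, 2.0], [0.5, -0.25])
```

With zero net input the sigmoid returns exactly 0.5, i.e. the neuron sits at the middle of its output range.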
Activation functions
Neural Network Types

Feed-forward
Input signals travel through the input layer, hidden layers, and output layer
Requires training

Feedback
The network pays attention to its own results and uses them in further processing
Single Layer Feed-forward NN
Input layer of source nodes; output layer of neurons

Multi Layer Feed-forward NN
Input layer, hidden layer, output layer (e.g. a 3-4-2 network)
Designing ANN models
Other architectures

Generalized Regression Networks

Self Organising Maps (Kohonen)
Basic Types of Learning

Neural Nets

• Supervised Learning
• Unsupervised Learning
Backpropagation Network

Initialize network with random weights
For all training cases (called examples):
  Present training inputs to network and calculate output
  For all layers (starting with output layer, back to input layer):
    Compare network output with correct output (error function)
    Adapt weights in current layer, propagating the error back from the outputs to the hidden layers and on to the inputs
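The loop above can be sketched for a single sigmoid neuron in Python (an illustration, not course material; the OR data set, learning rate and epoch count are assumed values, and a full multi-layer network would also propagate deltas through hidden layers):

```python
import math, random

# Train one sigmoid neuron by gradient descent on squared error:
# forward pass, error signal at the output, then weight adaptation.
random.seed(0)
w = [random.uniform(-1, 1) for _ in range(2)]
b, lr = 0.0, 0.5

def forward(x):
    u = sum(wi * xi for wi, xi in zip(w, x)) + b
    return 1.0 / (1.0 + math.exp(-u))

data = [([0, 0], 0), ([0, 1], 1), ([1, 0], 1), ([1, 1], 1)]  # logical OR
for epoch in range(2000):
    for x, t in data:
        y = forward(x)
        delta = (y - t) * y * (1 - y)      # error signal at the output
        for i in range(len(w)):
            w[i] -= lr * delta * x[i]      # adapt weights in current layer
        b -= lr * delta
```

After training, the neuron's outputs round to the OR targets; the same delta rule, chained layer by layer, is what backpropagation does in a multi-layer network.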
Backpropagation learning
Classify Patterns with a Neural Network

• Use the nprtool GUI, as described in Using the Neural Network Pattern Recognition Tool.
• Use a command-line solution, as described in Using Command-Line Functions.
Pattern Classification Process

Training Set → Learning Algorithm → Building the Classification Model → Model Selection
Criteria that can be used to stop network training

Parameter   Stopping Criterion
min_grad    Minimum gradient magnitude
max_fail    Maximum number of validation increases
time        Maximum training time
goal        Minimum performance value
epochs      Maximum number of training epochs (iterations)
54
AO
T'
15
,M
N
IT
,J
ai
pu
r,
18
-2
2
M
ay
20
15
55
AO
T'
15
,M
N
IT
,J
ai
pu
r,
18
-2
2
M
ay
20
15
56
AO
T'
15
,M
N
IT
,J
ai
pu
r,
18
-2
2
M
ay
20
15
Using Command-Line Functions

% cancerInputs - input data.
% cancerTargets - target data.
inputs = cancerInputs;
targets = cancerTargets;
% Create a Pattern Recognition Network
hiddenLayerSize = 10;
net = patternnet(hiddenLayerSize);
% Set up Division of Data for Training, Validation, Testing
net.divideParam.trainRatio = 70/100;
net.divideParam.valRatio = 15/100;
net.divideParam.testRatio = 15/100;
% Train the Network
[net,tr] = train(net,inputs,targets);
% Test the Network
outputs = net(inputs);
errors = gsubtract(targets,outputs);
performance = perform(net,targets,outputs)
% View the Network
view(net)
% Plots
% Uncomment these lines to enable various plots.
% figure, plotperform(tr)
% figure, plottrainstate(tr)
% figure, plotconfusion(targets,outputs)
% figure, ploterrhist(errors)
One can then evaluate the network performance only on the test set.
To get more accurate results:
• Reset the initial network weights and biases to new values with init and train again.
• Increase the number of hidden neurons.
• Increase the number of training vectors.
• Increase the number of input values, if more relevant information is available.
• Try a different training algorithm.

Each training run can give different results, due to different initial weight and bias values and different divisions of data into training, validation, and test sets.
The R value is an indication of the relationship between the outputs and targets. If R = 1, this indicates an exact linear relationship between outputs and targets.
What is a Support Vector Machine?

It uses concepts from computational learning theory.

It is an optimally defined surface, typically nonlinear in the input space, and linear in a higher-dimensional space.
What is SVM used for?

• Regression and data-fitting
• Supervised and unsupervised learning
• Image processing
• Speech recognition
• Pattern recognition and classification
• Time-series analysis
• Radar point source location
• Medical diagnosis
Binary Linear Classifier

If the training set is linearly separable, binary classification can be viewed as the task of separating classes in feature space.

The hypersurface equation: f(x, w, b) = sign(wᵀx + b)

wᵀx + b = 0 (the separating hyperplane)
wᵀx + b > 0 (one class)
wᵀx + b < 0 (the other class)
Classification Margin

Distance from an object to the hyperplane: r = (wᵀx + b)/‖w‖

Support vectors: the closest objects to the hyperplane.

Margin ρ: the width the boundary could be increased by before hitting a data point; ρ = 2/‖w‖.

The maximum margin linear classifier is the linear classifier with the maximum margin.
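These quantities are easy to compute directly; a Python sketch (with a hand-picked w and b, purely illustrative, not a trained classifier):

```python
import math

# Signed distance r = (w.x + b)/||w|| and margin rho = 2/||w||
# for an assumed separating hyperplane w.x + b = 0.
w, b = [1.0, 1.0], -3.0
norm_w = math.sqrt(sum(wi * wi for wi in w))

def distance(x):
    return (sum(wi * xi for wi, xi in zip(w, x)) + b) / norm_w

margin = 2.0 / norm_w
```

The sign of distance(x) gives the predicted class; its magnitude is the Euclidean distance to the hyperplane.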
Linear SVM Mathematically

M = margin width.

What we know:
w · x⁺ + b = +1
w · x⁻ + b = -1
w · (x⁺ - x⁻) = 2

Therefore M = (x⁺ - x⁻) · w/‖w‖ = 2/‖w‖
Linear SVM Classifier Mathematically

Assuming all data is at least distance 1 from the hyperplane, the following two constraints follow for a training set {(xi, yi)}:

wᵀxi + b ≥ 1 if yi = +1;  wᵀxi + b ≤ -1 if yi = -1

For support vectors, the inequality becomes an equality; since each example's distance from the hyperplane is r = (wᵀx + b)/‖w‖, the margin is ρ = 2/‖w‖.

Find w and b such that ρ = 2/‖w‖ is maximized for all {(xi, yi)}.

A better formulation: minimize ½wᵀw subject to yi(wᵀxi + b) ≥ 1.
This is a quadratic programming problem in which a Lagrange multiplier λi is associated with every constraint in the primal problem. The Lagrange dual form of the primal formulation:

Find λ1 ... λn such that

Q(λ) = Σ_{i=1}^{n} λi - ½ Σ_{i=1}^{n} Σ_{j=1}^{n} λi·λj·yi·yj·xiᵀxj is maximized.

The solution: w = Σ λi·yi·xi, b = yk - wᵀxk for any xk such that λk ≠ 0.

The classifying decision function is: f(x) = sign(Σ λi·yi·xiᵀx + b)
The classifier is a separating hyperplane.

The most "important" training points are the support vectors; they define the hyperplane.

Quadratic optimization algorithms can identify which training points xi are support vectors with non-zero Lagrange multipliers λi.
Non-linear SVMs: Feature spaces

What if the training set is not linearly separable? The original feature space can always be mapped to some higher-dimensional feature space where the training set is separable:

Φ: x → φ(x)
Nonlinear SVM - Overview

SVM locates a separating hyperplane in the feature space and classifies points in that space.

It does not need to represent the space explicitly; it simply defines a kernel function.

The kernel function plays the role of the dot product in the feature space.
The "Kernel Functions"

If every data point is mapped into a high-dimensional space via some transformation Φ: x → φ(x), the inner product becomes K(xi, xj) = φ(xi)ᵀφ(xj).

The linear classifier relies on the inner product between vectors: K(xi, xj) = xiᵀxj.

A kernel function is a function that corresponds to an inner product in some feature space. For each candidate K(xi, xj), checking that it can be written in the form φ(xi)ᵀφ(xj) can be cumbersome.
Examples of kernel functions:

Polynomial of power p: K(xi, xj) = (1 + xiᵀxj)ᵖ

Gaussian (radial-basis function network): K(xi, xj) = exp(-‖xi - xj‖²/(2σ²))
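Both kernels above can be sketched directly in Python (an illustration; the defaults p = 2 and σ = 1 are assumed values, not from the slides):

```python
import math

# Polynomial kernel: K(xi, xj) = (1 + xi.xj)^p
def poly_kernel(xi, xj, p=2):
    return (1.0 + sum(a * b for a, b in zip(xi, xj))) ** p

# Gaussian (RBF) kernel: K(xi, xj) = exp(-||xi - xj||^2 / (2*sigma^2))
def gaussian_kernel(xi, xj, sigma=1.0):
    sq_dist = sum((a - b) ** 2 for a, b in zip(xi, xj))
    return math.exp(-sq_dist / (2.0 * sigma ** 2))
```

Note that the Gaussian kernel of a point with itself is always 1, and it decays toward 0 as the points move apart.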
Non-Linear SVM Classifier Mathematically

Slack variables ηi can be added to allow misclassification of noisy examples.

Find w and b such that

Φ(w, η) = ½wᵀw + C·Σ_{i=1}^{n} ηi is minimized

subject to yi(wᵀφ(xi) + b) ≥ 1 - ηi and ηi ≥ 0, i = 1, 2, ..., n, for all {(xi, yi)}.
Solving the Optimization Problem

The dual problem for non-linear SVM classification:

Find λ1 ... λn such that

Q(λ) = Σ_{i=1}^{n} λi - ½ Σ_{i=1}^{n} Σ_{j=1}^{n} λi·λj·yi·yj·K(xi, xj) is maximized subject to the constraints

(1) Σ λi·yi = 0,  (2) 0 ≤ λi ≤ C for all λi

Again, xi with non-zero λi will be support vectors. Solution:

wo = Σ_{i=1}^{n} λi·yi·xi,  bo = (1/|I|)·Σ_{i∈I} (yi - woᵀxi)

where I is the set of support-vector indices. The decision function is:

f(x) = sign(Σ λi·yi·K(xi, x) + bo)  (4)
Model Selection

We select the best model as follows:

1. Selection methods:
• Backward-Forward (BF)
• Forward-Backward (FB)

2. Quality criteria: we use the minimum description length (MDL), that is,

L(X) = (nk/2)·log((1/nk)·Σ_i η_ik²) + (m/2)·log nk   (7)
Classification Problem

Decision functions:

f(x): x → y, x ∈ Rᵖ, y ∈ {-1, 1}  or  f(x): x → y, x ∈ Rᵖ, y ∈ {0, 1, ..., c-1}

Expected risk:

R(α) = ∫ |f(x) - y| P(x, y) dx dy

Ideal goal: find the decision function f(x) that minimizes the expected risk.

Empirical risk minimization:

R_emp(α) = (1/n)·Σ_i |f(xi) - yi|

To avoid the over-fitting problem, use SRM or minimum description length (MDL), where the empirical risk should be minimized.
Model Validation

We validate the classification model using two methods:

1. Internal validation:

1. The correct classification rate (CCR):

CCR = (1/n)·Σ_{k=1}^{c} CCk   (8)

where CCk is the number of correctly classified observations in class k. The model with the highest CCR has the better performance.

2. The sum of squared classification errors:

SSCE = (1/n)·Σ_{k=1}^{c} (nk - CCk)²   (9)
Choice of kernel:
- Gaussian or polynomial kernel is the default
- if ineffective, more elaborate kernels are needed
- domain experts can give assistance in formulating appropriate similarity measures

Choice of kernel parameters:
- e.g. σ in the Gaussian kernel
- σ is the distance between the closest points with different classifications
- in the absence of reliable criteria, applications rely on the use of a validation set
The MATLAB code for the least squares support vector machine with an arbitrary kernel, to compute b and λ, is given in the course material.
Advanced Optimization Techniques
(AOT-15)

Simulated Annealing
• Simulated annealing (SA) is motivated by an analogy to physical annealing.

• Annealing: heat (metal or glass) and allow it to cool slowly, in order to remove internal stresses and toughen it.

• SA makes use of an iterative improvement procedure.
Figure: a ball rolling on a hilly cost surface. From the initial position of the ball, a greedy algorithm gets stuck at a locally optimum solution; simulated annealing explores more, choosing an uphill (hill climbing) move with a small probability.
SA Parameters

• Zc = objective function value for the current trial solution
• Zn = objective function value for the current candidate to be the next trial solution
• T = temperature parameter that controls the tendency to accept a worse candidate (as the next trial solution)
SA - Procedure

Move selection rule (for a maximization problem):

• If Zn ≥ Zc, always accept this candidate.
• If Zn < Zc, accept the candidate with the following probability:

Prob{acceptance} = eˣ, where x = (Zn - Zc)/T
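The move selection rule can be sketched in Python (an illustration, not course code; the objective, the starting point x = 15.5, the initial temperature T = 0.2·Zc and the neighbourhood spread SD = (U - L)/6 come from the worked example later in this section, while the cooling ratio 0.999 and iteration count are assumed values):

```python
import math, random

# Objective from the worked example: maximize f on [0, 31].
def f(x):
    return 12*x**5 - 975*x**4 + 28000*x**3 - 345000*x**2 + 1800000*x

def simulated_annealing(f, lo=0.0, hi=31.0, x=15.5, n_iter=5000, seed=1):
    rng = random.Random(seed)
    T = 0.2 * abs(f(x))                  # initial temperature, as in the example
    best = x
    for _ in range(n_iter):
        # candidate from a normal neighbourhood with SD = (U - L)/6, clipped to bounds
        xn = min(hi, max(lo, rng.gauss(x, (hi - lo) / 6.0)))
        dz = f(xn) - f(x)
        # accept always if better; otherwise with probability e^{(Zn - Zc)/T}
        if dz >= 0 or rng.random() < math.exp(dz / T):
            x = xn
        if f(x) > f(best):
            best = x
        T *= 0.999                       # slow cooling (assumed ratio)
    return best

best = simulated_annealing(f)
```

Because worse moves are accepted with temperature-dependent probability, the search can escape local maxima before the cooling freezes it near a good solution.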
For a minimization problem:

• If Zn ≤ Zc, always accept this candidate.
• If Zn > Zc, accept the candidate with the following probability:

Prob{acceptance} = eˣ, where x = (Zc - Zn)/T
Let us consider the example of a small nonlinear programming problem (only a single variable). The problem is to

Maximize f(x) = 12x⁵ - 975x⁴ + 28000x³ - 345000x² + 1800000x

subject to 0 ≤ x ≤ 31.

N = number of constraints = 1, U = 31, L = 0.

• Initial trial solution (ITS): x = 15.5 (midway between the U and L limits)
• Neighborhood structure: any feasible solution
With x = 15.5:

Z(c) = f(15.5) = 3741121

and T(1) = 0.2·Z(c) = 748224

SD = (U - L)/6 = 5.167
Hence, Z(n) = f(x) = 3055616.

Here Z(n) < Z(c); being a maximization problem, we will have to test whether to accept a downward step or not:

(Z(n) - Z(c))/T = -0.916
SA Parameters

Two kinds of decisions have to be taken heuristically:

Generic decisions
• Can be taken without a deep insight into the particular problem
• Are tuned experimentally
Generic decisions include:

• Initial temperature
• Temperature schedule
• Cooling ratio
• Stopping criterion
SA Cooling Schedule

• Starting Temperature
• Final Temperature
• Temperature Decrement
• Starting Temperature
– Must be hot enough to allow moves to almost any neighbourhood state (else we are in danger of implementing hill climbing)
– Must not be so hot that we conduct a random search for too long
SA Cooling Schedule - Temperature Decrement

• Temperature Decrement
– Theory states that we should allow enough iterations at each temperature so that the system stabilises at that temperature
– Unfortunately, theory also states that the number of iterations required can grow exponentially with the problem size
SA Cooling Schedule - Temperature Decrement

• Temperature Decrement
– We can either do this by doing a large number of iterations at a few temperatures, a small number of iterations at many temperatures, or a balance between the two
Iterations at each temperature
• A constant number of iterations at each temperature
• An alternative is to dynamically change the number of iterations as the algorithm progresses
Objective Function Value Plot

[Figure: cost function C plotted against the number of iterations. At INIT_TEMP moves are accepted unconditionally; as the temperature falls, uphill moves are accepted with probability e^(-ΔC/temp) and the search passes through hill-climbing phases; at FINAL_TEMP the algorithm behaves as pure hill climbing.]
Outline of a Basic SA

• Initialization. Start with a feasible initial trial solution.

• Iteration. Use the move selection rule to select the next trial solution. (If none of the immediate neighbours of the current trial solution are accepted, the algorithm terminates.)

• Check the temperature schedule. When the desired number of iterations has been performed at the current value of T, decrease T to the next value in the temperature schedule and resume performing iterations at this next value.

• Stopping rule. When the desired number of iterations has been performed at the smallest value of T in the temperature schedule, stop.
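The outline above can be put together into a minimal runnable sketch, here in Python for illustration (the cooling ratio of 0.5 and the iteration counts are assumed values, not from the slides; the objective and the parameters T(1), SD are the slides' single-variable example):

```python
import math
import random

def f(x):
    # the slides' single-variable objective
    return 12*x**5 - 975*x**4 + 28000*x**3 - 345000*x**2 + 1800000*x

L, U = 0.0, 31.0
x = (L + U) / 2              # initial trial solution x = 15.5
zc = f(x)
T = 0.2 * zc                 # initial temperature T(1) = 0.2 Zc
sd = (U - L) / 6             # spread used to generate neighbours
best_x, best_z = x, zc

random.seed(1)
for _ in range(5):           # temperature schedule: 5 temperature values
    for _ in range(100):     # iterations per temperature (a heuristic choice)
        xn = min(U, max(L, random.gauss(x, sd)))
        zn = f(xn)
        # accept improvements always, downhill moves with prob e^((Zn-Zc)/T)
        if zn >= zc or random.random() < math.exp((zn - zc) / T):
            x, zc = xn, zn
            if zc > best_z:
                best_x, best_z = x, zc
    T *= 0.5                 # cooling ratio (assumed value)
print(best_x, best_z)
```

Tracking the best-so-far solution separately is a common practical addition, since SA deliberately accepts downhill moves.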
Coding the Objective Function

To code the objective function we create a MATLAB file named simple_objective.m with the following code in it:

function y = simple_objective(x)
y = -(12*x^5 - 975*x^4 + 28000*x^3 - 345000*x^2 + 1800000*x);

(The objective is negated because the solver minimizes.)
Solution using MATLAB

Open the Optimization Toolbox by typing the optimtool command.

• Solver: select simulannealbnd - Simulated Annealing algorithm solver

• Start Point: enter a random start point (e.g. 15.5)

• Bounds
– Lower Bound: enter the lower bound value (e.g. 0)
– Upper Bound: enter the upper bound value (e.g. 31)

• Options: in the options panel we can choose various algorithm settings
• SA is based on neighbourhood search and allows uphill moves.

• It has a strong analogy to the simulated cooling of material.

• Uphill moves are allowed with a temperature-dependent probability.

• It is simple in concept and implementation.
Genetic Algorithm

Dr. Rajesh Kumar
PhD, PDF (NUS, Singapore)
SMIEEE, FIETE, MIE (I), LMCSI, SMIACSIT, LMISTE, MIAENG
Associate Professor, Department of Electrical Engineering
rkumar@ieee.org, rkumar.ee@gmail.com
http://drrajeshkumar.wordpress.com/
Calculus

The maximum and minimum of a smooth function are reached at a stationary point, where its gradient vanishes.
Hill-climbing

Calculus based approach.

[Figure: hill-climbing on a smooth objective.]
But…

[Figure: the same search on a landscape with several unknown optima marked "?".]

More realistically?

[Figure: a landscape that may be noisy or discontinuous, with many candidate optima marked "?".]
• Noisy?
• Discontinuous?
Scientists have tried to mimic nature throughout history.
The nontraditional optimization algorithms are:

Genetic Algorithms
Neural Networks
Ant Algorithms
Simulated Annealing
Classes of Search Techniques

Search techniques
• Calculus-based techniques
– Direct methods: Fibonacci, Newton
– Indirect methods
• Guided random search techniques
– Evolutionary algorithms: evolutionary strategies, genetic algorithms (parallel, sequential)
– Simulated annealing
• Enumerative techniques
– Dynamic programming
• Genetic algorithms (GAs) are a technique to solve problems which need optimization
• GAs are a subclass of Evolutionary Computing
• GAs are based on Darwin's theory of evolution
• History of GAs
• The center of this all is the cell nucleus
• The nucleus contains the genetic information
Biological Background (2) – Chromosomes
• Each chromosome is built of DNA
• Chromosomes in humans form pairs; there are 23 pairs
• The chromosome is divided into parts: genes
• Genes code for properties
• The possibilities of the genes for one property are called: alleles
• A genotype develops into a phenotype
• Alleles can be either dominant or recessive
• Dominant alleles will always express from the genotype to the phenotype
• Recessive alleles can survive in the population for many generations without being expressed
Biological Background (4) – Reproduction
• During reproduction "errors" occur
• Due to these "errors" genetic variation exists
• The most important "errors" are:
– Recombination (cross-over)
– Mutation
Biological Background (5) – Natural selection
• More individuals are born than can survive, so there is a continuous struggle for life.
• Individuals with an advantage have a greater chance of survival: survival of the fittest.
Biological Background (6) – Natural selection
• Important aspects in natural selection are:
– adaptation to the environment
– isolation of populations in different groups which cannot mutually mate
• If small changes in the genotypes of individuals are expressed easily, especially in small populations, we speak of genetic drift
Genotype space = {0,1}^L   Phenotype space

Encoding (representation): phenotypes are mapped to bit strings such as 10010001, 10010010, 010001001, 011101001.
Decoding (inverse representation) maps the bit strings back to phenotypes.
Genetic Algorithms
Notion of Genetic Algorithms

Human:
1 0 1 1 1 0 0 1 0 1 1 0 0 1 1 0 1 0 1 1 1 0 0
(23 chromosomes)
Two parent strings combine to produce an offspring string:

0 1 1 0 0 1 1 0 0 1 1 0 0 1 1 0
+
1 0 1 0 0 1 1 0 1 1 1 1 0 0 0 0
↓
0 1 1 1 1 0 0 1 0 0 1 0 1 1 1 1
The evolutionary process can be expedited by improving the variety of the gene pool. This is done via mutation.

Mutation Process:
1 0 1 1 1 0 0 1
↓
1 0 0 1 1 0 0 1
(a single bit is flipped)
Simple Genetic Algorithm

{
initialize population;
evaluate population;
while TerminationCriteriaNotSatisfied
{
select parents for reproduction;
perform recombination and mutation;
evaluate population;
}
}
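The pseudocode above can be sketched as a runnable loop, here in Python for illustration (the selection scheme, population size and rates are my assumed choices, not the slides'):

```python
import random

def simple_ga(fitness, n_bits=16, pop_size=20, pc=0.9, pm=0.05, generations=50):
    """Minimal sketch of the simple GA loop:
    selection -> crossover -> mutation -> evaluation."""
    random.seed(0)
    pop = [[random.randint(0, 1) for _ in range(n_bits)] for _ in range(pop_size)]
    for _ in range(generations):
        scored = sorted(pop, key=fitness, reverse=True)
        parents = scored[:pop_size // 2]          # truncation selection
        children = []
        while len(children) < pop_size:
            a, b = random.sample(parents, 2)
            if random.random() < pc:              # one-point crossover
                point = random.randint(1, n_bits - 1)
                child = a[:point] + b[point:]
            else:
                child = a[:]
            for i in range(n_bits):               # bit-flip mutation
                if random.random() < pm:
                    child[i] = 1 - child[i]
            children.append(child)
        pop = children
    return max(pop, key=fitness)

# count of 1-bits ("one-max") as a toy fitness function
best = simple_ga(lambda ind: sum(ind))
print(sum(best))
```

On the one-max toy problem the loop quickly drives the population toward the all-ones string.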
Genetic algorithms are usually applied to maximization problems.

To minimize f(x) (f(x) > 0) using GAs, consider maximization of

1 / (1 + f(x))
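As a tiny sanity check of this transform (the function name `to_fitness` is mine), note that smaller objective values map to larger fitness values:

```python
def to_fitness(f_value):
    # convert a minimization objective f(x) > 0 into a maximization fitness
    return 1.0 / (1.0 + f_value)

print(to_fitness(0.0))   # 1.0 (best possible)
print(to_fitness(9.0))   # 0.1
```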
Example

Maximize y^1.3 + 10e^(-xy) + sin(x - y) + 3 in R = [0,14] × [0,7].

An initial population of six strings is encoded (three bits for x, three for y):
(0,5) = 000101
(2,7) = 001111
(6,1) = 011001
(10,4) = 101100
(12,0) = 110000
(8,6) = 100110
String   (x,y)   f(x,y)
000101   (0,5)   21.02
001111   (2,7)   15.46
011001   (6,1)   4.11
101100   (10,4)  9.17
110000   (12,0)  13.21
100110   (8,6)   13.31

Fitness of the first string:
21.02 / (21.02 + 15.46 + 4.11 + 9.17 + 13.21 + 13.31) = 0.276
Adding the selection probability (each string's share of the total fitness) and the cumulative probability:

String   (x,y)   f(x,y)  Prob.  Cum.Prob.
000101   (0,5)   21.02   0.276  0.276
001111   (2,7)   15.46   0.203  0.479
011001   (6,1)   4.11    0.054  0.533
101100   (10,4)  9.17    0.120  0.653
110000   (12,0)  13.21   0.173  0.826
100110   (8,6)   13.31   0.174  1.000

Roulette-wheel selection: for each slot a random number is drawn and the string whose cumulative-probability interval contains it is copied into the mating pool. For example, 0.269 falls below 0.276 and selects 000101, while 0.923 falls in (0.826, 1.000] and selects 100110. The resulting mating pool is:

000101, 100110, 000101, 001111, 110000, 101100

Crossover: paired strings exchange tails at a randomly chosen crossover site (random numbers such as 0.707 pick the site from intervals 0.000-0.179, 0.180-0.359, 0.360-0.539, 0.540-0.719, 0.720-0.899). For example:

000101 × 100110 → 000110, 100101
110000 × 101100 → 111100, 100000

Mutation: an offspring bit changes only if a random number p > 0.95, so most strings pass through unchanged (e.g. 111100 → 111100 and 100000 → 100000).

After crossover and mutation, the new population and its fitness are:

String   (x,y)   f(x,y)
000110   (0,6)   23.17
001101   (2,5)   11.05
000111   (0,7)   25.43
001101   (2,5)   11.05
111100   (14,4)  9.24
110000   (12,0)  13.21

…and the cycle continues with the next generation.
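The roulette-wheel step of the worked example can be checked with a short Python sketch (an illustration; the function name `roulette` is mine). It reproduces the table's first selection probability and the two sample draws:

```python
import bisect
from itertools import accumulate

# fitness values from the worked example
strings = ['000101', '001111', '011001', '101100', '110000', '100110']
fitness = [21.02, 15.46, 4.11, 9.17, 13.21, 13.31]

total = sum(fitness)
probs = [f / total for f in fitness]       # selection probabilities
cum = list(accumulate(probs))              # cumulative probabilities

def roulette(r):
    """Pick the string whose cumulative-probability interval contains r."""
    return strings[bisect.bisect_left(cum, r)]

print(round(probs[0], 3))   # 0.276
print(roulette(0.269))      # 000101
print(roulette(0.923))      # 100110
```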
TSP30 Solution (Performance = 420)

[Figure: best tour found for the 30-city travelling salesman problem, plotted in the x-y plane (0-100 on each axis).]
Benefits of Genetic Algorithms

Concept is easy to understand
Modular, separate from application
Supports multi-objective optimization
Good for "noisy" environments
Always an answer; the answer gets better with time
Many ways to speed up and improve a GA-based application as knowledge about the problem domain is gained
Easy to exploit previous or alternate solutions
Flexible building blocks for hybrid applications
When to Use a GA

Alternate solutions are too slow or overly complicated
Need an exploratory tool to examine new approaches
Problem is similar to one that has already been successfully solved by using a GA
Want to hybridize with an existing solution
Benefits of the GA technology meet key problem requirements
Some GA Application Types

Domain: Application Types
Control: gas pipeline, pole balancing, missile evasion, pursuit
Design: semiconductor layout, aircraft design, keyboard configuration, communication networks
Scheduling: manufacturing, facility scheduling, resource allocation
Robotics: trajectory planning
Machine Learning: designing neural networks, improving classification algorithms, classifier systems
Select New in the MATLAB File menu.
Select M-File. This opens a new M-file in the editor.
In the M-file, enter the following two lines of code:

function z = my_fun(x)
z = x(1)^2 - 2*x(1)*x(2) + 6*x(1) + x(2)^2 - 6*x(2);

Save the M-file in a directory on the MATLAB path.
FUNCTION HANDLE

A function handle is a MATLAB value that provides a means of calling a function indirectly. The syntax is

handle = @functionname

For example

h = @my_fun
USING THE TOOLBOX

There are two ways to use the genetic algorithm with the toolbox:

Calling the genetic algorithm function ga at the command line

Using the Genetic Algorithm Tool, a graphical interface
AT COMMAND LINE

The syntax is

[x fval] = ga(@fitnessfun, nvars, options)

where
• @fitnessfun is a function handle to the fitness function
• nvars is the number of independent variables for the fitness function
• options is a structure containing options for the genetic algorithm
To open the tool we write the following syntax at the command line:

gatool

[Figure: the Genetic Algorithm Tool window.]
PAUSING AND STOPPING

Click Pause to temporarily suspend the algorithm. To resume the algorithm using the current population at the time you paused, click Resume.
The algorithm stops when one of the following criteria is met:

Generations: the algorithm reaches the specified number of generations.
Time: the algorithm runs for the specified amount of time in seconds.
Fitness limit: the best fitness value in the current generation is less than or equal to the specified value.
Stall generations: there is no improvement in the best fitness value over the specified number of generations.
DISPLAYING PLOTS

[Figure: the Plots pane of the tool, used to display plots of the algorithm's progress.]
For the function we wanted to minimise, i.e. my_fun:

z = x(1)^2 - 2*x(1)*x(2) + 6*x(1) + x(2)^2 - 6*x(2);

the final result obtained using the genetic algorithm will be

x = [0.25836 3.26145]
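This answer can be verified by hand: z depends on x(1) and x(2) only through u = x1 - x2, since z = (x1 - x2)^2 + 6(x1 - x2), which is minimised at u = -3 with value -9. Every point on the line x2 = x1 + 3 is therefore optimal, and the GA result sits on it. A quick Python check (illustrative; the Python port of my_fun is mine):

```python
def my_fun(x):
    # Python port of the MATLAB my_fun
    return x[0]**2 - 2*x[0]*x[1] + 6*x[0] + x[1]**2 - 6*x[1]

# z = u^2 + 6u with u = x1 - x2, minimised at u = -3 where z = -9
print(my_fun([0.0, 3.0]))                    # -9.0 exactly
print(round(my_fun([0.25836, 3.26145]), 4))  # the GA point, also about -9
```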
Control of Induction Motor Drives

Optimization of both the structure and the associated parameters of four cascaded controllers for an IM drive. The GA tests and compares controllers having different orders, and optimizes both the number and the location of the controllers' zeros and poles.

While conventional design strategies first tune the current controller and then the speed controller (thus leading to a potentially sub-optimal couple of controllers), the GA simultaneously optimizes the two controllers to obtain the best overall two-cascaded-loops control system, with the optimization carried out directly on the hardware.
Induction motor drive block diagram

[Figure: block diagram of the induction motor drive.]
Crossover, Mutation, and Selection

Crossover
• Binary flags: multi-point crossover
• Real-valued chromosomes: either heuristic or arithmetical crossover is applied with equal probability.

Mutation

Selection
• Tournament selection
Main Details of the Evolutionary Algorithm

The ability to work on heterogeneous solutions makes the considered GA similar to other evolutionary computation techniques, such as Genetic or Evolutionary Programming (GP or EP).

A significant difference between EP and GAs lies in the rate of application of the mutation.

In the preliminary investigation of the optimal occurrence rate of operators, much faster convergence was obtained with higher mutation rates.
The objective function is a weighted sum of five performance indices that are directly measured on the system's response to a sequence of speed steps of different amplitude.
Overview of the single indexes
Steady state speed response

This index measures the speed error along segments of the speed response settling around the reference speed.
Overview of the single indexes
Speed overshoot

This index measures the speed overshoot when a speed step change is required.
Overview of the single indexes
Transient speed response duration

This index measures the duration of transient conditions, that is, the sum of the durations of the transients.
Overview of the single indexes

This index accounts for undesired ripples and oscillations of the voltage during torque operations.
Overview of the single indexes
Current response

This index measures the sum of absolute current errors.
Experimental Setup

[Figure: experimental rig comprising a PWM inverter, current and voltage sensors, a PC with a ds1104 controller, the induction motor, and a load bank.]
On-line optimization with standard GA: Induction Motor Drive

[Figure: speed response at the end of the GA optimization.]
On-line optimization with standard GA: Induction Motor Drive

[Figure: rotor flux response at the end of the GA optimization.]
Conclusions

Question: 'If GAs are so smart, why ain't they rich?'

Answer: 'Genetic algorithms are rich - rich in application across a large number of disciplines.'
- David E. Goldberg, Genetic Algorithms in Search, Optimization and Machine Learning
Biogeography-Based Optimization

Dr. Ajay Kumar Bansal
Director
Poornima Institute of Engineering & Technology, Jaipur
1. Biogeography
2. Biogeography-Based Optimization
3. BBO algorithm
4. BBO flow chart
5. Migration
6. Mutation
Biogeography

The study of the geographic distribution of biological organisms.
• Mauritius
• 1600s
Biogeography - cont…

The biogeography concept specifies that species migrate from one island to another depending on various factors. It also analyses how species arise or become extinct.

In biogeography, habitats are ecological areas inhabited by particular species.
Animals and plants naturally arrange themselves in a certain region or area. Major isolated land areas and islands often evolve their own distinct plants and animals.
Biogeography - cont…

A population is a group of individuals of the same species living in an area.

Population ecology focuses on factors affecting how many individuals of a species live in an area.

Ecologists have long recognized global and regional patterns in the distribution of species.
Depending on the climate and environment of the different islands, animals found ways to adapt so as to live better.

No two species can occupy the same niche; eventually one will become better adapted, forcing the other to change environments.
Biogeography - cont…

Emigration occurs because of the accumulation of random effects on a large number of species with large populations.

When a species emigrates from an island, it does not completely disappear; only a few representatives emigrate.
The habitat suitability index (HSI) is the parameter that defines each habitat. A habitat is a probable solution of the problem.

The features that characterize habitability are called suitability index variables (SIVs). SIVs can be considered the independent variables of the habitat, and HSI the dependent variable.
Due to limited resources, populations may be evenly distributed to minimize competition, as is found in forests, where competition for sunlight produces an even distribution of trees.

The more the environment changes, the more the distribution of species changes.
An island with a high population has no more space to live, so species can leave; an island with zero population has a large space to live and no species, so species can come.
Islands with high HSI support many species; islands with low HSI support only a few species.

Islands with high HSI have many species that emigrate to nearby habitats because of their large populations and the large number of species they host.
Islands with a high HSI
◦ have a high emigration rate
◦ have a low immigration rate

Islands with a low HSI
◦ have a high immigration rate because of their low populations

[Figure: immigration and emigration rates as functions of the number of species.]
Biogeography - cont…

As habitat suitability improves:
◦ The species count increases
◦ Emigration increases (more species leave the habitat)
◦ Immigration decreases (fewer species enter the habitat)

I is the maximum possible immigration rate.
Biogeography - cont…

I occurs when the island is empty of any species and offers the maximum opportunity for species on other islands to immigrate and settle on it; as the number of species on the island increases, the opportunity for settlement decreases, and thus the immigration rate decreases too.
Biogeography-Based Optimization - cont…

The probability Ps that the habitat contains exactly S species changes as species immigrate or emigrate. For a small time step (Δt → 0) this reduces to

Ṗs = −(λs + μs) Ps + μ(s+1) P(s+1)                         S = 0
Ṗs = −(λs + μs) Ps + μ(s+1) P(s+1) + λ(s−1) P(s−1)         1 ≤ S ≤ Smax − 1
Ṗs = −(λs + μs) Ps + λ(s−1) P(s−1)                         S = Smax

A linear and constant relationship is considered between the rates and the species count:

λk = I (1 − K/P)
μk = E K/P

where K represents the species count of the kth individual, P represents the maximum species count, E is the maximum emigration rate and I is the maximum immigration rate.

23/05/15 DEE, MNIT Jaipur
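The linear migration model can be sketched in a few lines of Python (illustrative; the function name `migration_rates` is mine). Immigration falls from I to 0 and emigration rises from 0 to E as the species count K grows toward the maximum P:

```python
def migration_rates(k, n, I=1.0, E=1.0):
    """Linear BBO migration model: immigration falls and emigration
    rises as the species count k grows toward the maximum n."""
    lam = I * (1 - k / n)   # immigration rate, lambda_k
    mu = E * k / n          # emigration rate, mu_k
    return lam, mu

for k in (0, 5, 10):
    print(k, migration_rates(k, 10))
```

At k = 0 the island accepts immigrants at the maximum rate I; at k = n it only emigrates.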
BBO Algorithm

1. Define the mutation probability and the elitism parameter.
2. Initialize the population.
3. Calculate the immigration rate and emigration rate for each island. Good solutions have high emigration rates and low immigration rates. Bad solutions have low emigration rates and high immigration rates.
4. Probabilistically choose the immigrating islands based on the immigration rates. Use roulette wheel selection based on the emigration rates to select the emigrating islands.
5. Migrate features (SIVs) from the emigrating islands to the immigrating islands.
6. Probabilistically mutate the population.
7. Replace the worst islands with the previous generation's elite islands.
8. If the termination criterion is met, terminate; otherwise, go to step 3.
BBO flow chart

[Figure: flow chart of the BBO algorithm.]
Migration

Using the emigration and immigration rates of each habitat, BBO probabilistically shares information between habitats.

If a habitat is selected for modification, each SIV in that habitat is modified probabilistically based on the immigration rate.
Migration

Select Hi with probability ∝ λi
If Hi is selected
  For j = 1 to n
    Select Hj with probability ∝ μj
    If Hj is selected
      Randomly select an SIV σ from Hj
      Replace a random SIV in Hi with σ
    end
  end
end
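The migration loop above can be sketched as follows (a Python illustration; the function name `migrate` and the toy population are mine). Habitats are selected by their immigration rates, and the donor habitat is chosen by roulette-wheel selection on the emigration rates:

```python
import random

def migrate(habitats, lambdas, mus):
    """Sketch of the BBO migration loop: each habitat selected by its
    immigration rate receives an SIV from a habitat chosen in proportion
    to the emigration rates."""
    n = len(habitats)
    for i in range(n):
        if random.random() < lambdas[i]:      # select Hi with prob ~ lambda_i
            # roulette-wheel choice of the emigrating habitat Hj, weighted by mu_j
            j = random.choices(range(n), weights=mus, k=1)[0]
            sigma = random.choice(habitats[j])            # random SIV from Hj
            habitats[i][random.randrange(len(habitats[i]))] = sigma
    return habitats

random.seed(0)
pop = [[1.0, 2.0], [3.0, 4.0], [5.0, 6.0]]
new_pop = migrate(pop, lambdas=[0.9, 0.5, 0.1], mus=[0.1, 0.5, 0.9])
print(new_pop)
```

Note that good habitats (high μ, low λ) mostly donate SIVs, while poor habitats mostly receive them.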
Mutation

A habitat's HSI can change suddenly due to apparently random events.
• Using the species-count probabilities, mutation rates are calculated.
• Using the mutation rates, low-HSI as well as high-HSI habitats are randomly modified.

For j = 1 to m
  If Hi(j) is selected for mutation
    Replace Hi(j) with a randomly generated SIV
  end
end
Comparison between BBO, PSO and GA:

Developed: BBO in 2008; PSO in 1995; GA in the early 1970s.
Introduced by: BBO by Dan Simon; PSO by J. Kennedy and R. Eberhart; GA by John Henry Holland.
Optimization Technique (OT): BBO is a bio-inspired OT that probabilistically shares features between solutions based on the solutions' fitness values; PSO is a robust stochastic OT based on the movement of swarms; GA is an intelligent OT that relies on the parallelism found in nature.
Comparison between BBO, PSO and GA

Solutions: in BBO, habitats having identical characteristics are grouped; in PSO, solutions are grouped into clusters of similar characteristics; GAs do not have any such built-in tendency.
Mutation: BBO combines two ideas, global recombination and uniform crossover, which use the entire population; PSO does not have genetic operators such as crossover or mutation, relying instead on particle updates; all GAs require some form of recombination, as this allows the creation of new solutions.
Rastrigin: f(x) = 10n + Σ_{i=1}^{n} [x_i² − 10 cos(2π x_i)]

nonseparable, regular, multimodal
Rastrigin function optimization result (n = 20)

[Figure: best function value versus generation, decreasing from about 300 toward 0.]

Best chromosome = 52 52 52 52 52 52 52 52 52 52 52 52 52 52 52 52 52 52 52 52
Minimum cost = 0
Rosenbrock: f(x) = Σ_{i=1}^{n−1} [100(x_{i+1} − x_i²)² + (x_i − 1)²]

nonseparable, regular, unimodal
30
Rosenbrock function optimization result
Biogeography-Based Optimization
n=20
5
5
1
x 10
20
5
ay
M
4
2
-2
Minimum Cost
18
3
r,
pu
2
ai
,J
1
IT
N
,M
0
0 50 100 150
15
Generation
T'
Best chromosome = 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1
AO
1 1 1 1
Minimum cost = 0
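The two benchmark functions are easy to implement and to check at their known global minima (a Python illustration; the function names are mine). Rastrigin is minimised at x = 0 and Rosenbrock at x = 1, both with value 0, matching the "Minimum cost = 0" results above:

```python
import math

def rastrigin(x):
    # f(x) = 10n + sum(x_i^2 - 10 cos(2 pi x_i)); global minimum 0 at x = 0
    return 10 * len(x) + sum(xi**2 - 10 * math.cos(2 * math.pi * xi) for xi in x)

def rosenbrock(x):
    # f(x) = sum(100 (x_{i+1} - x_i^2)^2 + (x_i - 1)^2); global minimum 0 at x = 1
    return sum(100 * (x[i + 1] - x[i]**2)**2 + (x[i] - 1)**2
               for i in range(len(x) - 1))

n = 20
print(rastrigin([0.0] * n))   # 0.0
print(rosenbrock([1.0] * n))  # 0.0
```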
[1] D. Simon, "Biogeography-based optimization," IEEE Transactions on Evolutionary Computation, vol. 12, no. 6, pp. 702-713, December 2008.
[2] R. MacArthur and E. Wilson, The Theory of Biogeography, Princeton, NJ: Princeton Univ. Press, 1967.
[3] M. R. Lohokare, S. S. Pattnaik, S. Devi, K. M. Bakwad, D. G. Jadhav, "Biogeography Based Optimization Technique for Block Based Motion Estimation in Video Coding," NNCI 2010 - National Conference on Computational Instrumentation, 19-20 March 2010.
[4] Surbhi Gupta, Krishma Buchar, Parvinder S. Sandhu, "Implementing Color Image Segmentation Using Biogeography Based Optimization," 2011 International Conference on Software and Computer Applications, IPCSIT vol. 9 (2011).
[5] Haiping Ma, Suhong Ni, and Man Sun, "Equilibrium Species Counts and Migration Model…"
Short Term Course
Advanced Optimization Techniques

Day 3 : (20-May-2015)
Differential Evolution

Dr. Prerna Jain
Assistant Professor
Department of Electrical Engineering
Optimization corresponds to extreme values of one or more objectives.

Simple definition: "Consists of maximizing or minimizing a real function by systematically choosing input values from within an allowed set and computing the value of the function."

"More generally, finding 'best available' values of some objective function given a defined domain (or a set of constraints), including a variety of different types of objective functions and different types of domains."

• This extreme property has led to its great importance in engineering design, scientific experiments and business decision making.
Rationale

• Classical optimization techniques (direct methods and others):
Ø Weighted Sum Method
Ø ε-Constraint Method
Ø Weighted Metric Method
Limitations of classical techniques:
Ø Discontinuous, mixed real/integer problems
Ø Large number of variables
Ø Multiminima
Ø Multiobjective
Ø Suboptimal solution: no global optimization
Ø Specific problems
Ø No parallel computing
Ø Constraint handling
Ø …

Not suitable for many real-world practical problems.
R. Storn and K. Price, "Differential Evolution - A Simple and Efficient Adaptive Scheme for Global Optimization over Continuous Spaces," Tech. Report, International Computer Science Institute (Berkeley), 1995.

R. Storn and K. Price, "Differential Evolution - A Simple and Efficient Heuristic for Global Optimization over Continuous Spaces," Journal of Global Optimization, vol. 11, no. 4, pp. 341-359, 1997.
Stochastic: random
Population based: parallel computing
Developed to optimize real-parameter, real-valued functions
Genetic type of algorithm
Fast
Performs extremely well on a wide variety of problems

The general problem formulation is: for an objective function f : X ⊆ R^D → R, where the feasible region X ≠ ∅, the minimization problem is to find x* ∈ X such that f(x*) ≤ f(x) for all x ∈ X, where f(x*) ≠ −∞.
The main stages of DE are:

Initialization of the population of candidate solutions
Mutation: expands the search space

Saturday, 23 May 15
Problem Formulation
Consider a system with real valued properties as objectives of the system to be
optimized
Real-valued objectives:
    g_m ; m = 0, 1, 2, ....., P − 1
Real-valued constraints:
    g_m ; m = P, P + 1, ....., P + C − 1
Properties: need not be optimized, but neither shall be degraded.
All the properties depend on the real-valued parameters
    x_j ; j = 0, 1, 2, ......., D − 1
Bounds of variables:
    x_j ∈ [x_jl, x_jh]
    x = (x_0, x_1, .........., x_{D−1})^T
Such that the g_m objectives are optimized and the g_m constraints are met.
z(x) = Σ_{m=0}^{P+C−1} w_m · g_m(x)

and the task is now:

min z(x)
DE Approach
Initialization
Parallel direct search method
NP parameter vectors (population members):
    x_{i,G} = [x_{1,i,G}, x_{2,i,G}, .............., x_{D,i,G}]   i = 1, 2, 3, ....., NP
G is the generation number.
NP does not change during the generations.

Initialization: randomly select the initial parameter values uniformly on the intervals x_j ∈ [x_jl, x_jh], while satisfying other equality and inequality constraints, for all the population members.
DE Approach…..
Mutation
The crucial idea behind DE is a new mutation scheme for generating trial parameter vectors: add difference vectors to a base individual to produce a mutant vector for each target vector in the current population, in order to explore the search space.
Associated mutant vector:
    v_{i,G} = [v_{1,i,G}, v_{2,i,G}, .............., v_{D,i,G}]   i = 1, 2, 3, ....., NP
Different mutation strategies:
1. "DE/rand/1":   v_{i,G} = x_{r1,G} + F (x_{r2,G} − x_{r3,G})
2. "DE/best/1":   v_{i,G} = x_{best,G} + F (x_{r1,G} − x_{r2,G})
3. "DE/rand-to-best/1"
4. "DE/best/2"
5. "DE/rand/2"
The indices r1, r2, r3 are chosen randomly from [1, NP] and are different from i.
'F' is the scaling parameter: a real and constant factor which controls the amplification of the difference vector.
x_{best,G} is the best individual vector at generation G.
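As an illustrative sketch (not from the slides), the two most common strategies above can be written with NumPy; the function name and signature are my own assumptions:

```python
import numpy as np

def mutate(pop, i, F=0.5, strategy="rand/1", best_idx=None):
    """Build a mutant vector for target index i (DE/rand/1 or DE/best/1)."""
    NP = len(pop)
    # pick r1, r2, r3 uniformly, all distinct and different from i
    r1, r2, r3 = np.random.choice([k for k in range(NP) if k != i], 3,
                                  replace=False)
    if strategy == "rand/1":
        return pop[r1] + F * (pop[r2] - pop[r3])        # DE/rand/1
    elif strategy == "best/1":
        return pop[best_idx] + F * (pop[r1] - pop[r2])  # DE/best/1
    raise ValueError(strategy)
```

With F = 0 the mutant collapses onto the base vector x_{r1,G}, which shows why F controls the amplification of the difference vector.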
Recombination/Crossover Operation
Applied to each pair of the target vector & its corresponding mutant vector
xi ,G = [ x1,i ,G , x2,i ,G ,..............xD ,i ,G ] i = 1, 2,3,....., NP
v_{i,G} = [v_{1,i,G}, v_{2,i,G}, .............., v_{D,i,G}]   i = 1, 2, 3, ....., NP

to generate a trial vector

u_{i,G} = [u_{1,i,G}, u_{2,i,G}, .............., u_{D,i,G}]   i = 1, 2, 3, ....., NP

u_{j,i,G} = { v_{j,i,G}   if rand_j(0,1) ≤ CR or (j = j_rand)
            { x_{j,i,G}   otherwise                 j = 1, 2, ............., D

CR controls the fraction of parameter values copied from the mutant vector.
The condition j = j_rand ensures that the trial vector u_{i,G} will differ from its target vector x_{i,G} by at least one parameter.
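A hedged sketch of this binomial crossover rule in NumPy (names are illustrative, not from the slides):

```python
import numpy as np

def binomial_crossover(target, mutant, CR=0.9, rng=np.random):
    """Binomial crossover: take each component from the mutant with
    probability CR; position j_rand is always taken from the mutant so the
    trial differs from the target in at least one component."""
    D = len(target)
    j_rand = rng.randint(D)          # forced mutant position
    mask = rng.rand(D) <= CR
    mask[j_rand] = True
    return np.where(mask, mutant, target)
```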
Selection
If the values of any parameter of a trial vector exceed the corresponding upper & lower bounds, they are uniformly and randomly reinitialized.

Evaluate the fitness (value of the objective function) of all trial vectors.

Selection among the set of trial and corresponding target vectors is carried out to choose the best, on the basis of their respective objective function values:

x_{i,G+1} = { u_{i,G}   if f(u_{i,G}) ≤ f(x_{i,G})
            { x_{i,G}   otherwise

The greedy scheme is the key for fast convergence.

Convergence
These steps are repeated generation after generation until some specific termination
criteria are satisfied.
Algorithm 1: DE Algorithm
while termination criteria are not satisfied do
    for i = 1 to Np do
        Select uniformly at random r1 ≠ r2 ≠ r3 ≠ i
        jrand = randint(1, D)
        for j = 1 to D do
            if randreal_j(0,1) ≤ CR or j == jrand then
                U_i(j) = X_r1(j) + F × (X_r2(j) − X_r3(j))
            else
                U_i(j) = X_i(j)
            end if
        end for
    end for
    for i = 1 to Np do
        Evaluate offspring U_i
        if f(U_i) ≤ f(Pop_i) then
            Pop_i = U_i
        end if
    end for
end while
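The full DE loop (initialization, DE/rand/1 mutation, binomial crossover, greedy selection) can be sketched as a minimal Python implementation. This is an illustrative reading of the steps above, tested on the sphere function f(x) = Σx²; clipping is used for bound handling instead of the random reinitialization the slides describe, and all names are my own:

```python
import numpy as np

def de_optimize(f, bounds, NP=30, F=0.5, CR=0.9, gens=200, seed=1):
    """Minimal DE/rand/1/bin minimizer (illustrative sketch).
    bounds: sequence of (low, high) pairs, one per dimension."""
    rng = np.random.RandomState(seed)
    lo, hi = np.asarray(bounds, dtype=float).T
    D = len(lo)
    pop = lo + rng.rand(NP, D) * (hi - lo)            # uniform initialization
    fit = np.array([f(x) for x in pop])
    for _ in range(gens):
        for i in range(NP):
            idx = [k for k in range(NP) if k != i]
            r1, r2, r3 = rng.choice(idx, 3, replace=False)
            v = pop[r1] + F * (pop[r2] - pop[r3])     # DE/rand/1 mutation
            v = np.clip(v, lo, hi)  # bound handling (slides reinitialize randomly)
            j_rand = rng.randint(D)
            cross = (rng.rand(D) <= CR) | (np.arange(D) == j_rand)
            u = np.where(cross, v, pop[i])            # binomial crossover
            fu = f(u)
            if fu <= fit[i]:                          # greedy selection
                pop[i], fit[i] = u, fu
    best = fit.argmin()
    return pop[best], fit[best]
```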
Advice:
NP = 5 or 10 times the number of parameters in a vector.
If solutions get stuck: start with F = 0.5 and then increase F or NP.
F ∈ [0.4, 1] is a very effective range.
CR = 0.9 or 1 for a quick solution.
Advantages
Immediately accessible for practical applications
Simple structure
Ease of use
Speed to get the solutions
Robustness
Discussions
Performance highly depends on the chosen trial vector generation strategy & control parameter values (two important shortcomings).
Inappropriate values may lead to premature convergence.
DE researchers have suggested many empirical guidelines.
Conflicting conclusions confuse scientists and engineers.
Many modifications have been suggested to avoid manual tuning:

Ø Fuzzy Adaptive DE
  Uses fuzzy logic controllers whose inputs incorporate the relative function values and individuals of successive generations to adapt the parameters for the mutation and crossover operations.

Ø Adapted Pareto DE
  Based on the idea of controlling the population diversity & implemented a multi-population approach.
Discussions……
Hybridization with other techniques is one such direction:

Trigonometric mutation operation hybridized with DE

Combination of DE with the Estimation of Distribution Algorithm (EDA), which uses a probability model to determine promising regions in order to focus the search process on those areas

Two-level orthogonal crossover

Local search incorporated into classical DE
Discussions
Delivers better & faster solutions using its different crossover strategies.

Has good exploration ability of the search space due to its unique mutation & stochastic crossover.

Increasing system complexity & size retards its capability to map all the unknown variables together.

Initially solutions move very fast towards the optimal point, but DE fails to perform at later stages during fine tuning.

The crossover operation causes many solutions to lose their strength at a later stage.

DE has good exploration ability to find the region of global optima.

Poor exploitation ability for global optimization (another important problem).
Discussions….
Many modifications on the basic DE have been proposed to remedy the situation:

Ø DE in combination with sequential quadratic programming (DEC-SQP)
Ø Hybrid Differential Evolution (HDE)
Ø Variable Scaling Hybrid Differential Evolution (VSHDE)
Ø Self-Tuning HDE (STHDE)
Ø Hybrid Differential Evolution with Biogeography Based Optimization (DE/BBO)
Ø etc.
Biogeography Based Optimization (BBO)
Ø Based on the natural immigration & emigration of species in search of more suitable habitat, and the evolution & extinction of species
Ø Areas that are well suited have a high Habitat Suitability Index (HSI)
Ø Variables that characterize habitability are called Suitability Index Variables (SIV)
Ø Migration & mutation are its two operations
Ø Unique attribute of BBO: especially poor solutions tend to accept more useful information from good solutions
Ø This makes BBO good at exploiting the information of the current population
Ø Renders it good exploitation properties for global optimization

DE/BBO
Ø Migration operator of BBO along with the mutation, crossover & selection operators of DE
Habitat Migration in BBO
Algorithm 2: Habitat Migration [64]
for i = 1 to Np do
    Select habitat H_i with probability proportional to λ_i
    if randreal(0,1) < λ_i then
        for j = 1 to Np do
            Select another habitat H_j with probability proportional to µ_j
            if randreal(0,1) < µ_j then
                Randomly select an SIV from the habitat H_j
                Replace a random SIV in H_i with the selected SIV
            end if
        end for
    end if
end for
DE/BBO
Hybrid Migration Operator
The hybrid migration operator is the most important step in DE/BBO.

It combines DE's mutation & crossover with the migration operation of BBO.

In this operator, an offspring incorporates new features from the population members.

The fundamental idea of such hybridization is based on two considerations:
- Destruction of good solutions is minimized
- Search space exploration is effective through the crossover and mutation operators of DE
Hybrid Migration Operator
Algorithm 3: Hybrid Migration Operator [64]
for i = 1 to Np do
    Select uniformly at random r1 ≠ r2 ≠ r3 ≠ i
    jrand = randint(1, D)
    for j = 1 to D do
        if randreal(0,1) < λ_i then
            if randreal_j(0,1) ≤ CR or j == jrand then
                U_i(j) = X_r1(j) + F × (X_r2(j) − X_r3(j))
            else
                Select another habitat X_k with probability proportional to µ_k
                U_i(j) = X_k(j)
            end if
        else
            U_i(j) = X_i(j)
        end if
    end for
end for
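A sketch of this hybrid migration operator in Python, assuming the immigration rates λ and emigration rates µ have already been computed for each individual (the function and parameter names are mine, not from the slides):

```python
import numpy as np

def hybrid_migration(pop, i, lam, mu, F=0.5, CR=0.9, rng=np.random):
    """One offspring of the hybrid migration operator: DE mutation with
    probability CR, BBO migration otherwise, gated per dimension by the
    immigration rate lam[i].  lam, mu: per-individual rates (precomputed)."""
    NP, D = pop.shape
    r1, r2, r3 = rng.choice([k for k in range(NP) if k != i], 3, replace=False)
    j_rand = rng.randint(D)
    u = pop[i].copy()
    p = mu / mu.sum()                      # emigration-proportional selection
    for j in range(D):
        if rng.rand() < lam[i]:
            if rng.rand() < CR or j == j_rand:
                u[j] = pop[r1, j] + F * (pop[r2, j] - pop[r3, j])  # DE mutation
            else:
                k = rng.choice(NP, p=p)                            # BBO migration
                u[j] = pop[k, j]
    return u
```

With lam[i] = 0 the offspring is an unchanged copy of the target, which matches the outer else-branch of the pseudocode.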
Main Procedure

The DE/BBO algorithm is developed by integrating the hybrid migration operator into DE.

Compared with DE, DE/BBO needs a little extra computational cost in sorting the population & calculating the migration rates.

It needs less time than BBO.

The DE/BBO structure is simple.

It is able to explore new search space with the mutation operator of DE & to exploit the population information with the migration operator of BBO.
DE/BBO Algorithm
Algorithm 4: DE/BBO Algorithm [64]
Generate initial population Pop
Evaluate fitness for each individual in Pop
while termination criteria are not satisfied do
    For each individual, map fitness to the number of species
    Calculate immigration rate λ_i and emigration rate µ_i for each individual
    Modify the population with the hybrid migration operation of Algorithm 3
    for i = 1 to Np do
        Evaluate offspring U_i
        if f(U_i) ≤ f(Pop_i) then
            Pop_i = U_i
        end if
    end for
end while
Applications till date
Economic Load Dispatch
Economic Emission Dispatch
Strategic Bidding of a GENCO in the Electricity Market
Thank You

Queries?
Particle Swarm Optimisation (PSO)
(To Follow the Leader)
Introduction
Ø PSO is a population-based stochastic optimization technique developed by Eberhart and Kennedy in 1995.
Ø It is inspired by the social behavior of bird flocking or fish schooling.
Ø It is inherently designed for continuous optimization problems.
Ø It provides a platform for many swarm-based optimization techniques such as TLBO, CSO, BA, GSO, etc.
Position Update (Steering the Vehicle)

A swarm consists of a number of particles "or possible solutions" that fly through the feasible solution space to explore optimal solutions.

Each particle updates its position based on its own best exploration, the best swarm overall experience, and its previous velocity vector, according to the following model:

v_j^{k+1} = W·v_j^k + C1 × rand1() × (pbest_j − s_j^k)/Δt + C2 × rand2() × (gbest_j − s_j^k)/Δt    (1)

First term: inertia component that provides momentum.

s_j^{k+1} = s_j^k + v_j^{k+1} × Δt    (2)

W(k) = ((Wmin − Wmax)/(itermax − 1)) (k − 1) + Wmax    (3)
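Equations (1)–(3), with the conventional Δt = 1, can be sketched as a single NumPy update step (an illustrative reading, not the lecturers' code; names are assumptions):

```python
import numpy as np

def pso_step(pos, vel, pbest, gbest, k, iter_max, C1=2.0, C2=2.0,
             w_max=0.9, w_min=0.4, rng=np.random):
    """One PSO iteration following Eqs. (1)-(3) with dt = 1.
    pos, vel, pbest: (N, D) arrays; gbest: (D,); k: 1-based iteration."""
    W = (w_min - w_max) / (iter_max - 1) * (k - 1) + w_max  # Eq. (3): linear decay
    N, D = pos.shape
    vel = (W * vel
           + C1 * rng.rand(N, D) * (pbest - pos)            # cognitive term
           + C2 * rng.rand(N, D) * (gbest - pos))           # social term
    return pos + vel, vel                                    # Eq. (2)
```

Note that Eq. (3) gives W = w_max at k = 1 and W = w_min at k = iter_max, so exploration is gradually traded for exploitation.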
Advantages and Limitations

Advantages:
1. Simplicity: single control equation
2. CPU time: shorter
3. Implementation: easy

Limitations:
1. Sensitive to parameter tuning
Ø Local trapping
Ø Convergence
Ø Diversity
Ø Accuracy
Ø Efficiency
Ø Global optima
Ø Local optima
Ø Exploration
Ø Exploitation
Ø Promising region
Ø Communication
Ø The velocity of particles remains uncontrolled during the computational process.

Ø Particles approach the promising region with relatively higher velocities and thus eventually miss the global/near-global optima.
Ø The velocity should be controlled in such a way that it maintains a proper balance between exploration and exploitation of the search space.

Ø All particles should start with suitably higher velocities and then they must be continuously retarded till the maximum iterations are exhausted.
Ø Dynamic control of acceleration coefficients: linear control, exponential control, etc.
Ø Velocity control: constriction factor approach, empirical formula, etc.
Ø Improving cognitive/social behavior: splitting
Ø Increased communication
Ø Hybridized approaches
Sample Result

[Figure slides: sample result plots — not recoverable from the extraction]
Selected References
1. Vlachogiannis JG, Lee KY. Economic load dispatch – a comparative study on heuristic optimization techniques with an improved coordinated aggregation-based PSO. IEEE Transactions on Power Systems 2009; 24(2):991–1001.
2. Park JB, Lee KS, Shin JR, Lee KY. A particle swarm optimization for economic dispatch with non-smooth cost functions. IEEE Transactions on Power Systems 2005; 20(1):34–42.
3. Wang Y, Zhou J, Zhou C, Wang Y, Qin H, Lu Y. An improved self-adaptive PSO technique for short-term hydro-thermal scheduling. Expert Systems with Applications 2012; 39:2288–2295.
4. Jiejin C, Xiaoqian M, Lixiang L, Haipeng P. Chaotic particle swarm optimization for economic dispatch considering the generator constraints. Energy Conversion & Management 2007; 48:645–653.
5. Park JB, Jeong YW, Shin JR, Lee KY. An improved particle swarm optimization for non-convex economic dispatch problems. IEEE Transactions on Power Systems 2010; 25(1):156–166.
6. Ratnaweera A, Halgamuge SK, Watson HC. Self-organizing hierarchical particle swarm optimizer with time-varying acceleration coefficients. IEEE Transactions on Evolutionary Computation 2004; 8(3):240–255.
7. Chaturvedi KT, Pandit M, Shrivastava L. Self-organizing hierarchical particle swarm optimization for non-convex economic dispatch. IEEE Transactions on Power Systems 2008; 23(3):1079–1087.
8. Ivatloo BM. Combined heat and power economic dispatch problem solution using particle swarm optimization with time varying acceleration coefficients. Electric Power Systems Research 2013; 95:9–18.
9. Coelho LDS, Lee CS. Solving economic load dispatch problems in power systems using chaotic and Gaussian particle swarm optimization approaches. Electrical Power and Energy Systems 2008; 30:297–307.
10. Yu B, Yuan X, Wang J. Short-term hydro-thermal scheduling using particle swarm optimization method. Energy
11. Wang L, Singh C. Stochastic economic emission load dispatch through a modified particle swarm optimization algorithm. Electric Power Systems Research 2008; 78:1466–1476.
12. Baskar G, Mohan MR. Security constrained economic load dispatch using improved particle swarm optimization suitable for utility system. Electrical Power and Energy Systems 2008; 30:609–613.
13. Selvakumar AI, Thanushkodi K. A new particle swarm optimization solution to non-convex economic dispatch problems. IEEE Transactions on Power Systems 2007; 22(1):42–51.
Thanks & Any Queries
Ant Colony Optimization

Assistant Professor
Metaheuristics guide an underlying search to escape from being trapped in local optima and to explore better areas of the solution space.

Ø Single-solution approaches
Concern only one solution at a particular time during the search. The search is very much restricted to local regions, hence so-called local search methods.
• Simulated Annealing
• Tabu Search
Ø Population-based approaches
Concern a population of solutions at a time.
• Genetic Algorithm
• Particle Swarm Optimisation
• Ant Colony Optimisation
• BAT Algorithm
• Harmony Search
R(x) = 20 + x1² + x2² − 10(cos 2πx1 + cos 2πx2)
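This is the 2-D Rastrigin test function; a direct transcription in Python, with the global minimum R(0, 0) = 0 surrounded by a regular grid of local minima:

```python
import math

def rastrigin2(x1, x2):
    """2-D Rastrigin function: many regularly spaced local minima,
    one global minimum R(0, 0) = 0."""
    return 20 + x1**2 + x2**2 - 10 * (math.cos(2 * math.pi * x1)
                                      + math.cos(2 * math.pi * x2))
```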
How AI works

[Figure slides: population snapshots on the test function at Initialization and at Generation/Iteration 60, 80, 95, and 100 — plots not recoverable from the extraction]
Ant Colony Optimization
The Ant Colony Optimization (ACO) was initially proposed by Marco Dorigo in 1992. The search technique is inspired by the behavior of ants in finding paths from the nest to food and back to the nest without any visual clues.
Almost blind.
Incapable of achieving complex tasks alone.
Rely on the phenomenon of swarm intelligence for survival.
Capable of establishing the shortest route from the nest to the food source and back, by means of pheromone trails.
"Stigmergic?"
Stigmergy, a term coined by French biologist Pierre-Paul Grassé, is interaction through the environment.

Two individuals interact indirectly when one of them modifies the environment and the other responds to the new environment at a later time.
Ants follow existing pheromone trails with high probability.

What emerges is a form of autocatalytic behavior: the more ants follow a trail, the more attractive that trail becomes for being followed.

The process is thus characterized by a positive feedback loop.
There is a path along which ants are walking (for example from food source A to the nest E, and vice versa).

Suddenly an obstacle appears and the path is cut off. So at position B the ants walking from A to E (or at position D those walking in the opposite direction) have to decide whether to turn right or left.

The choice is influenced by the intensity of the pheromone trails left by preceding ants: a higher level of pheromone on a path gives an ant a stronger stimulus and thus a higher probability of choosing it.
Because path BCD is shorter than BHD, the first ant following it will reach D before the first ant following path BHD.

The result is that an ant returning from E to D will find a stronger trail on path DCB, caused by the half of all the ants that by chance decided to approach the obstacle via DCBA and by the already arrived ones coming via BCD: they will therefore prefer (in probability) path DCB to path DHB.

As a consequence, the number of ants following path BCD per unit of time will be higher than the number of ants following BHD.
Artificial ants differ from real ants:
they will have some memory,
they will not be completely blind,
they will live in an environment where time is discrete.
Graph representing the Ant System

[Figure: graph not recoverable from the extraction]
Suppose that 30 new ants come to B from A, and 30 to D from E at each time unit, that each ant walks at a speed of 1 per time unit, and that while walking an ant lays down at time t a pheromone trail of intensity 1, which, to make the example simpler, evaporates completely and instantaneously in the middle of the successive time interval (t + 1, t + 2).

At t = 0 there is no trail yet, so the ants choose at random: 15 ants from each group go toward H and 15 toward C.
Working of Artificial Ants

At t = 1 the 30 new ants that come to B from A find a trail of intensity 15 on the path that leads to H, laid by the 15 ants that went that way from B, and a trail of intensity 30 on the path to C, obtained as the sum of the trail laid by the 15 ants that went that way from B and by the 15 ants that reached B coming from D via C.

The probability of choosing a path is therefore biased, so that the expected number of ants going toward C will be double that of those going toward H: 20 versus 10.
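The 20-versus-10 split follows directly from trail-proportional choice; a quick numeric check using the intensities in the example:

```python
# Trail-proportional choice at B at t = 1, with the intensities above:
trail_C, trail_H = 30, 15            # pheromone toward C and toward H
p_C = trail_C / (trail_C + trail_H)  # probability a new ant heads toward C
expected_C = 30 * p_C                # expected ants toward C (of the 30 new ones)
expected_H = 30 * (1 - p_C)          # expected ants toward H
```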
Given a set of n towns, the TSP can be stated as the problem of finding a minimal-length closed tour that visits each town once.

We call d_ij the length of the path between towns i and j.
Let b_i(t) (i = 1, . . . , n) be the number of ants in town i at time t, and let

m = Σ_{i=1}^{n} b_i(t)

be the total number of ants.

Each ant is a simple agent with the following characteristic: it chooses the town to go to with a probability that is a function of the town distance and of the amount of trail present on the connecting edge.
Let τ_ij(t) be the intensity of trail on edge (i, j) at time t.

Each ant at time t chooses the next town, where it will be at time t + 1.

Therefore, if we call an iteration of the AS algorithm the m moves carried out by the m ants in the interval (t, t + 1), then every n iterations each ant has completed a tour.
Δτ_ij = Σ_{k=1}^{m} Δτ_ij^k

where Δτ_ij^k is the quantity per unit of length of trail substance (pheromone in real ants) laid on edge (i, j) by the kth ant between time t and t + n; it is given by

Δτ_ij^k = Q / L_k   if the kth ant uses edge (i, j) in its tour, and 0 otherwise,

where Q is a constant and L_k is the tour length of the kth ant.
In order to satisfy the constraint that an ant visits all the n different towns, we associate with each ant a data structure called the tabu list, which saves the towns already visited up to time t and forbids the ant to visit them again before n iterations (a tour) have been completed.

When a tour is completed, the tabu list is used to compute the ant's current solution (i.e., the length of the path the ant has followed), and is then emptied so the ant is again free to choose.
The transition probability from town i to town j for the kth ant is defined as

p_ij^k(t) = [τ_ij(t)]^α [η_ij]^β / Σ_{k∈allowed_k} [τ_ik(t)]^α [η_ik]^β   if j ∈ allowed_k, and 0 otherwise,

where η_ij = 1/d_ij is the visibility. This is the core transition rule of the AS.
α and β are parameters that control the relative
1
20
importance of trail versus visibility. Therefore
ay
M
the transition probability is a trade-off between
2
-2
18
visibility (which says that close towns should
r,
be chosen with high probability, thus
pu
ai
implementing a greedy constructive heuristic)
,J
IT
autocatalytic process)
Algorithm (Initialization & Randomly place ants)

1. Initialize:
   Set t := 0 {t is the time counter}
   Set NC := 0 {NC is the cycles counter}
   For every edge (i, j) set an initial value τ_ij(t) = c for trail intensity and Δτ_ij = 0
   Place the m ants on the n nodes

2. Set s := 1 {s is the tabu list index}
   For k := 1 to m do
       Place the starting town of the kth ant in tabu_k(s)
3. Repeat until the tabu list is full {this step will be repeated (n − 1) times}
   Set s := s + 1
   For k := 1 to m do
       Choose the town j to move to, with probability p_ij^k(t)
       {at time t the kth ant is on town i = tabu_k(s − 1)}
       Insert town j in tabu_k(s)
4. For k := 1 to m do
1
20
ay
Move the kth ant from tabuk(n) to tabuk(1)
M
2
Compute the length Lk, of the tour described
-2
18
by the kth ant
r,
pu
Update the shortest tour found
ai
,J
For every edge (i, j)
IT
N
,M
For k:= 1 to m do
15
T'
AO
Algorithm (Update trail)

5. For every edge (i, j) compute τ_ij(t + n) according to the equation:
       τ_ij(t + n) = ρ · τ_ij(t) + Δτ_ij
   Set t := t + n
   Set NC := NC + 1
6. If (NC < NC_MAX) and (not stagnation behavior)
   then
       Empty all tabu lists
       Goto step 2
   else
       Stop
A simple TSP example

[Figure: five cities A, B, C, D, E with edge weights shown on the graph]
[Figure: Iteration 1 — each ant starts at its own city; tabu lists [A], [B], [C], [D], [E]]
How to choose the next city?

p_ij^k(t) = { [τ_ij(t)]^α [η_ij]^β / Σ_{k∈allowed_k} [τ_ik(t)]^α [η_ik]^β    if j ∈ allowed_k
            { 0                                                              otherwise
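A direct sketch of this transition rule, assuming pheromone (τ) and visibility (η = 1/d) matrices are given; names and parameter defaults are illustrative:

```python
import numpy as np

def transition_probs(i, allowed, tau, eta, alpha=1.0, beta=2.0):
    """AS transition probabilities from town i over the allowed towns.
    tau: pheromone matrix; eta: visibility matrix (1/d_ij)."""
    weights = np.array([tau[i, j] ** alpha * eta[i, j] ** beta for j in allowed])
    return weights / weights.sum()   # normalize over the allowed set
```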
Iteration 2

[Figure: partial tours — tabu lists [E,A], [C,B], [B,C], [A,D], [D,E]]
Iteration 3

[Figure: partial tours — tabu lists [D,E,A], [E,A,B], [A,D,C], [B,C,D], [C,B,E]]
Iteration 4

[Figure: partial tours — tabu lists [B,C,D,A], [D,E,A,B], [E,A,B,C], [C,B,E,D], [A,D,C,E]]
Iteration 5

[Figure: complete tours — tabu lists [C,B,E,D,A], [A,D,C,E,B], [D,E,A,B,C], [E,A,B,C,D], [B,C,D,A,E]]
Path and Trace Update

Tour lengths:
1. [A,D,C,E,B]   L1 = 300
2. [B,C,D,A,E]   L2 = 450
3. [C,B,E,D,A]   L3 = 260
4. [D,E,A,B,C]   L4 = 280
5. [E,A,B,C,D]   L5 = 420

Δτ_{i,j}^k = { Q / L_k   if (i, j) ∈ bestTour
             { 0         otherwise
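Combining the evaporation rule τ_ij(t + n) = ρ·τ_ij(t) + Δτ_ij with this best-tour deposit gives a small update routine (an illustrative sketch, not the lecturers' code; a symmetric TSP is assumed, so both edge directions are updated):

```python
import numpy as np

def update_pheromone(tau, best_tour, best_len, rho=0.5, Q=100.0):
    """Evaporate all trails, then deposit Q/L on every edge of the best tour."""
    tau = rho * tau                                   # evaporation
    for a, b in zip(best_tour, best_tour[1:] + best_tour[:1]):
        tau[a, b] += Q / best_len                     # deposit on closed-tour edge
        tau[b, a] += Q / best_len                     # symmetric TSP
    return tau
```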
Some inherent advantages

Positive feedback accounts for rapid discovery of good solutions.
Distributed computation avoids premature convergence.
The greedy heuristic helps find acceptable solutions in the early stages of the search process.
The collective interaction of a population of agents.
Disadvantages in Ant Systems

Performed poorly for TSP problems larger than 75 cities.
No centralized processor to guide the AS towards good solutions.
Can be used only for discrete optimization problems.
Conclusion

ACO is a relatively new metaheuristic approach for solving hard combinatorial optimization problems.
Artificial ants implement a randomized construction heuristic which makes probabilistic decisions.
The cumulated search experience is taken into account by the adaptation of the pheromone trail.
ACO shows great performance on "ill-structured" problems.
References

Dorigo M. and G. Di Caro (1999). The Ant Colony Optimization Meta-Heuristic. In D. Corne, M. Dorigo and F. Glover, editors, New Ideas in Optimization, McGraw-Hill, 11–32.
M. Dorigo and L. M. Gambardella. Ant colonies for the traveling salesman problem. BioSystems, 43:73–81, 1997.
M. Dorigo and L. M. Gambardella. Ant Colony System: A cooperative learning approach to the traveling salesman problem. IEEE Transactions on Evolutionary Computation, 1(1):53–66, 1997.
G. Di Caro and M. Dorigo. Mobile agents for adaptive routing. In H. El-Rewini, editor, Proceedings of the 31st International Conference on System Sciences (HICSS-31), pages 74–83. IEEE Computer Society Press, Los Alamitos, CA, 1998.
M. Dorigo, V. Maniezzo, and A. Colorni. The Ant System: An autocatalytic optimizing process. Technical Report 91-016 Revised, Dipartimento di Elettronica, Politecnico di Milano, Italy, 1991.
L. M. Gambardella, É. D. Taillard, and G. Agazzi. MACS-VRPTW: A multiple ant colony system for vehicle routing problems with time windows. In D. Corne, M. Dorigo, and F. Glover, editors, New Ideas in Optimization, McGraw-Hill.
L. M. Gambardella, É. D. Taillard, and M. Dorigo. Ant colonies for the quadratic assignment problem.
V. Maniezzo and A. Colorni. The Ant System applied to the quadratic assignment problem. IEEE Transactions on Knowledge and Data Engineering, 11(5):769–778, 1999.
M. Dorigo, V. Maniezzo, and A. Colorni. Ant System: Optimization by a colony of cooperating agents. IEEE Transactions on Systems, Man, and Cybernetics – Part B: Cybernetics, Vol. 26, No. 1, February 1996.
Thank You
Teaching–Learning-Based Optimization (TLBO)

Assistant Professor
• This method works on the effect of the influence of a teacher on the output of learners in a class.
• Like other nature-inspired algorithms, TLBO is also a population-based method and uses a population of solutions to proceed to the global solution.
• The population is considered as a group of learners or a class of learners.
• The process of TLBO is divided into two parts: the first part is the 'teacher phase' and the second part is the 'learner phase'.
• Here, output is considered in terms of results or grades.
• The teacher is generally considered as a highly learned person who shares his or her knowledge with the learners.
• The quality of a teacher affects the outcome of the learners.
• It is obvious that a good teacher trains learners such that they can have better results in terms of their marks or grades.
Distribution of marks obtained by learners taught by two different teachers

[Figure: two normal curves with means M1 (curve-1) and M2 (curve-2)]

Curve-2 represents better results than curve-1, and so it can be said that teacher T2 is better than teacher T1 in terms of teaching. The main difference between both results is their mean (M2 for curve-2 and M1 for curve-1), i.e. a good teacher produces a better mean for the results of the learners.
Learners also learn from interaction between themselves, which also helps in their results.
Model for the distribution of marks obtained for a group of learners

[Figure: curve-A with mean MA and curve-B with mean MB; teachers TA and TB]

The figure shows a model for the marks obtained by learners in a class, with curve-A having mean MA.
The teacher is considered as the most knowledgeable person in the society, so the best learner is mimicked as a teacher, which is shown by TA.
The teacher tries to disseminate knowledge among learners, which will in turn increase the knowledge level of the whole class and help learners to get good marks or grades.
So a teacher increases the mean of the class according to his or her capability. In the figure, teacher TA will try to move mean MA towards their own level according to his or her capability, thereby increasing the learners' level to a new mean MB.
Teacher TA will put maximum effort into teaching his or her students, but students will gain knowledge according to the quality of teaching delivered by the teacher and the quality of the students present in the class.
The quality of the students is judged from the mean value of the population.
Teacher TA puts in effort so as to increase the quality of the students from MA to MB, at which stage the students require a new teacher, of superior quality than themselves, i.e. in this case the new teacher is TB.
Hence, there will be a new curve-B with new teacher TB.
A good teacher is one who brings his or her learners up to his or her level in terms of knowledge.
But in practice this is not possible, and a teacher can only move the mean of a class up to some extent, depending on the capability of the class, which in turn depends on many factors.
Teacher phase
Let Mi be the mean and Ti be the teacher at any iteration i.
Ti will try to move mean Mi towards its own level, so now the new mean will be Ti, designated as Mnew.
The solution is updated according to the difference between the existing and the new mean, given by:

Difference_Mean_i = r_i (M_new − T_F · M_i)

where T_F is a teaching factor that decides the value of the mean to be changed, and r_i is a random number in the range [0, 1]. The existing solution is then updated as X_new,i = X_old,i + Difference_Mean_i.
Learner phase
A learner learns something new if the other learner has more knowledge than him or her.
Learner modification is expressed as:

For i = 1 : Pn
    Randomly select two learners Xi and Xj, where i ≠ j
    If f(Xi) < f(Xj)
        Xnew,i = Xold,i + ri (Xi − Xj)
    Else
        Xnew,i = Xold,i + ri (Xj − Xi)
    End If
    Accept Xnew if it gives a better function value
End For
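Both phases can be sketched together as one TLBO generation in Python. This is an illustrative reading of the update rules above (for minimization), with greedy acceptance of improved learners; the function names are mine:

```python
import numpy as np

def tlbo_generation(pop, f, rng=np.random):
    """One TLBO generation (teacher phase + learner phase), minimizing f.
    pop: (Pn, Dn) array of learners, modified in place."""
    Pn, Dn = pop.shape
    fit = np.array([f(x) for x in pop])
    # --- teacher phase: best learner acts as the teacher ---
    teacher = pop[fit.argmin()]
    mean = pop.mean(axis=0)
    TF = rng.randint(1, 3)                           # teaching factor: 1 or 2
    for i in range(Pn):
        new = pop[i] + rng.rand(Dn) * (teacher - TF * mean)
        fn = f(new)
        if fn < fit[i]:                              # greedy acceptance
            pop[i], fit[i] = new, fn
    # --- learner phase: pairwise interaction ---
    for i in range(Pn):
        j = rng.choice([k for k in range(Pn) if k != i])
        if fit[i] < fit[j]:
            new = pop[i] + rng.rand(Dn) * (pop[i] - pop[j])
        else:
            new = pop[i] + rng.rand(Dn) * (pop[j] - pop[i])
        fn = f(new)
        if fn < fit[i]:
            pop[i], fit[i] = new, fn
    return pop, fit
```

Because acceptance is greedy, the best learner's fitness never worsens from one generation to the next.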
Implementation of TLBO for optimization

Step 1: Define the optimization problem and initialize the optimization parameters.
Define the population size (Pn), number of generations (Gn), number of design variables (Dn), and limits of the design variables (UL, LL).
Define the optimization problem as:
    Minimize f(X)
    subject to LLi ≤ Xi ≤ ULi
Step 2: Initialize the population.
Generate a random population according to the population size and the number of design variables.
For TLBO, the population size indicates the number of learners and the design variables indicate the subjects (i.e. courses) offered.
This population is expressed as a matrix of Pn rows (learners) and Dn columns (subjects).
Step 3: Teacher phase.
Calculate the mean of the population column-wise, which will give the mean for each particular subject as
    M_D = [m_1, m_2, . . . , m_D]
The best solution will act as the teacher for that iteration:
    X_teacher = X_{f(X)=min}
The teacher will try to shift the mean from M_D towards X_teacher, which will act as the new mean for the iteration. So,
    M_newD = X_teacher
The difference between the two means is expressed as given earlier.

Step 4: Learner phase.
Update the learners through mutual interaction as explained earlier.

Step 5: Termination criterion.
Stop if the maximum generation number is achieved; otherwise repeat from Step 3.
Reference
R.V. Rao, V.J. Savsani, D.P. Vakharia, "Teaching–learning-based optimization: A novel method for constrained mechanical design optimization problems", Computer-Aided Design, vol. 43 (2011), pp. 303–315.
Testing on Benchmark Functions

Benchmark Function 1 (13 design variables, 9 linear inequality constraints)
[Function definition not recoverable from the extraction]
5
The optimum solution for this problem is at x∗= (1, 1, 1, 1,
1
20
1, 1, 1, 1, 1, 3, 3, 3, 1) with objective function value f (x∗) =
ay
M
−15.
2
-2
18
r,
pu
ai
,J
IT
N
,M
15
T'
AO
Benchmark Function 2 (10 design variables and one nonlinear equality
constraint)
15
20
ay
M
2
-2
18
r,
pu
ai
,J
IT
N
,M
15
T'
AO
5
The global maximum is at x∗= (1/√n, 1/√n, 1/√n, . . .) with
1
20
objective function value f (x∗) = 1.
ay
M
2
-2
18
r,
pu
ai
,J
IT
N
,M
15
T'
AO
5
Benchmark Function 3 (7 design variables and 4 nonlinear inequality
1
20
constraints.)
ay
M
2
-2
18
r,
pu
ai
,J
IT
N
,M
15
T'
AO
5
The optimum solution is at x∗= (2.330499,
1
20
1.951372,−0.4775414, 4.365726, −0.6244870, 1.1038131,
ay
1.594227) with objective function value
M
2
-2
f(x∗) = 680.6300573.
18
r,
pu
ai
,J
IT
N
,M
15
T'
AO
5
Benchmark Function 4 (8 design variables and 3 nonlinear inequality
1
20
and 3 linear inequality constraints.)
ay
M
2
-2
18
r,
pu
ai
,J
IT
N
,M
15
T'
AO
5
The optimum solution is at x∗= (579.3066,
1
20
1359.9709, 5109.9707, 182.0177, 295.601, 217.982,
ay
M
286.165, 395.6012) with objective function value
2
-2
f (x∗) = 7049.248021.
18
r,
pu
ai
,J
IT
N
,M
15
T'
AO
5
Benchmark Function 5 (3 design variables)
1
20
ay
M
2
-2
18
r,
pu
ai
,J
IT
N
,M
15
T'
AO
5
The optimum solution is at x∗= (5, 5, 5) with
1
20
objective function value f (x∗) = 1.
ay
M
2
-2
18
r,
pu
ai
,J
IT
N
,M
15
T'
AO
Thank You

Short Term Course
Advanced Optimization Techniques
Day 4 : (21-May-2015)
Grey Wolf Optimizer (GWO)

Dr. Rajesh Kumar
Associate Professor
Electrical Engineering
MNIT Jaipur

OUTLINE
1. About Grey Wolf
2. Developers of Algorithm
3. Wolf behaviour in nature
4. Algorithm development
5. Example
6. Advantages over other techniques
7. Application on Unit commitment problem
About Grey Wolf
The wolf is characterised by powerful teeth and a bushy tail, and it lives and hunts in packs. The average group size is 5–12.
Their natural habitats are the mountains, forests and plains of North America, Asia and Europe.
Grey wolves are apex predators: they sit at the top of the food chain.
Developers of Algorithm
Seyedali Mirjalili, Seyed Mohammad Mirjalili and Andrew Lewis (2014).
Social behaviour
A hierarchy exists in the pack.
α is the leader and decision maker.
Hunting behaviour
Group hunting behaviour is of equal interest in studying optimization:
• Tracking, chasing, and approaching the prey.
• Pursuing, encircling, and harassing the prey until it stops moving.
(Figures showing the stages of the hunt: Pursuit, Harass, Encircling.)
Algorithm development

Social hierarchy
In order to mathematically model the social hierarchy of wolves when designing GWO, we consider the fittest solution as the alpha (α). Consequently, the second and third best solutions are named beta (β) and delta (δ) respectively. The rest of the candidate solutions are assumed to be omega (ω). In the GWO algorithm the hunting (optimization) is guided by these three wolves, and the ω wolves follow them.
Encircling prey
The mathematical model of the encircling behaviour is represented by the equations:

    D = |C · Xp(t) − X(t)|    (1)
    X(t + 1) = Xp(t) − A · D    (2)
A and C are coefficient vectors given by:

    A = 2a · r1 − a    (3)
    C = 2 · r2    (4)

where t is the current iteration, X is the position vector of a wolf, Xp is the position vector of the prey, r1 and r2 are random vectors in [0, 1], and the components of a decrease linearly from 2 to 0 over the course of iterations.
Hunting
Grey wolves have the ability to recognize the location of prey and encircle it.
The hunt is usually guided by the alpha; the beta and delta might also participate in hunting occasionally.
However, in an abstract search space we have no idea about the location of the optimum (prey), so we suppose that the alpha, beta and delta have better knowledge about the potential location of the prey.
The positions are updated with respect to the three best wolves:

    Dα = |C1 · Xα(t) − X(t)|,  Dβ = |C2 · Xβ(t) − X(t)|,  Dδ = |C3 · Xδ(t) − X(t)|    (5)

    X1 = Xα(t) − A1 · Dα,  X2 = Xβ(t) − A2 · Dβ,  X3 = Xδ(t) − A3 · Dδ    (6)

    X(t + 1) = (X1 + X2 + X3) / 3    (7)

where t indicates the current iteration; Xα(t), Xβ(t) and Xδ(t) are the positions of the grey wolves α, β and δ at iteration t, and X(t) is the position of the grey wolf at iteration t.

    A(.) = 2a · rand(0, 1) − a    (8)
    C(.) = 2 · rand(0, 1)    (9)

where a varies linearly from 2 to 0 over the iterations, and A(.) and C(.) are the coefficient vectors of the α, β and δ wolves.
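Equations (5)–(9) combine into one position update per wolf; a minimal Python sketch, assuming the population is a list of real-valued vectors and drawing fresh A and C components per dimension:

```python
import random

def gwo_step(wolves, f, a):
    """One GWO position update following Eqs. (5)-(7).

    wolves: list of position vectors, f: objective to minimize,
    a: current value of the linearly decreasing parameter (2 -> 0).
    """
    # Rank the pack: alpha, beta, delta are the three best solutions
    leaders = sorted(wolves, key=f)[:3]            # X_alpha, X_beta, X_delta
    new_wolves = []
    for X in wolves:
        candidates = []
        for L in leaders:
            A = [2 * a * random.random() - a for _ in X]        # Eq. (8)
            C = [2 * random.random() for _ in X]                # Eq. (9)
            D = [abs(c * l - x) for c, l, x in zip(C, L, X)]    # Eq. (5)
            candidates.append([l - ai * di
                               for l, ai, di in zip(L, A, D)])  # Eq. (6)
        # Eq. (7): new position is the mean of X1, X2, X3
        new_wolves.append([sum(col) / 3 for col in zip(*candidates)])
    return new_wolves
```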
Example

Minimization of the Korn function:

    f(x1, x2) = min{(x1 − 5)² + (x2 − 2)²}
Iteration 1

    No.   x1       x2       f(x)             x1       x2       f(x)
    1     6.1686   4.4100   7.1739     α     4.7372   3.3048   1.7717
    2     6.2104   4.0935   5.8479     β     4.8148   3.4931   2.2637
    3     7.4231   8.3880   46.6773    δ     5.9444   3.4433   2.9751
    4     2.8950   0.8703   5.7074
    5     6.1062   3.7275   4.2079
    6     6.3458   3.2158   3.2896
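The tabulated fitness values can be checked directly against the objective f(x1, x2) = (x1 − 5)² + (x2 − 2)²:

```python
# Checking tabulated f(x) values against the objective from the example
def f(x1, x2):
    return (x1 - 5) ** 2 + (x2 - 2) ** 2

rows = [
    (6.1686, 4.4100, 7.1739),   # learner 1
    (6.2104, 4.0935, 5.8479),   # learner 2
    (4.7372, 3.3048, 1.7717),   # alpha
    (4.8148, 3.4931, 2.2637),   # beta
    (5.9444, 3.4433, 2.9751),   # delta
]
for x1, x2, fx in rows:
    assert abs(f(x1, x2) - fx) < 1e-3   # matches to the table's precision
```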
Update process

    Dα = |2·rand()·[4.7372, 3.3048] − [6.1686, 4.4100]|
    X1 = [4.7372, 3.3048] − (2a·rand(0, 1) − a)·Dα

    Dβ = |2·rand()·[4.8148, 3.4931] − [6.1686, 4.4100]|
    X2 = [4.8148, 3.4931] − (2a·rand(0, 1) − a)·Dβ

    Dδ = |2·rand()·[5.9444, 3.4433] − [6.1686, 4.4100]|
    X3 = [5.9444, 3.4433] − (2a·rand(0, 1) − a)·Dδ

    X(1, :) = (X1 + X2 + X3) / 3 = [4.0487, 2.6051]

with a = 2 − 2·(itr / maxitr) = 2 − 2·(1/3) = 1.3333.
Iteration 2

    No.   x1       x2       f(x)             x1       x2       f(x)
    1     4.0487   2.6051   1.2710     α     4.2452   2.6600   1.0054
    2     4.6492   3.0427   1.2103     β     4.1136   2.5382   1.0754
    3     5.4633   3.6633   2.9813     δ     5.0927   3.1546   1.3418
    4     5.6096   3.5901   2.9001
    5     4.6582   3.0302   1.1781
    6     4.7476   3.3369   1.8509
Iteration 3

    No.   x1       x2       f(x)             x1       x2       f(x)
    1     4.4838   2.7843   0.8816     α     4.5634   2.8257   0.8725
    2     4.5634   2.8257   0.8725     β     4.5696   2.8291   0.8727
    3     4.5899   2.8395   0.8730     δ     4.5750   2.8321   0.8730
    4     4.7486   2.9400   0.9467
    5     4.6340   2.8684   0.8881
    6     4.5957   2.8445   0.8767
Flow chart

(Figure: flow chart of the GWO algorithm.)
Advantages over other techniques
• Easy to implement due to its simple structure.
• Less storage requirement than the other techniques.
• Convergence is faster due to the continuous reduction of the search space, and the decision variables are very few (α, β and δ).
Application on Unit commitment problem
Unit Commitment (UC) is a very significant optimization task which plays an important role in the operation planning of power systems.
The UC problem is considered as two linked optimization decision processes, namely the unit-scheduling problem, which determines the on/off status of the generating units in each time period of the planning horizon, and the economic load dispatch problem, which allocates the load demand among the committed units given their on/off status.
Table: Total costs of the BGWO method for test systems

No. of Units   Best Cost ($)   Average Cost ($)   Worst Cost ($)   Std. Deviation   CPU Time (Sec)
10             563937.3        563976.6           564017.7         40.2             31.3
20             1124687.9       1124837.7          1124941.1        128.7            58.7
Performance Comparison

Table: Comparison with other algorithms (total cost in $ vs. number of units)

Method   10       20        40       60       80       100
LR       565825   1130660   2258503  3394066  4526022  5657277
GA       565825   1126243   2251911  3376625  4504933  5627437
EP       564551   1125494   2249093  3371611  4498479  5623885
MA       565827   1128192   2249589  3370820  4494214  5616314
GRASP    565825   1128160   2259340  3383184  4525934  5668870
LRPSO    565869   1128072   2251116  3376407  4496717  5623607
Thank You . . .

Mail: rkumar.ee@gmail.com
Fireworks Algorithm

Dr. Rajesh Kumar
Associate Professor
MNIT Jaipur
Outlines

1. Basic Fireworks Algorithm (FA)
2. FA Variant
3. Example
Developers of the Algorithm

Ying Tan and Yuanchun Zhu, Peking University. Developed in 2010.
1.3 Introduction to Fireworks Algorithm

When a firework is set off, a shower of sparks fills the local space around the firework.
The explosion can be viewed as a search in the local space around a firework.
From Nature to Mathematical Modeling

• Definition of firework
Good firework: a firework that generates a big population of sparks within a small range.
Bad firework: a firework that generates a small population of sparks within a big range.
2. Basic Firework Algorithm (FA)

2.1 Problem description
2.2 Flowchart of FA
2.3 Design of basic FA
    Selection operators
    Number of son sparks
2.4 Experimental results
2.1 Problem description

2.2 FA's Flowchart
(Figure: flow chart of the basic FA, with a repeat loop and a Y/N termination check.)
2.3 Design of basic Firework Algorithm

(Figure: a good firework produces more sparks within a small range; a bad firework produces few sparks over a big range.)
Generating Sparks. In an explosion, sparks may undergo the effects of the explosion in z random directions (dimensions).
To keep the diversity of sparks, we design another way of generating sparks: Gaussian explosion.
(Figure: sparks that are sparse in the search space are preferred over crowded ones when selecting locations, which keeps diversity.)
Example of FWA:

Minimize f(x) = x(1)² + x(2)²

Boundary limits:
Random selection of location

Location number   X(1)    X(2)    f(x)
1                 -3.7    1.32    15.4324
2                 4.13    -4.02   33.2173
3                 -2.21   0.46    5.0957
Number of sparks generated in each explosion:

    si = m · (ymax − f(xi) + ε) / Σj (ymax − f(xj) + ε)

where ymax is the worst fitness value among the current fireworks, m is a constant controlling the total number of sparks, and ε is a small constant avoiding division by zero. Accordingly, the spark counts in the table below are obtained.
Amplitude of spark calculation:

    Ai = Â · (f(xi) − ymin + ε) / Σj (f(xj) − ymin + ε)

where ymin is the best fitness value among the current fireworks and Â is the maximum explosion amplitude. Accordingly:
Location number   X(1)    X(2)    f(x)      No of Sparks   Amplitude
1                 -3.7    1.32    15.4324   2              0.54
2                 4.13    -4.02   33.2173   1              1.46
3                 -2.21   0.46    5.0957    3              0.25
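The tabulated spark counts and amplitudes can be reproduced with the standard basic-FA formulas (Tan and Zhu, 2010); the totals m = 6 and Â = 2.0 used here are assumptions inferred from the table:

```python
# Raw spark counts and amplitudes for the three locations;
# m = 6 and A_hat = 2.0 are assumptions inferred from the table.
eps = 1e-12
fitness = [15.4324, 33.2173, 5.0957]      # f(x) of the three locations
y_max, y_min = max(fitness), min(fitness)

m, A_hat = 6, 2.0
num_s = [y_max - fv + eps for fv in fitness]
s = [m * n / sum(num_s) for n in num_s]        # raw spark counts
num_a = [fv - y_min + eps for fv in fitness]
A = [A_hat * n / sum(num_a) for n in num_a]    # raw amplitudes
# s ~ [2.3, 0, 3.7]: rounding and bounding gives the tabulated 2, 1, 3.
# A ~ [0.54, 1.46, 0]: the tabulated 0.25 for location 3 suggests a
# lower bound on the amplitude was also applied on the slide.
```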
Obtaining location of Spark

Location   Bit(x(1))   Bit(x(2))   X(1)     X(2)     f(x)
1          0           1           -3.72    0.92     14.7409
2          1           0           3.084    -4.03    25.7538
3          1           1           -2.21    0.429    5.0613
3          1           1           -2.002   0.667    4.4545
3          1           0           -2.064   0.46     4.4713

Note: Bit(x) shows whether a dimension is selected in the random process, i.e. 1 means selected and 0 means not selected.
Obtaining the location of Gaussian spark

Gaussian spark generation: for each randomly selected dimension k,

    x̂k = xk · g,    g ~ Gaussian(1, 1)

where the same coefficient g is applied to every selected dimension of the chosen firework.
Location   Bit(x(1))   Bit(x(2))   X(1)     X(2)      f(x)
1          0           1           -3.70    2.1486    18.3065
2          1           1           0.5631   -0.5495   0.619
3          0           1           -2.21    0.4956    5.1297
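A sketch of Gaussian spark generation: selected dimensions of a firework are scaled by a common coefficient g ~ N(1, 1), which matches the table above (both coordinates of location 2 are scaled by nearly the same factor). The helper name and the random choice of dimensions are assumptions of this sketch:

```python
import random

def gaussian_spark(x, rng=random):
    """Gaussian spark: scale z randomly chosen dimensions by g ~ N(1, 1)."""
    g = rng.gauss(1.0, 1.0)                 # one coefficient per spark
    z = rng.randint(1, len(x))              # how many dimensions to perturb
    dims = rng.sample(range(len(x)), z)     # which dimensions
    spark = list(x)
    for k in dims:
        spark[k] = x[k] * g                 # same g for every chosen dimension
    return spark
```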
Selection of next position of spark
• The best location is always kept for the next generation; the remaining locations are selected from the current sparks with a probability proportional to their distance from the other locations, which keeps diversity.

Question ?

Thank you
Malaviya National Institute of Technology, Jaipur

Dr. Rajesh Kumar
PhD, PDF (NUS, Singapore)
SMIEEE, FIETE, LMCSI, MIE (I), SMIACSIT, LMISTE, MIAENG

5/23/2015
Goal of Optimization

Find values of the variables that minimize or maximize the objective function while satisfying the constraints.
Minimize the objective function

    min f(x),    x = (x1, x2, . . . , xn)

subject to constraints

    ci(x) = 0    (equality constraints)
    ci(x) ≥ 0    (inequality constraints)

Example:

    min (x1 − 2)² + (x2 − 1)²
    subject to:  x1² − x2 ≤ 0
                 x1 + x2 ≤ 2
• Multi-variable
• Constrained
• Non-constrained
• Single objective
• Multi-objective
• Linear
• Non-linear
(Figure: f(x) over the feasible region, showing strong local minima, a weak local minimum, and the strong global minimum.)
Motivation
• The new optimization algorithm is inspired by the group decision-making process of honey bees for the best nest site, which is analogous to the optimization process.
• The bee then loads the nectar, returns to the hive to relinquish it, and performs the waggle dance.
• The waggle dance gives information about the quality, distance and direction of the flower patch.
• The recruited bees will evaluate the quality of the source upon exploring it.
• If the patch is still good enough for nectar, they perform the waggle dance to convey the same, which enables the quiescent bees to recruit more bees to explore the patch.
Decision Process
• Two methods are used by bee swarms to reach a decision for finding the best nest site:
• Consensus: widespread agreement among the group is taken into account.
• Quorum: the decision for the best site is made when a site crosses the quorum (threshold) value.
• In DBC, both consensus and quorum have been mimicked.
• 1/10th of the total bees (known as scouts) go searching for the next site.
• Information about the best site is conveyed through the waggle dance.
• Decision making process:
    – Consensus
    – Quorum
• Waggle dance:
    – Calculation of the angle between the sun and the flowers.
    – The waggle dance occurs on the surface of a swarm rather than on the combs inside a hive.
• The bee runs through a figure-eight pattern on a vertical comb.

(Figure: the angle α between the sun's direction and the waggle run encodes the direction to the food source; shown for α = 30° and α = 0° with the bee-hive, sun, and food source.)
• Bees interact with each other and in unison take decisions to achieve goals.
• Assumptions made in the proposed algorithm:
    – Bees live and act in a given environment.
    – Bees attempt to achieve particular goals or perform particular tasks.
    – The information exchange process involves no losses.
    – Bees never die, and hence the number of bees remains constant.
DBC Algorithm
• In DBC, all bees live in an environment, i.e. the FSR. An environment is organized in a structure as shown below for d = 1, 2, 3 parameters:

(Figure: environment structure.)
• Each bee starts from a point of the search process, which in the proposed approach is the midpoint of that volume, obtained through various investigations.
• The midpoint of the total cluster can be calculated using (1).
• If there is one parameter, only one bee explores the search.
• If the parameters are W1i = 1, W1f = 6, then 5 bees are sent for the search.
• The figure shows the search approach adopted by bees and their tendency to remember only four locations at a time.
• A centralized system chooses the best preferable solution.
• Definition 5. A food quality index FQI : FSR → ℛ is a measure of the quality of the solution that is represented by the bee food search procedure.
• For optimal minimum cases it selects the best optimal solution, i.e. the point with the minimum FQI value.
• When two sites have the same fitness value, the algorithm selects the optimal solution nearer to the starting point (Fig. 5).
• This is due to the natural phenomenon used by bees when they find nectar sites with the same fitness value.
• Consensus: once the search is finished, the global optimized point is chosen by comparing the fitness values of all the optimized points in the optimum vector table, i.e. the global best, gbest, as in PSO [10,11].
• The point with the lowest fitness value is selected as the global optimized point (XG).
• Quorum: in the quorum method, if the number of bees providing an optimum point crosses the quorum (threshold) value, that point is selected.
Benchmark Functions:

(Table: benchmark functions used for testing.)
• The performance of the two decision-making processes is compared (Table 3).
• Performance can be increased by switching between the two decision-making processes.
• Two choices of parameters can be used: the number of bees and the quorum (threshold) value.
• As the quorum value is increased, the optimal solution approaches the global value at the expense of time (Fig. 7).
Choice of parameters:
• Effect of number of bees: the more the bees, the more the computational time taken for processing, but with increased accuracy towards the optimal result (Fig. 8).
• DBC performs better than the compared algorithms at lower dimensions.
• The t-tests show that DBC performs better than most algorithms, and in a statistically significant manner.
• The uniqueness of the solution produced by the proposed algorithm is evident from the zero standard deviation (SD), which is a novel character of the proposed algorithm; it can therefore be recommended for real-time applications.
• DBC is run on the benchmark functions, and the performance with different algorithms is noted and compared with the counterparts.
Conclusion
• The nature-inspired algorithms are analyzed and their problems identified, thereby making way for the proposal of the new DBC optimization.
• Upshot of the proposed algorithm: it generates better optimal solutions as compared to its counterparts.
• One has to choose between consensus and quorum; consensus can be chosen for better accuracy.
• In case of quorum, a better solution can be obtained by increasing the threshold.
• The results show that DBC performs better and also removes the variation in the obtained solutions (zero standard deviation).
(AOT-15)

MARKOV CHAINS
• Let Xt be a random variable that characterizes the state of a system at discrete points in time t = 1, 2, 3, . . . . The family of random variables {Xt} forms a stochastic process.
• The number of states in a stochastic process may be finite or infinite.

Example 1.1 (Machine Maintenance)
The condition of a machine at the time of monthly preventive maintenance can be represented as:

    Xt = 0 (poor), 1 (fair), 2 (good),    t = 1, 2, 3, . . .

The random variable Xt is finite because it represents three states: poor (0), fair (1), good (2).
Example 1.2 (Job Shop)
Jobs arrive randomly at a job shop at the average rate of 5 jobs per hour. The arrival process follows a Poisson distribution which, theoretically, allows any number of jobs between zero and infinity to arrive at the shop during the time interval (0, t). The infinite-state process is

    Xt = 0, 1, 2, . . . ,    t > 0
• A stochastic process is a Markov process if the occurrence of a future state depends only on the immediately preceding state.
• For given chronological times t0, t1, . . . , tn, the family of random variables {Xtn} = {x1, x2, . . . , xn} is said to be a Markov process if it possesses the following property:

    P{Xtn = xn | Xtn−1 = xn−1, . . . , Xt0 = x0} = P{Xtn = xn | Xtn−1 = xn−1}

• In a Markovian process with n exhaustive and mutually exclusive states (outcomes), the probabilities at a specific point in time t = 0, 1, 2, . . . are usually written as pij, the one-step transition probability of moving from state i to state j.
These probabilities satisfy

    pij ≥ 0,    Σj pij = 1,    (i, j) = 1, 2, 3, . . . , n

A convenient way of summarizing the one-step transition probabilities is the following matrix notation:

        | p11  p12  p13  . .  p1n |
        | p21  p22  p23  . .  p2n |
    P = |  .    .    .   . .   .  |
        |  .    .    .   . .   .  |
        | pn1  pn2  pn3  . .  pnn |

The matrix P defines the Markov chain. All the transition probabilities pij are fixed and independent of time.
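The two conditions (nonnegative entries, rows summing to 1) are easy to check mechanically; a small sketch, using the gardener matrix from the example that follows:

```python
def is_stochastic(P, tol=1e-9):
    """Check the Markov-chain conditions: p_ij >= 0, each row sums to 1."""
    return (all(p >= 0 for row in P for p in row)
            and all(abs(sum(row) - 1.0) < tol for row in P))

# Gardener chain (no fertilizer): soil can deteriorate or stay the same
P = [[0.2, 0.5, 0.3],
     [0.0, 0.5, 0.5],
     [0.0, 0.0, 1.0]]
assert is_stochastic(P)
```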
Example (Gardener problem)
Every gardening season, a gardener uses a chemical test to check the soil condition. Depending on the outcome of the test, productivity for the new season falls in one of three states: (1) Good, (2) Fair, (3) Poor. Over the years, the gardener has observed that the previous year's soil condition impacts the current year's productivity, and that the situation can be described by the following Markov chain:

                                State of the system next year
                          | 0.2  0.5  0.3 |
    State of the      P = | 0    0.5  0.5 |
    system this year      | 0    0    1   |

The transition probabilities show that the soil condition can either deteriorate or stay the same, but never improve.
State 1 (Good): there is a 20% chance the soil will not change next year, a 50% chance it will become fair, and a 30% chance it will deteriorate.
State 2 (Fair): next year's productivity may remain fair with probability 0.5 or become poor with probability 0.5.
State 3 (Poor): the soil stays in the same condition next year with probability 1.

If the gardener uses fertilizer, the soil condition can also improve, and the transition matrix becomes:

         | .30  .60  .10 |
    P1 = | .10  .60  .30 |
         | .05  .40  .55 |
Given the initial probabilities a(0) = {aj(0)} of starting in state j and the transition matrix P of a Markov chain, the absolute probabilities a(n) = {aj(n)} of being in state j after n transitions (n > 0) are computed as follows:

    a(1) = a(0) P
    a(2) = a(1) P = a(0) P²
    a(3) = a(2) P = a(0) P³
    .
    .
    a(n) = a(0) Pⁿ,    n = 1, 2, . . .

The matrix Pⁿ is known as the n-step transition matrix. From these calculations,

    Pⁿ = Pⁿ⁻¹ P
or
    Pⁿ = Pⁿ⁻ᵐ Pᵐ,    0 < m < n
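The recursion a(n) = a(0) Pⁿ can be computed directly in plain Python; here it is applied to the fertilizer matrix P1 of the gardener problem:

```python
def mat_mult(A, B):
    """Product of two square matrices, plain Python."""
    n = len(A)
    return [[sum(A[i][k] * B[k][j] for k in range(n)) for j in range(n)]
            for i in range(n)]

def absolute_probs(a0, P, n):
    """a(n) = a(0) P^n via n repeated matrix products."""
    Pn = [row[:] for row in P]
    for _ in range(n - 1):
        Pn = mat_mult(Pn, P)
    k = len(Pn)
    return [sum(a0[i] * Pn[i][j] for i in range(k)) for j in range(k)]

P1 = [[0.30, 0.60, 0.10],
      [0.10, 0.60, 0.30],
      [0.05, 0.40, 0.55]]
a0 = [1.0, 0.0, 0.0]       # soil starts in state 1 (Good)
print(absolute_probs(a0, P1, 1))   # a(1) = (.30, .60, .10)
print(absolute_probs(a0, P1, 8))   # a(8) close to the steady state
```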
Example. For the gardener problem with fertilizer,

         | .30  .60  .10 |
    P1 = | .10  .60  .30 |
         | .05  .40  .55 |

the initial condition of the soil is good, that is, a(0) = (1, 0, 0). Determine the absolute probabilities of the three states of the system after 1, 8 and 16 gardening seasons.
Using the fertilizer matrix P1, successive squaring gives:

          | .1550  .5800  .2650 |
    P²  = | .1050  .5400  .3550 |
          | .0825  .4900  .4275 |

                    | .10679  .53295  .36026 |
    P⁴  = P²·P²  =  | .10226  .52645  .37129 |
                    | .09950  .52193  .37857 |

                    | .101753  .525514  .372733 |
    P⁸  = P⁴·P⁴  =  | .101702  .525435  .372863 |
                    | .101669  .525384  .372863 |

                    | .101659  .52454  .372881 |
    P¹⁶ = P⁸·P⁸  =  | .101659  .52454  .372881 |
                    | .101659  .52454  .372881 |

Hence

    a(1)  = (1, 0, 0) P1  = (.30, .60, .10)
    a(8)  = (1, 0, 0) P⁸  = (.101753, .525514, .372733)
    a(16) = (1, 0, 0) P¹⁶ = (.101659, .52454, .372881)

The rows of P⁸ are almost identical, and the result is even more pronounced for P¹⁶. This shows that as the number of transitions increases, the absolute probabilities become independent of the initial a(0). In this case the resulting probabilities are known as steady-state probabilities.
CLASSIFICATION OF THE STATES IN A MARKOV CHAIN

The states of a Markov chain can be classified based on the transition probabilities pij.

1. A state j is absorbing if it returns to itself with certainty in one transition, that is, pjj = 1.
2. A state j is transient if it can reach another state but cannot itself be reached back from another state. Mathematically, this will happen if lim(n→∞) pij(n) = 0 for all i.
3. A state j is recurrent if the probability of being revisited from other states is 1. This happens if, and only if, the state is not transient.
4. A state j is periodic with period t > 1 if a return is possible only in t, 2t, 3t, . . . transitions. This means that pjj(n) = 0 whenever n is not divisible by t.
States 1 and 2 are transient because they cannot be reentered once the system is "trapped" in states 3 and 4:

        | 0   1   0   0  |
    P = | 0   0   1   0  |
        | 0   0   .3  .7 |
        | 0   0   .4  .6 |
States 3 and 4 play the role of an absorbing state and constitute a closed set. All states of a closed set must communicate, which means that it is possible to go from any state to any other state in the set in one or more transitions, i.e.

    pij(n) > 0 for all i ≠ j and some n ≥ 1

States 3 and 4 can both be absorbing states if p33 = p44 = 1.

A closed Markov chain is said to be ergodic if all its states are recurrent and aperiodic. In this case, the absolute probabilities a(n) always converge uniquely to a limiting distribution that is independent of the initial probabilities a(0).
Example 3.1 (Absorbing and Transient states)
Consider the gardener Markov chain with no fertilizer:

        | .2  .5  .3 |
    P = | 0   .5  .5 |
        | 0   0   1  |

States 1 and 2 are transient because they reach state 3 but can never be reached back. State 3 is absorbing because p33 = 1. These classifications can also be seen when lim(n→∞) pij(n) is computed:

           | 0  0  1 |
    P¹⁰⁰ = | 0  0  1 |
           | 0  0  1 |
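The limiting behaviour can be confirmed numerically by computing a high power of P:

```python
def mat_mult(A, B):
    """Product of two square matrices, plain Python."""
    n = len(A)
    return [[sum(A[i][k] * B[k][j] for k in range(n)) for j in range(n)]
            for i in range(n)]

# Gardener chain with no fertilizer
P = [[0.2, 0.5, 0.3],
     [0.0, 0.5, 0.5],
     [0.0, 0.0, 1.0]]

Pn = [row[:] for row in P]
for _ in range(99):            # P^100
    Pn = mat_mult(Pn, P)

# Every row converges to (0, 0, 1): state 3 absorbs all probability mass,
# confirming that states 1 and 2 are transient and state 3 is absorbing.
for row in Pn:
    assert abs(row[0]) < 1e-9 and abs(row[1]) < 1e-9 and abs(row[2] - 1) < 1e-9
```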
Example (Periodic states). Returns to a periodic state are possible only at multiples of the corresponding period of the state:

        | 0   .6  .4 |          | .24  .76  0   |
    P = | 0   1   0  |    P² =  | 0    1    0   |
        | .6  .4  0  |          | 0    .76  .24 |

Similar computations give P³, P⁴ and P⁵ (for example, the last row of P⁵ is (.03456, .96544, 0)).

Continuing with n = 6, 7, . . . , Pⁿ shows that p11(n) and p33(n) are positive for even values of n and zero otherwise. This means that the period of states 1 and 3 is 2.
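The stated periodicity can be verified by raising P to successive powers and checking the diagonal entries:

```python
def mat_mult(A, B):
    """Product of two square matrices, plain Python."""
    n = len(A)
    return [[sum(A[i][k] * B[k][j] for k in range(n)) for j in range(n)]
            for i in range(n)]

P = [[0.0, 0.6, 0.4],
     [0.0, 1.0, 0.0],
     [0.6, 0.4, 0.0]]

Pn = [row[:] for row in P]
for n in range(2, 11):
    Pn = mat_mult(Pn, P)       # Pn is now P^n
    # p11(n) and p33(n) are positive exactly when n is even: period 2
    assert (Pn[0][0] > 0) == (n % 2 == 0)
    assert (Pn[2][2] > 0) == (n % 2 == 0)
```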
Thank you
Multi Agent System for Self Healing System
Multi Agent System for Micro Grid

Dr. Rajesh Kumar
PhD, PDF (NUS, Singapore)
SMIEEE, FIETE, MIE (I), LMCSI, SMIACSIT, LMISTE, MIAENG
Smart Grid

(Figure: the smart grid couples the electrical infrastructure with an "intelligence" infrastructure.)
20
ay
M
2
-2
18
r,
pu
Demand Response and Dynamic Pricing
ai
,J
Distributed Generation and Alternate Energy Sources
IT
N
51
20
ay
M
2
-2
18
r,
pu
ai
,J
IT
N
,M
15
T'
AO
Smart Grid

The smart grid is the collection of all technologies, concepts, topologies, and approaches that allow the silo hierarchies of generation, transmission, and distribution to be replaced with an end-to-end, organically intelligent, fully integrated environment in which the business processes, objectives, and needs of all stakeholders are supported.
• Distributed generation – not only traditional large power stations, but also individual PV panels, micro-wind, etc.
• Grid optimization – system reliability, operational efficiency, and asset utilization and protection
• Demand response – utilities offer incentives to customers to reduce consumption at peak times
• Grid-scale storage
Micro Grid
Definition of Agent
1. An agent is an entity whose state is viewed as consisting of mental components such as beliefs, capabilities, choices, and commitments. [Yoav Shoham, 1993]

2. An entity is a software agent if and only if it communicates correctly in an agent communication language. [Genesereth and Ketchpel, 1994]

5. An agent is anything that can be viewed as (a) perceiving its environment, and (b) acting upon that environment. [Russell and Norvig, 1995]

6. A computer system that is situated in some environment and is capable of autonomous action in its environment to meet its design objectives. [Wooldridge, 1999]
An agent is a computational system that interacts with one or more counterparts or real-world systems, with the following key features to varying degrees:

• Autonomy
• Reactiveness
• Pro-activeness
• Social abilities
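The perceive-and-act view of an agent above can be sketched as a minimal control loop. This is an illustrative example only; the class and method names are not from the lecture:

```python
class ThermostatAgent:
    """Minimal agent: perceives its environment and acts to meet a design objective."""

    def __init__(self, setpoint):
        self.setpoint = setpoint  # design objective: hold this temperature

    def perceive(self, environment):
        # (a) perceive the environment
        return environment["temperature"]

    def act(self, temperature):
        # (b) act upon the environment, autonomously
        if temperature < self.setpoint:
            return "heat_on"
        return "heat_off"


agent = ThermostatAgent(setpoint=21.0)
action = agent.act(agent.perceive({"temperature": 18.5}))
```

Even this trivial agent exhibits autonomy (it decides without outside instruction) and reactiveness (its action depends on what it perceives); pro-activeness and social abilities require richer designs.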
Single agent-based systems differ from multi-agent systems:

• Environment: agents need to take into account others who may interfere with their plans and goals. They need to coordinate with others in order to avoid conflicts.
• Knowledge/expertise/skills: these are distributed
Each agent tries to solve its own learning problem, while other agents in the environment are also trying to solve theirs.
• Centralized multi-agent planning: a coordinating agent receives all the local plans

• Distributed multi-agent planning: each agent is provided with
Multi-agent systems

Environment
Coherence

• Coherence, i.e. how well the system behaves as a unit, is important in MASs
• Coherence can be measured in different ways depending on the problem domain and system
MAS applications
They are particularly suitable for:

• Building systems to solve complex problems that cannot be solved by any one agent on its own
• Dealing with problems that involve many problem-solving methods, require different types of expertise and knowledge or
MAS advantages
The use of MAS technology offers a number of advantages:

• Extensibility and flexibility
• Robustness and reliability
• Computational efficiency and speed
• Development and maintainability
• Reusability
• Reduced costs
MAS challenges
A number of issues arise with regard to their design and implementation:

• Efficient and effective interaction protocols
• Task and problem formulation, decomposition, task allocation to individual agents, and subtask synthesis
• Agent identification, search and location in open systems
• Coherent and stable system behaviour while avoiding harmful interactions
Closed MAS
• Static design with pre-defined components and functionalities
• The properties of the system are known in advance:
  – common language
  – each agent can be developed as an expert
  – agents are cooperative
Advantages

• Distributed load and expertise
• Simplicity and predictability, since
  – components are known
  – interaction language and protocols are known
  – agents usually are cooperative
  – agents share architecture and software

Disadvantages
Open MAS
• The system has no prior static design, only single agents within it

• Agents are not necessarily aware of others – a mechanism for identifying, searching and locating others is required

• Agents may be non-cooperative, malicious or not trustworthy
Advantages

• Single agents or groups are designed separately (modular)
• Flexible and fault tolerant
• Evolutionary design
• Easier to maintain
• Dynamic, open society

Disadvantages
Interaction
Definition: An interaction occurs when two or more agents are brought into a dynamic relationship through a set of reciprocal actions (Ferber 1999)

• Interactions develop as a result of a series of actions whose consequences influence the future behavior of agents
• May be direct or indirect, intended or unintended
• Interaction assumes:
Elements of interactions
• Goals: the agents’ objectives and goals are important. Goals of different agents can be conflicting
• Resources: agents require resources to achieve their objectives. Resources are not infinite, and other agents may want to use them as well. Conflicts may arise
Agent characterization
• Self-interested/antagonistic agents: have incompatible goals with others and are interested in maximizing their own utility, not necessarily that of the society as a whole

• Cooperative/non-antagonistic agents: have usually compatible goals; they act to maximize their own utility in conjunction
Agent communication
Why is communication important?

• Communication is required in MASs where agents have to cooperate, negotiate, etc. with other agents
• Agents need to communicate in order to understand and be understood by others
• Diversity introduces heterogeneity
The model of human intelligence and the human perspective of the world characterise an intelligent agent using symbolic representations and mentalistic notions:

knowledge – John knows humans are mortal
beliefs – John took his umbrella because he believed it was going to rain
desires, goals – John wants to possess a PhD
Intelligent Agent
Security Architecture
Ontology
System Structure
Centralized Multi-agent Self-healing Power System
Three Phase Fault Current and Voltage from Wind Farm with SFCL
Type of Fault           | Current without SFCL (A) | Current with SFCL (A) | Voltage without SFCL (kV) | Voltage with SFCL (kV) | Cycles to compensate without SFCL | Cycles to compensate with SFCL
Distribution Grid Fault | 1985                     | 1634                  | 5.5                       | 7.7                    | 3                                 | 1
Transmission Line Fault | 2240                     | 1670                  | 0                         | 7.7                    | 3                                 | 1
• Critical loads are loads for which continuity of supply should be maintained (e.g. the ICU of a hospital).
• In this work, an Intelligent Distributed Autonomous Power System (IDAPS), comprising solar photovoltaic as distributed energy resources (DER) and loads, as well as their control algorithms, has been developed.
Simulation Setup
Proposed MAS
• During the Grid-Connected Mode (before 5 PM)
• During the Transition (at 5 PM)
• During Resynchronization to the Main Grid (at 8 PM)
Conclusions
• Self-healing
  – Agent protocols have been developed, and a microgrid multi-agent system simulation that demonstrates performance during a casualty requiring self-healing has been implemented. Power system self-healing is difficult, but the case is made for combining multi-agent systems.
Acknowledgement
I thank my researcher students for their research contributions used in the presentation.
Thanks

Q&A
Short Term Course
Advanced Optimization Techniques

Day 5 : (22-May-2015)
Scientific Paper Writing
Dr. Rajesh Kumar
PhD, PDF (NUS, Singapore)
SMIEEE, FIETE, MIE(I), SMIACSIT, LMISTE, MIAENG
• National scientific journal
• International scientific journal
• (general or important)
• SCI / EI / ISTP
• SCI: Science Citation Index
Outline
1. Brief introduction of scientific papers
2. Paper writing
3. Paper submitting
4. Common mistakes in papers
Title
Author Name
Affiliation
Introduction
Main Text

Results

Conclusion

References
1. Title
2. Authors’ names (including the authors’ affiliations)
3. Abstract (including 3–8 keywords)
4. Main text
5. Acknowledgement
6. References
7. Appendix
• The research object (or problem) should be included in the title
• Reflect the main outcome and the approach used in your research work

Title: meaningful and brief
Contribution: the names of the authors, with all initials, and the institute or organization, with full address

Example:
Rajesh Kumar
Abstract

• A short summary of a longer article
• Written after the paper is completed, although it is intended to be read first
• Appears on a separate page just after the title page

Readers use an abstract to:
• Decide whether to read an entire article
• Remember key findings on a topic
• Understand a text
• Index articles
Purpose → Parameters → Tests → Conclusion
Introduction:
• What is the problem and why is it interesting?
• Who are the main contributors?
• What did they do?
Background

Issues

Similar Algorithms
• Experimental procedure or theoretical analysis
• Usually a list of all materials or equipment you used for the experiment
• The theoretical model used in your analysis
• List all steps in the correct order
• Experimental paper: equipment, materials, method
Assumptions

Definitions
Experimental setup

Various experiments
Discussion:
• Extract principles, relationships, generalizations.
• Present analysis, model or theory.
• Show the relationship between the results and the analysis, model or theory.
Figures
• Flow charts show methods and procedures.
• Graphs plot data.
• Schematics show how equipment works, or illustrate a mechanism or model.
Conclusions
Must have:
• Summarize the study work
• State the outcome (or conclusions) of the research
May have:
• Suggestions for further work
Acknowledgement
• The author states and expresses acknowledgement for the financial support.
• Thank people who have helped you with ideas, technical assistance, materials or finance.
• Keep it simple, give full names and affiliations, and don’t get sentimental.
References
• Provide a reference list at the end of the article or chapter to supplement textual notes; the reference list gives complete citations for all works cited.
• References must be complete: name, initials, year, title, journal, volume, start page and finish page.
Appendix
• Generally, an appendix is adopted where the content does not directly relate to the discussed topic but is useful for understanding the study work.
• Equation derivations and experimental data are often included in an appendix.
Paper Writing
1. The design
2. The market – who are your readers?
3. The concept – making a concept sheet
4. Embodiment – the first draft
What writing?     | Who are the readers?           | How will they use it?
Thesis            | Examiners                      | To judge and rank your work
Paper             | Referees                       | To check originality, quality, suitability
                  | Scientifically literate public | To extract information
Research          | The funding                    | To judge if your aims match the
Look Professional
• Grammar: words, sentence structure
• Punctuation: full stop, comma, question mark
• Style: clear, define everything

Consistent in:
• Font size
• Margins
• Spacing
Acknowledgement

I thank
Wei-Ze Wang, Ph.D.
School of Mechanical & Power Engineering
East China University of Science and Technology
for material used in this presentation.
Thanks

Q&A
Technical Writing Using LaTeX

Dinesh Gopalani
dgopalani.cse@mnit.ac.in
What is LaTeX?

• LaTeX is a document markup language used for writing documents.
• LaTeX was originally written in the early 1980s by Leslie Lamport at SRI International.
• It has become the standard for writing scientific documents.
• We may concentrate on the contents of the paper rather than its visual appearance.
• The output produced is equivalent to that of published books.
• Facilitates the automatic numbering of sections, figures, tables and equations.
• If you write a research paper and wish to get it published, each publisher will have its own layout of how it wants things. If you write it in LaTeX, you only need to change the class/style file to set it into their format.
Your LaTeX – The Steps …

favourite text editor → file.tex (text file with embedded instructions) → latex → file.dvi (DeVice Independent document) → dvi2pdf → file.pdf (PDF document) → Acrobat Reader

The .dvi file can also be converted to some other printer format (e.g. PostScript).
How to Write LaTeX Code?

• Commands start with a backslash (\).
• Parameters are given in curly brackets { }.
• Optional parameters are given in square brackets [ ].
• Environments (blocks with a certain type of content):

\begin{environment_type}
environment content
\end{environment_type}
A Simple Document

\documentclass{article}  % specifies the type of the document: article, report, book, letter, etc.

\begin{document}

….  % here we write the contents

\end{document}
Standard Document Classes

• Article: ideal for a short research paper (divided into sections, subsections, etc.).
• Book: class to be used to typeset a book (chapters, sections, etc.).
• Report: similar to the Book class but for single-sided printing.
• A paper is typically split into the following logical parts:
  – a title
  – an abstract
  – a number of sections
  – a number of subsections in each section
  – figures, tables, etc.
The Title

\documentclass{article}
\begin{document}

\title{Technical Writing Using LaTeX}
\author{Dinesh Gopalani}
\date{May 22, 2015}

\maketitle

\end{document}
The same source produces the rendered title block:

Technical Writing Using LaTeX
Dinesh Gopalani
The Abstract

• Used to give an overview of the content of the document.
• Is usually typeset with wider margins than the main text.
• Specified using the abstract environment:

\begin{abstract}
…
\end{abstract}
Sections and Subsections

\documentclass{article}
\begin{document}

\section{This is a Section}
\subsection{This is a Subsection}
This is body of subsection.

\end{document}
Sections and Subsections (Contd.)

Adding \subsection{This is another Subsection} and \section{This is another Section} produces automatically numbered output:

1 This is a Section
1.1 This is a Subsection
This is body of subsection.
1.2 This is another Subsection

2 This is another Section
Figures and Tables

• The figure environment is used to include a floating figure in the text.
• Similarly, the table environment can be used to insert a floating table.
• A caption can be added to both using the \caption{} command.
\begin{figure}
\includegraphics{latex.jpeg}
\caption{A Book on Latex}
\end{figure}
Figures (Contd.)

The caption number is assigned automatically:

Figure 1. A Book on Latex
Tables

• To draw tables, the tabular environment is used.
• Parameters give the information about the column layout (l – Left, c – Centre, r – Right alignments, | – vertical line, etc.).

\begin{tabular}{|l|c|r|}
\hline
Name & Pos & Score \\ \hline
Suresh Sharma & 3rd & 90 \\ \hline
\end{tabular}
Tables (Contd.)

In the column specification {|l|c|r|}, l gives left alignment for the first column, c centre alignment for the second, and r right alignment for the third.

\begin{tabular}{|l|c|r|}
\hline
Name & Pos & Score \\ \hline
Suresh Sharma & 3rd & 90 \\ \hline
Pradeep Agarwal & 2nd & 94 \\ \hline
\end{tabular}

Rendered:

Name            | Pos | Score
Suresh Sharma   | 3rd | 90
Pradeep Agarwal | 2nd | 94
Mathematical Symbols and Equations

• All mathematics must appear in maths mode.
• The following symbols can be produced using the commands:
  \leq  \times  \pi  \infty
Mathematical Symbols and Equations (Contd.)

A method closure $\langle\varsigma(x_i)b_i,S_{e_i}\rangle$
is a pair consisting of a method together with a stack that
is used for the reduction of the method body.
Mathematical Symbols and Equations (Contd.)

• If mathematical formulae are to appear on a separate line, start the mathematics using \[, and end it with \].
• For equations, use the equation environment:

\begin{equation}
\Gamma, x_i:\Sigma_s^{\;\prime}(\iota_i)\vdash
\quad i\in 1..q
\end{equation}
References (Bibliography)

• To create the bibliography, use the thebibliography environment.
• Items in the bibliography are added using the \bibitem{label} command. The label is used to refer to the entry.
• Citing a bibliography item in the main text can be done with the \cite{label} command.

\begin{thebibliography}{99}
\bibitem{label1} bibliographic information
\bibitem{label2} bibliographic information
…
\end{thebibliography}
Table of Contents

• To add a table of contents, with chapters, sections, etc., the command \tableofcontents is used.
• We may also include a list of figures and a list of tables by using \listoffigures and \listoftables respectively.

Table of Contents (Contd.)

\documentclass[12pt]{report}
\begin{document}

\tableofcontents
\listoffigures
\listoftables

….

\end{document}
Labels and Cross References

• To name a numbered object (figure, section, chapter, etc.) the command \label{label-name} is used.
• To insert in the text the number of the object named with a \label command, use \ref{label-name}.

Example:

The calculus consists of objects and aspects, and defines the operations. Table \ref{syntax-un-asp} summarizes the syntax for the aspect calculus. The conventions used for representing the syntax are based on standard Backus-Naur Form (BNF) \cite{cho.56, knu.64}.
….
\begin{tabular}
…
\label{syntax-un-asp}
\end{tabular}
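The pieces covered above can be assembled into one complete, compilable document. A minimal sketch, reusing the title, table data, and bibliography placeholders from the earlier slides (the section text and equation content are illustrative):

```latex
\documentclass{article}

\title{Technical Writing Using LaTeX}
\author{Dinesh Gopalani}
\date{May 22, 2015}

\begin{document}
\maketitle

\begin{abstract}
A short overview of the content of the document.
\end{abstract}

\section{This is a Section}
Scores are summarized in Table~\ref{tab:scores},
typeset as described in \cite{label1}.

\begin{table}[h]
\centering
\begin{tabular}{|l|c|r|}
\hline
Name & Pos & Score \\ \hline
Suresh Sharma & 3rd & 90 \\ \hline
\end{tabular}
\caption{Scores}
\label{tab:scores}
\end{table}

\begin{equation}
A \leq \pi \times r^2
\end{equation}

\begin{thebibliography}{9}
\bibitem{label1} bibliographic information
\end{thebibliography}
\end{document}
```

Running latex twice on this file resolves the \ref and \cite cross-references.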
Final Words ....

• Your journey with LaTeX already started by attending this session.
• Enjoy your writing with LaTeX.
• Spread the joy of using LaTeX.