
Evolutionary Computation Block

Constraint handling and parameterization

1. Constraint handling.

2. Penalty-function-based techniques.

3. Techniques based on separation of objectives and
constraints.

4. Parameter control.
Constrained optimization problems (COP) – Constraint handling

Why constraints?
• Practical relevance: many practical problems are constrained.

Constrained Optimization Problem (COP): a triple ⟨S, f, Φ⟩ where

• S is a free search space,
• f is a (real-valued) objective function on S,
• Φ is a formula (a Boolean function on S).

Solution of a COP: x ∈ SΦ such that f(x) is optimal in SΦ

• SΦ = {x ∈ S | Φ(x) = true} is the feasible search space

Φ is the feasibility condition; it may be a combination of
several constraints.
In general, with m constraints, they fall into two
types:
• inequalities: g_i(x) ≤ 0, 1 ≤ i ≤ q
• equalities: h_i(x) = 0, q+1 ≤ i ≤ m
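The definitions above can be sketched in code. This is a minimal illustration with a hypothetical objective and constraints (the functions f, g, h and the tolerance eps are assumptions, not from the slides):

```python
# Hypothetical COP: minimize f(x, y) = x^2 + y^2
# subject to the inequality g(x) = 1 - x - y <= 0 (i.e. x + y >= 1)
# and the equality h(x) = x - y = 0.

def f(x):
    return x[0] ** 2 + x[1] ** 2

def g(x):          # inequality constraint: satisfied when g(x) <= 0
    return 1.0 - x[0] - x[1]

def h(x):          # equality constraint: satisfied when h(x) == 0 (up to a tolerance)
    return x[0] - x[1]

def is_feasible(x, eps=1e-6):
    """Phi(x): the feasibility condition combining all constraints."""
    return g(x) <= 0 and abs(h(x)) <= eps

print(is_feasible([0.5, 0.5]))  # True: satisfies both constraints
print(is_feasible([0.0, 0.0]))  # False: violates x + y >= 1
```

The feasible space SΦ is then just the subset of S on which `is_feasible` returns True.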
Examples

Consider the traveling salesman problem (TSP) for n cities:

• Let C = {city1, …, cityn}

If the search space is defined as
• S = C^n, then we need a constraint guaranteeing that each city
appears exactly once in a solution;
• S = {permutations of C}, then no constraint is required.
THEREFORE
Whether a problem is treated as constrained or unconstrained may
depend on the definition of the search space.
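The contrast between the two search spaces can be shown in a few lines (the 4-city list is a hypothetical example):

```python
import random

cities = ["city1", "city2", "city3", "city4"]

# Search space S = C^n: any tuple of cities is a point of S, so an explicit
# constraint is needed to rule out repeated cities.
candidate = ["city1", "city1", "city3", "city4"]
valid_tour = len(set(candidate)) == len(candidate)   # False: constraint violated

# Search space S = permutations of C: every point is a valid tour
# by construction, so no constraint is needed.
tour = cities[:]
random.shuffle(tour)
print(sorted(tour) == sorted(cities))   # True: always a permutation of C
```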
Problems in handling constraints in EAs
• Standard variation operators ignore constraints:
• mutation of a feasible individual may yield an infeasible individual;
• recombination of two feasible individuals may yield infeasible offspring.

• A population may contain both feasible individuals (a, c, d) and
infeasible individuals (b, e, f), while the optimal
solution is 'x'.
How to deal with infeasible individuals?

1. How should two feasible individuals be compared?
2. How should two infeasible individuals be compared?
3. Should we assume that eval_f(x) is better than eval_u(y) for any feasible x and any
infeasible y? (eval_f: evaluation function for the feasible domain; eval_u: evaluation
function for the infeasible domain.) In particular, which individual is better: feasible
individual 'c' or infeasible individual 'f' (note that the optimum is 'x')?
4. Should we consider infeasible individuals harmful and eliminate them from the
population? (They may carry useful genetic material.)
5. Should we 'repair' infeasible solutions by moving them to the closest point of the
feasible space? (Or should we use the repaired version for evaluation purposes only?)
6. Should we choose to penalize infeasible individuals?
Approaches to handle constraints in EAs

 Penalty methods:
COP ⟨S, f, Φ⟩: fold the constraints in Φ into f.
Pros: conceptually simple and transparent; problem independent;
reduces the problem to 'simple' unconstrained optimization; allows the
user to encode his/her preferences through the weights; allows the EA
to tune the fitness function by modifying the weights during the search.
Difficult balance of penalties:
• too low: the final solutions may be infeasible
• too high: slow or premature convergence
 Separation of objectives and constraints.
 Special representations and operators that guarantee feasible solutions, e.g.
mutation or crossover of k-bit integers will always produce k-bit integers.
Problem specific.
 Repair algorithms: an infeasible solution is shifted to a feasible one
without significantly altering its genetic information.
Problem specific; no general guidelines.
Penalty functions

• Assuming a minimisation problem, the general form is:

min f(x) such that x ∈ F

• This can be reformulated as:
min f(x) + p(d(x, F)), where
d(x, F) is a distance function of x from the feasible region F, and
p(·) is a penalty function with p(0) = 0.
• Distance metrics include:
• number of violated constraints
• sum of constraint violations (linear or exponential)
• distance to the feasible region

• The penalty function must encourage exploration of the infeasible region
without wasting time there.
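The reformulation min f(x) + p(d(x, F)) can be sketched directly. Here d(x, F) is taken as the sum of constraint violations and p(d) = w·d is a linear penalty; the 1-D example problem and the weight w = 10 are assumptions for illustration:

```python
def distance_to_feasible(x, inequalities, equalities):
    """d(x, F): sum of constraint violations (one of the metrics listed above)."""
    d = sum(max(0.0, gi(x)) for gi in inequalities)
    d += sum(abs(hi(x)) for hi in equalities)
    return d

def penalized(f, x, inequalities, equalities, w=10.0):
    """min f(x) + p(d(x, F)) with a linear penalty p(d) = w * d, so p(0) = 0."""
    return f(x) + w * distance_to_feasible(x, inequalities, equalities)

# Hypothetical 1-D example: minimize x^2 subject to x >= 1 (g(x) = 1 - x <= 0).
f = lambda x: x * x
g = lambda x: 1.0 - x
print(penalized(f, 0.0, [g], []))  # 0 + 10*1 = 10.0: infeasible point is penalized
print(penalized(f, 1.0, [g], []))  # 1 + 0 = 1.0: feasible point, no penalty
```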
Death penalty functions

A variant is the death penalty approach: when a constraint is
violated, the solution is eliminated (or assigned zero fitness).

Very efficient: when a solution is infeasible, simply generate
another one.
No computation is required to assess the degree of infeasibility.
The search may stagnate when the feasible region
represents a small percentage of the search space:
recommended only for large feasible regions.
Does not use information from infeasible solutions.
Cannot handle equality constraints.
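The death penalty amounts to reject-and-resample, as in this sketch (the sampling domain and feasibility condition are hypothetical):

```python
import random

def death_penalty_sample(random_solution, is_feasible, max_tries=10000):
    """Death penalty as reject-and-resample: discard infeasible candidates and
    draw new ones.  Practical only when the feasible region is large enough
    that a feasible point is found within a few tries."""
    for _ in range(max_tries):
        x = random_solution()
        if is_feasible(x):
            return x
    raise RuntimeError("feasible region too small for the death penalty")

# Hypothetical example: sample from [0, 1]^2 with feasibility x + y >= 1
# (roughly half the space is feasible, so rejection is cheap here).
rand = lambda: (random.random(), random.random())
feas = lambda p: p[0] + p[1] >= 1.0
x = death_penalty_sample(rand, feas)
print(feas(x))  # True
```

With an equality constraint the feasible set has measure zero, so the loop would essentially never terminate, which is why this method cannot handle equalities.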
Static penalty functions

Penalty factors do not depend on the current generation
number:

p(d(x, F)) = Σ_{i=1}^{m} w_i d_i

Some variants:
1. A fixed penalty for all infeasible solutions, with weights set
so high that infeasible solutions are effectively never used.
2. Add a penalty for each violated constraint (d_i = 1 if
constraint i is violated, 0 otherwise).
3. Penalty functions based on the distance to the feasible
region:

f_p(x) = f(x) + Σ_{i=1}^{m} w_i d_i,  where d_i = max(0, g_i(x)) for the
inequality constraints (1 ≤ i ≤ q) and d_i = |h_i(x)| for the equality
constraints (q+1 ≤ i ≤ m).

The first two variants are generally inferior.
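Variant 3 can be sketched as follows; the example objective and constraints (and the fixed weights) are assumptions for illustration:

```python
def static_penalty(f, x, gs, hs, weights):
    """Static penalty f_p(x) = f(x) + sum_i w_i * d_i, with
    d_i = max(0, g_i(x)) for inequalities and d_i = |h_i(x)| for equalities.
    The weights are fixed in advance and never change during the run."""
    d = [max(0.0, g(x)) for g in gs] + [abs(h(x)) for h in hs]
    return f(x) + sum(w * di for w, di in zip(weights, d))

# Hypothetical example: f(x) = x, with g(x) = 2 - x <= 0 and h(x) = x - 3 = 0.
f = lambda x: x
g = lambda x: 2.0 - x
h = lambda x: x - 3.0
print(static_penalty(f, 3.0, [g], [h], [5.0, 5.0]))  # 3 + 0 + 0 = 3.0
print(static_penalty(f, 0.0, [g], [h], [5.0, 5.0]))  # 0 + 5*2 + 5*3 = 25.0
```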


Dynamic penalty functions

• Attempts to solve the problem of setting the values
of w_i by hand.
• The idea is to penalise infeasible solutions more heavily
as the search progresses, e.g. by making each weight a
function of time: w_i(t) = (c_i · t)^α
• Penalties still need fine tuning to the problem:
– too lenient => infeasible solutions in the final population
– too harsh => convergence to a sub-optimal solution
• Defining good dynamic functions is as difficult as
defining static ones; still, they usually perform
better.
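A sketch of a time-dependent weight of the form w(t) = (c·t)^α; the values c = 0.5 and α = 2 are assumed here purely for illustration:

```python
def dynamic_weight(t, c=0.5, alpha=2.0):
    """Time-dependent weight w(t) = (c * t) ** alpha: the penalty grows with
    the generation number t, so infeasible solutions are tolerated early in
    the run and punished harshly late (c and alpha are assumed values)."""
    return (c * t) ** alpha

def dynamic_penalty(f, x, gs, hs, t):
    d = sum(max(0.0, g(x)) for g in gs) + sum(abs(h(x)) for h in hs)
    return f(x) + dynamic_weight(t) * d

# Hypothetical example: f(x) = x^2 with constraint x >= 1 (g(x) = 1 - x <= 0).
f = lambda x: x * x
g = lambda x: 1.0 - x
print(dynamic_penalty(f, 0.0, [g], [], t=2))    # 0 + (0.5*2)^2 * 1 = 1.0
print(dynamic_penalty(f, 0.0, [g], [], t=100))  # 0 + 50^2 * 1 = 2500.0
```

The same infeasible point costs 1.0 at generation 2 but 2500.0 at generation 100, which is exactly the intended effect.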
Adaptive penalty functions
f_p(x) = f(x) + Σ_{i=1}^{m} w_i d_i

• Revise the penalties based on the performance of the best individual over
the last k generations:

w_i(t+1) = (1/λ1) · w_i(t)  if the best x ∈ F for all t−k+1 ≤ j ≤ t
w_i(t+1) = λ2 · w_i(t)      if the best x ∈ S−F for all t−k+1 ≤ j ≤ t
w_i(t+1) = w_i(t)           otherwise

• Another penalty involves estimating a "near-feasibility threshold" (NFT)
over time:

f_p(x, t) = f(x, t) + (F_feas(t) − F_all(t)) Σ_{i=1}^{m} (d_i / NFT_i)^κ,
with NFT_i = NFT_{0,i} / (1 + λ_i)

F_all(t) is the best objective value found so far; F_feas(t) is the best objective
value of a feasible solution found so far. NFT_0 is an upper bound on NFT. λ is 0 for
a static penalty, or depends on the generation number for a dynamic penalty. This
provides a search-specific and constraint-specific penalty.
• Stepwise adaptation of weights: change weight w_i by Δw if the i-th
constraint is violated by the best individual in the population.
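The k-generation adaptive rule can be sketched as below; the factors λ1 = λ2 = 2 and the 3-generation history are assumed values:

```python
def adapt_weight(w, best_was_feasible, lam1=2.0, lam2=2.0):
    """Adaptive weight update: 'best_was_feasible' records, for each of the
    last k generations, whether the best individual was feasible.  Decrease
    the weight if it was always feasible, increase it if it was always
    infeasible, leave it unchanged otherwise (lam1, lam2 > 1 assumed)."""
    if all(best_was_feasible):
        return w / lam1          # feasible for k generations: relax the penalty
    if not any(best_was_feasible):
        return w * lam2          # infeasible for k generations: tighten the penalty
    return w                     # mixed history: keep the weight

print(adapt_weight(8.0, [True, True, True]))     # 4.0
print(adapt_weight(8.0, [False, False, False]))  # 16.0
print(adapt_weight(8.0, [True, False, True]))    # 8.0
```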
Separation of constraints and objectives

 The key concept behind this method is the assumed superiority of feasible
solutions over infeasible ones.
 Incorporate a heuristic rule for processing infeasible solutions:
"every feasible solution is better than every infeasible solution"
 Using binary tournament selection, the following rules are applied:
– Between a feasible and an infeasible solution, the feasible one is selected.
– Between two feasible solutions, the one with the better objective value is selected.
– Between two infeasible solutions, the one with the smaller constraint violation is
selected.

fitness(x) = f(x)                if x is feasible
fitness(x) = Σ_{j=1}^{m} g_j(x)  otherwise
Fuzzy logic techniques

μ_{g_j}(x) = 1                                          if g_j(x) ≤ spec_j
μ_{g_j}(x) = exp{−((g_j(x) − spec_j) / (p_j/2))²}       if g_j(x) > spec_j

(a) (b)

1 1
membership

membership
0.5 0.5

0 0
0 20 40 60 80 0 20 40 60 80
DC gain DC gain
(c) (d)

1 1
membership

membership
0.5 0.5

0 0
0 20 40 60 80 0 20 40 60 80
DC gain DC gain
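The Gaussian-shaped membership function can be sketched directly; the spec value 40 and width p = 20 are assumed values in the spirit of the DC-gain example:

```python
import math

def membership(g, spec, p):
    """Fuzzy membership for a constraint g <= spec: full membership (1.0)
    while the spec is met, decaying smoothly as exp(-((g - spec)/(p/2))^2)
    as the violation grows (p controls the tolerance width)."""
    if g <= spec:
        return 1.0
    return math.exp(-((g - spec) / (p / 2.0)) ** 2)

print(membership(30.0, 40.0, 20.0))  # 1.0: specification satisfied
print(membership(50.0, 40.0, 20.0))  # exp(-1) ~ 0.368: spec violated by 10
```

Unlike a hard constraint, the membership degrades gradually, so solutions that barely miss a specification are still partially rewarded.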
