
Unit - 5

Introduction to Dynamic Programming: need and characteristics, stage and state, process of dynamic programming.
Introduction to emerging optimization techniques: Artificial Neural Networks, Fuzzy Logic, Genetic Algorithms (only the concept of each technique).
Q1. What is dynamic programming and why is it needed?
Answer - Dynamic programming is used for problems requiring a sequence of interrelated decisions: each decision depends on the previous decisions or solutions formed. Dynamic programming is a recursive optimization procedure, meaning it optimizes on a step-by-step basis using information from the preceding steps; we optimize as we go. In dynamic programming, a single step is sequentially related to the preceding steps and is not itself a solution to the problem. A single step contains information that identifies a segment or part of the optimal solution, e.g. in time-dependent problems and decision making.
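The "sequence of interrelated decisions" idea can be sketched as a small multistage shortest-path problem; the stage-transition costs below are hypothetical illustration data, not from the notes:

```python
from functools import lru_cache

# Hypothetical stage-transition costs: costs[i][(s, t)] is the cost of
# moving from state s at stage i to state t at stage i + 1.
costs = [
    {("A", "B"): 2, ("A", "C"): 4},   # stage 0 decisions
    {("B", "D"): 7, ("C", "D"): 1},   # stage 1 decisions
]

@lru_cache(maxsize=None)
def best(stage, state):
    """Minimum cost from (stage, state) to the end: each step reuses
    the optimal results of the steps that follow it."""
    if stage == len(costs):           # no decisions left
        return 0
    return min(c + best(stage + 1, t)
               for (s, t), c in costs[stage].items() if s == state)

print(best(0, "A"))  # -> 5: A -> C (cost 4) -> D (cost 1)
```

Each call to `best` is not itself a solution; it identifies one segment of the optimal path, and the overall optimum is assembled from these interrelated sub-decisions.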
Q2. Define the basic terms used in dynamic programming.
Answer -
1. Stage - a division of the sequence of a system into various subparts.
2. State - a specific measurable condition of the system.
3. Recursive equation - at every stage in dynamic programming, the decision rule is determined by evaluating an objective function called the recursive equation.
4. Principle of optimality - it states that an optimal set of decision rules has the property that, regardless of the ith decision, the remaining decisions must be optimal with respect to the outcome that results from the ith decision. This means the optimal immediate decision depends only on the current state, not on how you got there.
Q3. What are the basic approaches for solving dynamic programming problems?
Answer - The two basic approaches for solving dynamic programming problems are:
1.) Backward recursion -
a) The problem is given a schematic representation involving a sequence of n decisions.
b) Dynamic programming then decomposes the problem into a set of n stages of analysis, each stage corresponding to one of the decisions. Each stage of analysis is described by a set of elements: decision, input state, output state and return.
c) These elements are given a notational representation when backward recursion analysis is used.
d) The n stages of analysis are then represented symbolically using backward recursion, so the notation can be formalized.
The general form of the recursion equation used to compute the cumulative return is:

cumulative return through stage i = direct return from stage i + cumulative return through stage i-1
2.) Forward recursion - this approach takes a problem decomposed into a sequence of n stages and analyzes the problem starting with the first stage in the sequence, working forward to the last stage. It is also known as the deterministic probability approach.
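The recursion equation above can be sketched in code. For simplicity this sketch assumes the direct return of each decision is independent of the state, so each stage can be settled on its own; the per-stage returns are hypothetical illustration data:

```python
# Backward recursion over stages: analyze the last stage first, then
# accumulate  f(i) = direct return at stage i + f(i + 1).
# Hypothetical direct returns for each decision at each stage:
stages = [
    {"low": 3, "high": 5},   # stage 1
    {"low": 2, "high": 1},   # stage 2
    {"low": 4, "high": 6},   # stage 3
]

cumulative = 0
choices = []
for stage in reversed(stages):            # last stage analyzed first
    decision = max(stage, key=stage.get)  # best direct return here
    cumulative += stage[decision]         # cumulative-return equation
    choices.append(decision)
choices.reverse()                         # report decisions stage 1..n

print(choices, cumulative)  # -> ['high', 'low', 'high'] 13
```

Forward recursion would simply traverse `stages` in its original order; with state-independent returns the two approaches give the same answer.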
Q4. How does dynamic programming differ from goal programming?
Answer - Dynamic programming is a recursive optimization procedure, meaning it optimizes on a step-by-step basis using information from the preceding steps. In goal programming, optimization also occurs step by step, but it is iterative rather than recursive: each step in goal programming represents a unique solution that is non-optimal. In dynamic programming, a single step is sequentially related to the preceding steps and is not itself a solution to the problem.
Q5. What are the advantages and disadvantages of dynamic programming?
Answer -
Advantages -
1) The process of breaking down a complex problem into a series of interrelated subproblems often provides insight into the nature of the problem.
2) Because dynamic programming is an approach to optimization rather than a single technique, it has a flexibility that allows application to other types of mathematical programming problems.
3) The computational procedure in dynamic programming allows for a built-in form of sensitivity analysis based on the state variables and on the variables represented by stages.
4) Dynamic programming achieves computational savings over complete enumeration.
Disadvantages -
1.) More expertise is required to solve dynamic programming problems than to use other methods.
2.) There is no general algorithm like the simplex method, which restricts the computer codes necessary for inexpensive and widespread use.
3.) The biggest problem is dimensionality. This problem occurs when a particular application is characterized by multiple states; it strains computer capability and is time consuming.
ARTIFICIAL NEURAL NETWORKS:

Contents
- Introduction
- Origin of Neural Networks
- Biological Neural Networks
- ANN Overview
- Learning
- Different NN Networks
- Challenging Problems
- Summary
INTRODUCTION
The Artificial Neural Network (ANN), or Neural Network (NN), has provided an exciting alternative method for solving a variety of problems in different fields of science and engineering.
This article tries to give the reader:
- A whole idea about ANNs
- The motivation for ANN development
- Network architectures and learning models
- An outline of some important uses of ANNs
Origin of Neural Networks
- The human brain has many incredible characteristics, such as massive parallelism, distributed representation and computation, learning ability, generalization ability and adaptivity, which seem simple but are really complicated.
- It has always been a dream of computer scientists to create a computer that could solve complex perceptual problems this fast.
- ANN models were an effort to apply the same method the human brain uses to solve perceptual problems.
- Three periods of development for ANNs:
  - 1940s: McCulloch and Pitts: initial work
  - 1960s: Rosenblatt: perceptron convergence theorem; Minsky and Papert: work showing the limitations of a simple perceptron
  - 1980s: Hopfield/Werbos and Rumelhart: Hopfield's energy approach / the back-propagation learning algorithm
Biological Neural Networks
- When a signal reaches a synapse, certain chemicals called neurotransmitters are released.
- Process of learning: the effectiveness of a synapse can be adjusted by the signals passing through it.
- Cerebral cortex: a large flat sheet of neurons about 2 to 3 mm thick and 2200 cm² in area, with about 10^11 neurons.
- Duration of impulses between neurons: milliseconds, and the amount of information sent is also small (a few bits).
- Critical information is not transmitted directly, but stored in the interconnections.
- The term "connectionist model" originated from this idea.
ANN Overview: Computational Model for an Artificial Neuron
The McCulloch-Pitts model:

z = sum_{i=1}^{n} w_i x_i ;  y = H(z)

- Wires: axon and dendrites
- Connection weights: synapses
- Threshold function: activity in the soma
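The McCulloch-Pitts unit defined by the formula above can be sketched in a few lines; the AND-gate weights and threshold are a classic illustrative choice:

```python
def mcculloch_pitts(x, w, threshold):
    """Weighted sum z = sum(w_i * x_i), then hard threshold y = H(z)."""
    z = sum(wi * xi for wi, xi in zip(w, x))
    return 1 if z >= threshold else 0

# An AND gate: weights (1, 1) and threshold 2 fire only when both inputs are 1.
inputs = [(0, 0), (0, 1), (1, 0), (1, 1)]
print([mcculloch_pitts(x, (1, 1), 2) for x in inputs])  # -> [0, 0, 0, 1]
```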
ANN Overview: Network Architecture
Connection patterns:
- Feed-forward:
  - Single-layer perceptron
  - Multilayer perceptron
  - Radial basis function networks
- Recurrent:
  - Competitive networks
  - Kohonen's SOM
  - Hopfield network
  - ART models
Learning
- What is the learning process in an ANN?
  - Updating the network architecture and connection weights so that the network can efficiently perform a task.
- What is the source of learning for an ANN?
  - Available training patterns.
  - The ability of the ANN to automatically learn from examples or input-output relations.
- How to design a learning process?
  - Knowing what information is available.
  - Having a model of the environment: learning paradigm.
  - Figuring out the update process for the weights: learning rules.
  - Identifying a procedure to adjust the weights by the learning rules: learning algorithm.
Learning Paradigms
1. Supervised
- The correct answer is provided to the network for every input pattern.
- Weights are adjusted according to the correct answer.
- In reinforcement learning, only a critique of the answer is provided.
2. Unsupervised
- Does not need the correct output.
- The system itself recognizes the correlations and organizes patterns into categories accordingly.
3. Hybrid
- A combination of supervised and unsupervised.
- Some of the weights are provided with correct outputs while the others are automatically corrected.
Learning Rules
- There are four basic types of learning rules:
  - Error-correction rules
  - Boltzmann
  - Hebbian
  - Competitive learning
- Each of these can be trained with or without a teacher.
- Each has a particular architecture and learning algorithm.
Error-Correction Rules
- An error is calculated for the output and is used to modify the connection weights; the error gradually reduces.
- The perceptron learning rule is based on this error-correction principle.
- A perceptron consists of a single neuron with adjustable weights and a threshold u.
- If an error occurs, the weights are updated iteratively until the error reaches zero.
- Since the decision boundary is linear, if the patterns are linearly separable the learning process converges in a finite number of iterations.
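The perceptron error-correction rule (w <- w + lr * (target - output) * x) can be sketched as follows; the learning rate and the logical-OR data are illustrative. OR is linearly separable, so training converges in a finite number of passes:

```python
def predict(w, b, x):
    """Single neuron with adjustable weights w and bias b (threshold)."""
    return 1 if sum(wi * xi for wi, xi in zip(w, x)) + b >= 0 else 0

# Logical OR: (inputs, target) pairs - a linearly separable problem.
data = [((0, 0), 0), ((0, 1), 1), ((1, 0), 1), ((1, 1), 1)]
w, b, lr = [0.0, 0.0], 0.0, 0.1

for _ in range(20):                       # a few epochs suffice here
    for x, target in data:
        error = target - predict(w, b, x)             # error correction
        w = [wi + lr * error * xi for wi, xi in zip(w, x)]
        b += lr * error

print([predict(w, b, x) for x, _ in data])  # -> [0, 1, 1, 1]
```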
Boltzmann Learning
- Used in symmetric recurrent networks (symmetric: Wij = Wji).
- Consists of binary units (+1 for on, -1 for off).
- Neurons are divided into two groups: hidden and visible.
- Outputs are produced according to Boltzmann statistical mechanics.
- Boltzmann learning adjusts weights until the visible units satisfy a desired probability distribution.
- The change in connection weight, or error correction, is measured by the difference between the correlations of pairs of input and output neurons under the clamped and free-running conditions.
Hebbian Rules
- One of the oldest learning rules, originating from neurobiological experiments.
- The basic concept of Hebbian learning: when neuron A activates and then causes neuron B to activate, the connection strength between the two neurons is increased, and it will be easier for A to activate B in the future.
- Learning is done locally: the weight of a connection is corrected only with respect to the neurons connected to it.
- Orientation selectivity occurs due to Hebbian training of a network.
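The local Hebbian update can be sketched in a few lines; the learning rate and the pre/post activity pairs are hypothetical:

```python
# Hebbian rule: delta_w = lr * pre * post.  The weight grows only when
# the pre- and post-synaptic units are active together; the update uses
# nothing but the two connected units (locality).
lr = 0.5
w = 0.0
activity = [(1, 1), (1, 1), (0, 1), (1, 0)]  # hypothetical (pre, post) pairs

for pre, post in activity:
    w += lr * pre * post   # strengthened only when both fire

print(w)  # -> 1.0  (only the two co-active pairs contributed)
```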
Competitive Learning Rules
- The basis is "winner take all", which originated from biological neural networks.
- All input units are connected together, and all output units are connected via inhibitory weights but receive feedback through an excitatory weight.
- Only the one unit with the largest (or smallest) input is activated, and its weight is adjusted.
- As a result of the learning process, the pattern in the winner unit (its weight) becomes closer to the input pattern.
- http://www.peltarion.com/blog/img/sog/competitive.gif
Multilayer Perceptron
- The most popular network with a feed-forward architecture.
- Applies the back-propagation algorithm.
- As a result of having hidden units, a multilayer perceptron can form arbitrarily complex decision boundaries.
- Each unit in the first hidden layer imposes a hyperplane in the pattern space.
- Each unit in the second hidden layer imposes a hyperregion on the outputs of the first layer.
- The output layer combines the hyperregions of all units in the second layer.
Back propagation Algorithm
Radial Basis Function Network (RBF)
- A radial basis function, such as a Gaussian kernel, is applied as the activation function.
- A radial basis function (also called a kernel function) is a real-valued function whose value depends only on the distance from the origin or some other center: F(x) = F(|x|).
- An RBF network uses hybrid learning: an unsupervised clustering algorithm and a supervised least-squares algorithm.
- In comparison to a multilayer perceptron network:
  - The learning algorithm is faster than back-propagation.
  - After training, the running time is slower.
- Example: the Gaussian function.
Kohonen Self-Organizing Maps
- Consists of a two-dimensional array of output units connected to all input nodes.
- Works based on the property of topology preservation.
- Nearby input patterns should activate nearby output units on the map.
- SOM can be seen as a special case of competitive learning.
- Only the weight vectors of the winner and its neighboring units are updated.
Adaptive Resonance Theory (ART) Models
- ART models were proposed to overcome the stability-plasticity dilemma in competitive learning.
- Is it likely that learning could corrupt the existing knowledge in a unit?
- If the input vector is similar enough to one of the stored prototypes (resonance), learning updates the prototype.
- If not, a new category is defined in an "uncommitted" unit.
- Similarity is controlled by the vigilance parameter.
Hopfield Network
- Hopfield designed a network based on an energy function.
- As a result of the dynamic recurrency, the network's total energy decreases and tends toward a minimum value (an attractor).
- The dynamic updates take place in two ways: synchronously and asynchronously.
- Application: the traveling salesman problem (TSP).
Challenging Problems
- Pattern recognition
  - Character recognition
  - Speech recognition
- Clustering/categorization
  - Data mining
  - Data analysis
Challenging Problems
- Function approximation
  - Engineering and scientific modeling
- Prediction/forecasting
  - Decision making
  - Weather forecasting
Challenging Problems
y Optimization
- Traveling salesman problem(TSP)
Summary
- A great overview of ANNs is presented in this paper; it is easy to understand and straightforward.
- The different types of learning rules and algorithms, as well as the different architectures, are well explained.
- A number of networks were described in simple words.
- The popular applications of NNs were illustrated.
- The author believes that ANNs have brought up both enthusiasm and criticism.
- Actually, except for some special problems, there is no evidence that NNs work better than the alternatives.
- More development and better performance in NNs require the combination of ANNs with new technologies.
FUZZY LOGIC
OVERVIEW
- What is Fuzzy Logic?
- Where did it begin?
- Fuzzy Logic vs. Neural Networks
- Fuzzy Logic in Control Systems
- Fuzzy Logic in Other Fields
- Future
WHAT IS FUZZY LOGIC?
Definition of fuzzy:
- Fuzzy - "not clear, distinct, or precise; blurred"
Definition of fuzzy logic:
- A form of knowledge representation suitable for notions that cannot be defined precisely, but which depend upon their contexts.
What is Fuzzy Logic?
Fuzzy logic is a form of many-valued logic; it deals with reasoning that is approximate rather than fixed and exact. In contrast with traditional logic theory, where binary sets have two-valued logic (true or false), fuzzy logic variables may have a truth value that ranges in degree between 0 and 1.
What is Fuzzy Logic?
Fuzzy logic has been extended to handle the concept of partial truth, where the truth value may range between completely true and completely false. Furthermore, when linguistic variables are used, these degrees may be managed by specific functions.
How Fuzzy Logic Began
Fuzzy logic began with the 1965 proposal of fuzzy set theory by Lotfi Zadeh. It has since been applied to many fields, from control theory to artificial intelligence.
Fuzzy Data vs. Crisp Data
- The reasoning in fuzzy logic is similar to human reasoning.
- It allows for approximate values and inferences, as well as incomplete or ambiguous data, rather than only crisp binary yes/no choices.
Fuzzy Data vs. Crisp Data
- Fuzzy logic is able to process incomplete data and provide approximate solutions to problems that other methods find difficult to solve.
Fuzzy Data vs. Crisp Data
- Terminology used in fuzzy logic but not in other methods includes: very high, increasing, somewhat decreased, reasonable and very low.
Degrees of Truth
Fuzzy logic and probabilistic logic are mathematically similar - both have truth values ranging between 0 and 1 - but they are conceptually distinct, due to different interpretations (see interpretations of probability theory).
Degrees of Truth
Fuzzy logic corresponds to "degrees of truth", while probabilistic logic corresponds to "probability, likelihood"; as these differ, fuzzy logic and probabilistic logic yield different models of the same real-world situations.
Degrees of Truth
Both degrees of truth and probabilities range between 0 and 1 and hence may seem similar at first. For example, let a 100 ml glass contain 30 ml of water. Then we may consider two concepts: Empty and Full. The meaning of each of them can be represented by a certain fuzzy set.
Degrees of Truth
Then one might define the glass as being 0.7 empty and 0.3 full. Note that the concept of emptiness is subjective and thus depends on the observer or designer.
Degrees of Truth
Another designer might equally well design a set membership function where the glass would be considered full for all values down to 50 ml. It is essential to realize that fuzzy logic uses truth degrees as a mathematical model of the vagueness phenomenon, while probability is a mathematical model of ignorance.
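The glass example can be sketched with one possible (subjective) choice of membership functions; a second designer's alternative "full" function is included to show how the choice is up to the designer:

```python
# One designer's membership functions for a 100 ml glass.
def full(ml):
    return ml / 100.0          # fully "Full" only at 100 ml

def empty(ml):
    return 1.0 - full(ml)

# Another (equally valid) designer: the glass counts as full from 50 ml up.
def full_alternative(ml):
    return min(ml / 50.0, 1.0)

print(empty(30), full(30))     # -> 0.7 0.3, as in the text above
print(full_alternative(60))    # -> 1.0
```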
Applying the Values
A basic application might characterize subranges of a continuous variable. For instance, a temperature measurement for anti-lock brakes might have several separate membership functions defining particular temperature ranges needed to control the brakes properly.
Applying the Values
Each function maps the same temperature value to a truth value in the 0 to 1 range. These truth values can then be used to determine how the brakes should be controlled.
Applying the Values
(Figure: membership functions for cold, warm and hot over a temperature scale.)

In this image, the meaning of the expressions cold, warm and hot is represented by functions mapping a temperature scale. A point on that scale has three "truth values", one for each of the three functions.
Applying the Values
The vertical line in the image represents a particular temperature that the three arrows (truth values) gauge. Since the red arrow points to zero, this temperature may be interpreted as "not hot". The orange arrow (pointing at 0.2) may describe it as "slightly warm" and the blue arrow (pointing at 0.8) as "fairly cold".
TRADITIONAL REPRESENTATION OF LOGIC

Slow: speed = 0    Fast: speed = 1

bool speed;
get the speed
if (speed == 0) {
    // speed is slow
}
else {
    // speed is fast
}
FUZZY LOGIC REPRESENTATION
Every problem must be represented in terms of fuzzy sets. What are fuzzy sets?
- Slowest [0.00 - 0.25]
- Slow    [0.25 - 0.50]
- Fast    [0.50 - 0.75]
- Fastest [0.75 - 1.00]
FUZZY LOGIC REPRESENTATION (CONT.)

Slowest   Slow   Fast   Fastest

float speed;
get the speed
if ((speed >= 0.0) && (speed < 0.25)) {
    // speed is slowest
}
else if ((speed >= 0.25) && (speed < 0.5)) {
    // speed is slow
}
else if ((speed >= 0.5) && (speed < 0.75)) {
    // speed is fast
}
else { // speed >= 0.75 && speed <= 1.0
    // speed is fastest
}
Linguistic Variables
While variables in mathematics usually take numerical values, in fuzzy logic applications non-numeric linguistic variables are often used to facilitate the expression of rules and facts.
Linguistic Variables
A linguistic variable such as age may have a value such as young or its antonym old. However, the great utility of linguistic variables is that they can be modified via linguistic hedges applied to primary terms. The linguistic hedges can be associated with certain functions.
Examples
Fuzzy set theory defines fuzzy operators on fuzzy sets. The problem in applying this is that the appropriate fuzzy operator may not be known. For this reason, fuzzy logic usually uses IF-THEN rules, or constructs that are equivalent, such as fuzzy associative matrices.

Rules are usually expressed in the form:
IF variable IS property THEN action

A simple temperature regulator that uses a fan might look like this:

IF temperature IS very cold THEN stop fan
IF temperature IS cold THEN turn down fan
IF temperature IS normal THEN maintain level
IF temperature IS hot THEN speed up fan

There is no "ELSE" - all of the rules are evaluated, because the temperature might be "cold" and "slightly warm" at the same time.
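Evaluating all the fan rules at once, with no ELSE, might look like the following sketch; the membership functions and breakpoints are hypothetical:

```python
# Hypothetical trapezoidal/ramp membership functions over temperature (°C).
def very_cold(t): return max(0.0, min(1.0, (5 - t) / 5))
def cold_m(t):    return max(0.0, min((t - 0) / 5, (15 - t) / 5, 1.0))
def normal(t):    return max(0.0, min((t - 10) / 5, (25 - t) / 5, 1.0))
def hot_m(t):     return max(0.0, min(1.0, (t - 20) / 5))

def fan_rules(t):
    # Every rule fires to the degree its antecedent is true - a
    # temperature can be partly "cold" and partly "normal" at once.
    return {
        "stop fan":      very_cold(t),
        "turn down fan": cold_m(t),
        "maintain":      normal(t),
        "speed up fan":  hot_m(t),
    }

print(fan_rules(13))  # both "turn down fan" and "maintain" fire partially
```

A defuzzification step (e.g. a weighted average of the fan commands) would then turn these firing degrees into one crisp fan speed.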
ORIGINS OF FUZZY LOGIC
- Traces back to Ancient Greece.
- Lotfi Asker Zadeh (1965)
  - First to publish ideas of fuzzy logic.
- Professor Toshiro Terano (1972)
  - Organized the world's first working group on fuzzy systems.
- F.L. Smidth & Co. (1980)
  - First to market fuzzy expert systems.
FUZZY LOGIC VS. NEURAL NETWORKS
- How does a neural network work?
- Both model the human brain:
  - Fuzzy logic
  - Neural networks
- Both are used to create behavioral systems.
FUZZY LOGIC IN CONTROL SYSTEMS
- Fuzzy logic provides a more efficient and resourceful way to design control systems.
- Some examples:
  - Temperature controller
  - Anti-lock brake system (ABS)
TEMPERATURE CONTROLLER
- The problem: change the speed of a heater fan based on the room temperature and humidity.
- A temperature control system has four settings: Cold, Cool, Warm and Hot.
- Humidity can be defined by: Low, Medium and High.
- Using these we can define the fuzzy sets.
BENEFITS OF USING FUZZY LOGIC
ANTI-LOCK BRAKE SYSTEM (ABS)
- Nonlinear and dynamic in nature.
- Inputs for the Intel fuzzy ABS are derived from:
  - Brake
  - 4WD
  - Feedback
  - Wheel speed
  - Ignition
- Outputs:
  - Pulse width
  - Error lamp
FUZZY LOGIC IN OTHER FIELDS
- Business
- Hybrid modelling
- Expert systems
CONCLUSION
- Fuzzy logic provides an alternative way to represent linguistic and subjective attributes of the real world in computing.
- It can be applied to control systems and other applications in order to improve the efficiency and simplicity of the design process.
GENETIC ALGORITHM
INTRODUCTION
● Genetic Algorithm (GA) is a search-based
optimization technique based on the principles
of Genetics and Natural Selection. It is
frequently used to find optimal or near-optimal
solutions to difficult problems which otherwise
would take a lifetime to solve.
INTRODUCTION TO
OPTIMIZATION
● Optimization is the process of making
something better.
What are Genetic Algorithms?
● Nature has always been a great source of
inspiration to all mankind. Genetic Algorithms
(GAs) are search based algorithms based on
the concepts of natural selection and genetics.
● GAs are a subset of a much larger branch of
computation known as Evolutionary
Computation.
BASIC TERMINOLOGY
● Population - a subset of all the possible (encoded) solutions to the given problem.
● Chromosome - one such solution to the given problem.
● Gene - one element position of a chromosome.
● Allele - the value a gene takes for a particular chromosome.
● Genotype - the population in the computation space. In the computation space, the solutions are represented in a way which can be easily understood and manipulated using a computing system.
● Phenotype - the population in the actual real-world solution space, in which solutions are represented the way they are in real-world situations.
● Decoding and Encoding - for simple problems, the phenotype and genotype spaces are the same. However, in most cases the phenotype and genotype spaces are different. Decoding transforms a solution from the genotype space to the phenotype space, while encoding transforms a solution from the phenotype space to the genotype space. Decoding should be fast, as it is carried out repeatedly in a GA during the fitness calculation.
● Fitness Function – A fitness function simply defined
is a function which takes the solution as input and
produces the suitability of the solution as the output.
In some cases, the fitness function and the objective
function may be the same, while in others it might be
different based on the problem.
● Genetic Operators – These alter the genetic
composition of the offspring. These include
crossover, mutation, selection, etc.
Knapsack Problem
● The knapsack problem or rucksack problem is a problem
in combinatorial optimization: Given a set of items, each
with a weight and a value, determine the number of each
item to include in a collection so that the total weight is
less than or equal to a given limit and the total value is as
large as possible. It derives its name from the problem
faced by someone who is constrained by a fixed-size
knapsack and must fill it with the most valuable items.
BASIC STRUCTURE OF GENETIC ALGORITHM
GENOTYPE REPRESENTATION
● Binary representation
● Real-valued representation
● Integer representation
● Permutation representation
GA - POPULATION
● Population is a subset of solutions in the current generation: a set of chromosomes.
● Population initialization:
1. Random initialization
2. Heuristic initialization
● Population models:
1. Steady state
2. Generational
FITNESS FUNCTION
● Takes a candidate solution to the problem as input and produces its fitness (suitability) as output.
● The objective is to either maximize or minimize the given objective function.
GA - PARENT SELECTION
● Fitness-proportionate selection
  - Roulette wheel selection
  - Stochastic universal sampling (SUS)
● Tournament selection
● Rank selection
GA - CROSSOVER
● More than one parent is selected, and one or more offspring are produced using the genetic material of the parents.
● One-point crossover
● Multi-point crossover
● Uniform crossover
GA - MUTATION
● Used to maintain and introduce diversity in the genetic population.
● Mutation operators:
  - Bit-flip mutation
  - Random resetting
  - Swap mutation
  - Scramble mutation
  - Inversion mutation
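Two of the operators named above can be sketched in a few lines, assuming a binary chromosome encoding:

```python
import random

def one_point_crossover(p1, p2, point):
    """Swap the tails of two parents at a single crossover point."""
    return p1[:point] + p2[point:], p2[:point] + p1[point:]

def bit_flip_mutation(chrom, rate, rng=random.random):
    """Flip each gene independently with probability `rate`."""
    return [1 - g if rng() < rate else g for g in chrom]

a, b = [0, 0, 0, 0], [1, 1, 1, 1]
print(one_point_crossover(a, b, 2))  # -> ([0, 0, 1, 1], [1, 1, 0, 0])
print(bit_flip_mutation(a, 0.1))     # a few bits may flip
```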
GA- SURVIVOR SELECTION
● Age Based Selection
● Fitness Based Selection
GA - TERMINATION CONDITION
● When there has been no improvement in the population for X iterations.
● When we reach an absolute number of generations.
● When the objective function has reached a certain pre-defined value.
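The pieces described in this section (population initialization, fitness function, tournament selection, one-point crossover, bit-flip mutation, termination) can be combined into a minimal GA sketch. The task here, maximizing the number of 1-bits ("OneMax"), and all parameters are illustrative:

```python
import random

def run_ga(n_bits=20, pop_size=30, generations=100, seed=0):
    rng = random.Random(seed)
    fitness = lambda c: sum(c)                     # OneMax fitness function
    # Random initialization of the population of binary chromosomes.
    pop = [[rng.randint(0, 1) for _ in range(n_bits)]
           for _ in range(pop_size)]

    for _ in range(generations):
        best = max(pop, key=fitness)
        if fitness(best) == n_bits:                # pre-defined value reached
            break
        new_pop = []
        while len(new_pop) < pop_size:
            # Tournament selection (size 3) of two parents.
            p1 = max(rng.sample(pop, 3), key=fitness)
            p2 = max(rng.sample(pop, 3), key=fitness)
            point = rng.randrange(1, n_bits)       # one-point crossover
            child = p1[:point] + p2[point:]
            # Bit-flip mutation maintains diversity.
            child = [1 - g if rng.random() < 0.02 else g for g in child]
            new_pop.append(child)
        pop = new_pop                              # generational model
    return max(pop, key=fitness)

best = run_ga()
print(sum(best))  # typically reaches the optimum within the budget
```

Swapping in a different fitness function and encoding (e.g. item subsets for the knapsack problem mentioned earlier, with a penalty for exceeding the weight limit) reuses the same loop unchanged.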
