AI APPROACHES:
TURING APPROACH
Thinking/Reasoning: Systems that think like humans
Behaviour/Action: Systems that act like humans
RATIONAL APPROACH
Thinking/Reasoning: Systems that think rationally
Behaviour/Action: Systems that act rationally
Types of AI Tasks
(i) Mundane Tasks (Daily Routine Tasks)
• Perception
• Vision
• Speech
• Natural Language understanding, generation and translation
• Common-sense Reasoning
• Simple reasoning and logical symbol manipulation
• Robot Control
(ii) Formal Tasks
• Games
– Chess
Deep Blue famously beat Garry Kasparov
– Backgammon
– Draughts
• Mathematics
– Geometry and Logic
Logic Theorist: It proved mathematical theorems, including several theorems
from classical mathematics textbooks
– Integral Calculus
– Programs such as Mathematica and Mathcad can perform complicated symbolic
integration and differentiation.
(iii) Expert Tasks
• Engineering
– Design
– Fault finding
– Manufacturing
• Planning
• Scientific Analysis
• Medical Diagnosis
• Financial Analysis
• Rule based systems - if (conditions) then action
INTELLIGENT AGENTS
An agent is anything that perceives its environment using sensors and acts upon it using actuators.
Rational Agent:
A rational agent is an agent which perceives its environment using sensors and acts upon it using actuators such that
its performance measure is maximized.
Performance Measure:
Degree of Success
PEAS:
P->Performance
E->Environment
A->Actuators
S->Sensors
Robotic sensors: microphones, cameras and infrared range finders
Search
Search Algorithms
• Uninformed or blind search strategies use only the information available in the problem
definition
• Informed or heuristic search strategies use additional information. A heuristic tells us
approximately how far a state is from the goal state. Heuristics might underestimate or
overestimate the merit of a state.
Algorithm:
1. Create a variable called NODE-LIST and set it to the initial state
2. Until a goal state is found or NODE-LIST is empty do
a. Remove the first element from NODE-LIST and call it E. If NODE-LIST was
empty, quit
b. For each way that each rule can match the state described in E do:
i. Apply the rule to generate a new state
ii. If the new state is a goal state, quit and return this state
iii. Otherwise, add the new state to the end of NODE-LIST
[Search tree for the (4, 3) water-jug problem: the initial state (0, 0) expands to (4, 0) and (0, 3), and each of these expands further.]
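As a sketch, the NODE-LIST procedure above can be run on the water-jug problem. The function and state names below are illustrative assumptions, not from the notes; states are (a, b) amounts in the 4- and 3-litre jugs:

```python
from collections import deque

def bfs_water_jug(capacity_a=4, capacity_b=3, goal=2):
    """Breadth-first search on the (4, 3) water-jug problem.

    States are (a, b) tuples; the goal is any state holding `goal`
    litres in either jug. Returns the path of states, or None.
    """
    start = (0, 0)
    queue = deque([[start]])          # NODE-LIST holds whole paths
    visited = {start}
    while queue:
        path = queue.popleft()        # remove the first element (E)
        a, b = path[-1]
        if a == goal or b == goal:    # goal test
            return path
        # apply every rule to generate the successor states
        successors = [
            (capacity_a, b), (a, capacity_b),      # fill a jug
            (0, b), (a, 0),                        # empty a jug
            (a - min(a, capacity_b - b), b + min(a, capacity_b - b)),  # pour a -> b
            (a + min(b, capacity_a - a), b - min(b, capacity_a - a)),  # pour b -> a
        ]
        for state in successors:
            if state not in visited:
                visited.add(state)
                queue.append(path + [state])       # add to end of NODE-LIST
    return None
```

Because the queue is processed first-in first-out, the first goal dequeued lies on a minimal path, matching the BFS advantages listed below.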
Advantages of BFS
• BFS will not get trapped exploring a blind alley. This contrasts with DFS, which may follow a
single unfruitful path for a very long time, perhaps forever, before the path actually terminates in
a state that has no successors.
• If there is a solution, BFS is guaranteed to find it.
• If there are multiple solutions, then a minimal solution will be found.
Informed Search
Generate and test
The generate and test is the simplest form of all heuristic search methods
Algorithm
1. Generate a possible solution. For some problems, this means generating a particular point in the
problem space. For others, it means generating a path from a start state.
2. Test to see if this is actually a solution by comparing the chosen point or the endpoint of the
chosen path to the set of acceptable goal states.
3. If a solution has been found, quit. Otherwise, return to step 1.
Example - Traveling Salesman Problem (TSP)
• Traveler needs to visit n cities.
• Know the distance between each pair of cities.
• Want to know the shortest route that visits all the cities once.
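The two generate-and-test steps map directly onto a brute-force TSP solver: generate every tour, then test its total length. The 4-city distance table below is a made-up instance for illustration:

```python
from itertools import permutations

def tsp_generate_and_test(cities, dist):
    """Generate-and-test for TSP: generate every candidate tour,
    test its total length, and keep the best one found.
    `dist` maps a frozenset pair of cities to their distance.
    Exhaustive, so only practical for small n (a sketch, not a solver).
    """
    start, rest = cities[0], cities[1:]
    best_tour, best_len = None, float("inf")
    for perm in permutations(rest):               # step 1: generate
        tour = (start,) + perm + (start,)
        length = sum(dist[frozenset(pair)]
                     for pair in zip(tour, tour[1:]))
        if length < best_len:                     # step 2: test
            best_tour, best_len = tour, length
    return best_tour, best_len

# hypothetical 4-city instance
d = {frozenset(p): w for p, w in [
    (("A", "B"), 10), (("A", "C"), 15), (("A", "D"), 20),
    (("B", "C"), 35), (("B", "D"), 25), (("C", "D"), 30)]}
tour, length = tsp_generate_and_test(["A", "B", "C", "D"], d)
```

With n cities this examines (n-1)! tours, which is exactly why heuristic methods such as hill climbing are introduced next.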
HILL CLIMBING
Hill climbing is a variant of generate and test in which feedback from the test procedure
is used to help the generator decide which direction to move in the search space.
Hill climbing is often used when a good heuristic function is available for evaluating
states but when no other useful knowledge is available.
DEFINITION:
“It is an iterative algorithm that starts with an arbitrary solution to a problem and
attempts to find a better solution by changing a single element of the solution
incrementally. If the change produces a better solution, the incremental change is taken
as the new solution. This process is repeated until there are no further improvements.”
Algorithm
1. Evaluate the initial state. If it is a goal state, then return it and quit. Otherwise, continue with the
initial state as the current state.
2. Loop until a solution is found or there are no new operators left to be applied in the current state:
a) Select and apply a new operator.
b) Evaluate the new state:
i) If it is a goal state, then return it and quit.
ii) If it is not a goal state but is better than the current state, then make it
the current state.
iii) If it is not better than the current state, then continue in the loop.
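A minimal sketch of the simple hill-climbing algorithm above, on a toy one-dimensional landscape (the evaluation function and neighbour moves are assumptions for illustration):

```python
def hill_climb(evaluate, start, neighbours):
    """Simple hill climbing: apply operators (neighbour moves) one at
    a time and move to the first neighbour that improves on the
    current state. Stops at a local maximum or plateau, which is not
    necessarily the global optimum.
    """
    current = start
    while True:
        improved = False
        for state in neighbours(current):           # select & apply an operator
            if evaluate(state) > evaluate(current): # better than current state?
                current = state                     # make it the current state
                improved = True
                break
        if not improved:                            # no operator improves: stop
            return current

# toy landscape: f(x) = -(x - 7)^2 with neighbours x - 1 and x + 1
result = hill_climb(lambda x: -(x - 7) ** 2, start=0,
                    neighbours=lambda x: [x - 1, x + 1])
```

On this single-peaked landscape the climb reaches the optimum at x = 7; on a landscape with a plateau or ridge, as described below, it would stall.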
Plateau
A flat area of the search space in which all neighbouring states have the same value is called plateau. On a
plateau, it is not possible to determine the best direction in which to move by making local comparisons.
Ridge
A ridge is a special kind of local maximum. It is an area of the search space that is higher than the surrounding
areas and that itself has a slope. But the orientation of the high region, compared to the set of available
moves and the directions in which they move, makes it impossible to traverse the ridge by single moves.
Algorithm (steepest-ascent hill climbing)
1. Evaluate the initial state. If it is a goal state, then return it and quit. Otherwise, continue with the
initial state as the current state.
2. Loop until a solution is found or a complete iteration produces no change to the current state:
a) Let SUCC be a state such that any possible successor of the
current state will be better than SUCC (the worst possible state).
b) For each operator that applies to the current state, evaluate
the new state:
- If it is a goal state, then return it and quit.
- If it is better than SUCC, then set SUCC to this state.
c) If SUCC is better than the current state, then set the current state to SUCC.
GREEDY BEST-FIRST SEARCH
Greedy best-first search evaluates nodes by the heuristic alone: f(n) = h(n). For the 8-puzzle,
h(n) can be the number of misplaced tiles (not counting the blank) or the Manhattan distance; either
helps us quickly find a solution.
Example (Romania road map, Arad to Bucharest): A-S-F-B, 140+99+211 = 450
Performance Analysis
• Time and space complexity – O(b^m)
• Optimality – no
• Completeness – no
A* SEARCH
Evaluation function
f(n) = g(n) + h(n)
• f(n) = estimated cost of the cheapest solution through n
• g(n) = actual path cost from the start node to node n
• h(n) = estimated cost of the cheapest path from n to the goal
Algorithm
1. Create a priority queue of search nodes (initially just the start state). Priority is determined by the
function f.
2. While the queue is not empty and the goal is not found:
(a) Get the best state x from the queue.
(b) If x is not a goal state:
(i) Generate all possible children of x (and save path information with each node).
(ii) Apply f to each new node and add it to the queue.
(iii) Remove duplicates from the queue (using f to pick the best).
Example (Romania road map, Arad to Bucharest): A-S-R-P-B, 140+80+97+101 = 418
Performance Analysis
• Time complexity – depends on the heuristic function and the quality of the admissible heuristic
• Space complexity – O(b^m)
• Optimality – yes (for locally finite graphs)
• Completeness – yes (for locally finite graphs)
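The A* algorithm above can be sketched in a few lines. The road-map fragment and straight-line-distance heuristic values below are taken from the standard Romania route-finding example that the A-S-R-P-B = 418 path refers to:

```python
import heapq

def a_star(graph, h, start, goal):
    """A* search: a priority queue ordered by f(n) = g(n) + h(n).
    `graph` maps node -> {neighbour: step_cost}; `h` maps node -> heuristic.
    Returns (path, cost). Illustrative sketch of the algorithm above.
    """
    queue = [(h[start], 0, start, [start])]        # entries are (f, g, node, path)
    best_g = {start: 0}
    while queue:
        f, g, node, path = heapq.heappop(queue)    # get best state x
        if node == goal:
            return path, g
        for nbr, step in graph[node].items():      # generate children of x
            g2 = g + step
            if g2 < best_g.get(nbr, float("inf")): # keep only the best duplicate
                best_g[nbr] = g2
                heapq.heappush(queue, (g2 + h[nbr], g2, nbr, path + [nbr]))
    return None, float("inf")

# fragment of the Romania road map used in the example above
graph = {"A": {"S": 140}, "S": {"F": 99, "R": 80},
         "F": {"B": 211}, "R": {"P": 97}, "P": {"B": 101}, "B": {}}
h = {"A": 366, "S": 253, "F": 176, "R": 193, "P": 100, "B": 0}
path, cost = a_star(graph, h, "A", "B")
```

Unlike greedy best-first search, which follows h alone into the 450-cost Fagaras route, adding g steers A* to the cheaper route through Rimnicu Vilcea and Pitesti.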
GAME PLAYING
Early work on AI focused on formal tasks such as game playing and theorem proving. In
game playing to select the next state, search technique is used. There were two reasons that games
appeared to be a good domain in which to explore machine intelligence.
(i) Games provide a structured task in which it is easy to measure success or failure
(ii) They did not require a large amount of knowledge. They apply a straightforward search and
provide a solution from the starting state to a winning state
Unfortunately, the second statement is not true for all games. But it is true for simple games.
• For example, consider chess
• The average branching factor is around 35
• Each player may make 50 moves
• It takes 35^100 positions to examine the complete game tree
Thus, in this case, it is difficult to apply a simple straightforward search. Some kind of heuristic search
procedure is necessary.
In the generate-and-test procedure, the generator generates entire proposed solutions and the
tester then evaluates them. To improve the effectiveness of the procedure, two things can be done:
Improve the generate procedure so that only good moves are generated.
Improve the test procedure, so that the best moves will be recognized and explored first.
Two types of generator
(i) Legal move generator – All the possible moves will be generated
(ii) Plausible move generator – Some smaller number of promising moves are generated.
MINIMAX searches the entire game tree, even when in some cases the rest of it can be ignored. The
alpha-beta cutoff returns the same minimax decision without exploring the entire tree. The minimax search
procedure is modified by including a branch-and-bound strategy, one bound for each player. This
modified strategy is known as alpha-beta pruning. It requires the maintenance of two threshold values: one
representing a lower bound on the value that a maximizing node may ultimately be assigned (alpha), and
another representing an upper bound on the value that a minimizing node may be assigned (beta).
Alpha-Beta Pruning
o Alpha-beta pruning is a modified version of the minimax algorithm. It is an
optimization technique for the minimax algorithm.
o As we have seen in the minimax search algorithm, the number of game states
it has to examine is exponential in the depth of the tree. We cannot eliminate
the exponent, but we can roughly cut it in half. There is a technique by which,
without checking each node of the game tree, we can compute the correct
minimax decision, and this technique is called pruning. It involves two
threshold parameters, alpha and beta, for future expansion, so it is called alpha-beta
pruning. It is also called the Alpha-Beta Algorithm.
o Alpha-beta pruning can be applied at any depth of a tree, and sometimes it prunes not
only the tree leaves but also entire sub-trees.
o The two parameters can be defined as:
o The two-parameter can be defined as:
a. Alpha: The best (highest-value) choice we have found so far at any point
along the path of Maximizer. The initial value of alpha is -∞.
b. Beta: The best (lowest-value) choice we have found so far at any point
along the path of Minimizer. The initial value of beta is +∞.
Alpha-beta pruning returns the same move as the standard minimax algorithm does, but it
removes all the nodes that do not really affect the final decision and only make the
algorithm slow. By pruning these nodes, it makes the algorithm fast.
Step 1: In the first step, the Max player makes the first move from node A, where α = -∞
and β = +∞. These values of alpha and beta are passed down to node B, where again α = -∞
and β = +∞, and node B passes the same values to its child D.
Step 2: At node D, the value of α is calculated, as it is Max's turn. The value of α is
compared first with 2 and then with 3; max(2, 3) = 3 becomes the value of α at node D,
and the node value is also 3.
Step 3: The algorithm now backtracks to node B, where the value of β changes, as this is
Min's turn. β = +∞ is compared with the value of the subsequent node:
min(∞, 3) = 3, hence at node B now α = -∞ and β = 3.
In the next step, the algorithm traverses the next successor of node B, which is node E, and
the values α = -∞ and β = 3 are passed down as well.
Step 4: At node E, Max takes its turn, and the value of alpha changes. The current
value of alpha is compared with 5, so max(-∞, 5) = 5; hence at node E, α = 5 and
β = 3. Since α >= β, the right successor of E is pruned, and the algorithm does not
traverse it; the value at node E is 5.
Step 5: Next, the algorithm again backtracks the tree, from node B to node A. At
node A, the value of alpha is changed to the maximum available value, 3, as
max(-∞, 3) = 3, and β = +∞. These two values are now passed to the right successor of A,
which is node C.
At node C, α = 3 and β = +∞, and the same values are passed on to node F.
Step 6: At node F, the value of α is again compared, first with the left child, which is 0
(max(3, 0) = 3), and then with the right child, which is 1 (max(3, 1) = 3); α
remains 3, but the node value of F becomes 1.
Step 7: Node F returns the node value 1 to node C. At C, α = 3 and β = +∞; here the
value of beta changes: it is compared with 1, so min(∞, 1) = 1. Now at C, α = 3
and β = 1, and again the condition α >= β is satisfied, so the next child of C, which is G, is
pruned, and the algorithm does not compute the entire sub-tree G.
Step 8: C now returns the value 1 to A. Here the best value for A is max(3, 1) = 3.
The final game tree shows which nodes were computed and which nodes were never
computed. The optimal value for the maximizer is 3 in this
example.
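The walkthrough above can be reproduced in code. The tree below uses the leaf values given in the steps (2 and 3 under D; 5 under E; 0 and 1 under F); the pruned leaves (the right child of E and the subtree G) are never examined, so their values 9, 7 and 5 here are arbitrary placeholders:

```python
def alphabeta(node, alpha, beta, maximizing, visited):
    """Minimax with alpha-beta pruning over a nested-list game tree
    (leaves are ints). `visited` collects every leaf that is actually
    evaluated, so the pruning is observable.
    """
    if isinstance(node, int):                 # leaf: static evaluation
        visited.append(node)
        return node
    if maximizing:
        value = float("-inf")
        for child in node:
            value = max(value, alphabeta(child, alpha, beta, False, visited))
            alpha = max(alpha, value)
            if alpha >= beta:                 # beta cut-off: prune remaining children
                break
        return value
    value = float("inf")
    for child in node:
        value = min(value, alphabeta(child, alpha, beta, True, visited))
        beta = min(beta, value)
        if alpha >= beta:                     # alpha cut-off
            break
    return value

# A(max) -> B, C (min) -> D, E, F, G (max) -> leaves
tree = [[[2, 3], [5, 9]], [[0, 1], [7, 5]]]
leaves = []
best = alphabeta(tree, float("-inf"), float("inf"), True, leaves)
```

Running this visits only the leaves 2, 3, 5, 0 and 1, skipping the right child of E and all of G, and returns 3 for the maximizer, exactly as in steps 1 through 8.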
Rules to find good ordering:
Following are some rules to find good ordering in alpha-beta pruning:
An Expert System (ES) is a computer program that uses knowledge and inference procedures to solve
problems that are difficult enough to require human expertise for their solution. An expert system is an
information system that is capable of mimicking human thinking and judgement during the
process of decision-making.
Advantages
• Increased productivity (find solutions much faster than humans).
• Availability of expertise (a human expert can only be in one place at a time).
• Can be used in dangerous environments (e.g. in space).
• Reuse of User Interface and Inference modules
• Level of programming skill needed is low.
• Faster and cheaper completion of project
Limitations
• Difficulty in engineering, especially acquiring the expertise.
• Mistrust by the users.
• Effective only in specific areas (areas of expertise)
Inference Engine:
It consists of an inference mechanism and a control strategy.
Inference means searching through the knowledge base and deriving new knowledge.
It involves formal reasoning, with matching and unification similar to that performed
by a human expert to solve problems in a specific area of knowledge.
Inference operates using the modus ponens rule.
The control strategy determines the order in which rules are applied.
There are mainly two types of control mechanism, viz. forward chaining and backward
chaining.
To recommend a solution, the inference engine uses the following strategies −
Forward Chaining
Backward Chaining
Forward Chaining
It is a strategy of an expert system to answer the question, “What can happen next?” Here,
the inference engine follows the chain of conditions and derivations and finally deduces the outcome.
It considers all the facts and rules, and sorts them before arriving at a conclusion. This strategy is
used when working towards a conclusion, result, or effect. For example, prediction of share market status
as an effect of changes in interest rates.
Backward Chaining
With this strategy, an expert system finds the answer to the question, “Why did this happen?” On the
basis of what has already happened, the inference engine tries to find out which conditions could have
held in the past to produce this result. This strategy is used for finding a cause or reason. For example,
diagnosis of blood cancer in humans.
Knowledge Acquisition:
● The knowledge acquisition module allows the system to acquire knowledge about the problem
domain.
● Sources of Knowledge for ES
● Updating of Knowledge can be done using knowledge acquisition module of the system.
− Insertion,
− Deletion and
− Updation of existing knowledge
Knowledge Engineer
Knowledge engineers are involved with validation and verification.
Validation is the process of ensuring that something is correct or conforms to a certain
standard. A knowledge engineer is required to carry out data collection and data entry, but they
must use validation in order to ensure that the data they collect, and then enter into their systems,
fall within the accepted boundaries of the application collecting the data.
It is important that a knowledge engineer incorporates validation procedures into their
systems within the program code. After the knowledge-based system is constructed, it can be
maintained by the domain expert
System Editor
A knowledge base editor helps the expert or knowledge engineer to easily update and
check the knowledge base.
Case History
● Case History stores the files created by the inference engine using the dynamic database created at
the time of consultation.
● Useful for learning module to enrich its knowledge base.
● Different cases with solutions are stored in Case Base system.
● These cases are used for solving problem using Case Base Reasoning (CBR).
Explanation Module
● Most expert systems have explanation facilities that allow the user to ask the system why it
asked some question, and how it reached its conclusion.
● It contains 'How' and 'Why' modules attached to it.
− The sub-module ‘How’ tells the user about the process through which the system has
reached a particular solution.
− The ‘Why’ sub-module tells why that particular solution was offered.
● It explains to the user the reasoning behind any particular problem solution.
● Questions are answered by referring to the system goals, the rules being used, and any
existing problem data.
User Interfaces:
Allows the user to communicate with the system in interactive mode and helps the system to create working
knowledge for the problem to be solved. The explanation may appear in the following forms −
Natural language displayed on screen.
Verbal narrations in natural language.
Listing of rule numbers displayed on the screen.
Special Interfaces:
● It may be used for specialized activities such as handling uncertainty in knowledge.
● This is a major area of expert systems research that involves methods for reasoning with
uncertain data and uncertain knowledge.
● Knowledge is generally incomplete and uncertain.
● To deal with uncertain knowledge, a rule may have associated with it a confidence factor or a
weight.
Rules of Inference in Artificial
intelligence
Inference:
In artificial intelligence, we need intelligent computers that can create new logic from old
logic or from evidence, so generating conclusions from evidence and facts is
termed Inference.
Inference rules:
A proposition is a declarative statement that is either TRUE or FALSE (but not both).
Negation
Negation ¬p corresponds to English “not.”
p ¬p
T F
F T
Conjunction
Conjunction corresponds to English “and.”
p ∧ q is true exactly when p and q are both true.
Ex. Amy is curious AND clever.
p q p∧q
T T T
T F F
F T F
F F F
Disjunction
Disjunction corresponds to English “or.”
p ∨ q is true when p or q (or both) are true.
Ex. Michael is brave OR nuts.
p q p∨q
T T T
T F T
F T T
F F F
Implication
Implication: p → q corresponds to English “if p then q,” or “p implies q.”
If it is raining then it is cloudy.
If there are 200 people in the room, then I am the Easter Bunny.
If p then 2+2=4.
p q p→q
T T T
T F F
F T T
F F T
Propositional Logic: Truth Tables
Truth tables for the different connectives: Negation, Conjunction (AND), Disjunction (OR), Conditional,
Biconditional, NAND, NOR, XOR.
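Such truth tables can also be generated programmatically rather than copied by hand; a small sketch:

```python
from itertools import product

# Truth tables for the main connectives, defined as Boolean functions.
connectives = {
    "p AND q": lambda p, q: p and q,           # conjunction
    "p OR q":  lambda p, q: p or q,            # disjunction
    "p -> q":  lambda p, q: (not p) or q,      # implication (material conditional)
    "p <-> q": lambda p, q: p == q,            # biconditional
    "p XOR q": lambda p, q: p != q,            # exclusive or
}

def truth_table(name):
    """Return rows (p, q, value) in the conventional TT, TF, FT, FF order."""
    fn = connectives[name]
    return [(p, q, fn(p, q)) for p, q in product([True, False], repeat=2)]
```

For example, `truth_table("p -> q")` produces the T, F, T, T column shown in the implication table above.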
Contrapositives: p → q and ¬q → ¬p
Ex. “If it is noon, then I am hungry.”
“If I am not hungry, then it is not noon.”
Converses: p → q and q → p
Ex. “If it is noon, then I am hungry.”
“If I am hungry, then it is noon.”
Inverses: p → q and ¬p → ¬q
Ex. “If it is noon, then I am hungry.”
“If it is not noon, then I am not hungry.”
P Q Expression Value
T T P ∨ Q T
T F P ∧ ¬Q T
F T P → Q T
F F ¬P ∧ Q F
Use De Morgan's Laws to write the negation of the expression, and translate the negation in English
Solution:
¬P ∨ Q is equivalent to P → Q
3. Let
P = "John is healthy"
Q = "John is wealthy"
R = "John is wise"
Represent:
Let
P: The fox can catch the hare
Q: The lynx can catch the hare.
R: The hare is alert
S: The hare is quick
(R Λ S) → ~( P V Q)
b."You can either (stay at the hotel and watch TV ) or (you can go to the museum and spend some time
there)".
The parentheses are used to avoid ambiguity concerning the priority of the logical connectives.
Translation: (P Λ Q) V (R Λ S)
P V T = T
P V F = P
P V ~P = T
….
6.Conditional statements:
6.1. For the implication P → ~Q indicate which of the following expressions is its contrapositive, its
converse and its inverse:
P → Q, ~Q → P, ~P → ~Q, ~P → Q, ~Q → ~P, Q → ~P
f. Let
P: we are on vacation
Q: we go fishing
The logical expression for the above sentence is: P → Q
g.negation: P Λ ¬ Q
"We are on vacation and we do not go fishing."
h.converse: Q → P
"If we go fishing, we are on vacation."
i. inverse: ¬ P → ¬ Q
"If we are not on vacation, we don't go fishing."
j. contrapositive: ¬ Q → ¬ P
"If we don't go fishing, we are not on vacation."
Statement Contrapositive Converse Inverse
P → Q ~Q → ~P Q → P ~P → ~Q
~P → Q ~Q → P Q → ~P P → ~Q
Q → ~P P → ~Q ~P → Q ~Q → P
1. Premises:
If I read the newspaper in the kitchen, my glasses would be on the kitchen table.
Solution:
Let
P: I read the newspaper in the kitchen
Q: My glasses are on the kitchen table
Formal representation:
(1) P → Q
(2) ~P
(3) Therefore ~Q
We know that when P is false, i.e. we have ~P, the implication is true
for any value of Q.
Hence we cannot say whether Q is true or false.
The error in the above argument is called inverse error.
2. Premises:
a. If I don't study hard, I will not pass this course
b. If I don't pass this course I cannot graduate this year.
Solution:
Let
P: I don't study hard
Q: I will not pass this course
R: I cannot graduate this year
Formal representation:
(1) P → Q
(2) Q → R
(3) Therefore P → R
3. Premises:
a. You will get an extra credit if you write a paper or if you solve the test problems.
b.You don’t write a paper, however you get an extra credit.
Solution:
Let
P: You get an extra credit
Q: You write a paper
R: You solve the test problems
Formal representation:
(1) (Q V R) → P
(2) ~Q
(3) P
(4) Therefore R
The above argument is a combination of disjunctive syllogism and modus ponens; however, modus
ponens is not applied correctly: from P and (Q V R) → P we cannot conclude (Q V R). That would be the converse error.
The disjunctive syllogism consists of the following:
Given that (Q V R) is true, and that Q is false (~Q is true), we conclude that R is true.
4. Premises:
a. You will get an extra credit if you write a paper or if you solve the test problems.
b. You don’t write a paper and you don't get an extra credit.
Solution:
Let
P: You get an extra credit
Q: You write a paper
R: You solve the test problems
Formal representation:
(1) (Q V R) → P
(2) ~Q
(3) ~P
(4) Therefore ~R
Note that the premise ~Q is not necessary. By modus tollens, ~P gives ~(Q V R), so both sides of the
disjunction must be false, and Q must be false too.
A valid argument would be the following one:
(1) (Q V R) → P
(2) ~P
(3) Therefore ~Q and ~R
1. Modus Ponens:
The Modus Ponens rule states that if P → Q is true and P is true, then Q will also be true.
Example:
2. Modus Tollens:
The Modus Tollens rule states that if P → Q is true and ¬Q is true, then ¬P will also be true.
It can be represented as:
Example:
3. Hypothetical Syllogism:
The Hypothetical Syllogism rule states that if P → Q is true and Q → R is true, then P → R will
also be true.
Example:
Statement-1: If you have my home key then you can unlock my home. P→Q
Statement-2: If you can unlock my home then you can take my money. Q→R
Conclusion: If you have my home key then you can take my money. P→R
4. Disjunctive Syllogism:
The Disjunctive Syllogism rule states that if P∨Q is true, and ¬P is true, then Q will be true. It
can be represented as:
Example:
5. Addition:
The Addition rule is one of the common inference rules, and it states that if P is true, then P∨Q
will be true.
Example:
6. Simplification:
The Simplification rule states that if P∧Q is true, then P (and likewise Q) will be true. It can be
represented as:
7. Resolution:
The Resolution rule states that if P∨Q and ¬P∨R are true, then Q∨R will also be true. It can
be represented as
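Any of these inference rules can be checked for validity by brute force over all truth assignments: the rule is valid when the conclusion holds in every row where all the premises hold. A sketch, using the standard resolution form (P∨Q and ¬P∨R entail Q∨R) and the inverse error discussed earlier:

```python
from itertools import product

def entails(premises, conclusion, n_vars=3):
    """Brute-force validity check: in every truth assignment where all
    premises are true, the conclusion must also be true."""
    return all(conclusion(*vals)
               for vals in product([True, False], repeat=n_vars)
               if all(p(*vals) for p in premises))

# Resolution: from (P OR Q) and (NOT P OR R), infer (Q OR R) -- valid
resolution_valid = entails(
    [lambda p, q, r: p or q, lambda p, q, r: (not p) or r],
    lambda p, q, r: q or r)

# Inverse error: from (P -> Q) and (NOT P), infer (NOT Q) -- invalid
inverse_error_valid = entails(
    [lambda p, q, r: (not p) or q, lambda p, q, r: not p],
    lambda p, q, r: not q)
```

The checker confirms that resolution is valid while the inverse error is not (the assignment P false, Q true makes its premises true and its conclusion false).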
INFERENCE MECHANISM
An inference is an idea or conclusion that's drawn from evidence and reasoning.
An Inference Engine is a tool from artificial intelligence.
Given a set of rules, there are essentially two ways to generate new knowledge
Forward chaining
Backward chaining
FORWARD CHAINING
Forward chaining is one of the two main methods of reasoning when using an inference engine and can
be described logically as the repeated application of modus ponens. Forward chaining is a popular
implementation strategy for expert systems and business and production rule systems. Forward chaining is
also known as the data-driven approach. Forward chaining starts with the facts and sees what rules apply.
Facts are held in the working memory (i.e., the current state of the world). Rules represent the
actions to be taken when specified facts occur in the working memory. The actions involve adding or
deleting facts from the working memory.
Inference
Rule R1: IF hot and smoky THEN fire
Rule R2: IF alarm_beeps THEN smoky
Rule R3: IF fire THEN switch_on_sprinklers
Suppose that we know the facts A, B, C, D and E, and the rules shown in a knowledge base (not reproduced here).
What facts can we infer from this?
After inferring facts X, L, Y and Z, there are no more rules that can be fired.
Properties of forward chaining
All rules which can fire do fire.
This can be inefficient, leading to spurious rule firing and unfocussed problem solving.
The set of rules that can fire is known as the conflict set.
The decision about which rule to fire is called conflict resolution.
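A minimal sketch of data-driven forward chaining over the R1-R3 rule base above (rules represented as condition-set/conclusion pairs; the function name is illustrative):

```python
def forward_chain(rules, facts):
    """Forward chaining: repeatedly fire every rule whose conditions
    are all in working memory, adding its conclusion as a new fact,
    until no rule produces anything new."""
    facts = set(facts)                    # working memory
    changed = True
    while changed:
        changed = False
        for conditions, conclusion in rules:
            if conditions <= facts and conclusion not in facts:
                facts.add(conclusion)     # fire the rule
                changed = True
    return facts

rules = [({"hot", "smoky"}, "fire"),          # R1
         ({"alarm_beeps"}, "smoky"),          # R2
         ({"fire"}, "switch_on_sprinklers")]  # R3
wm = forward_chain(rules, {"alarm_beeps", "hot"})
```

Starting from the facts alarm_beeps and hot, R2 adds smoky, R1 then adds fire, and R3 finally switches on the sprinklers; at that point no more rules can fire.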
BACKWARD CHAINING
This is also called the goal-driven approach. A desired goal is placed in working memory, and the inference cycle
attempts to find evidence to prove it.
The knowledge base (rule base) is searched for rules that might lead to the goal. If the condition of such a rule
matches a fact in working memory, then the rule is fired and the goal is proved.
Backward chaining means reasoning from goals to facts. Rules and facts are processed using a backward
chaining interpreter.
In the example above, backward chaining inferred Z from the facts and rules that were available.
Advantages
Most efficient to infer one particular fact.
User may be asked to input additional facts
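Backward chaining over the same R1-R3 rule base can be sketched as a recursive proof search (this sketch assumes the rule base has no cyclic rules):

```python
def backward_chain(rules, facts, goal):
    """Backward chaining: a goal is proved if it is a known fact, or if
    some rule concludes it and all of that rule's conditions can
    themselves be proved recursively."""
    if goal in facts:                      # goal already in working memory
        return True
    for conditions, conclusion in rules:
        if conclusion == goal and all(
                backward_chain(rules, facts, c) for c in conditions):
            return True                    # rule fired, goal proved
    return False

rules = [({"hot", "smoky"}, "fire"),          # R1
         ({"alarm_beeps"}, "smoky"),          # R2
         ({"fire"}, "switch_on_sprinklers")]  # R3
proved = backward_chain(rules, {"alarm_beeps", "hot"}, "switch_on_sprinklers")
```

Note how the search only touches rules relevant to the goal, which is why backward chaining is the more efficient choice when one particular fact is to be inferred.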
GENETIC ALGORITHM
A genetic algorithm is a search heuristic inspired by Charles Darwin’s theory of natural evolution,
based on ‘survival of the fittest’. It is a global search method for finding the global optimal solution in
a complicated search space. According to this mechanism, the individuals with optimal fitness will survive
to leave descendants and continue to populate the world.
1. Initialization of Population
Population is the set of candidate solutions, called individuals. An individual is built out of chromosomes. A
chromosome contains genes; each gene encodes one parameter of the problem, and the whole chromosome
represents a candidate solution.
In genetic representation and initialization of the population, instead of starting from a single solution within
the search space, genetic algorithms are initialized with a population of solutions. These are usually random
and will be spread throughout the search space. In a genetic algorithm, each possible solution is represented
by an individual of the population, called a chromosome.
2. Fitness Function
Fitness function accurately reflects the suitability of a solution. The fitness function assigns a score to the
possible solution. The score is a numerical value that represents, how well the particular solution solves
the problem. The task of the GA is to discover solutions that have high fitness values among the set of all
possible solutions.
Genetic algorithms are usually suited to solving maximization problems. Minimization problems are
transformed into maximization problems by some suitable transformation. In general, a fitness function
F(x) is first derived from the objective function f(x) and is used in successive genetic operations.
Consider the following transformations
F(x) = f(x) for maximization problem
F(x) = 1/f(x) for minimization problem, if f(x) ≠ 0
3. Selection Operation
Selection or reproduction operation is usually the first operator applied on population.
Selection operation chooses two individuals from an existing population, according to the definition
of fitness. Individuals which are selected from the population pool are to be parents. According to
Darwin’s theory of survival of the fittest, the best one should survive and create new offspring. A
selection operation in genetic algorithm is a process that helps to select better individuals in the
population for the mating pool.
The various methods for selecting the individual from the population pool for cross-over are
i.Roulette wheel selection
ii.Truncation selection
iii.Tournament selection
iv.Rank selection
4. Cross-over operation
Cross-over operator is applied to the mating pool to create a better individual than the individual in the
population pool. The various types of cross-over operations are
i.Single point cross-over
ii.Multi-point cross-over
iii.Uniform cross-over
i. Single point cross-over
In single point cross-over, the cross-over point is selected randomly along the length of the mated strings,
and the bits next to the cross-over points are exchanged as shown in the Figure.
Parent1 1 0 1 1 0 1 1 0 1 0 | 1 1 0 1 0 0 1 1 0 0
Parent2 0 0 0 1 1 1 1 1 1 0 | 0 1 0 1 1 0 0 0 1 1
Child1 1 0 1 1 0 1 1 0 1 0 | 0 1 0 1 1 0 0 0 1 1
Child2 0 0 0 1 1 1 1 1 1 0 | 1 1 0 1 0 0 1 1 0 0
An example illustrating single point cross-over
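The crossover in the example can be sketched directly; with the cut point after bit 10, it reproduces the parent and child strings above (the function name is illustrative):

```python
import random

def single_point_crossover(parent1, parent2, point=None):
    """Single-point crossover: pick a cut point along the mated
    strings and exchange the tails, producing two children."""
    assert len(parent1) == len(parent2)
    if point is None:
        point = random.randrange(1, len(parent1))   # random cut point
    child1 = parent1[:point] + parent2[point:]      # head of p1 + tail of p2
    child2 = parent2[:point] + parent1[point:]      # head of p2 + tail of p1
    return child1, child2

p1 = [1, 0, 1, 1, 0, 1, 1, 0, 1, 0, 1, 1, 0, 1, 0, 0, 1, 1, 0, 0]
p2 = [0, 0, 0, 1, 1, 1, 1, 1, 1, 0, 0, 1, 0, 1, 1, 0, 0, 0, 1, 1]
c1, c2 = single_point_crossover(p1, p2, point=10)
```

The two returned children match Child1 and Child2 in the example.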
5. Mutation Operation
Mutation alters one or more gene values in a chromosome from its initial state. In mutation, the solution
may change entirely from the previous solution. This method can lead to new individuals in the problem
space. Sometimes, this can lead to new and better results. The following are the three types of mutation
operations: (i) Flip Bit: randomly chooses the genome and inverts the bits; (ii) Boundary: replaces the
genome with either lower or upper bound randomly; and (iii) Uniform: replaces the value of the chosen
gene with a uniform random value selected between the user-specified upper and lower bounds for
that gene.
6. Elitism
The cross-over and mutation operations may destroy the best solution in the population pool. Thus, the elitism
concept is used while adding/discarding an individual to/from the population, because it preserves a few of the
best solutions in the population pool. Elitism allows some of the better individuals from
the current generation (without alteration) to carry over to the next generation while constructing a new
population.
7. Adding/Discarding the individual
This module in GA is used to check whether the offspring (child) produced is added in to
the population pool or discarded from the population pool. The fitness value of the child is compared
with the fitness value of the individuals in the population pool. The steps involved are:
i. Check if the fitness value of the child is greater than the fitness value of the individuals in
the population pool. If the condition is satisfied, go to step (ii); otherwise, discard the
child.
ii. Check whether the child is already in the population pool.
a) If the child is not in the pool, remove the individual which has minimum fitness
value from the population pool and add the child in to the population pool.
b) If the child is already in the pool, discard the child.
iii. Repeat steps (i) to (ii) for the next child.
8. Convergence of genetic algorithm
The convergence criterion for the genetic algorithm is set based on the maximum number of
generations, elapsed time, a predefined fitness value, etc.
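Putting the steps together, here is a minimal GA sketch with random initialization, tournament selection, single-point crossover, flip-bit mutation and elitism. The 5-bit x² fitness problem and all parameter values are illustrative assumptions, not a tuned implementation:

```python
import random

def run_ga(fitness, n_bits=5, pop_size=6, generations=40, p_mut=0.1, seed=0):
    """Minimal genetic algorithm over bit-string chromosomes,
    maximizing `fitness`, with elitism carrying the best individual
    unchanged into each new generation."""
    rng = random.Random(seed)
    # 1. initialization: random population spread over the search space
    pop = [[rng.randint(0, 1) for _ in range(n_bits)] for _ in range(pop_size)]
    for _ in range(generations):
        elite = max(pop, key=fitness)                 # 6. elitism
        new_pop = [elite[:]]
        while len(new_pop) < pop_size:
            # 3. selection: binary tournament picks each parent
            p1 = max(rng.sample(pop, 2), key=fitness)
            p2 = max(rng.sample(pop, 2), key=fitness)
            cut = rng.randrange(1, n_bits)            # 4. single-point crossover
            child = p1[:cut] + p2[cut:]
            for i in range(n_bits):                   # 5. flip-bit mutation
                if rng.random() < p_mut:
                    child[i] = 1 - child[i]
            new_pop.append(child)
        pop = new_pop                                 # 8. until generation limit
    return max(pop, key=fitness)

# 2. fitness: maximize f(x) = x^2, x encoded as a 5-bit integer
to_int = lambda bits: int("".join(map(str, bits)), 2)
best = run_ga(lambda b: to_int(b) ** 2)
```

Because of elitism the best fitness never decreases between generations, so the returned chromosome is at least as good as the best random individual in the initial population.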