
Lab Tasks

Task 1: Map Coloring Problem
Problem Description:
Given a map divided into regions, we have to color it with a limited number of colors such that no two
adjacent regions share the same color.

Method Formulation:
To solve this problem we use a genetic algorithm that improves the coloring over multiple generations of
candidate solutions. The initial population is generated by assigning colors to the regions at random. The
quality of a solution is measured by a fitness function, evalMapColouring, which counts the number of
adjacent regions that share the same color; our goal is to minimize this count.
New generations are produced with two-point crossover, which exchanges parts of the color assignments of
two individuals. Mutation is applied uniformly: with a given probability, the color of a region in an
individual is changed. Individuals are chosen by tournament selection according to their fitness. We iterate
over a specified number of generations and keep the best result.
Individuals, population initialization, evaluation, and the genetic operators (selection, mutation, and
crossover) are set up with the DEAP toolbox and its register functions.

Implementation:
pip install deap

Collecting deap
  Using cached deap-1.4.1-cp310-cp310-manylinux_2_5_x86_64.manylinux1_x86_64.manylinux_2_17_x86_64.manylinux2014_x86_64.whl (135 kB)
Requirement already satisfied: numpy in /usr/local/lib/python3.10/dist-packages (from deap) (1.23.5)
Installing collected packages: deap
Successfully installed deap-1.4.1
import numpy as np
from deap import base, creator, tools, algorithms

# Input variables
numColours = 5
numNames = 7
colours = ('red', 'green', 'blue', 'gray', 'orange')
names = ('Olivia', 'Liam', 'Emma', 'Noah', 'Ava', 'William', 'Sophia')
neighbours = [
    [0, 1, 1, 1, 0, 0, 0],
    [0, 0, 1, 0, 1, 0, 1],
    [0, 0, 0, 1, 0, 1, 0],
    [0, 0, 0, 0, 1, 0, 0],
    [0, 0, 0, 0, 0, 1, 0],
    [0, 0, 0, 0, 0, 0, 1],
    [0, 0, 0, 0, 0, 0, 0],
]

def evalMapColouring(ind):
    conflicts = sum(neighbours[i][j] and ind[i] == ind[j]
                    for i in range(numNames) for j in range(numNames))
    return conflicts,

# Genetic Algorithm setup
creator.create("FitnessMin", base.Fitness, weights=(-1.0,))
creator.create("Individual", list, fitness=creator.FitnessMin)

toolbox = base.Toolbox()
toolbox.register("attr_int", np.random.randint, numColours)
toolbox.register("individual", tools.initRepeat, creator.Individual,
                 toolbox.attr_int, n=numNames)
toolbox.register("population", tools.initRepeat, list, toolbox.individual)

# Registering the evaluate function
toolbox.register("evaluate", evalMapColouring)

toolbox.register("mate", tools.cxTwoPoint)
toolbox.register("mutate", tools.mutUniformInt, low=0, up=numColours - 1, indpb=0.2)
toolbox.register("select", tools.selTournament, tournsize=3)

pop = toolbox.population(n=10)
stats = tools.Statistics(lambda ind: ind.fitness.values)
stats.register("min", np.min)

# Running the genetic algorithm
pop, log = algorithms.eaSimple(pop, toolbox, cxpb=0.8, mutpb=0.4, ngen=50,
                               stats=stats, verbose=False)

# Retrieving and printing the best solution
best = tools.selBest(pop, 1)[0]
print("Best: %s. Fitness: %s." % (best, evalMapColouring(best)[0]))
for i in range(numNames):
    print("%s ==> %s" % (names[i], colours[best[i]]))
Best: [2, 3, 4, 1, 4, 1, 2]. Fitness: 0.
Olivia ==> blue
Liam ==> gray
Emma ==> orange
Noah ==> green
Ava ==> orange
William ==> green
Sophia ==> blue

Discussion:
• The genetic algorithm successfully tackled the map coloring problem.
• The best solution found had a fitness of 0, meaning it satisfied every map coloring constraint.
• Fine-tuning the parameters can affect both the convergence speed and the quality of the solution
obtained (see the sketch below).
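
As an illustration of this last point, the following sketch re-runs the same DEAP setup under a few
alternative parameter settings. The specific values are arbitrary choices for demonstration, not tuned
recommendations, and the snippet assumes the toolbox, stats, and evalMapColouring objects defined above.

# Comparing a few (assumed, untuned) settings for crossover probability,
# mutation probability, and number of generations.
for cxpb, mutpb, ngen in [(0.6, 0.2, 25), (0.8, 0.4, 50), (0.9, 0.1, 100)]:
    pop = toolbox.population(n=10)
    pop, log = algorithms.eaSimple(pop, toolbox, cxpb=cxpb, mutpb=mutpb,
                                   ngen=ngen, stats=stats, verbose=False)
    best = tools.selBest(pop, 1)[0]
    print("cxpb=%s mutpb=%s ngen=%s -> best fitness: %s"
          % (cxpb, mutpb, ngen, evalMapColouring(best)[0]))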

Task 2: The 8-Puzzle Problem
Problem Description:
• The 8-puzzle consists of a 3x3 grid containing tiles numbered 1-8 and one empty space ("E"). Starting
from a given configuration, the objective is to reach a goal configuration by sliding the empty space
left, right, up, or down. The problem is characterized by:
• State: a 3x3 array holding the values 1-8 and "E" for the empty space.
• Actions: moving the blank up, down, left, or right, subject to the grid boundaries.
• Goal test: checking whether the current state matches the goal state.
• Path cost: each move is assigned the same unit cost.

Method Formulation:
The solution implements informed (heuristic) search using the A* search algorithm with two heuristics for
estimating the remaining cost:
• h1 (Misplaced Tiles): the number of tiles that are not in their goal positions.
• h2 (Sum of Distances): the sum of the Manhattan distances of the tiles from their target positions.
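
As a worked example, take the initial state [1, 0, 3, 4, 2, 5, 7, 8, 6] used in the implementation below
(0 standing for the blank) and the goal state [1, 2, 3, 4, 5, 6, 7, 8, 0]. Four positions differ from the
goal, so h1 = 4, and the summed Manhattan distances of those entries to their goal positions give
h2 = 3 + 1 + 1 + 1 = 6 (following the code below, the blank itself is counted in both values), so the
combined heuristic evaluates to 10.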

Implementation:
pip install simpleai

Collecting simpleai
  Downloading simpleai-0.8.3.tar.gz (94 kB)
  Preparing metadata (setup.py) ... done
Building wheels for collected packages: simpleai
  Building wheel for simpleai (setup.py) ... done
  Created wheel for simpleai: filename=simpleai-0.8.3-py3-none-any.whl size=100984 sha256=32b875ef40a11b2f1c478978727a0854639a1ce71c6a0278a26336c2689a8320
  Stored in directory: /root/.cache/pip/wheels/91/0c/38/421d7910e7bc59b97fc54f490808bdb1097607d83d1a592865
Successfully built simpleai
Installing collected packages: simpleai
Successfully installed simpleai-0.8.3

from simpleai.search import astar, SearchProblem

class EightPuzzleProblem(SearchProblem):
    def __init__(self, initial_state):
        self.initial_state = tuple(initial_state)
        self.goal_state = tuple([1, 2, 3, 4, 5, 6, 7, 8, 0])

    def actions(self, state):
        actions = []
        empty_index = state.index(0)
        row, col = divmod(empty_index, 3)

        if row > 0:
            actions.append(('down', empty_index - 3))
        if row < 2:
            actions.append(('up', empty_index + 3))
        if col > 0:
            actions.append(('right', empty_index - 1))
        if col < 2:
            actions.append(('left', empty_index + 1))

        return actions

    def result(self, state, action):
        state_list = list(state)
        empty_index = state_list.index(0)
        new_index = action[1]

        state_list[empty_index], state_list[new_index] = \
            state_list[new_index], state_list[empty_index]

        return tuple(state_list)

    def is_goal(self, state):
        return state == self.goal_state

    def heuristic(self, state):
        # h1 (Misplaced Tiles)
        misplaced_tiles = sum([1 if state[i] != self.goal_state[i] else 0
                               for i in range(9)])

        # h2 (Sum of Distances)
        total_distance = 0
        for i in range(9):
            if state[i] != self.goal_state[i]:
                goal_index = self.goal_state.index(state[i])
                goal_row, goal_col = divmod(goal_index, 3)
                current_row, current_col = divmod(i, 3)
                distance = abs(goal_row - current_row) + abs(goal_col - current_col)
                total_distance += distance

        return misplaced_tiles + total_distance

def display_state(state):
    for i in range(0, 9, 3):
        print(state[i:i + 3])
    print("\n")

# Example usage:
initial_state = [1, 0, 3, 4, 2, 5, 7, 8, 6]
problem = EightPuzzleProblem(initial_state)
result = astar(problem)

print("Initial State:")
display_state(initial_state)

print("Final State:")
display_state(result.state)
print("Path to goal:", result.path())
print("Total cost:", result.cost)

Initial State:
[1, 0, 3]
[4, 2, 5]
[7, 8, 6]

Final State:
(1, 2, 3)
(4, 5, 6)
(7, 8, 0)

Path to goal: [(None, (1, 0, 3, 4, 2, 5, 7, 8, 6)), (('up', 4), (1, 2, 3, 4, 0, 5, 7, 8, 6)),
(('left', 5), (1, 2, 3, 4, 5, 0, 7, 8, 6)), (('up', 8), (1, 2, 3, 4, 5, 6, 7, 8, 0))]
Total cost: 3
Discussion:
The 8-puzzle problem is solved with the A* search algorithm in the method above. Combining h1 and h2
gives a better-informed search, which should lead to the goal state more quickly.

Task 3: Route Planning
Problem Description:
• The task is to determine the best (cheapest) route between two cities. The cities are represented as
integers from 1 to 6. The problem is defined by the following components:
• State: the numerical representation of a city, ranging from 1 to 6.
• Action: travelling from the current city to a neighbouring city; the resulting state is the destination
city. For example, if the current city is 1 and the next state is 3, the action "3" represents a journey
from City 1 to City 3.
• Goal test: verifying whether the current state is the target state. Here, City 5 is the destination.
• Path cost: the cost of each leg is given by the graph. The graph's weights are stored in a 6x6 array in
which each entry is the distance between a pair of cities (0 for a city to itself and 'inf' for cities
that are not directly connected).

Method Formulation:
• We apply three distinct search strategies to this route planning problem:
• Uniform Cost Search
• Breadth-First Search
• Depth-First Search

Implementation:
from simpleai.search import SearchProblem, astar, greedy, breadth_first, depth_first, uniform_cost
import math

COSTS = [
    [0, 7, 9, 'inf', 'inf', 14],
    [7, 0, 10, 15, 'inf', 'inf'],
    [9, 10, 0, 11, 'inf', 2],
    ['inf', 15, 11, 0, 6, 'inf'],
    ['inf', 'inf', 'inf', 6, 0, 9],
    [14, 'inf', 2, 'inf', 9, 0]
]

class Route(SearchProblem):
    def __init__(self, initial, goal):
        self.initial = initial - 1
        self.goal = goal - 1
        super(Route, self).__init__(initial_state=self.initial)

    def actions(self, state):
        actions = []
        for action in range(len(COSTS[state])):
            if COSTS[state][action] not in ['inf', 0]:
                actions.append(action)
        return actions

    def result(self, state, action):
        return action

    def is_goal(self, state):
        return state == self.goal

    def cost(self, state, action, state2):
        return COSTS[state][action]

# Uniform Cost Search
def uniform_cost_search(start, goal):
    problem = Route(start, goal)
    result = uniform_cost(problem)
    path = [x[1] + 1 for x in result.path()]
    return path, result.cost

# Breadth-First Search
def breadth_first_search(start, goal):
    problem = Route(start, goal)
    result = breadth_first(problem)
    path = [x[1] + 1 for x in result.path()]
    return path, result.cost

# Depth-First Search
def depth_first_search(start, goal):
    problem = Route(start, goal)
    result = depth_first(problem)
    path = [x[1] + 1 for x in result.path()]
    return path, result.cost

# Example usage
start_city = 1
goal_city = 5

path_uniform_cost, cost_uniform_cost = uniform_cost_search(start_city, goal_city)
print("Uniform Cost Search:")
print("The route is %s, and total cost is %s" % (path_uniform_cost, cost_uniform_cost))

path_bfs, cost_bfs = breadth_first_search(start_city, goal_city)
print("\nBreadth-First Search:")
print("The route is %s, and total cost is %s" % (path_bfs, cost_bfs))

path_dfs, cost_dfs = depth_first_search(start_city, goal_city)
print("\nDepth-First Search:")
print("The route is %s, and total cost is %s" % (path_dfs, cost_dfs))
Uniform Cost Search:
The route is [1, 3, 6, 5], and total cost is 20

Breadth-First Search:
The route is [1, 6, 5], and total cost is 23

Depth-First Search:
The route is [1, 6, 5], and total cost is 23
# Plotting the routes for Uniform Cost Search
import matplotlib.pyplot as plt
import networkx as nx

def visualize_search_algorithm(problem, result, algorithm_name):
    graph = nx.DiGraph()

    for state in range(6):
        graph.add_node(state + 1)

    for state in range(6):
        actions = problem.actions(state)
        for action in actions:
            cost = problem.cost(state, action, None)
            graph.add_edge(state + 1, action + 1, weight=cost)

    pos = nx.spring_layout(graph)
    labels = {node: node for node in graph.nodes()}

    nx.draw(graph, pos, with_labels=True, labels=labels,
            node_size=700, node_color="skyblue", font_size=8)
    edge_labels = {(i, j): problem.cost(i - 1, j - 1, None) for i, j in graph.edges()}
    nx.draw_networkx_edge_labels(graph, pos, edge_labels=edge_labels, font_color='red')

    path = [x[1] + 1 for x in result.path()]
    path_edges = [(path[i - 1], path[i]) for i in range(1, len(path))]
    nx.draw(graph, pos, nodelist=path, node_size=700, node_color="orange")
    nx.draw_networkx_edges(graph, pos, edgelist=path_edges, edge_color="orange", width=2)

    plt.title(f"{algorithm_name} - Route Visualization")
    plt.show()

# Example usage for Uniform Cost Search
problem_uniform_cost = Route(start_city, goal_city)
result_uniform_cost = uniform_cost(problem_uniform_cost)
visualize_search_algorithm(problem_uniform_cost, result_uniform_cost, "Uniform Cost Search")

Discussion:
From the results shown above, it is clear that the three search methods yielded different outcomes with
respect to the chosen route and total cost.
Uniform Cost Search found the cheapest route, [1, 3, 6, 5], with a total cost of 20, while Breadth-First
Search and Depth-First Search both returned the route [1, 6, 5] with a total cost of 23.
We may conclude that, although Breadth-First Search and Depth-First Search were faster, they were not as
optimal as Uniform Cost Search when it came to identifying the cheapest path. The output can also change
depending on the parameters used. In short, the problem's requirements determine which of these algorithms
to use.
Task 4: Maze Solver
Problem Description:
• The objective is to find a path from the starting point 'O' to the end point 'X' of the maze. The maze
grid is made up of the start ('O'), the endpoint ('X'), impassable walls ('#'), and open passageways
(' '). Starting from the start position, we must navigate the maze to the target in a limited number of
moves. We can move in all four directions, and we need to make sure that we take the shortest route
possible.
• The problem can be defined in detail by the following components:
• State representation: the current position (x, y) on the maze grid.
• Actions: depending on the surrounding walls, the possible actions at a given state are moving up,
down, left, or right.
• Transition model: taking a valid action (one that does not run into a wall) updates the state
accordingly.
• Goal test: the goal is reached when the current state is the target point ('X').
• Cost function: all moves are assumed to have the same cost, so each legal move costs the same.

Method Formulation:
We begin from the starting point ('O') and search for the endpoint ('X'). Using a queue data structure for
Breadth-First Search traversal, we can identify the shortest route through the maze, and an empty set is
used to keep track of the states already visited.
Because the state space contains all possible (x, y) positions, the maze can be explored methodically until
the destination point is reached or all alternatives have been exhausted.

Implementation:
# Solving with Breadth-First Search
from collections import deque

def is_valid_move(maze, x, y):
    # Checking if it's a valid move, i.e., within the boundaries
    return 0 <= x < len(maze) and 0 <= y < len(maze[0]) and maze[x][y] != '#'

def find_shortest_path(maze):
    start, target = None, None

    # Find the starting and target points
    for i in range(len(maze)):
        for j in range(len(maze[0])):
            if maze[i][j] == 'O':
                start = (i, j)
            elif maze[i][j] == 'X':
                target = (i, j)

    if start is None or target is None:
        raise ValueError("Starting point ('O') or target point ('X') not found in the maze.")

    visited = set()
    queue = deque([(start, 0)])

    # Possible moves (up, down, left, right)
    moves = [(-1, 0), (1, 0), (0, -1), (0, 1)]

    while queue:
        current, distance = queue.popleft()
        x, y = current

        if current == target:
            return distance  # Shortest path found

        if current in visited:
            continue

        visited.add(current)

        # Explore possible moves
        for move in moves:
            new_x, new_y = x + move[0], y + move[1]

            if is_valid_move(maze, new_x, new_y):
                queue.append(((new_x, new_y), distance + 1))

    return -1  # No path found

# Example usage:
maze = [
    "########",
    "#O# #",
    "#####",
    "##X #",
    "# #",
    "########"
]

result = find_shortest_path(maze)
if result != -1:
    print(f"Shortest path length: {result}")
else:
    print("No path found.")

Shortest path length: 7

# Solving with Depth-First Search
def find_shortest_path_dfs(maze):
    start, target = None, None

    # Finding the starting and target points
    for i in range(len(maze)):
        for j in range(len(maze[0])):
            if maze[i][j] == 'O':
                start = (i, j)
            elif maze[i][j] == 'X':
                target = (i, j)

    if start is None or target is None:
        raise ValueError("Starting point ('O') or target point ('X') not found in the maze.")

    visited = set()

    # Possible moves (up, down, left, right)
    moves = [(-1, 0), (1, 0), (0, -1), (0, 1)]

    def dfs(current, distance):
        nonlocal visited

        x, y = current

        if current == target:
            return distance  # Shortest path found

        if current in visited:
            return float('inf')  # Already visited, backtrack

        visited.add(current)

        # Explore possible moves
        shortest_path = float('inf')
        for move in moves:
            new_x, new_y = x + move[0], y + move[1]

            if is_valid_move(maze, new_x, new_y):
                path_length = dfs((new_x, new_y), distance + 1)
                shortest_path = min(shortest_path, path_length)

        return shortest_path

    result = dfs(start, 0)

    return result if result != float('inf') else -1  # No path found

# Example usage:
maze = [
    "########",
    "#O# #",
    "#####",
    "##X #",
    "# #",
    "########"
]

result = find_shortest_path_dfs(maze)
if result != -1:
    print(f"Shortest path length: {result}")
else:
    print("No path found.")

Shortest path length: 7

Discussion:
We implemented the maze solver using a simple yet effective method and obtained a shortest path length of 7
in the example above. Breadth-First Search works well for small and medium-sized mazes, and the Depth-First
Search version produced the same outcome in this case. To tackle larger mazes efficiently, we might need
more sophisticated algorithms and optimization techniques.
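
As a sketch of what a more sophisticated approach could look like, the snippet below applies the A* idea
from Task 2 to the same maze representation: it reuses is_valid_move and orders the frontier by path length
plus the Manhattan distance to the target. The function name find_shortest_path_astar is a hypothetical
addition for illustration, not part of the lab output above.

import heapq

def find_shortest_path_astar(maze):
    # Locate the start ('O') and target ('X'), as in the BFS version above.
    start, target = None, None
    for i in range(len(maze)):
        for j in range(len(maze[i])):
            if maze[i][j] == 'O':
                start = (i, j)
            elif maze[i][j] == 'X':
                target = (i, j)
    if start is None or target is None:
        raise ValueError("Starting point ('O') or target point ('X') not found in the maze.")

    def heuristic(pos):
        # Manhattan distance to the target; admissible for unit-cost grid moves.
        return abs(pos[0] - target[0]) + abs(pos[1] - target[1])

    # The frontier is a priority queue ordered by f = g + h.
    frontier = [(heuristic(start), 0, start)]
    visited = set()
    while frontier:
        f, g, current = heapq.heappop(frontier)
        if current == target:
            return g  # Length of a shortest path
        if current in visited:
            continue
        visited.add(current)
        for dx, dy in [(-1, 0), (1, 0), (0, -1), (0, 1)]:
            new_x, new_y = current[0] + dx, current[1] + dy
            if is_valid_move(maze, new_x, new_y) and (new_x, new_y) not in visited:
                heapq.heappush(frontier, (g + 1 + heuristic((new_x, new_y)), g + 1, (new_x, new_y)))
    return -1  # No path found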

Task 5: The Travelling Salesman Problem (TSP)
Problem Description:
Computer scientists, operations researchers, and mathematicians all regard the Travelling Salesman Problem
(TSP) as a classic optimization problem. Given a list of cities and the distances between each pair of
cities, the challenge is to find the shortest possible route that visits each city exactly once and returns
to the originating city. The salesperson's total distance travelled should be minimized.
Method Formulation:
Genetic algorithms typically perform well for NP-hard optimization problems like this one, so we use one to
solve the problem, even though there are other approaches such as dynamic programming and brute force.

Every route in the initial population represents a potential solution to the TSP, and we generate these
routes as random permutations of the cities. We then define a fitness function to measure the quality of
each route: since our goal is to give shorter routes higher fitness, fitness is calculated as the inverse of
the total distance travelled. Roulette wheel selection is used to choose parent routes in proportion to
their fitness. We also apply crossover to generate new offspring and mutation to increase population
diversity.

We breed for a set number of generations and report the best route found according to fitness.
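
As a worked example of the fitness function, using the 4-city distance matrix from the implementation
below: the route [0, 1, 3, 2] has total distance 10 + 25 + 30 + 15 = 80 (including the return leg from
city 2 back to city 0), so its fitness is 1 / (1 + 80) ≈ 0.0123; shorter tours receive strictly higher
fitness values.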

Implementation:
import random

def generate_initial_population(num_cities, population_size):
    return [random.sample(range(num_cities), num_cities) for _ in range(population_size)]

def calculate_total_distance(route, distances):
    total_distance = 0
    for i in range(len(route) - 1):
        total_distance += distances[route[i]][route[i + 1]]
    total_distance += distances[route[-1]][route[0]]  # Return to the starting city
    return total_distance

def calculate_fitness(route, distances):
    return 1 / (1 + calculate_total_distance(route, distances))

def crossover(parent1, parent2):
    start, end = sorted(random.sample(range(len(parent1)), 2))
    child = [-1] * len(parent1)
    child[start:end + 1] = parent1[start:end + 1]

    remaining_cities = [city for city in parent2 if city not in child]
    remaining_index = 0

    for i in range(len(parent1)):
        if child[i] == -1:
            child[i] = remaining_cities[remaining_index]
            remaining_index += 1

    return child

def mutate(route, mutation_rate):
    if random.random() < mutation_rate:
        idx1, idx2 = random.sample(range(len(route)), 2)
        route[idx1], route[idx2] = route[idx2], route[idx1]

def genetic_algorithm(distances, population_size, generations, mutation_rate):
    num_cities = len(distances)

    # Generating initial population
    population = generate_initial_population(num_cities, population_size)

    for generation in range(generations):
        fitness_scores = [calculate_fitness(route, distances) for route in population]

        parents = random.choices(population, weights=fitness_scores, k=2)

        offspring = [crossover(parents[0], parents[1]) for _ in range(population_size - 2)]

        offspring.append(parents[0])
        offspring.append(parents[1])

        for child in offspring:
            mutate(child, mutation_rate)

        population = offspring

    # Selecting the best route from the final population
    best_route = max(population, key=lambda route: calculate_fitness(route, distances))
    min_distance = calculate_total_distance(best_route, distances)

    return best_route, min_distance

# Example Usage
distances = [
    [0, 10, 15, 20],
    [10, 0, 35, 25],
    [15, 35, 0, 30],
    [20, 25, 30, 0]
]

population_size = 50
generations = 100
mutation_rate = 0.2

best_route, min_distance = genetic_algorithm(distances, population_size, generations, mutation_rate)
print("Best Route:", best_route)
print("Minimum Distance:", min_distance)

Best Route: [1, 3, 2, 0]
Minimum Distance: 80
# Tweaking the parameters (increasing the number of generations)
population_size = 50
generations = 1000
mutation_rate = 0.2

best_route, min_distance = genetic_algorithm(distances, population_size, generations, mutation_rate)
print("Best Route:", best_route)
print("Minimum Distance:", min_distance)

Best Route: [1, 3, 2, 0]
Minimum Distance: 80

# Tweaking the parameters (increasing the size of the population)
population_size = 100
generations = 1000
mutation_rate = 0.2

best_route, min_distance = genetic_algorithm(distances, population_size, generations, mutation_rate)
print("Best Route:", best_route)
print("Minimum Distance:", min_distance)

Best Route: [3, 1, 0, 2]
Minimum Distance: 80

Discussion:
As mentioned earlier, genetic algorithms generally perform well for NP-hard optimization problems like the
TSP. Mechanisms like crossover and mutation allow for efficient exploration of the solution space.

However, the quality of the solution obtained may be affected by how we tune parameters such as the number
of generations bred and the size of the population. The algorithm may converge to a suboptimal solution if
it is not properly configured.

We executed the algorithm for different numbers of generations and population sizes; however, changing
these parameters did not yield noticeably different results. The best route found remained the same, with a
total distance of 80.
Task 6: The 8-Queens Problem
Problem Description:
A classic chessboard problem, the Eight Queens Problem requires us to arrange eight queens on an 8x8 board
in a way that prevents any two queens from threatening each other. Queens can attack one another if they
are on the same row, column, or diagonal.

Method Formulation:
To solve this problem efficiently, we use a genetic algorithm. We create an initial set of candidate
solutions, where each candidate represents a placement of eight queens on the chessboard (one queen per
column, encoded by its row).
The fitness function counts the number of non-attacking queen pairs; with eight queens there are
8 choose 2 = 28 pairs in total, so fitness is maximized (at 28) when no two queens threaten each other.
We use roulette wheel selection to choose solutions, and we apply crossover and mutation to produce new
generations.
Implementation:
import random

# Each board is a list of eight values: board[i] is the row (1-8) of the queen in column i.
def initialize_population(population_size):
    return [[random.randint(1, 8) for _ in range(8)] for _ in range(population_size)]

# Fitness: the number of non-attacking queen pairs (28 is a perfect board).
def calculate_fitness(board):
    fitness = 0
    for i in range(8):
        for j in range(i + 1, 8):
            if board[i] != board[j] and abs(i - j) != abs(board[i] - board[j]):
                fitness += 1
    return fitness

# Roulette wheel selection: boards are picked with probability proportional to fitness.
def roulette_wheel_selection(population, fitness_scores):
    total_fitness = sum(fitness_scores)
    selection_probabilities = [score / total_fitness for score in fitness_scores]
    chosen_index = random.choices(range(len(population)), weights=selection_probabilities)[0]
    return population[chosen_index]

# Single-point crossover at a random point between 1 and 7.
def crossover(parent1, parent2):
    crossover_point = random.randint(1, 7)
    child = parent1[:crossover_point] + parent2[crossover_point:]
    return child

# Mutation: a randomly chosen column gets a new random row between 1 and 8.
def mutate(child):
    mutated_position = random.randint(0, 7)
    child[mutated_position] = random.randint(1, 8)
    return child

def genetic_algorithm(population_size, generations, mutation_rate):
    population = initialize_population(population_size)

    for generation in range(generations):
        fitness_scores = [calculate_fitness(board) for board in population]

        # Parents are chosen with roulette wheel selection, within a range of the population size.
        parents = [roulette_wheel_selection(population, fitness_scores)
                   for _ in range(population_size)]

        # Consecutive pairs of parents are crossed; each pair produces two children
        # (assumed pairing), keeping the population size constant.
        offspring = []
        for i in range(0, population_size - 1, 2):
            offspring.append(crossover(parents[i], parents[i + 1]))
            offspring.append(crossover(parents[i + 1], parents[i]))

        # Mutation in the offspring
        offspring = [mutate(child) if random.random() < mutation_rate else child
                     for child in offspring]

        best_fitness = max(fitness_scores)
        print(f"Generation {generation + 1}: Best Fitness = {best_fitness}")

        # Exit the loop once a solution is found (28 non-attacking pairs).
        if best_fitness >= 28:
            print("Solution Found!")
            break

        # The offspring become the new generation.
        population = offspring

# Setting the mutation rate to 0.1
mutation_rate = 0.1

# Running the genetic algorithm with a population size of 100 for 50 generations
genetic_algorithm(100, 50, mutation_rate)

Generation 1: Best Fitness = 24
Generation 2: Best Fitness = 25
Generation 5: Best Fitness = 24
Generation 6: Best Fitness = 25
Generation 8: Best Fitness = 25
Generation 9: Best Fitness = 25
Generation 10: Best Fitness = 25
Generation 11: Best Fitness = 25
Generation 12: Best Fitness = 25
Generation 13: Best Fitness = 25
Generation 14: Best Fitness = 25
Generation 15: Best Fitness = 25
Generation 16: Best Fitness = 24
Generation 17: Best Fitness = 25
Generation 18: Best Fitness = 25
Generation 19: Best Fitness = 25
Generation 20: Best Fitness = 25
Generation 21: Best Fitness = 24
Generation 22: Best Fitness = 25
Generation 23: Best Fitness = 25
Generation 24: Best Fitness = 25
Generation 25: Best Fitness = 25
Generation 26: Best Fitness = 25
Generation 27: Best Fitness = 25
Generation 28: Best Fitness = 25
Generation 29: Best Fitness = 25
Generation 30: Best Fitness = 25
Generation 31: Best Fitness = 26
Generation 32: Best Fitness = 26
Generation 33: Best Fitness = 26
Generation 34: Best Fitness = 25
Generation 35: Best Fitness = 25
Generation 36: Best Fitness = 25
Generation 37: Best Fitness = 25
Generation 38: Best Fitness = 25
Generation 39: Best Fitness = 26
Generation 40: Best Fitness = 26
Generation 41: Best Fitness = 26
Generation 42: Best Fitness = 25
Generation 43: Best Fitness = 25
Generation 44: Best Fitness = 25
Generation 45: Best Fitness = 25
Generation 46: Best Fitness = 25
Generation 47: Best Fitness = 26
Generation 48: Best Fitness = 26
Generation 49: Best Fitness = 26
Generation 50: Best Fitness = 26

Discussion:
The algorithm keeps reporting the fittest individual for as long as the fixed number of generations runs,
and the process ends early once a solution with the maximum fitness of 28 is found. Parameters like the
population size, mutation rate, and number of generations can be tweaked to improve the results.
The approach is scalable to larger board sizes (the general N-Queens problem) with minor modifications.

Task 7: Monty Hall Problem Using a Bayesian Network
Problem Description:
The Monty Hall problem is a probability puzzle based on an American TV game show. A car (the prize) stands
behind one of three closed doors, while goats stand behind the other two. After the contestant chooses a
door, the show host, Monty Hall, opens a different door and reveals a goat. At this point, the contestant
can either stick with their initial selection or switch to the other remaining closed door. The contestant
must decide whether sticking or switching gives the better chance of winning the car.

Method Formulation:
Here is a procedural breakdown of the approach:

1. Initialize the variables: P (Prize), the door hiding the prize, with values 0 to 2; C (Contestant),
the door initially chosen, with values 0 to 2; and H (Host Action), the door opened by the host, with
values 0 to 2.

2. Assign priors: the prior probabilities of P and C are 1/3 for each value.

3. Build the Bayesian network structure: directed edges C → H and P → H establish the structure.

4. Define the Conditional Probability Tables (CPDs): the conditional probabilities of H given C and P
are established with TabularCPD for every possible combination (see the worked example after this
list).

5. Create the Bayesian model: we instantiate a Bayesian model with the pgmpy library and add the nodes
and edges to depict the dependencies.

6. Attach the CPDs to the model: each CPD is associated with its variable and added to the Bayesian
model.

7. Run the inference: we use pgmpy's VariableElimination method for probabilistic reasoning. For
example, we can supply evidence such as {'Contestant': 0, 'Host': 2} and query the variable 'Prize'.

8. Retrieve and display the results: finally, we compute and display the posterior probability
distribution of the queried variable.
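
To make the CPD step concrete: if the contestant picks door 0 and the prize is also behind door 0, the host
opens door 1 or door 2 with probability 0.5 each; if the prize is behind door 1, the host is forced to open
door 2 (probability 1); and if the prize is behind door 2, the host must open door 1. These are exactly the
entries encoded, column by column, in the TabularCPD for 'Host' in the implementation below.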

Implementation:
pip install pgmpy

Collecting pgmpy
  Downloading pgmpy-0.1.24-py3-none-any.whl (2.0 MB)
Requirement already satisfied: networkx in /usr/local/lib/python3.10/dist-packages (from pgmpy) (3.2.1)
Requirement already satisfied: numpy in /usr/local/lib/python3.10/dist-packages (from pgmpy) (1.23.5)
Requirement already satisfied: scipy in /usr/local/lib/python3.10/dist-packages (from pgmpy) (1.11.4)
Requirement already satisfied: scikit-learn in /usr/local/lib/python3.10/dist-packages (from pgmpy) (1.2.2)
Requirement already satisfied: pandas in /usr/local/lib/python3.10/dist-packages (from pgmpy) (1.5.3)
Requirement already satisfied: pyparsing in /usr/local/lib/python3.10/dist-packages (from pgmpy) (3.1.1)
Requirement already satisfied: torch in /usr/local/lib/python3.10/dist-packages (from pgmpy) (2.1.0+cu121)
Requirement already satisfied: statsmodels in /usr/local/lib/python3.10/dist-packages (from pgmpy) (0.14.1)
Requirement already satisfied: tqdm in /usr/local/lib/python3.10/dist-packages (from pgmpy) (4.66.1)
Requirement already satisfied: joblib in /usr/local/lib/python3.10/dist-packages (from pgmpy) (1.3.2)
Requirement already satisfied: opt-einsum in /usr/local/lib/python3.10/dist-packages (from pgmpy) (3.3.0)
Requirement already satisfied: python-dateutil>=2.8.1 in /usr/local/lib/python3.10/dist-packages (from pandas->pgmpy) (2.8.2)
Requirement already satisfied: pytz>=2020.1 in /usr/local/lib/python3.10/dist-packages (from pandas->pgmpy) (2023.3.post1)
Requirement already satisfied: threadpoolctl>=2.0.0 in /usr/local/lib/python3.10/dist-packages (from scikit-learn->pgmpy) (3.2.0)
Requirement already satisfied: patsy>=0.5.4 in /usr/local/lib/python3.10/dist-packages (from statsmodels->pgmpy) (0.5.4)
Requirement already satisfied: packaging>=21.3 in /usr/local/lib/python3.10/dist-packages (from statsmodels->pgmpy) (23.2)
Requirement already satisfied: filelock in /usr/local/lib/python3.10/dist-packages (from torch->pgmpy) (3.13.1)
Requirement already satisfied: typing-extensions in /usr/local/lib/python3.10/dist-packages (from torch->pgmpy) (4.5.0)
Requirement already satisfied: sympy in /usr/local/lib/python3.10/dist-packages (from torch->pgmpy) (1.12)
Requirement already satisfied: jinja2 in /usr/local/lib/python3.10/dist-packages (from torch->pgmpy) (3.1.2)
Requirement already satisfied: fsspec in /usr/local/lib/python3.10/dist-packages (from torch->pgmpy) (2023.6.0)
Requirement already satisfied: triton==2.1.0 in /usr/local/lib/python3.10/dist-packages (from torch->pgmpy) (2.1.0)
Requirement already satisfied: six in /usr/local/lib/python3.10/dist-packages (from patsy>=0.5.4->statsmodels->pgmpy) (1.16.0)
Requirement already satisfied: MarkupSafe>=2.0 in /usr/local/lib/python3.10/dist-packages (from jinja2->torch->pgmpy) (2.1.3)
Requirement already satisfied: mpmath>=0.19 in /usr/local/lib/python3.10/dist-packages (from sympy->torch->pgmpy) (1.3.0)
Installing collected packages: pgmpy
Successfully installed pgmpy-0.1.24

from pgmpy.models import BayesianModel
from pgmpy.factors.discrete import TabularCPD

model = BayesianModel([('Contestant', 'Host'), ('Prize', 'Host')])

cpd_c = TabularCPD('Contestant', 3, [[1/3], [1/3], [1/3]])
cpd_p = TabularCPD('Prize', 3, [[1/3], [1/3], [1/3]])
cpd_h = TabularCPD('Host', 3, [[0, 0, 0, 0, 0.5, 1, 0, 1, 0.5],
                               [0.5, 0, 1, 0, 0, 0, 1, 0, 0.5],
                               [0.5, 1, 0, 1, 0.5, 0, 0, 0, 0]],
                   evidence=['Contestant', 'Prize'], evidence_card=[3, 3])

model.add_cpds(cpd_c, cpd_p, cpd_h)

WARNING:pgmpy:BayesianModel has been renamed to BayesianNetwork. Please use BayesianNetwork class, BayesianModel will be removed in future.

from pgmpy.inference import VariableElimination

infer = VariableElimination(model)
posterior = infer.query(variables=['Prize'], evidence={'Contestant': 0, 'Host': 2},
                        show_progress=False, joint=False)
print(posterior['Prize'])

WARNING:pgmpy:BayesianModel has been renamed to BayesianNetwork. Please use BayesianNetwork class, BayesianModel will be removed in future.
WARNING:pgmpy:BayesianModel has been renamed to BayesianNetwork. Please use BayesianNetwork class, BayesianModel will be removed in future.

+----------+--------------+
| Prize    |   phi(Prize) |
+==========+==============+
| Prize(0) |       0.3333 |
+----------+--------------+
| Prize(1) |       0.6667 |
+----------+--------------+
| Prize(2) |       0.0000 |
+----------+--------------+

Discussion:
The relationships between Prize, Contestant, and Host Action are modelled with the Bayesian network. The
host's behaviour is captured by the conditional probabilities in the CPDs, which depend on the contestant's
choice and the location of the prize.
With the contestant selecting door 0 and the host opening door 2, we infer the posterior probability of
winning the car. The results show that the odds of winning are 1/3 when sticking with the originally chosen
door (door 0) and 2/3 when switching to the other closed door. This is in keeping with the counter-intuitive
character of the Monty Hall problem, in which switching doors is the statistically superior strategy for
winning the car.
Bibliography: Lab manuals and suggested readings.
