
ROLL NO: 201071026

NAME: MAYURI PAWAR

AI LAB
EXPERIMENT NO: 3b
AIM: Write programs to solve a set of Uniform Random 3-SAT problems for
different combinations of m and n and compare their performance. Try the Hill
Climbing algorithm, Beam Search with a beam width of 3 and 4, Variable
Neighbourhood Descent with 3 Neighbourhood functions and Tabu Search
with neighbourhood functions changing 2 bits at a time.

THEORY:
Boolean satisfiability problem (sometimes called propositional satisfiability
problem and abbreviated SATISFIABILITY, SAT, or B-SAT) is the problem of
determining if there exists an interpretation that satisfies a
given Boolean formula. In other words, it asks whether the variables of a given
Boolean formula can be consistently replaced by the values TRUE or FALSE in
such a way that the formula evaluates to TRUE.
If this is the case, the formula is called satisfiable. On the other hand, if no such
assignment exists, the function expressed by the formula is FALSE for all
possible variable assignments and the formula is unsatisfiable.
For example, the formula "a AND NOT b" is satisfiable because one can find the
values a = TRUE and b = FALSE, which make (a AND NOT b) = TRUE. In contrast,
"a AND NOT a" is unsatisfiable.
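The two examples above can be checked mechanically with a tiny brute-force sketch (the `is_satisfiable` helper below is a name introduced here for illustration; trying every assignment is exponential in the number of variables, so this is only practical for small formulas):

```python
from itertools import product

def is_satisfiable(formula, variables):
    """Try every TRUE/FALSE assignment; return True if any satisfies `formula`."""
    for bits in product([False, True], repeat=len(variables)):
        assignment = dict(zip(variables, bits))
        if formula(assignment):
            return True
    return False

# "a AND NOT b" is satisfiable (a = TRUE, b = FALSE works) ...
print(is_satisfiable(lambda v: v['a'] and not v['b'], ['a', 'b']))  # True
# ... whereas "a AND NOT a" is unsatisfiable.
print(is_satisfiable(lambda v: v['a'] and not v['a'], ['a']))       # False
```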
Inputs:
k= no. of variables in each clause
m= no. of clauses
n= no. of variables
Hill Climbing Algorithm
o Hill climbing is a local search algorithm that continuously moves in the
direction of increasing value to find the peak of the mountain, i.e. the best
solution to the problem. It terminates when it reaches a peak where no
neighbor has a higher value.
o Hill climbing is a technique used for optimizing mathematical problems.
One widely discussed example is the Traveling Salesman Problem, in which
we need to minimize the distance traveled by the salesman.
o It is also called greedy local search, as it only looks at its good immediate
neighbor state and not beyond that.
o A node of hill climbing algorithm has two components which are state
and value.
o Hill Climbing is mostly used when a good heuristic is available.
o In this algorithm, we don't need to maintain and handle the search tree
or graph as it only keeps a single current state.
Features of Hill Climbing:
Following are some main features of Hill Climbing Algorithm:
o Generate and Test variant: Hill climbing is a variant of the Generate and
Test method. The Generate and Test method produces feedback which
helps decide which direction to move in the search space.
o Greedy approach: Hill-climbing algorithm search moves in the direction
which optimizes the cost.
o No backtracking: It does not backtrack the search space, as it does not
remember the previous states.
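The features above can be summarised in a minimal sketch (a generic bit-flip formulation, not the exact implementation used later in this report; `hill_climb` and `flip` are illustrative names):

```python
def hill_climb(state, score, neighbours):
    """Greedy hill climbing: move to the best neighbour while it improves."""
    current, current_score = state, score(state)
    while True:
        best, best_score = current, current_score
        for nb in neighbours(current):
            s = score(nb)
            if s > best_score:
                best, best_score = nb, s
        if best_score <= current_score:   # no improving neighbour: local maximum
            return current, current_score
        current, current_score = best, best_score

# Toy example: maximise the number of 1-bits; neighbours flip one bit at a time.
flip = lambda bits: [bits[:i] + [1 - bits[i]] + bits[i+1:] for i in range(len(bits))]
print(hill_climb([0, 1, 0], score=sum, neighbours=flip))  # ([1, 1, 1], 3)
```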
Beam Search Algorithm
A heuristic search algorithm that examines a graph by extending the most
promising node in a limited set is known as beam search.
Beam search is a heuristic search technique that always expands the best W
nodes at each level. It progresses level by level and moves downwards only
from the best W nodes at each level. Beam search builds its search tree using
breadth-first search: at each level of the tree it generates all the successors
of the current level's states, but it evaluates and keeps only the best W of
them; the other nodes are discarded.
The best nodes are chosen using the heuristic cost associated with each node.
The width of the beam search is denoted by W. If B is the branching factor, at
every depth there will be W × B nodes under consideration, but only W will be
chosen. More states are pruned when the beam width is reduced. When W = 1,
the search becomes hill climbing, in which the best node is always chosen
from the successor nodes. If the beam width is unbounded, no states are
pruned and beam search is identical to breadth-first search.
The beam width bounds the amount of memory needed to complete the
search, but at the cost of completeness and optimality (it may not find the
best solution), because the desired state could have been pruned.
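A minimal sketch of the level-by-level pruning just described (illustrative only; `beam_search` and its parameters are names introduced here, and scores are maximised to match the clause-count heuristic used in this experiment):

```python
def beam_search(start, score, neighbours, width, goal_score, max_levels=100):
    """Level-by-level beam search: expand every state in the beam, then
    keep only the `width` best successors for the next level."""
    beam = [start]
    for _ in range(max_levels):
        if any(score(s) == goal_score for s in beam):
            return next(s for s in beam if score(s) == goal_score)
        successors = [nb for s in beam for nb in neighbours(s)]
        if not successors:
            return max(beam, key=score)   # nothing left to expand
        successors.sort(key=score, reverse=True)
        beam = successors[:width]         # prune everything below the top W
    return max(beam, key=score)

# Toy example: maximise the number of 1-bits; neighbours flip one bit.
flip = lambda bits: [bits[:i] + [1 - bits[i]] + bits[i+1:] for i in range(len(bits))]
print(beam_search([0, 0, 0, 0], sum, flip, width=3, goal_score=4))  # [1, 1, 1, 1]
```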
Variable neighborhood search (VNS),
proposed by Mladenović & Hansen in 1997,[2] is a metaheuristic method for
solving a set of combinatorial optimization and global optimization problems. It
explores distant neighborhoods of the current incumbent solution, and moves
from there to a new one if and only if an improvement was made. The local
search method is applied repeatedly to get from solutions in the neighborhood
to local optima. VNS was designed for approximating solutions of discrete and
continuous optimization problems, and accordingly it is applied to linear
programming, integer programming, mixed-integer programming, and
nonlinear programming problems, among others.
VNS is built upon the following perceptions:
1. A local minimum with respect to one neighborhood structure is not
necessarily a local minimum for another neighborhood structure.
2. A global minimum is a local minimum with respect to all possible
neighborhood structures.
3. For many problems, local minima with respect to one or several
neighborhoods are relatively close to each other.
Unlike many other metaheuristics, the basic schemes of VNS and its extensions
are simple and require few, and sometimes no, parameters. Therefore, in
addition to providing very good solutions, often in simpler ways than other
methods, VNS gives insight into the reasons for such performance, which, in
turn, can lead to more efficient and sophisticated implementations.
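Perceptions 1-3 translate into the following Variable Neighbourhood Descent sketch (the descent variant used in this experiment; `vnd`, `flip`, and the toy score function are illustrative names, and maximisation is used to match the clause-count heuristic):

```python
from itertools import combinations

def vnd(state, score, neighbourhoods):
    """VND (maximising): local search with the first neighbourhood; on a local
    maximum, switch to the next (denser) one. Any improvement restarts from
    the first neighbourhood."""
    k = 0
    while k < len(neighbourhoods):
        best = max(neighbourhoods[k](state), key=score, default=state)
        if score(best) > score(state):
            state = best
            k = 0          # improvement found: back to the first neighbourhood
        else:
            k += 1         # local maximum for this neighbourhood: try the next
    return state

def flip(bits, idxs):
    out = list(bits)
    for i in idxs:
        out[i] = 1 - out[i]
    return out

n1 = lambda b: [flip(b, [i]) for i in range(len(b))]                      # 1-bit moves
n2 = lambda b: [flip(b, pair) for pair in combinations(range(len(b)), 2)]  # 2-bit moves

# Toy landscape: [1,0,0] is a local maximum for 1-bit flips, but a
# 2-bit flip reaches the global maximum [1,1,1].
score = lambda b: {(1, 0, 0): 2, (1, 1, 1): 3}.get(tuple(b), 1)
print(vnd([1, 0, 0], score, [n1, n2]))  # [1, 1, 1]
```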
Tabu search
is a metaheuristic search method employing local search methods used for
mathematical optimization. It was created by Fred W. Glover in 1986 and
formalized in 1989.
Local (neighborhood) searches take a potential solution to a problem and
check its immediate neighbors (that is, solutions that are similar except for
very few minor details) in the hope of finding an improved solution. Local
search methods have a tendency to become stuck in suboptimal regions or on
plateaus where many solutions are equally fit.
Tabu search enhances the performance of local search by relaxing its basic
rule. First, at each step worsening moves can be accepted if no improving
move is available (like when the search is stuck at a strict local minimum). In
addition, prohibitions (henceforth the term tabu) are introduced to discourage
the search from coming back to previously-visited solutions.
The implementation of tabu search uses memory structures that describe the
visited solutions or user-provided sets of rules. If a potential solution has been
previously visited within a certain short-term period or if it has violated a rule,
it is marked as "tabu" (forbidden) so that the algorithm does not consider that
possibility repeatedly.
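The short-term memory just described can be sketched as follows (a generic maximising version with one-bit moves; `tabu_search` and the move encoding are assumptions for illustration, not the exact two-bit implementation used later in this report):

```python
def tabu_search(state, score, neighbours, tenure=4, iterations=50):
    """Tabu search (maximising): always move to the best non-tabu neighbour,
    even a worsening one, and forbid recently used moves for `tenure` steps."""
    best, best_score = state, score(state)
    tabu = {}                              # move -> iterations it stays forbidden
    for _ in range(iterations):
        candidates = [(s, mv) for s, mv in neighbours(state) if mv not in tabu]
        if not candidates:
            break
        state, move = max(candidates, key=lambda c: score(c[0]))
        tabu = {m: t - 1 for m, t in tabu.items() if t > 1}   # age the memory
        tabu[move] = tenure                # forbid reversing this move for a while
        if score(state) > best_score:
            best, best_score = state, score(state)
    return best, best_score

# Neighbours flip one bit; the move identifier is the flipped index.
flips = lambda bits: [(bits[:i] + [1 - bits[i]] + bits[i+1:], i) for i in range(len(bits))]
print(tabu_search([0, 0, 0], sum, flips))  # ([1, 1, 1], 3)
```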
▪ Inputs for the no. of clauses (m), the total no. of variables (n), and the no. of
test cases or problems to be generated are taken from the user. The no. of
variables in each clause (k) is set to 3.
▪ Then, all the problems are generated using the same logic as in Part A.
▪ Then the assignValues function is called to randomly assign the value '1' or
'0' to every variable.
▪ The heuristic score is calculated as the number of clauses that evaluate to
true.
▪ Then, the search algorithms are called for every test case:
Search Algos :
▪ Hill climbing:
o We find the heuristic values of all the neighbours of the current state by
changing one variable at a time, and keep track of the state with the highest
heuristic score that is also greater than the current heuristic score.
o Then, if such a state is found, we choose the one with the best heuristic
score and recurse on it.
o If no neighbouring state has a value greater than the current heuristic
value, we terminate the function and return the local maximum.
▪ Beam Search:
o We explore all the neighbours of the current state by changing the value
of one variable at a time.
o We find the heuristic value of each neighbouring state.
o We then keep all those states whose heuristic value is greater than the
parent's heuristic value.
o This list is sorted in descending order of heuristic score, and only the top
beamWidth states from the sorted list are put into the queue.
o The same procedure is repeated for the new states in the queue,
terminating when we reach a goal state or the queue becomes empty.
▪ Variable Neighborhood Descent:
o This starts with the first neighbourhood function and tries to find a
maximum by generating neighbours one bit-change at a time, using the
concept of hill climbing.
o If a local maximum is reached, it switches to a denser neighbourhood
function and tries to find a maximum by changing 2 bits at a time.
o If the global maximum is still not found, it switches to the third
neighbourhood function and repeats the procedure, changing 3 bits at a
time, stopping when a local maximum is reached. (Only 3 neighbourhood
functions are used here.)
o If a global maximum is reached during any neighbourhood function, the
function terminates.
▪ Tabu Search:
o We start by exploring all the neighbours of the current state by changing
the value of two bits at a time.
o A tabu tenure of 4 is chosen for this.
o We find the heuristic value of the current state.
o Only those variables which have not been changed recently may be
changed.
o We take the state with the best heuristic value among all the neighbouring
states.
o The algorithm terminates once it reaches a goal state or after a certain
number of iterations.
▪ Two different performance measures are used to compare the efficiency of
the different search algorithms.
1. Max Best Score: the best heuristic score found for each problem by each
algorithm.
2. Accuracy Measure: the number of problems for which an algorithm was
able to find a solution.
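The heuristic score described above (the number of satisfied clauses) can be computed in isolation as follows; this is a standalone sketch of the same logic as the getHeuersticScore function in the code below, using the report's convention that uppercase letters denote negated literals and that the value dict covers both polarities:

```python
def heuristic_score(problem, values):
    """Number of clauses with at least one literal evaluating to 1."""
    return sum(any(values[lit] for lit in clause) for clause in problem)

# (a OR b OR ~c) AND (~a OR ~b OR c) under a = 1, b = 0, c = 1.
# Uppercase literals are negations and carry the complemented value.
values = {'a': 1, 'b': 0, 'c': 1, 'A': 0, 'B': 1, 'C': 0}
problem = [('a', 'b', 'C'), ('A', 'B', 'c')]
print(heuristic_score(problem, values))  # 2 -- both clauses are satisfied
```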
CODE:
from string import ascii_lowercase
import random
from itertools import combinations
import numpy as np
alphabet = list(ascii_lowercase)
def generateVariableList(n):
 positive_var = []
 for i in range(1, n + 1):
      x = i
      s = ""
      while x > 0:
          if x % 26 == 0:
              s = s + 'z'
              x = int(x / 26)
              x = x - 1
          else:
              s = alphabet[(x % 26) - 1] + s
              x = int(x / 26)
      positive_var.append(s)
 return positive_var
def getVariableKey(n):
 x = n
 s = ""
 while x > 0:
    if x % 26 == 0:
        s = s + 'z'
        x = int(x / 26)
        x = x - 1
    else:
        s = alphabet[(x % 26) - 1] + s
        x = int(x / 26)
 
 return s
def generateProblemSet(m, k, n,test_cases):
 positive_var = generateVariableList(n)
 negative_var = [c.upper() for c in positive_var]
 variables = positive_var + negative_var
 problems = []
 allCombs = list(combinations(variables, k))
 i = 0
 while i<test_cases:
      c = random.sample(allCombs, m)
      if c not in problems:
          i += 1
          problems.append(list(c))
 return variables, problems
def assignValues(variables, n):
 forPositive = list(np.random.choice(2,n))
 forNegative = [abs(1-i) for i in forPositive]
 values = forPositive + forNegative
 var_assign = dict(zip(variables, values))
 return var_assign
def getHeuersticScore(problem, values):
 count = 0
 for sub in problem:
      l = [values[val] for val in sub]
      count += any(l)
 return count
def hillClimbing(problem,values,noOfClauses,n):
 curr_score = getHeuersticScore(problem,values)
 if(curr_score == noOfClauses):
    return curr_score, True
 maxScore = 0
 maxConfiguration = {}
 for i in range(1,n+1):
    temp_values = values.copy()
    key = getVariableKey(i)
    temp_values[key] = abs(temp_values[key] - 1)
    temp_values[key.upper()] = abs(temp_values[key.upper()] - 1)
    new_score = getHeuersticScore(problem,temp_values)
    if new_score > maxScore:
        maxScore = new_score
        maxConfiguration = temp_values
 if maxScore == noOfClauses:
      return maxScore, True
 if maxScore <= curr_score:
      return curr_score,False
 else:
    return hillClimbing(problem,maxConfiguration,noOfClauses,n)
def beamSearch(problem,values,noOfClauses,n,beamWidth):
 stateQueue = []
 temp = values.copy()
 stateQueue.append(temp)
 nodes_visited = 0
 maxScore = 0
 while len(stateQueue) > 0:
    temp = stateQueue.pop(0).copy()
    nodes_visited += 1
    curr_score = getHeuersticScore(problem,temp)
    maxScore = max(maxScore,curr_score)
    if(curr_score == noOfClauses):
        return curr_score,True
    neighbours = list()
    for i in range(1,n+1):
      temp_values = temp.copy()
      key = getVariableKey(i)
      temp_values[key] = abs(temp_values[key] - 1)
      temp_values[key.upper()] = abs(temp_values[key.upper()] - 1)
      new_score = getHeuersticScore(problem,temp_values)
      if new_score > curr_score:
          z = {'score': new_score, 'state': temp_values}
          neighbours.append(z)
    neighbours.sort(key=lambda item: item.get("score"))
    sorted_neighbours = neighbours[::-1]
    best_neighbours = sorted_neighbours[:beamWidth]
    for neigh in best_neighbours:
      stateQueue.append(neigh['state'])
 return maxScore,False
def variableNeighbourhoodDescent(problem,values,noOfClauses,n,neighbourhoodFuncValue):
 curr_score = getHeuersticScore(problem,values)
 if(curr_score == noOfClauses):
    return curr_score,True
 maxScore = 0
 maxConfiguration = {}
 if neighbourhoodFuncValue == 1:
 
      for i in range(1,n+1):
          temp_values = values.copy()
          key = getVariableKey(i)
          temp_values[key] = abs(temp_values[key] - 1)
          temp_values[key.upper()] = abs(temp_values[key.upper()] - 1)
          new_score = getHeuersticScore(problem,temp_values)
          if new_score > maxScore:
              maxScore = new_score
              maxConfiguration = temp_values
 elif neighbourhoodFuncValue == 2:
    for i in range(1,n):
        for j in range(i+1,n+1):
            temp_values = values.copy()
            key1 = getVariableKey(i)
            key2 = getVariableKey(j)
            temp_values[key1] = abs(temp_values[key1] - 1)
            temp_values[key1.upper()] = abs(temp_values[key1.upper()] - 1)
            temp_values[key2] = abs(temp_values[key2] - 1)
            temp_values[key2.upper()] = abs(temp_values[key2.upper()] - 1)
            new_score = getHeuersticScore(problem,temp_values)
            if new_score > maxScore:
                maxScore = new_score
                maxConfiguration = temp_values
 else :
      for i in range(1,n-1):
          for j in range(i+1,n):
              for k in range(j+1,n+1):
                  temp_values = values.copy()
                  key1 = getVariableKey(i)
                  key2 = getVariableKey(j)
                  key3 = getVariableKey(k)
                  temp_values[key1] = abs(temp_values[key1] - 1)
                  temp_values[key1.upper()] = abs(temp_values[key1.upper()] - 1)
                  temp_values[key2] = abs(temp_values[key2] - 1)
                  temp_values[key2.upper()] = abs(temp_values[key2.upper()] - 1)
                  temp_values[key3] = abs(temp_values[key3] - 1)
                  temp_values[key3.upper()] = abs(temp_values[key3.upper()] - 1)
                  new_score = getHeuersticScore(problem,temp_values)
                  if new_score > maxScore:
                      maxScore = new_score
                      maxConfiguration = temp_values
 if maxScore == noOfClauses:
      return maxScore, True
 if maxScore <= curr_score:
      if(neighbourhoodFuncValue < 3):
          # No improving neighbour: switch to the next (denser) neighbourhood,
          # continuing from the current assignment rather than from a worse one.
          return variableNeighbourhoodDescent(problem, values, noOfClauses, n, neighbourhoodFuncValue + 1)
      else:
          return curr_score, False
 else:
      return variableNeighbourhoodDescent(problem, maxConfiguration, noOfClauses, n, neighbourhoodFuncValue)
def tabu(problem,values,noOfClauses,n,tabuTenure):
 curr_score = getHeuersticScore(problem,values)
 if(curr_score == noOfClauses):
    return curr_score,True
 maxScore = curr_score
 maxConfiguration = values.copy()
 nextConfiguration = values.copy()
 # tabu_memory[i-1] holds the remaining tabu tenure for variable i
 tabu_memory = [0] * n
 time = 0
 while time <= 2000:
    if maxScore == noOfClauses:
        return maxScore, True
    new_i = 0
    new_j = 0
    maxLocalScore = 0
    for i in range(1, n):
        if tabu_memory[i-1] != 0:
            continue
        for j in range(i+1, n+1):
            if tabu_memory[j-1] != 0:
                continue
            temp_values = nextConfiguration.copy()
            key1 = getVariableKey(i)
            key2 = getVariableKey(j)
            temp_values[key1] = abs(temp_values[key1] - 1)
            temp_values[key1.upper()] = abs(temp_values[key1.upper()] - 1)
            temp_values[key2] = abs(temp_values[key2] - 1)
            temp_values[key2.upper()] = abs(temp_values[key2.upper()] - 1)
            new_score = getHeuersticScore(problem,temp_values)
            if new_score > maxLocalScore:
                maxLocalScore = new_score
                nextConfiguration = temp_values.copy()
                new_i = i
                new_j = j
            if new_score == noOfClauses:
                return new_score, True
    if maxLocalScore > maxScore:
        maxScore = maxLocalScore
        maxConfiguration = nextConfiguration.copy()
    # Age the tabu list, then mark the pair of variables just flipped as tabu.
    for x in range(n):
        if tabu_memory[x] != 0:
            tabu_memory[x] -= 1
    if new_i != 0 or new_j != 0:
        tabu_memory[new_i-1] = tabuTenure
        tabu_memory[new_j-1] = tabuTenure
    time += 1
 return max(maxScore, curr_score), False
def printProblem(problem,m,k):
 print("{",end = " ")
 for j, clause in enumerate(problem):
    print("(",end = " ")
    for i,variable in enumerate(clause):
        if variable.isupper():
          print(f"~{variable.lower()}",end = " ")
        else:
          print(variable,end = " ")
        if i != k-1:
          print("V",end = " ")
    print(")",end = " ")
    if j != m-1:  
        print("^",end = " ")
 print(" }")
 print()
 print()
 
if __name__ == '__main__':
 m = int(input("Enter the number of clauses(m) : "))
 print()
 n = int(input("Enter the total number of variables(n) : "))
 print()
 print("The number of variables in a clause(k) : 3")
 k = 3
 print()
 testcases = int(input("Enter the no of testcases : "))
 variables, problems = generateProblemSet(m, k, n,testcases)
 values = assignValues(variables, n)
 hillClimbTotal = 0
 hillClimbFoundSol = 0
 beam3Total = 0
 beam3FoundSol = 0
 beam4Total = 0
 beam4FoundSol = 0
 vndTotal = 0
 vndFoundSol = 0
 tabuTotal = 0
 tabuFoundSol = 0
 for i,problem in enumerate(problems):
    print("\n\nCurrent Problem : \n")
    printProblem(problem,m,k)
    print("-------ANALYSIS-------")
    print()
    print("\nHill Climbing :")
    score, foundSolution = hillClimbing(problem,values,m,n)
    print(f"Best Heuristic Score : {score} \nWas Solution found ? : {foundSolution}")
    hillClimbTotal += score
    if(foundSolution):
        hillClimbFoundSol += 1
   
    print("\nBeam Search for Beam Width 3 : ")
    score, foundSolution = (beamSearch(problem,values,m,n,3))
    print(f"Best Heuristic Score : {score} \nWas Solution found ? : {foundSolution}")
    beam3Total += score
    if(foundSolution):
         beam3FoundSol += 1
    print("\nBeam Search for Beam Width 4 : ")
    score, foundSolution = (beamSearch(problem,values,m,n,4))
    print(f"Best Heuristic Score : {score} \nWas Solution found ? : {foundSolution}")
    beam4Total += score
    if(foundSolution):
        beam4FoundSol += 1
 
    print("\nVariable Neighbourhood Descent : ")
    score, foundSolution = variableNeighbourhoodDescent(problem,values,m,n,1)
    print(f"Best Heuristic Score : {score} \nWas Solution found ? : {foundSolution}")
    vndTotal += score
    if(foundSolution):
        vndFoundSol += 1
   
    print("\nTabu Search : ")
    score, foundSolution = tabu(problem,values,m,n,4)
    print(f"Best Heuristic Score : {score} \nWas Solution found ? : {foundSolution}")
    tabuTotal += score
    if(foundSolution):
        tabuFoundSol += 1
   
 print()
 print("-------FINAL ANALYSIS OF ALL TESTCASES-------")
 print()
 print("Hill Climbing :")
 print(f"Average Heuristic Score : {hillClimbTotal/testcases} ")
 print(f"Solution was found for : {hillClimbFoundSol} out of {testcases} TestCases")
 print(f"So the accuracy measure is : {hillClimbFoundSol * 100 /testcases} %")
 print()
 print("Beam Search for Beam Width 3 : ")
 print(f"Average Heuristic Score : {beam3Total/testcases} ")
 print(f"Solution was found for : {beam3FoundSol} out of {testcases} TestCases")
 print(f"So the accuracy measure is : {beam3FoundSol* 100/testcases} %")
 print()
 print("Beam Search for Beam Width 4 : ")
 print(f"Average Heuristic Score : {beam4Total/testcases} ")
 print(f"Solution was found for : {beam4FoundSol} out of {testcases} TestCases")
 print(f"So the accuracy measure is : {beam4FoundSol* 100/testcases} %")
 print()
 print("Variable Neighbourhood Descent : ")
 print(f"Average Heuristic Score : {vndTotal/testcases} ")
 print(f"Solution was found for : {vndFoundSol} out of {testcases} TestCases")
 print(f"So the accuracy measure is : {vndFoundSol* 100/testcases} %")
 print()
 
 print("Tabu Search : ")
 print(f"Average Heuristic Score : {tabuTotal/testcases} ")
 print(f"Solution was found for : {tabuFoundSol} out of {testcases} TestCases")
 print(f"So the accuracy measure is : {tabuFoundSol* 100/testcases} %")
 print()    

OUTPUT:
CONCLUSION:
• As the no. of clauses increases, the variation in the performance of the
algorithms also increases.
• As the no. of variables increases, the execution time of the algorithms
increases vastly.
• Hill Climbing algorithm can get stuck at local maxima whereas other
algorithms which are a modification to hill climbing try to escape it.
• Since all algorithms except Hill Climbing try to escape local maxima in
search of the global maximum, their accuracies are better than Hill Climbing's.
• In search of a global maximum, all algorithms except Hill Climbing also
achieve a better Avg Best Heuristic Score.
• Execution time for hill climbing is far less than the execution time for other
algorithms.
• Variable neighborhood descent algorithm, which is an improvement to hill
climbing algorithm, has a greater accuracy than hill climbing.
• Variable neighborhood descent algorithm switches to a denser neighborhood
function when it gets stuck in a local maxima.
• Beam Search requires more space since it has to maintain a Queue.
• The accuracy of Beam Search with width 4 will be higher than that of Beam
Search with width 3, since with width 4 we are able to examine more
neighboring states.
• Space Requirement in tabu search and hill climbing is quite less.
Out of all the algorithms, only Tabu Search can move to a neighbor with a
lower heuristic value; all the other algorithms only select neighbors with a
higher heuristic score than the current one. Thus I have generated different
problems for k-SAT and implemented search algorithms for 3-SAT. In doing
so, I have learnt about the various search algorithms and understood the
differences between them.
