
Practical File

of
Artificial Intelligence

Submitted By: Submitted To:


Priyadarshini Dr. Ankit Gupta
LCO17373 Assistant Professor
CSE 3rd Year (5th Sem)
Index
S.No.    Title    Date of Submission    Signature
1.
2.
3.
4.
5.

6.

7.

8.

9.

10.

11.

12.

13.

14.

15.

16.
17.
18.
Practical – 1
Objective: To solve the N-Queens problem using a program and to measure
the time taken for various values of N.

Theory: The N-Queens problem is the problem of placing N chess queens on an N×N
chessboard so that no two queens attack each other. The idea is to place queens one
by one in different columns, starting from the leftmost column. When we place a
queen in a column, we check for clashes with already placed queens. In the current
column, if we find a row for which there is no clash, we mark this row and column
as part of the solution. If we do not find such a row due to clashes then we backtrack
and return false.
 Backtracking Algorithm
In Backtracking algorithm, we place a queen in a column one by one, starting from the left.
When we place a queen in a column, we check for its conflicts with the already placed
queens. If in the current column we find a row for which there is no clash, we consider this
row and column as part of the solution. If we do not get any such row with no
conflicts for a particular column, then we backtrack.

 Linear Time Serial Algorithm


The algorithm consists of two steps, the initial step and the final step. During the initial
step, queens are placed in successive columns from the left to the right. In each column,
the position of a queen is chosen at random. If there are no conflicts with queens already
placed, the queen is placed on the board. Otherwise, a new position is generated. This
process is repeated, until no queens can be added without introducing conflicts. The rest of
the queens are placed in empty columns at the right. At this point, the initial step ends.
During the final step, queens with conflicts placed in the columns at the right are swapped
randomly with the rest of the queens until there are no conflicts left.

 Naive Algorithm
In the Naive algorithm, we generate all possible arrangements of queens on the chessboard and
then filter them on the basis of the given constraints. The arrangements in
which no two queens are in an attacking position are accepted as solutions, and the arrangements
in which an attacking position exists are rejected.
ALGORITHM:

Nqueen(k, n)
{
    for i = 1 to n
    {
        if place(k, i)
        {
            x[k] = i;

            if (k = n)
                write x[1:n]
            else
                Nqueen(k+1, n)
        }
    }
}

place(k, i)
{
    for j = 1 to k-1
        if ((x[j] = i) or (abs(x[j] - i) = abs(j - k)))
            then return false
    return true
}

In this algorithm: -

 n is a user input that denotes the number of queens that are to be placed on the n*n
chessboard.

 x[100] is the array which stores the solution vector such that the index of the array
represents the queen number and the value at that index gives the column number where
the queen is to be placed.

 board[100][100] is a 2-D matrix which stores the array x[] values in form of a chessboard.

 place(int k, int i) is the function to check whether the cell (k,i) is under attack from any other
queen or not.
- Here, we check whether there is any other queen in the same column or on the same diagonal.
- If a queen is found at any of the above cells, then the function returns false(0); else it
returns true(1).

 nqueen(int k, int n) is the main function where we implement the backtracking algorithm.
- The initial call to this function is nqueen(1,n).
- if (place(k,i)) checks whether the queen can be placed in cell (k,i) or not.
- If the function returns true, then queen k can be placed in cell (k,i) and the solution
is recorded as x[k]=i.
- If k=n, that means all the queens have been placed and the print() function is called to
print the solution.
- Otherwise, the position for the next queen is found by using recursive call.

Program:
#include <iostream>
#include <cstdlib>
using namespace std;

int a[30];                 // a[k] = column of the queen placed in row k
int solution_count = 0;    // renamed from "count" to avoid clashing with std::count

// Returns 1 if the queen in row pos does not clash with the earlier queens
int place(int pos) {
    for (int i = 1; i < pos; i++) {
        if ((a[i] == a[pos]) || (abs(a[i] - a[pos]) == abs(i - pos)))
            return 0;
    }
    return 1;
}

void print_sol(int n) {
    solution_count++;      // only count the solutions; printing the board is omitted
}

void queen(int n) {
    int k = 1;
    a[k] = 0;
    while (k != 0) {
        a[k] = a[k] + 1;
        while ((a[k] <= n) && !place(k)) {
            a[k]++;
        }
        if (a[k] <= n) {
            if (k == n) {          // all n queens placed: one more solution
                print_sol(n);
            } else {               // move on to the next row
                k++;
                a[k] = 0;
            }
        } else {                   // no valid column left in this row, backtrack
            k--;
        }
    }
}

int main() {
    int n;
    cout << "Enter the number of Queens (4 or more)\n";
    cin >> n;
    queen(n);
    cout << "\nTotal solutions = " << solution_count << endl;
    return 0;
}
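The program above only counts the solutions; it does not itself record the execution times shown in the table below. A minimal sketch of how a run could be timed with the standard <chrono> library is given here; the wrapper function timed_run and its output format are illustrative assumptions, not part of the original program.

#include <chrono>
#include <iostream>

// Illustrative timing wrapper: assumes queen(n) and solution_count from the program above.
void timed_run(int n) {
    auto start = std::chrono::steady_clock::now();
    queen(n);                                        // run the backtracking search
    auto stop = std::chrono::steady_clock::now();
    std::chrono::duration<double> elapsed = stop - start;
    std::cout << "N = " << n
              << ", solutions = " << solution_count
              << ", time = " << elapsed.count() << " seconds\n";
}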

OUTPUT:

No. of queens    No. of solutions    Execution Time (seconds)
2                0                   0.1962
4                2                   0.2456
5                10                  0.4944
6                4                   0.361
7                40                  0.4162
8                92                  0.5169
9                352                 0.6533
10               724                 1.204
11               2680                1.512
12               14200               1.079
13               73712               3.394
14               365596              18.3
15               2279184             127.2
Practical – 2
Objective: To represent the Breadth-First Search (BFS) using linked lists.

THEORY:
BFS is a traversing algorithm where you should start traversing from a selected node
(source or starting node) and traverse the graph layer wise thus exploring the neighbour
nodes (nodes which are directly connected to source node). You must then move towards
the next-level neighbour nodes.
As the name BFS suggests, you are required to traverse the graph breadthwise as follows:

1. First move horizontally and visit all the nodes of the current layer
2. Move to the next layer

The time complexity of BFS is O(V + E), where V is the number of nodes and E is the
number of edges.

ALGORITHM:

BFS (G, s) //Where G is the graph and s is the source node


let Q be queue.
Q.enqueue( s ) //Inserting s in queue until all its neighbour vertices are marked.

mark s as visited.

while ( Q is not empty)


{
//Removing that vertex from queue, whose neighbour will be visited now
v = Q.dequeue( )

//processing all the neighbours of v


for all neighbours w of v in Graph G
if w is not visited
Q.enqueue( w ) //Stores w in Q to further visit its neighbour
mark w as visited.
}
In this algorithm: -

 s is the source node.

 G is the Graph, which contains set of edges and vertices.

 Q is the Queue in which nodes are processed and after that they are marked
as visited and are dequeued.

 v is the current vertex, whose neighbours will be processed when it is


dequeued.

Program:
#include <stdio.h>

int a[20][20], q[20], visited[20], n, i, j, f = 0, r = -1;

/* Breadth-first traversal using the array q as a queue (f = front, r = rear) */
void bfs(int v) {
    for (i = 1; i <= n; i++)
        if (a[v][i] && !visited[i]) {
            visited[i] = 1;       /* mark when enqueued so a node is added only once */
            q[++r] = i;
        }
    if (f <= r)
        bfs(q[f++]);              /* dequeue the next vertex and continue the traversal */
}

int main() {
    int v;
    printf("\n Enter the number of vertices:");
    scanf("%d", &n);

    for (i = 1; i <= n; i++) {
        q[i] = 0;
        visited[i] = 0;
    }

    printf("\n Enter graph data in matrix form:\n");
    for (i = 1; i <= n; i++)
        for (j = 1; j <= n; j++)
            scanf("%d", &a[i][j]);

    printf("\n Enter the starting vertex:");
    scanf("%d", &v);
    visited[v] = 1;               /* the source vertex is visited first */
    bfs(v);

    printf("\n The nodes which are reachable are:\n");
    for (i = 1; i <= n; i++) {
        if (visited[i])
            printf("%d\t", i);
        else {
            printf("\n BFS is not possible. Not all nodes are reachable");
            break;
        }
    }
    return 0;
}

OUTPUT:

[Screenshot of the program run on a sample graph with vertices 1, 2, 3 and 4]
Practical – 3
Objective: To represent the Depth-First Search (DFS) using linked lists.

Theory: The DFS algorithm is a recursive algorithm that uses the idea of
backtracking. It involves exhaustive searches of all the nodes by going ahead, if
possible, else by backtracking.
This recursive nature of DFS can be implemented using stacks. The basic idea is as
follows:
Pick a starting node and push all its adjacent nodes into a stack.
Pop a node from stack to select the next node to visit and push all its adjacent nodes
into a stack.
Repeat this process until the stack is empty. However, ensure that the nodes that are
visited are marked. This will prevent you from visiting the same node more than
once. If you do not mark the nodes that are visited and you visit the same node more
than once, you may end up in an infinite loop.
Algorithm:
DFS-iterative (G, s): //Where G is graph and s is source vertex
let S be stack
S.push( s ) //Inserting s in stack
mark s as visited.
while ( S is not empty):
//Pop a vertex from stack to visit next
v = S.top( )
S.pop( )
//Push all the neighbours of v in stack that are not visited
for all neighbours w of v in Graph G:
if w is not visited :
S.push( w )
mark w as visited
DFS-recursive(G, s):
mark s as visited
for all neighbours w of s in Graph G:
if w is not visited:
DFS-recursive(G, w)

Program:
#include <stdio.h>

void DFS(int);
int G[50][50], visited[50], n;

int main() {
    int i, j;
    printf("Enter number of vertices:");
    scanf("%d", &n);

    /* read the adjacency matrix */
    printf("\nEnter adjacency matrix of the graph:");
    for (i = 0; i < n; i++)
        for (j = 0; j < n; j++)
            scanf("%d", &G[i][j]);

    /* visited is initialized to zero */
    for (i = 0; i < n; i++)
        visited[i] = 0;

    DFS(0);          /* start the traversal from vertex 0 */
    return 0;
}

/* recursive depth-first search */
void DFS(int i) {
    int j;
    printf("\n%d", i);
    visited[i] = 1;
    for (j = 0; j < n; j++)
        if (!visited[j] && G[i][j] == 1)
            DFS(j);
}

OUTPUT:

[Screenshot of the program run on a sample graph with vertices 1, 2, 3 and 4]
Practical – 4
Objective: To implement the Tic-Tac-Toe using MiniMax algorithm.

Theory: Tic-tac-toe is a 2-player combinatorial game in which the players X and O
take turns marking the cells of a 3×3 grid. Whoever places three of their marks in a
vertical, horizontal, or diagonal line wins the game. The game is a draw
whenever nobody wins.
Rules of the Game: -
● One of the players chooses ‘O’ and the other ‘X’ to mark their respective cells.
● The game starts with one of the players and ends when one of the players has one whole
row/ column/ diagonal filled with his/her respective character (‘O’ or ‘X’).
● If no one wins, then the game is said to be a draw.
In the MiniMax algorithm used in the program below, X plays as the maximizer and O as the
minimizer: a position won by X is scored +1, a position won by O is scored -1, and a draw is
scored 0. After every move, the program searches the game tree and reports the optimal next
move for the player whose turn it is.

Program:

#include <stdio.h>
#include <algorithm>
#include <string>
using namespace std;

#define inf (1<<20)


int posmax, posmin;
char board[15];

void print_board()
{
int i;
for (i = 1; i <= 9; i++)
{
printf("%c ",board[i]);
if (i % 3 == 0)
printf("\n");
}
printf("\n");
}

int check_win(char board[])


{
if ((board[1] == 'X' && board[2] == 'X' && board[3] == 'X') ||
(board[4] == 'X' && board[5] == 'X' && board[6] == 'X') ||
(board[7] == 'X' && board[8] == 'X' && board[9] == 'X') ||
(board[1] == 'X' && board[4] == 'X' && board[7] == 'X') ||
(board[2] == 'X' && board[5] == 'X' && board[8] == 'X') ||
(board[3] == 'X' && board[6] == 'X' && board[9] == 'X') ||
(board[1] == 'X' && board[5] == 'X' && board[9] == 'X') ||
(board[3] == 'X' && board[5] == 'X' && board[7] == 'X'))
{
return 1;
}
else if((board[1] == 'O' && board[2] == 'O' && board[3] == 'O') ||
(board[4] == 'O' && board[5] == 'O' && board[6] == 'O') ||
(board[7] == 'O' && board[8] == 'O' && board[9] == 'O') ||
(board[1] == 'O' && board[4] == 'O' && board[7] == 'O') ||
(board[2] == 'O' && board[5] == 'O' && board[8] == 'O') ||
(board[3] == 'O' && board[6] == 'O' && board[9] == 'O') ||
(board[1] == 'O' && board[5] == 'O' && board[9] == 'O') ||
(board[3] == 'O' && board[5] == 'O' && board[7] == 'O'))
{
return -1;
}
else return 0;
}

int check_draw(char board[])


{
if ((check_win(board) == 0) && (board[1] != '_') && (board[2] != '_') &&
(board[3] != '_') && (board[4] != '_') && (board[5] != '_') &&
(board[6] != '_') && (board[7] != '_') && (board[8] != '_') &&
(board[9] != '_'))
{
return 1;
}
else return 0;
}

int minimax(int player, char board[], int n)


{
int i, res, j;

int max = -inf;


int min = inf;
int chk = check_win(board);
if (chk == 1)
return 1;
else if (chk == (-1))
return -1;
else if (check_draw(board))
return 0;

for (i = 1; i <= 9; i++)


{
if(board[i] == '_')
{
if(player == 2)
{
board[i] = 'O';
res = minimax(1, board, n + 1);

board[i] = '_';
if(res < min)
{
min = res;
if (n == 0)
posmin = i;
}
}
else if (player == 1)
{
board[i] = 'X';
res = minimax(2, board, n + 1);
board[i] = '_';
if (res > max)
{
max = res;
if (n == 0)
posmax = i;
}
}
}
}

if (player == 1)
return max;
else return min;
}

// 1 is X, 2 is O
int main()
{
int i, j, input, opt;

for(i = 1; i <= 9; i++)


board[i] = '_';

printf("Board:\n");
print_board();
for(i = 1; i <= 9; i++)
{
if (i % 2 == 0)
printf("Player O give input:\n");
else
printf("Player X give input:\n");

scanf("%d", &input);
if (i % 2 != 0)
board[input] = 'X';
else
board[input] = 'O';

printf("Board:\n");
print_board();

int chk = check_win(board);


if (chk == 1)
{
printf("Player X wins!\n");
break;
}
else if (chk == -1)
{
printf("Player O wins!\n");
break;
}
else if ((chk == 0) && (i != 9))
{
posmax = -1;
posmin = -1;
if(i % 2 == 0)
{
opt = minimax(1, board, 0);
printf("Optimal move for player X is %d\n", posmax);
}
else
{
opt = minimax(2, board, 0);
printf("Optimal move for player O is %d\n", posmin);
}
}
else
printf("The game is tied!\n");
}
return 0;
}
OUTPUT:
Practical – 5
Objective: To learn about Prolog.
There are only three basic constructs in Prolog: facts, rules, and queries. A collection of facts and
rules is called a knowledge base and Prolog programming is all about writing knowledge bases.

KNOWLEDGE BASE 1 (KB 1)


Knowledge Base 1 (KB1) is simply a collection of facts. Facts are used to state things that are
unconditionally true of some situation of interest. For example, we can state that Mia, Jody, and
Yolanda are women, that Jody plays air guitar, and that a party is taking place, using the following
five facts:
woman(mia).
woman(jody).
woman(yolanda).
playsAirGuitar(jody).
party.

OUTPUT:
KNOWLEDGE BASE 2 (KB 2)
FACT:
happy(yolanda).
listens2Music(mia).

RULES:
listens2Music(yolanda):- happy(yolanda).
playsAirGuitar(mia):- listens2Music(mia).
playsAirGuitar(yolanda):- listens2Music(yolanda).

OUTPUT:

There are two facts in KB2: listens2Music(mia) and happy(yolanda). The last three items it
contains are rules. Rules state information that is conditionally true of the situation of interest.
For example, the first rule says that Yolanda listens to music if she is happy. More generally, the
:- is read as “if”, or “is implied by”. The part on the left hand side of the :- is called the head of the
rule, the part on the right hand side is called the body. So in general rules say: if the body of the
rule is true, then the head of the rule is true too.
KNOWLEDGE BASE 3 (KB 3)
FACT:
happy(vincent).
listens2Music(butch).
RULE:
playsAirGuitar(vincent):- listens2Music(vincent), happy(vincent).
playsAirGuitar(butch):- happy(butch).
playsAirGuitar(butch):- listens2Music(butch).
Similar to KB2, KB3 contains 2 facts and 3 rules. The first rule has two goals in its body, which
means that the head is true only if both goals are true; this is how logical conjunction is
expressed in Prolog.
playsAirGuitar(butch):- happy(butch).
playsAirGuitar(butch):- listens2Music(butch).

This is a way of stating that Butch plays air guitar either if he is happy or if he listens to music.
There is another way of representing logical disjunction in Prolog: we could replace the pair of
rules given above with the single rule.

playsAirGuitar(butch):- happy(butch); listens2Music(butch).


KNOWLEDGE BASE 4 (KB4):

woman(mia).
woman(jody).
woman(yolanda).

loves(vincent,mia).
loves(marsellus,mia).
loves(pumpkin,honey_bunny).
loves(honey_bunny,pumpkin).

In this knowledge base we have only facts and no rules. Queries against it use Prolog variables
(words beginning with a capital letter, such as X); that is why no capital letter has been used in
any fact or rule so far.

For Example:
?- woman(X).

Another Example:
?- loves(marsellus,X), woman(X).
We have studied that the comma means logical AND. So this query asks: is there any individual X such that
Marsellus loves X and X is a woman? Looking at KB4, mia is a woman and Marsellus loves mia,
so Prolog answers X = mia.
KNOWLEDGE BASE 5:
Knowledge Base 5 (KB5) contains variables in Knowledge Base.
loves(vincent,mia).
loves(marsellus,mia).
loves(pumpkin,honey_bunny).
loves(honey_bunny,pumpkin).
jealous(X,Y):- loves(X,Z), loves(Y,Z).
 KB5 contains four facts about the loves relation and one rule.
 It defines the concept of jealousy. It says that an individual X will be jealous of an
individual Y if there is some individual Z that X loves, and Y loves that same individual Z
too.
OUTPUT:
ATOM: -
An atom is either:

 A string of characters made up of upper-case letters, lower-case letters, digits, and the
underscore character, that begins with a lower-case letter. For example: butch,
big_kahuna_burger, listens2Music and playsAirGuitar.

 An arbitrary sequence of characters enclosed in single quotes. For example 'Vincent',
'The Gimp', 'Five_Dollar_Shake', '&^%&#@$ &*', and ' '. The sequence of characters
between the single quotes is called the atom name.

 A string of special characters. Here are some examples: @= and ====> and ; and :- are all
atoms. Some of these atoms, such as ; and :- have a pre-defined meaning.

NUMBERS: -
 Integers (that is: …, -2, -1, 0, 1, 2, 3,…) are useful for such tasks as counting the elements
of a list.
 Most Prolog implementations do support floating point numbers or floats (that is,
representations of real numbers such as 1657.3087 or π ).

VARIABLES: -
A variable is a string of upper-case letters, lower-case letters, digits and
underscore characters that starts either with an upper-case letter or with an
underscore. For example, X , Y , Variable , _tag , X_526 , List , List24 ,
_head, Tail , _input and Output are all Prolog variables.

COMPLEX TERMS: -
Complex terms are often called structures. Complex terms are built out of a functor followed by
a sequence of arguments. The arguments are put in ordinary parentheses, separated by commas,
and placed after the functor. The number of arguments that a complex term has is called its arity.
Prolog would treat a two-place love and a three-place love as different predicates.
PRACTICAL-6

PROLOG PROJECT
This program implements a BMI CALCULATOR in Prolog.
Knowledge Base :-

start :-
nl, write('BMI CALCULATOR'), nl,
nl, write('Enter your WEIGHT in Kilo Grams : '),
read(W),
nl, write('Enter your HEIGHT in Meters : '),
read(H),
BMI is W/H/H,
nl, write('Your BODY MASS INDEX is '),
write(BMI), nl,
process(BMI).

process(BMI) :-
BMI < 18.5, nl, write('This is considered UNDERWEIGHT');
BMI > 18.5 , BMI < 24.9, nl, write('This is considered HEALTHY
WEIGHT');
BMI > 24.9, BMI < 30, nl, write('This is considered OVERWEIGHT');
BMI > 30, nl, write('This is considered OBESE').

OUTPUT :-
Practical - 7
Objective: Implementing various CLISP Programs.
Theory:
LISP stands for List Processing.
Various LISP environments:
• Emacs
• LispWorks
• Cygwin
• Eclipse with a plugin
LISP uses:
• Numeric literals
• Characters and strings
• Functions
• Macros
• Special operators

Lisp was created by John McCarthy in the late 1950s.

• The main idea was to study computability from a functional programming standpoint.
• CLISP now supports object-oriented, imperative and functional programming.

Syntax: -
<s-expression> ::= <atom> | <list>
<atom> ::= <number> | <identifier>
<list> ::= ( <s-expression>* )
<identifier> ::= a string of printable characters, not including parentheses

Fundamental Data Types:

• Atoms: An atom is an alphanumeric string, used either as a variable name or a data item.
• List: a sequence of elements where each element is either an atom or a list.
A list is a left parenthesis followed by zero or more s-expressions, followed by a right
parenthesis.
• An s-expression is an atom or a list.
• In 1965, McCarthy developed a function called "eval" used to evaluate other functions.
• Literals evaluate to themselves, e.g. if you type in 5, the interpreter will return 5.

Expressions that are calls to primitive functions are evaluated as follows:


• Each parameter is evaluated first.
• The primitive function is applied to parameter values.
• The result is displayed.

Functions: the fundamental building blocks of a LISP program.

Format:
(defun function_name (parameters)
    <expression(s)>)

defun is used to bind a name to a function definition.


Direct numeric literals do not need parentheses.
• 1.1
• 2
• 3/2

Some basic functions:

• CAR returns the head of a list.
• CDR returns the tail of a list.
• CONS inserts a new head into the list.
• EQ compares two atoms for equality.
• ATOM tests if its argument is an atom.
• (NULL S) tests if S is the empty list.
• (LISTP S) tests if S is a list.

LIST makes a list of its (evaluated) arguments.

• (LIST 'A '(B C) 'D) returns (A (B C) D)
• (LIST (CDR '(A B)) 'C) returns ((B) C)

APPEND concatenates two lists.

• (APPEND '(A B) '((X) Y)) returns (A B (X) Y)
• (car '(a b c)) will return a.
Note: not using ' will result in an error.

What is the output in the case of CDR?

• (A B C)
• ((X Y) Z)
• (X)
• (() ())
• ()

CONS takes two arguments:

• The first argument can be any s-expression.
• The second argument should be a list.
• The result is a new list whose CAR is the first argument and whose CDR is the second.
• Obtaining the second element of a list:
• (first (rest '(a b c d)))

'first' returns the first element of a list:
(first '(a b c d))
'rest' returns a list with the first element removed.
Quote (') stops expression evaluation:
> (+ 1 2 3)
> '(+ 1 2 3)
• To assign a value to a symbol, use setq, set or setf:
> (setq x 3.0)
• setq is a special form (with two arguments).
• The first argument is a symbol, which will not be evaluated.

Example of a program:
Area of a circle:
(defconstant my-pi 3.1416)   ; named my-pi so as not to clash with the built-in constant pi
(defun area-circle (rad)
(terpri)
(format t "Radius: ~5f" rad)
(format t "~%Area: ~10f" (* my-pi rad rad)))
(area-circle 10)

CLISP PROGRAMS:
List Operations in LISP:

Loop:
PRACTICAL-8

LISP PROJECT

Code:
(terpri)
(terpri)
(format t "BMI CALCULATOR ~%")

(terpri)
(format t "Enter your Name : ")
(setq name (read))

(terpri)
(format t "Enter your Age : ")
(setq age (read))

(terpri)
(format t "Enter your Weight in Kilo Grams : ")
(setq weight (read))

(terpri)
(format t "Enter your Height in Meters : ")
(setq height (read))

(setq bmi (/ (/ weight height) height))


(terpri)
(format t "BMI : ~a ~%" bmi)

(cond ( (< bmi 18.5)


(format t "This is considered UNDERWEIGHT"))

( (and (> bmi 18.5) (< bmi 24.9))


(format t "This is considered HEALTHY WEIGHT"))

( (and (> bmi 24.9) (< bmi 30))


(format t "This is considered OVERWEIGHT"))

( (> bmi 30)


(format t "This is considered OBESE"))
)
EXAMPLES:
Practical - 9
PROLOG UNIFICATION
When programming in Prolog, we spend a lot of time thinking about how variables and rules
"match" or "are assigned." There are actually two aspects to this. The first, "unification," regards
how terms are matched and variables assigned to make terms match. The second, "resolution," is
described in separate notes. Resolution is only used if rules are involved. You may notice in these
notes that no rules are involved since we are only talking about unification.

TERMS
Prolog has three kinds of terms:
 Constants like 42 (numbers) and franklin (atoms, i.e., lower-case words).
 Variables like X and Person (words that start with upper-case).
 Complex terms like parent(franklin, bo) and baz(X, quux(Y)).

Two terms unify if they can be matched. Two terms can be matched if:

1. they are the same term (obviously), or


2. they contain variables that can be unified so that the two terms without variables are the
same.

For example, suppose our knowledge base is:

woman(mia).
loves(vincent, angela).
loves(franklin, mia).

 mia and mia unify because they are the same.


 mia and X unify because X can be given the value mia so that the two terms (without
variables) are the same.
 woman(mia) and woman(X) unify because X can be set to mia which results in identical
terms.
 loves(X, mia) and loves(vincent, X) cannot unify because there is no single assignment for X
that makes the two terms identical (X would have to be both vincent and mia).
 loves(X, mia) and loves(franklin, X) also cannot unify, for the same reason.
ALGORITHM
 If term1 and term2 are constants, then term1 and term2 unify if and only if they are the
same atom, or the same number.
 If term1 is a variable and term2 is any type of term, then term1 and term2 unify, and term1
is instantiated to term2 . Similarly, if term2 is a variable and term1 is any type of term, then
term1 and term2 unify, and term2 is instantiated to term1 . (So if they are both variables,
they’re both instantiated to each other, and we say that they share values.)
 If term1 and term2 are complex terms, then they unify if and only if:
1) They have the same functor and arity, and
2) all their corresponding arguments unify, and
3) the variable instantiations are compatible. (For example, it is not possible to instantiate
variable X to mia when unifying one pair of arguments, and to instantiate X to vincent
when unifying another pair of arguments .)
 Two terms unify if and only if it follows from the previous three clauses that they unify.

We'll use the = predicate to test if two terms unify. Prolog will answer "Yes" if they do, as well as any sufficient
variable assignments to make the unification work.

EXAMPLES
Do these two terms unify?

?- mia = mia.

Yes, from rule 1.

Do these two terms unify?

?- mia = X.

Yes, from rule 2.

Do these two terms unify?

?- X = Y.
Yes, from rule 2.

Do these two terms unify?

?- k(s(g), Y) = k(X, t(k)).

Yes, from rule 3 and, in the recursion, from rule 2 in two cases (X set to s(g) and Y set to t(k)).

Do these two terms unify?

?- k(s(g), Y) = k(s(g, X), Y).

No, because rule 3 fails in the recursion (in which rule 3 is invoked again, and the arity of s(g) does not
match s(g, X)).

Examples using prolog:


1. Two identical atoms unify

2. Atoms don't unify if they aren't identical

3. The term 2*3+4 has principal functor +, therefore it unifies with X+Y, with X instantiated to
2*3 and Y instantiated to 4.

4. Unification can be used to find the elements X, Y, Z of a list.


Practical – 10

Objective: Implementation of Best First Search.

Best First Search: - Best-first search is a search algorithm which explores a graph by
expanding the most promising node chosen according to a specified rule. It selects the most
promising of the nodes we have generated so far. This can be achieved by applying appropriate
heuristic function to each of them.
Heuristic function:
f(n) = h(n)
where,
h(n) - estimated straight line distance from node n to goal
To implement the graph search procedure, we will need to use two lists of nodes:
OPEN - nodes that have been generated but have not been visited yet.
CLOSED - nodes that have already been visited.

Algorithm: -
Best-First-Search( Maze m )
Insert( m.StartNode )
Until PriorityQueue is empty
c <- PriorityQueue.DeleteMin
If c is the goal
Exit
Else
Foreach neighbor n of c
If n "Unvisited"
Mark n "Visited"
Insert( n )
Mark c "Examined"
End procedure

Source Code: -
import java.util.*;
class Node {
String label;
ArrayList<Node> adjList = new ArrayList<Node>();
int Heuristic;
}
class SortbyHeuristic implements Comparator<Node>
{
public int compare(Node a, Node b) {
return a.Heuristic - b.Heuristic;
}
}
class bestfs {
ArrayList<Node> open = new ArrayList<Node>();
ArrayList<Node> closed = new ArrayList<Node>();

void addAdj(Node n){


for(int i = 0;i < n.adjList.size(); i++){
open.add(n.adjList.get(i));
}
}

Node bestOpen(){
// returns the open node with the smallest heuristic value
int best = 0;
for(int i = 1; i < open.size(); i++){
if(open.get(i).Heuristic < open.get(best).Heuristic){
best = i;
}
}
return open.get(best);
}

void funct(Node n){


if(n.Heuristic != 0){
closed.add(n);
addAdj(n);
Collections.sort(open, new SortbyHeuristic());
if(closed.size() != 5){
n = open.get(0);
open.remove(0);
funct(n);
}
}
else{
closed.add(n);
return;
}

}
void printlist(){
System.out.print("OPEN LIST -> ");
for(int i=0;i<open.size();i++){
System.out.print(open.get(i).label);
}
System.out.print("\nCLOSED LIST -> ");
for(int i=0;i<closed.size();i++){
System.out.print(closed.get(i).label);
}
}
public static void main(String args[]) {
bestfs obj1 = new bestfs();
Node n1 = new Node();
Node n2 = new Node();
Node n3 = new Node();
Node n4 = new Node();
Node n5 = new Node();
n1.label = "A";
n2.label = "B";
n3.label = "C";
n4.label = "D";
n5.label = "E";
n1.Heuristic = 10;
n1.adjList.add(n2);
n1.adjList.add(n3);
n2.Heuristic = 10;
n2.adjList.add(n4);
n2.adjList.add(n5);
n3.Heuristic = 20;
n4.Heuristic = 30;
n5.Heuristic = 0;
Node n = n1;
obj1.funct(n);
obj1.printlist();

}
}

OUTPUT:
Practical – 11
Objective: - Implementation of Alpha-Beta Pruning.
Theory: -
Alpha-Beta pruning is not actually a new algorithm, but rather an optimization technique for the minimax
algorithm. It reduces the computation time by a huge factor. This allows us to search much faster and even
go into deeper levels in the game tree. It cuts off branches in the game tree which need not be searched
because there already exists a better move available. It is called Alpha-Beta pruning because it passes 2
extra parameters in the minimax function, namely alpha and beta.

Let’s define the parameters alpha and beta. Alpha is the best value that the maximizer currently can
guarantee at that level or above. Beta is the best value that the minimizer currently can guarantee at that
level or above.

The main condition required for alpha-beta pruning is α >= β; when this condition holds at a node, the remaining branches of that node are pruned.

Key points about alpha-beta pruning:

● The Max player will only update the value of alpha.


● The Min player will only update the value of beta.
● While backtracking the tree, the node values will be passed to upper nodes instead of values
of alpha and beta.
● We will only pass the alpha, beta values to the child nodes.

Pseudo Code:

fun minimax (n: node, d: int, min: int, max: int): int =
if leaf(n) or d = 0 return evaluate(n)
if n is a max node
v := min
for each child of n
v' := minimax (child,d-1,v,max)
if v' > v, v:= v'
if v > max return max
return v
if n is a min node
v := max
for each child of n
v' := minimax (child,d-1,min,v)
if v' < v, v:= v'
if v < min return min
return v

Source Code:
// working of Alpha-Beta Pruning
#include<bits/stdc++.h>
using namespace std;

// Initial values of
// Alpha and Beta
const int MAX = 1000;
const int MIN = -1000;

// Returns optimal value for


// current player(Initially called
// for root and maximizer)
int minimax(int depth, int nodeIndex,
bool maximizingPlayer,
int values[], int alpha,
int beta)
{

// Terminating condition. i.e


// leaf node is reached
if (depth == 3)
return values[nodeIndex];

if (maximizingPlayer)
{
int best = MIN;

// Recur for left and


// right children
for (int i = 0; i < 2; i++)
{

int val = minimax(depth + 1, nodeIndex * 2 + i,


false, values, alpha, beta);
best = max(best, val);
alpha = max(alpha, best);

// Alpha Beta Pruning


if (beta <= alpha)
break;
}
return best;
}
else
{
int best = MAX;

// Recur for left and


// right children
for (int i = 0; i < 2; i++)
{
int val = minimax(depth + 1, nodeIndex * 2 + i,
true, values, alpha, beta);
best = min(best, val);
beta = min(beta, best);

// Alpha Beta Pruning


if (beta <= alpha)
break;
}
return best;
}
}

// Driver Code
int main()
{
int values[8] = { 9, 11, 3, 7, 2, 5, 0, -5 };
cout <<"The optimal value is : "<< minimax(0, 0, true, values, MIN, MAX);;
return 0;
}
Output:
Practical – 12
SEMANTIC WEB
Abstract
The Semantic Web is the knowledge graph formed by combining connected, Linked Data with
intelligent content to facilitate machine understanding and processing of content, metadata, and
other information objects at scale.

The Semantic Web is an extension of the current web in which information is given well-defined
meaning, better enabling computers and people to work in cooperation. The semantic web focuses
on data rather than on documents, making it a much more immersive and detailed way of accessing
information compared to the World Wide Web.

The Semantic Web takes the solution further. It involves publishing in languages specifically
designed for data: Resource Description Framework (RDF), Web Ontology Language (OWL), and
Extensible Markup Language (XML). HTML describes documents and the links between them.
RDF, OWL, and XML, by contrast, can describe arbitrary things such as people, meetings, or
airplane parts.

These technologies are combined in order to provide descriptions that supplement or replace the
content of Web documents. Thus, content may manifest itself as descriptive data stored in Web-
accessible databases,[11] or as markup within documents (particularly, in Extensible HTML
(XHTML) interspersed with XML, or, more often, purely in XML, with layout or rendering cues
stored separately). The machine-readable descriptions enable content managers to add meaning to
the content, i.e., to describe the structure of the knowledge we have about that content. In this way,
a machine can process knowledge itself, instead of text, using processes similar to human
deductive reasoning and inference, thereby obtaining more meaningful results and helping
computers to perform automated information gathering and research.

The Semantic Web helps us enhance the usability and usefulness of the Web and its
interconnected resources. It is also important because it brings sentences to the Web, i.e., the subject,
predicate, and object (optionally) of a sentence are identified using hyperlinks.

One of the key benefits of the semantic web is having large amounts of data, knowledge and
information made understandable and accessible to machines, especially artificially intelligent
bots, virtual assistants and agents.

The simplicity of the RDF data structure and the schema’s optional nature mean that it’s easy to
combine different sets of data. This is particularly useful for big data projects where the variety of
data within an organization can present a challenge.
Introduction
The term Semantic Web is an initiative by Tim Berners-Lee, the very person who invented the WWW
in the late 1980s. It promotes common data formats on the World Wide Web. By encouraging the
inclusion of semantic content in web pages, the Semantic Web aims at converting the current web
dominated by unstructured and semi-structured documents into a “web of data”. The Semantic
Web stack builds on the W3C’s Resource Description Framework (RDF).

The concept is to offer people the information they’re looking for at the time they need it. One of
its key philosophies is that although the information presented on the internet is useful, it’s not
always needed at every point.

Because the majority of data is created using forms and then converted into HTML, there’s no way
all data can be managed by everyone at all times. The semantic web makes this information more
useful to everyone because it can be repurposed.

The semantic web essentially allows for the connection of information using a network that can be
easily read by machines – whether computers, IoT devices, mobile phones or other devices
commonly used to access information.

Schema.org has been formed by a number of organizations (notably Google, Bing and Yahoo) to
boost the extent of semantic metadata. The goal of this is to answer questions from the best sources
on the web, rather than serve up a search page full of document links.

The most important part of semantic web technologies is Resource Description Framework (RDF).
This is a common framework for describing resources. It can represent metadata that can be parsed
and processed by systems rather than just displayed to users.

The semantic web is very useful for solving many of the problems raised with the World Wide
Web.

For example, data silos can be pretty much eradicated, with links between data and the outside
world - or even more localized places such as within the organization - operating seamlessly. The
use of semantic metadata tags means the information can all reside in one place, searchable by the
tags to make it infinitely easier to uncover.

Technologies Used
Two important technologies for developing the Semantic Web already in use are:

1. eXtensible Markup Language (XML).


2. Resource Description Framework (RDF).
 eXtensible Markup Language (XML)
XML lets us create our own tags, hidden labels that annotate Web pages or sections
of text on a page. Scripts, or programs, can make use of these tags in sophisticated ways, but
the script writer has to know what the page writer uses each tag for. The Semantic Web will
enable machines to understand semantic documents and data, not human speech
and writing. Meaning is expressed by RDF, which encodes it in sets of triples, each triple
being rather like the subject, verb and object of an elementary sentence. These triples can be
written using XML tags. In RDF, a document makes assertions that particular things (Web
pages) have properties with certain values (another Web page).

 Resource Description Framework (RDF)


Meaning is expressed by RDF, which encodes it in sets of triples. These triples can be written
using XML tags. In RDF, a document makes assertions that particular things (Web pages) have
properties with certain values (another Web page). This structure turns out to be a natural way
to describe the vast majority of the data processed by machines. The triples of RDF form webs
of information about related things. Because RDF uses URIs to encode this information in a
document, the URIs ensure that concepts are not just words in a document but are tied to a
unique definition that everyone can find on the Web.

LANGUAGES USE IN SEMANTIC WEB

Ontology Languages: -
Ontologies play a key role in the Semantic Web by providing vocabularies that applications can
use to understand shared information. DAML+OIL an ontology language designed specifically for
use in the semantic Web. It was produced by merging two ontology languages, OIL and DAML.
OIL integrates features from frame-based systems and description logics (DLs), and has an RDF-
based syntax. DAML is more tightly integrated with RDF, enriching it with a larger set of
ontological primitives. Because DAML+OIL is based on description logic. a DL reasoned can be
used to compare (semantically) descriptions written in DAML+OIL. This provides a powerful
framework for defining and comparing e-commerce service descriptions.

Service Description Language: -

 WSDL: -
Web Services Description Language is an XML format for describing network services in
abstract terms derived from the specific data formats and protocols used for
implementation. As communication protocols and message formats are standardized in
the Web community, it becomes possible and important to describe communications
in a structured way. WSDL addresses this need by defining an XML grammar for
describing network services as collections of communication endpoints capable of exchanging
messages. WSDL service definitions provide documentation for distributed systems.
WSDL does not support semantic description of services.

 UDDI: -
UDDI (Universal Description, Discovery, and Integration) is another upcoming XML-
based standard for Web service description. It enables a business to describe its business
and services, discover other businesses that offer desired services, and integrate with
these other businesses, by providing a registry of businesses and Web services. UDDI
describes businesses by their physical attributes, such as name, address, and the services
they provide. UDDI descriptions are augmented by a set of attributes called tModels;
however, the tModels it uses only provide a tagging mechanism, and the search
performed is only done by string matching on some fields they have defined. Thus, it is of
no use for locating services based on a semantic specification of their functionality.

 DAML-S: -
DAML-S provides Web service providers with a core set of mark-up language constructs
for describing the properties and capabilities of their Web services in an unambiguous,
computer-interpretable form. DAML-S markup of Web services is intended to ease the
automation of Web service tasks, including automated Web service discovery,
execution, interoperation and execution monitoring. In DAML-S, service descriptions
are structured into three essential types of knowledge: a Service Profile, a Service
Model and a Service Grounding. The Service Profile is typically required in a
matchmaking process because it provides the information needed for an agent to
discover a service that meets its requirements.

Why We Need Semantic Web


The Semantic Web not only improves traditional search, but it is facilitating more seamless,
intelligent, and integrated customer experience journeys as well. For example, with semantically
connected and described data, a digital assistant could send users local live music
recommendations in their area. This could be possible by gathering and connecting disparate data
published on the web, like information that nearby venues post online, and matching it to the data
about the type of music a user has chosen to share on their online playlists.

The applications of the Semantic Web are endless, but we cannot take advantage of these
possibilities until we have a truly intelligent web of global knowledge. We must make our content
"semantic", or annotated with meaningful metadata and relationships, to transform dull and
dormant fixed-text into live and electric linked concepts. This transformation is making the web
much more dynamic, allowing not only content, but also data, to travel freely and seamlessly.
The spread of the Semantic Web and the technologies it brings to the table puts the analytical
powers of machines to work in the domains of content production, management, learning, support,
media, ecommerce, scientific research, knowledge management, and publishing in general.
Anywhere we express knowledge will become semantic. Content discovery and presentation on
Google and Bing is only the tip of the iceberg, although SEO and SERP-placement might be reason
enough. When it comes to the applications of intelligent content, semantic search and smart
devices, the emerging semantic web of content and data is a massive opportunity to tap into.
Careers, companies, and global innovation leaders will continue to be born on the Semantic Web.

One could say that the Semantic Web is the technological backbone of the evolving content order,
bringing together omnichannel content with semantics, structure, and shared standards.

Using Semantic Web Technologies, publishers can:

 Build smart digital content infrastructures


 Connect content silos across a huge organization
 Leverage metadata to provide richer experiences
 Curate and reuse content more efficiently
 Connect internal and external content sets
 Build towards real augmented and artificial intelligence
 Power-up authoring experiences and workflow processes

Conclusion
The greatest advantage of the Semantic Web is that it abstracts away the tedious document and
application layer to give direct access to knowledge.
One of the key benefits of the semantic web is having large amounts of data, knowledge and
information made understandable and accessible to machines, especially artificially intelligent
bots, virtual assistants and agents.
Four important areas of research that need to be addressed to allow the Semantic Web to realize
its full potential have been described. Originally, hypertext research aimed to bring user
interaction with digitally stored information closer to the semantic relations implicit within the
information. Much of the more "hypertext-specific'' research, however, turned to system and
application-oriented topics, possibly through the lack of an available infrastructure to support
more explicit semantics. The introduction of the Web, as a highly distributed, but relatively
simple, hypermedia system has also influenced the character of hypermedia research. The
existence of XML and RDF, along with developments such as RDF Schema and
DAML+OIL, provides the impetus for realizing the Semantic Web. During these early stages of
its development, we want to ensure that the many hypertext lessons learned in the past will not
be lost, and that future research tackles the most urgent issues of the Semantic Web.
Practical - 13
Objective: - Study of Swarm Intelligence
Introduction
Swarm intelligence is an emerging field of biologically-inspired artificial intelligence
based on the behavioral models of social insects such as ants, bees, wasps, termites etc.
A Swarm is a configuration of tens of thousands of individuals that have chosen, of their own
will, to converge on a common goal. Swarm Intelligence is the complex, collective, self-
organized, coordinated, flexible and robust behavior of a group following simple
rules.
SI systems are typically made up of a population of simple agents interacting locally with
one another and with their environment. The agents follow very simple rules, and although
there is no centralized control structure dictating how individual agents should behave,
local, and to a certain degree random, interactions between such agents lead to the
emergence of “intelligent” global behavior, unknown to the individual agents. Natural
examples of SI include ant colonies, bird flocking, animal herding, bacterial growth, and
fish schooling.

Ant-Based Routing

This uses a probabilistic routing table, rewarding/reinforcing the route successfully
traversed by each "ant" (a small control packet) which floods the network. Reinforcement
of the route in the forwards, reverse direction and both simultaneously have been
researched: backwards reinforcement requires a symmetric network and couples the two
directions together; forwards reinforcement rewards a route before the outcome is known.
Mobile media and new technologies have the potential to change the threshold for
collective action due to swarm intelligence.
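The paragraph above describes routing tables that are updated as ants traverse the network. The following is a minimal illustrative sketch, not taken from any specific ant-routing protocol, of how a node might keep per-neighbour pheromone values, pick the next hop with probability proportional to pheromone, and reinforce the link an ant has successfully traversed; the class name, the reinforcement amount and the evaporation factor are assumptions made only for illustration.

#include <cstdlib>
#include <iostream>
#include <vector>

// Illustrative sketch only: per-neighbour pheromone table kept by one node.
struct AntRoutingTable {
    std::vector<double> pheromone;            // one value per neighbour link

    explicit AntRoutingTable(int neighbours)
        : pheromone(neighbours, 1.0) {}       // start with equal preference

    // Pick a next hop with probability proportional to its pheromone value.
    int chooseNextHop() const {
        double total = 0.0;
        for (double p : pheromone) total += p;
        double r = (std::rand() / (double)RAND_MAX) * total;
        for (size_t i = 0; i < pheromone.size(); ++i) {
            r -= pheromone[i];
            if (r <= 0.0) return (int)i;
        }
        return (int)pheromone.size() - 1;
    }

    // Reinforce the link an ant successfully traversed (assumed reward = 0.5).
    void reinforce(int neighbour) { pheromone[neighbour] += 0.5; }

    // Evaporation keeps old, unused routes from dominating forever.
    void evaporate(double factor = 0.95) {
        for (double &p : pheromone) p *= factor;
    }
};

int main() {
    std::srand(42);
    AntRoutingTable table(3);         // a node with 3 neighbour links
    int hop = table.chooseNextHop();  // forward ant picks a next hop
    table.reinforce(hop);             // backward ant reinforces the link it used
    table.evaporate();
    std::cout << "Chosen next hop: " << hop << std::endl;
    return 0;
}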

The location of transmission infrastructure for wireless communication networks is an


important engineering problem involving competing objectives. A minimal selection of
locations (or sites) are required subject to providing adequate area coverage for users. A
very different, ant-inspired swarm intelligence algorithm, stochastic diffusion search
(SDS), has been successfully used to provide a general model for this problem, related to
circle packing and set covering. It has been shown that the SDS can be applied to identify
suitable solutions even for large problem instances.

Swarm grammars
Swarm grammars are swarms of stochastic grammars that can be evolved to describe
complex properties such as found in art and architecture. These grammars interact as agents
behaving according to rules of swarm intelligence. Such behavior can also suggest deep
learning algorithms, in particular when mapping of such swarms to neural circuits is
considered.

Innovation and Teambuilding


For complex problems that need knowledge/expertise from different domains, the swarm
is not regarded as a collection of simple and more or less equally structured individuals. In
this context of innovation, the individuals in the swarm have specialities that are contributed to
the swarm to solve problems that cannot be solved by a single individual alone. We can easily
list the special expertise of the individuals of a group/swarm, but if the individuals just work for
their personal benefit, then the collaborative network between group members may have
an impact on the swarm performance in the context of innovation.
Practical – 14
Objective: How Neural Networks Work?
Introduction
Artificial Neural Networks can be best described as the biologically inspired simulations
that are performed on the computer to do a certain specific set of tasks like clustering,
classification, pattern recognition etc. In general, Artificial Neural Networks is a
biologically inspired network of neurons (which are artificial in nature) configured to
perform a specific set of tasks. The basic idea behind a neural network is to simulate (copy
in a simplified but reasonably faithful way) lots of densely interconnected brain cells inside
a computer so you can get it to learn things, recognize patterns, and make decisions in a
humanlike way. The amazing thing about a neural network is that you don't have to
program it to learn explicitly: it learns all by itself, just like a brain.

History and Structure


The term neural network was derived from the work of Warren S. McCulloch and Walter
Pitts. These networks consist of artificial neurons, called nodes, that process information
and perform operations. There are 3 layers present in a neural network:

Input Layer: This layer takes large volumes of input data in the form of text, numbers,
audio files, image pixels, etc. The input layer contains the artificial neurons (termed
units) which receive input from the outside world that the network will attempt to learn
about, recognize, or otherwise process.

Hidden Layers: Hidden layers are responsible for performing mathematical operations, pattern
analysis, feature extraction, etc. There can be multiple hidden layers in a neural network. The
hidden layers sit between the input layer and the output layer. The only
job of a hidden layer is to transform the input into something meaningful that the output
layer/unit can use in some way.

Output Layer: This layer is responsible for generating the desired output. The output layer
contains units that respond to the information that is fed into the system and signal whether the
network has learned the task or not.

A typical neural network has anything from a few dozen to hundreds, thousands, or even
millions of artificial neurons called units arranged in a series of layers, each of which
connects to the layers on either side. Some of them, known as input units, are designed to
receive various forms of information from the outside world that the network will attempt
to learn about, recognize, or otherwise process. Other units sit on the opposite side of the
network and signal how it responds to the information it's learned; those are known
as output units. In between the input units and output units are one or more layers of hidden
units, which, together, form the majority of the artificial brain.
Most neural networks are fully connected, which means each hidden unit and each output
unit is connected to every unit in the layers on either side. The connections between one unit
and another are represented by a number called a weight, which can be either positive (if
one unit excites another) or negative (if one unit suppresses or inhibits another). The higher
the weight, the more influence one unit has on another.

Working
An artificial neural network consists of several parameters and hyperparameters that drive
the output of a neural network model. Some of these parameters are weights, biases,
number of epochs, the learning rate, batch size, number of batches, etc. Each node in the
network has some weights assigned to it. A transfer function is used to calculate the
weighted sum of inputs and a bias is added.

Artificial Neural Networks can be best viewed as weighted directed graphs, where the
nodes are formed by the artificial neurons and the connection between the neuron outputs
and neuron inputs can be represented by the directed edges with weights. The Artificial
Neural Network receives the input signal from the external world in the form of a pattern
and image in the form of a vector. These inputs are then mathematically designated by the
notations x(n) for every n number of inputs.

Each of the input is then multiplied by its corresponding weights (these weights are the
details used by the artificial neural networks to solve a certain problem). In general terms,
these weights typically represent the strength of the interconnection amongst neurons
inside the artificial neural network. All the weighted inputs are summed up inside the
computing unit (yet another artificial neuron).

If the weighted sum equates to zero, a bias is added to make the output non-zero, or else to
scale up the system's response. The bias has its own weight, and the input to it is always equal to
1. Here the sum of weighted inputs can be in the range of 0 to positive infinity. To keep
the response in the limits of the desired value, a certain threshold value is benchmarked.
And then the sum of weighted inputs is passed through the activation function.

The activation function, in general, is the set of transfer functions used to get the desired
output of it. There are various flavors of the activation function, but mainly either linear or
non-linear sets of functions. Some of the most commonly used set of activation functions
are the Binary, Sigmoidal (linear) and Tan hyperbolic sigmoidal (non-linear) activation
functions.
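As a concrete illustration of the weighted sum, bias and activation function described above, the short sketch below computes the output of a single artificial neuron with a sigmoid activation; the inputs, weights and bias used here are made-up example values, not taken from the text.

#include <cmath>
#include <iostream>
#include <vector>

// Sigmoid activation: squashes the weighted sum into the range (0, 1).
double sigmoid(double x) { return 1.0 / (1.0 + std::exp(-x)); }

// Output of one neuron: activation(sum of input*weight + bias).
double neuron(const std::vector<double>& inputs,
              const std::vector<double>& weights, double bias) {
    double sum = bias;
    for (size_t i = 0; i < inputs.size(); ++i)
        sum += inputs[i] * weights[i];
    return sigmoid(sum);
}

int main() {
    std::vector<double> x = {0.5, 0.8, 0.2};    // example inputs x(n)
    std::vector<double> w = {0.4, -0.6, 0.9};   // example connection weights
    double bias = 0.1;                          // the bias input is always 1, its weight is 0.1
    std::cout << "Neuron output = " << neuron(x, w, bias) << std::endl;
    return 0;
}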

Most artificial neural networks are fully interconnected, which means that each of the
hidden units is individually connected to the neurons in the input layer and also to those in the
output layer, leaving nothing hanging in the air. This makes a complete learning
process possible, and learning occurs to the maximum when the weights inside the artificial
neural network get updated after each iteration.
Practical - 15
Objective: - Study of Genetic Algorithm
Introduction
A genetic algorithm is a search heuristic that is inspired by Charles Darwin's theory of natural
evolution. Over many generations, natural populations evolve according to the principles of natural
selection stated by Charles Darwin in The Origin of Species. Only the best-suited elements in
a population are likely to survive and generate offspring, thus transmitting their biological heredity
to new generations.
Genetic algorithms are able to address complicated problems with many variables and a large
number of possible outcomes by simulating the evolutionary process of “survival of the fittest” to
reach a defined goal. They operate by generating many random answers to a problem, eliminating
the worst and cross-pollinating better answers. Repeating this elimination and regeneration process
gradually improves the quality of the answers to an optimal or near-optimal condition.
In computing terms, a genetic algorithm implements the model of computation by having arrays
of bits or characters (binary string) to represent the chromosomes. Each string represents a
potential solution. The genetic algorithm then manipulates the most promising chromosomes
searching for improved solutions. A genetic algorithm operates through a cycle of three stages:

1. Build and maintain a population of solutions to a problem


2. Choose the better solutions for recombination with each other
3. Use their offspring to replace poorer solutions.

Phases of Genetic Algorithm: -


1. Initial population
2. Fitness function
3. Selection
4. Crossover
5. Mutation

1.Initial Population
The process begins with a set of individuals which is called a Population. Each individual is a
solution to the problem you want to solve.
An individual is characterized by a set of parameters (variables) known as Genes. Genes are joined
into a string to form a Chromosome (solution).
In a genetic algorithm, the set of genes of an individual is represented using a string, in terms of
an alphabet. Usually, binary values are used (string of 1s and 0s). We say that we encode the genes
in a chromosome.
2.Fitness Function
The fitness function determines how fit an individual is (the ability of an individual to compete
with other individuals). It gives a fitness score to each individual. The probability that an
individual will be selected for reproduction is based on its fitness score.

3.Selection
The idea of selection phase is to select the fittest individuals and let them pass their genes to the
next generation.

Two pairs of individuals (parents) are selected based on their fitness scores. Individuals with high
fitness have more chances to be selected for reproduction.

4.Crossover
Crossover is the most significant phase in a genetic algorithm. For each pair of parents to be mated,
a crossover point is chosen at random from within the genes.

Offspring are created by exchanging the genes of parents among themselves until the crossover
point is reached.

5.Mutation
In certain new offspring formed, some of their genes can be subjected to a mutation with a low
random probability. This implies that some of the bits in the bit string can be flipped.
Mutation occurs to maintain diversity within the population and prevent premature convergence.

Termination
The algorithm terminates if the population has converged (does not produce offspring which are
significantly different from the previous generation). Then it is said that the genetic algorithm has
provided a set of solutions to our problems.

Algorithm
Initialize a random population of individuals
Compute the fitness of each individual
WHILE NOT finished DO
BEGIN /* produce new generation */
FOR population size DO
BEGIN /* reproductive cycle */
Select two individuals from old generation, recombine the
two individuals to give two offspring
Make a mutation for selected individuals by altering a
random bit in a string
Create a new generation (new populations)
END
IF population has converged THEN
finished := TRUE
END
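To make the phases and the pseudocode above concrete, the sketch below is a minimal genetic algorithm for the illustrative OneMax problem (maximising the number of 1s in a binary chromosome). The problem itself, the population size, mutation rate and other constants are assumptions chosen only for demonstration; they are not part of the practical.

#include <algorithm>
#include <cstdlib>
#include <ctime>
#include <iostream>
#include <vector>

const int CHROMOSOME_LEN = 20;      // genes per individual (assumed)
const int POPULATION_SIZE = 30;     // individuals per generation (assumed)
const int GENERATIONS = 50;
const double MUTATION_RATE = 0.01;

typedef std::vector<int> Chromosome;    // binary string of 0s and 1s

// Fitness: number of 1-bits (OneMax). Higher is fitter.
int fitness(const Chromosome& c) {
    int f = 0;
    for (int gene : c) f += gene;
    return f;
}

// Selection: pick the fitter of two random individuals (tournament of size 2).
const Chromosome& select(const std::vector<Chromosome>& pop) {
    const Chromosome& a = pop[std::rand() % pop.size()];
    const Chromosome& b = pop[std::rand() % pop.size()];
    return fitness(a) > fitness(b) ? a : b;
}

// Crossover: single random crossover point, genes exchanged after it.
Chromosome crossover(const Chromosome& p1, const Chromosome& p2) {
    int point = std::rand() % CHROMOSOME_LEN;
    Chromosome child(p1.begin(), p1.begin() + point);
    child.insert(child.end(), p2.begin() + point, p2.end());
    return child;
}

// Mutation: flip each bit with a small probability.
void mutate(Chromosome& c) {
    for (int& gene : c)
        if ((std::rand() / (double)RAND_MAX) < MUTATION_RATE)
            gene = 1 - gene;
}

int main() {
    std::srand((unsigned)std::time(0));

    // Initial population of random chromosomes.
    std::vector<Chromosome> population(POPULATION_SIZE, Chromosome(CHROMOSOME_LEN));
    for (Chromosome& c : population)
        for (int& gene : c) gene = std::rand() % 2;

    for (int g = 0; g < GENERATIONS; ++g) {
        std::vector<Chromosome> next;
        while ((int)next.size() < POPULATION_SIZE) {
            Chromosome child = crossover(select(population), select(population));
            mutate(child);
            next.push_back(child);
        }
        population = next;      // the new generation replaces the old one
    }

    int best = 0;
    for (const Chromosome& c : population) best = std::max(best, fitness(c));
    std::cout << "Best fitness after " << GENERATIONS
              << " generations: " << best << " / " << CHROMOSOME_LEN << std::endl;
    return 0;
}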
Practical – 16
Fuzzy Logic
Description: The term fuzzy refers to things which are not clear or are vague. In the real world
we often encounter situations where we cannot determine whether a statement is true or false;
in such situations fuzzy logic provides very valuable flexibility for reasoning. In this way, we can take
the inaccuracies and uncertainties of any situation into account.
In a Boolean system, the truth value 1.0 represents absolute truth and 0.0 represents absolute falsehood.
In a fuzzy system, values are not restricted to absolute truth and absolute falsehood; in
fuzzy logic there are intermediate values too, which are partially true and partially false.

ARCHITECTURE
Its architecture contains four parts (a small C sketch of this pipeline follows the list):

 RULE BASE: It contains the set of rules and the IF-THEN conditions provided by the
experts to govern the decision-making system on the basis of linguistic information. Recent
developments in fuzzy theory offer several effective methods for the design and tuning of
fuzzy controllers. Most of these developments reduce the number of fuzzy rules.
 FUZZIFICATION: It is used to convert inputs, i.e. crisp numbers, into fuzzy sets. Crisp
inputs are basically the exact inputs measured by sensors and passed into the control system
for processing, such as temperature, pressure, rpm, etc.
 INFERENCE ENGINE: It determines the matching degree of the current fuzzy input with
respect to each rule and decides which rules are to be fired according to the input field. Next,
the fired rules are combined to form the control actions.
 DEFUZZIFICATION: It is used to convert the fuzzy sets obtained by the inference engine
into a crisp value. There are several defuzzification methods available and the best suited one is
used with a specific expert system to reduce the error.
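To make the four stages concrete, here is a minimal C sketch of a one-input, one-output fuzzy
controller that maps a temperature reading to a fan speed. The linguistic terms (cold/warm/hot),
the triangular membership functions, the three rules and the weighted-average defuzzification are
all illustrative assumptions, not a prescribed design.

#include <stdio.h>

/* Triangular membership function with feet at a and c and peak at b */
static double tri(double x, double a, double b, double c)
{
    if (x <= a || x >= c) return 0.0;
    if (x <= b) return (x - a) / (b - a);
    return (c - x) / (c - b);
}

int main(void)
{
    double temp = 28.0;   /* crisp input from a sensor (assumed value) */

    /* FUZZIFICATION: degrees of membership in the linguistic terms */
    double cold = tri(temp, -10.0, 5.0, 20.0);
    double warm = tri(temp, 15.0, 25.0, 35.0);
    double hot  = tri(temp, 30.0, 45.0, 60.0);

    /* RULE BASE + INFERENCE ENGINE (assumed rules):
       IF temperature is cold THEN fan speed is low    (10%)
       IF temperature is warm THEN fan speed is medium (50%)
       IF temperature is hot  THEN fan speed is high   (90%)
       Each rule fires with the strength of its antecedent. */
    double w_low = cold, w_med = warm, w_high = hot;

    /* DEFUZZIFICATION: weighted average of the rule outputs
       (a simple centroid-like method for singleton consequents) */
    double num = w_low * 10.0 + w_med * 50.0 + w_high * 90.0;
    double den = w_low + w_med + w_high;
    double fan = (den > 0.0) ? num / den : 0.0;

    printf("Temperature %.1f -> fan speed %.1f%%\n", temp, fan);
    return 0;
}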
Membership function
Definition: A graph that defines how each point in the input space is mapped to a membership
value between 0 and 1. The input space is often referred to as the universe of discourse or
universal set (U), which contains all the possible elements of concern in each particular
application.
There are largely three types of fuzzifiers (the triangular case is written out explicitly below):
 Singleton fuzzifier
 Gaussian fuzzifier
 Trapezoidal or triangular fuzzifier
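For illustration, the triangular membership function can be written out explicitly in the standard
textbook form; the parameters a, b and c below denote the left foot, the peak and the right foot of
the triangle and are not taken from any specific system:

\mu_{\mathrm{tri}}(x; a, b, c) =
\begin{cases}
0, & x \le a \ \text{or}\ x \ge c \\
\dfrac{x - a}{b - a}, & a < x \le b \\
\dfrac{c - x}{c - b}, & b < x < c
\end{cases}

Every crisp input x is thus mapped to a membership degree in [0, 1], which is exactly the role of
a membership function.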
What is Fuzzy Control?

 It is a technique to embody human-like thinking into a control system.
 It may not be designed to give accurate reasoning, but it is designed to give acceptable
reasoning.
 It can emulate human deductive thinking, that is, the process people use to infer conclusions
from what they know.
 Any uncertainties can easily be dealt with using fuzzy logic.
Advantages of Fuzzy Logic Systems

 The system can work with any type of input, whether it is imprecise, distorted or noisy
information.
 The construction of fuzzy logic systems is easy and understandable.
 Fuzzy logic builds on the mathematical concepts of set theory, and the reasoning involved is
quite simple.
 It provides a very efficient solution to complex problems in all fields of life, as it resembles
human reasoning and decision making.
 The algorithms can be described with little data, so little memory is required.
Disadvantages of Fuzzy Logic Systems

 Many researchers have proposed different ways to solve a given problem through fuzzy
logic, which leads to ambiguity; there is no single systematic approach to solving a given
problem through fuzzy logic.
 Proof of its characteristics is difficult or impossible in most cases, because we do not always
have a mathematical description of our approach.
 As fuzzy logic works on precise as well as imprecise data, accuracy is often compromised.
Applications
 It is used in the aerospace field for altitude control of spacecraft and satellites.
 It is used in automotive systems for speed control and traffic control.
 It is used in decision-making support systems and personnel evaluation in large company
business.
 It has applications in the chemical industry for controlling pH, drying and chemical
distillation processes.
 Fuzzy logic is used in natural language processing and various intensive applications in
Artificial Intelligence.
 Fuzzy logic is extensively used in modern control systems such as expert systems.
 Fuzzy logic is used with neural networks as it mimics how a person would make decisions,
only much faster. This is done by aggregating data and changing it into more meaningful
data by forming partial truths as fuzzy sets.
Practical - 17
Objective: - Study of RuleML
Introduction
Rules have many uses, coming in a multitude of forms. RuleML is a unifying system of families
of languages for Web rules over Web documents and data. RuleML is specified syntactically
through schema languages (normatively, in Relax NG), originally developed for XML and
transferable to other formats such as JSON. Since Version 1.02, rather than assuming predefined
default semantics, RuleML allows partially constrained semantic profiles and fully-specified
semantics.
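As a rough flavour of the XML serialization just described, a simple Datalog-style rule such as
"a premium customer receives a 5.0 discount" might be written along the following lines. The
exact element names, striping and attributes should be checked against the official RuleML 1.02
schemas, so treat this only as an indicative sketch, not normative RuleML:

<Assert>
  <Implies>
    <if>
      <Atom>
        <Rel>premium</Rel>
        <Var>customer</Var>
      </Atom>
    </if>
    <then>
      <Atom>
        <Rel>discount</Rel>
        <Var>customer</Var>
        <Ind>5.0</Ind>
      </Atom>
    </then>
  </Implies>
</Assert>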
PSOA RuleML employs model-theoretic semantics and transformational realizations, bridges
between (deductive) graph and relational databases and other data paradigms, formalizes Cypher-
like labeled property graphs, and is illustrated via (blockchain, ...) examples and (air traffic
control, ...) use cases.
RuleML Standards Effort
● Connects Web rule efforts across academia, standards bodies, and industry
● Dovetails with Web ontology efforts as part of the semantic technology stack
● Provides the de facto standard for rule-based knowledge representation
● Main input to RIF; part of SWRL and SWSL; foundation of LegalRuleML
● Uses RuleML Specification License, Version 1.0 (RSL1.0).

Foundational RuleML Technology
● User syntaxes (for knowledge acquisition and querying)
○ Presentation (symbolic): Positional-Slotted Language (POSL), PSOA/PS, Prova,
JSON (work in progress), …
○ Visualization (graphical): Graph inscribed logic (Grailog), ...
● Serialization syntax (for knowledge exchange): Valid w.r.t. a system of families of XML
schemas
○ In Relax NG Compact syntax (RNC)
■ MYNG Web GUI generates RNC schemas for fine-grained feature
customization (Demo)
○ In XML Schema Definition Language (XSD)
■ For Deliberation RuleML 1.0: RNC was first used, in parallel to manually-
written XSD.
■ Since Deliberation RuleML 1.01: XSD is generated from RNC via Trang.
● Transformations
○ XSLT formatters (for normalization to the most explicit and compactification to the
most concise RuleML/XML)
○ JAXB unmarshalling of RuleML/XML into Java objects
■ For PSOA RuleML, the PSOA RuleML API creates Java code by applying
JAXB to the manually specified XSD
■ Since Deliberation RuleML 1.01, Java code may be created by applying
JAXB to the monolithic XSD generated from the RNC
● Model-theoretic semantic profiles
○ Horn-PSOA Tarski (http://ruleml.org/1.02/profiles/HornPSOA-Tarski)
○ Horn Logic Herbrand (http://ruleml.org/1.02/profiles/Horn-Herbrand)
○ First-Order Logic Herbrand (http://ruleml.org/1.02/profiles/FOL-Herbrand)
● Engines (OO jDREW, Prova, DR-DEVICE, PSOATransRun,...)
PRACTICAL-18
AIM: KNIGHT’S TOUR PROBLEM
THEORY:
Backtracking is an algorithmic paradigm that tries different solutions until it finds a solution that
“works”. Problems which are typically solved using the backtracking technique have the following
property in common: they can only be solved by trying every possible configuration, and each
configuration is tried only once. A naive solution for these problems is to try all configurations
and output one that satisfies the given problem constraints. Backtracking works in an incremental
way and is an optimization over the naive solution in which all possible configurations are
generated and tried.
Backtracking attacks a problem incrementally. Typically, we start from an empty solution vector
and add items one by one (the meaning of an item varies from problem to problem; in the context
of the Knight’s tour problem, an item is a Knight’s move). When we add an item, we check
whether adding it violates the problem constraints; if it does, then we remove the item and try
other alternatives. If none of the alternatives work out, then we go back to the previous stage and
remove the item added there. If we reach back to the initial stage, we say that no solution exists.
If adding an item doesn’t violate the constraints, then we recursively add items one by one. If the
solution vector becomes complete, then we print the solution.
Algorithm:
Following is the backtracking algorithm for the Knight’s tour problem.

If all squares are visited
    print the solution
Else
    a) Add one of the next moves to the solution vector and recursively check if
       this move leads to a solution. (A Knight can make a maximum of eight
       moves; we choose one of these 8 moves in this step.)
    b) If the move chosen in the above step doesn't lead to a solution, then
       remove this move from the solution vector and try other alternative moves.
    c) If none of the alternatives work, then return false. (Returning false will
       remove the previously added item in the recursion, and if false is returned
       by the initial call of the recursion, then "no solution exists".)
Program: -
#include<stdio.h>
#define N 8
int solveKTUtil(int x, int y, int movei, int sol[N][N], int xMove[], int yMove[]);

/* A utility function to check if (x, y) is a valid, still-unvisited index
   on the N*N chessboard */
int isSafe(int x, int y, int sol[N][N])
{
    return (x >= 0 && x < N && y >= 0 &&
            y < N && sol[x][y] == -1);
}

/* A utility function to print the solution matrix sol[N][N] */
void printSolution(int sol[N][N])
{
    for (int x = 0; x < N; x++)
    {
        for (int y = 0; y < N; y++)
            printf(" %2d ", sol[x][y]);
        printf("\n");
    }
}

/* This function solves the Knight's Tour problem using backtracking. It mainly
   uses solveKTUtil() to solve the problem. It returns false if no complete tour
   is possible, otherwise it returns true and prints the tour. Note that there may
   be more than one solution; this function prints one of the feasible solutions. */
int solveKT()
{
    int sol[N][N];

    /* Initialization of the solution matrix */
    for (int x = 0; x < N; x++)
        for (int y = 0; y < N; y++)
            sol[x][y] = -1;

    /* xMove[] and yMove[] define the next moves of the Knight:
       xMove[] is for the next value of the x coordinate,
       yMove[] is for the next value of the y coordinate */
    int xMove[8] = { 2, 1, -1, -2, -2, -1, 1, 2 };
    int yMove[8] = { 1, 2, 2, 1, -1, -2, -2, -1 };

    // Since the Knight is initially at the first block
    sol[0][0] = 0;

    /* Start from 0,0 and explore all tours using solveKTUtil() */
    if (solveKTUtil(0, 0, 1, sol, xMove, yMove) == 0)
    {
        printf("Solution does not exist");
        return 0;
    }
    else
        printSolution(sol);

    return 1;
}
/* A recursive utility function to solve the Knight's Tour problem */
int solveKTUtil(int x, int y, int movei, int sol[N][N],
                int xMove[N], int yMove[N])
{
    int k, next_x, next_y;

    /* Base case: all N*N squares have been visited */
    if (movei == N*N)
        return 1;

    /* Try all next moves from the current coordinate (x, y) */
    for (k = 0; k < 8; k++)
    {
        next_x = x + xMove[k];
        next_y = y + yMove[k];
        if (isSafe(next_x, next_y, sol))
        {
            sol[next_x][next_y] = movei;
            if (solveKTUtil(next_x, next_y, movei + 1, sol,
                            xMove, yMove) == 1)
                return 1;
            else
                sol[next_x][next_y] = -1;   /* backtracking */
        }
    }
    return 0;
}
/* Driver program to test the above functions */
int main()
{
    solveKT();
    return 0;
}
Output: -