AI Lab File
Artificial Intelligence
Practical – 1
Objective: To solve the N-queens problem using a program and measure the time
taken for various values of N.
Naive Algorithm
In the naive algorithm, we generate every possible arrangement of queens on the chessboard
and then test each one against the given constraints. Arrangements in which no queen attacks
another are accepted, and arrangements containing an attacking pair are rejected.
ALGORITHM:
Nqueen(k, n)
{
    for i = 1 to n
    {
        if place(k, i)
        {
            x[k] = i;
            if (k = n)
                write x[1:n]
            else
                Nqueen(k + 1, n)
        }
    }
}
place(k, i)
{
    for j = 1 to k - 1
        if ((x[j] = i) or (abs(x[j] - i) = abs(j - k)))
            then return false
    return true
}
In this algorithm:
n is a user input that denotes the number of queens to be placed on the n*n
chessboard.
x[100] is the array that stores the solution vector: the index of the array represents the
queen number, and the value at that index gives the column number where that queen is
placed.
board[100][100] is a 2-D matrix that stores the x[] values in the form of a chessboard.
place(int k, int i) is the function that checks whether cell (k,i) is under attack from any
other queen.
- Here, we check whether there is any other queen in the same column or on the same diagonal.
- If such a queen is found, the function returns false (0); otherwise it returns true (1).
nqueen(int k, int n) is the main function, where we implement the backtracking algorithm.
- The initial call to this function is nqueen(1, n).
- if (place(k, i)) checks whether queen k can be placed in cell (k,i).
- If the function returns true, queen k can be placed in cell (k,i) and the solution is
recorded as x[k] = i.
- If k = n, all the queens have been placed and the print() function is called to
print the solution.
- Otherwise, the position for the next queen is found by a recursive call.
Program:
#include <iostream>
#include <cstdlib>
using namespace std;

int a[30], solutions = 0;

// Check whether the queen in row pos conflicts with any earlier queen
// (same column or same diagonal).
int place(int pos) {
    for (int i = 1; i < pos; i++) {
        if ((a[i] == a[pos]) || (abs(a[i] - a[pos]) == abs(i - pos)))
            return 0;
    }
    return 1;
}

void print_sol(int n) {
    solutions++;
}

// Iterative backtracking: a[k] is the column of the queen in row k.
void queen(int n) {
    int k = 1;
    a[k] = 0;
    while (k != 0) {
        a[k] = a[k] + 1;
        while ((a[k] <= n) && !place(k))
            a[k]++;
        if (a[k] <= n) {
            if (k == n)
                print_sol(n);
            else {
                k++;
                a[k] = 0;
            }
        } else {
            k--;    // no column works in row k, so backtrack
        }
    }
}

int main()
{
    int n;
    cout << "Enter the number of Queens (4 or more)\n";
    cin >> n;
    queen(n);
    cout << "\nTotal solutions = " << solutions;
    return 0;
}
OUTPUT:
Practical – 2
Objective: To traverse a graph using the Breadth-First Search (BFS) algorithm.
THEORY:
BFS is a traversing algorithm in which you start from a selected node (the source or
starting node) and traverse the graph layer by layer, first exploring the neighbour nodes
(nodes directly connected to the source node) and then moving on to the next-level
neighbour nodes.
As the name BFS suggests, you are required to traverse the graph breadthwise as follows:
1. First move horizontally and visit all the nodes of the current layer
2. Move to the next layer
The time complexity of BFS is O(V + E), where V is the number of nodes and E is the
number of edges.
ALGORITHM:
Mark the source node s as visited. Q is the queue in which nodes wait to be
processed; a node is marked as visited when it is enqueued and is processed
when it is dequeued.
int a[20][20], q[20], visited[20], n, i, j, f = 0, r = -1;
void bfs(int v) {
    for (i = 1; i <= n; i++)
        if (a[v][i] && !visited[i]) {
            visited[i] = 1;
            q[++r] = i;
        }
    if (f <= r)
        bfs(q[f++]);
}
int main() {
    int v;
    printf("\nEnter the number of vertices: ");
    scanf("%d", &n);
    printf("Enter the adjacency matrix:\n");
    for (i = 1; i <= n; i++)
        for (j = 1; j <= n; j++)
            scanf("%d", &a[i][j]);
    printf("Enter the starting vertex: ");
    scanf("%d", &v);
    visited[v] = 1;
    bfs(v);
    printf("Nodes reachable from %d: ", v);
    for (i = 0; i <= r; i++)
        printf("%d ", q[i]);
    return 0;
}
OUTPUT
1 2
3 4
Practical – 3
Objective: To represent the Depth-First Search (DFS) using linked lists.
Theory: The DFS algorithm is a recursive algorithm that uses the idea of
backtracking. It involves exhaustive searches of all the nodes by going ahead, if
possible, else by backtracking.
This recursive nature of DFS can be implemented using a stack. The basic idea is as
follows:
1. Pick a starting node and push all its adjacent nodes into a stack.
2. Pop a node from the stack to select the next node to visit, and push all its adjacent
nodes into the stack.
3. Repeat this process until the stack is empty.
However, ensure that visited nodes are marked. This prevents you from visiting the same
node more than once; if you do not mark visited nodes and you visit the same node more
than once, you may end up in an infinite loop.
Algorithm:
DFS-iterative (G, s): //Where G is graph and s is source vertex
let S be stack
S.push( s ) //Inserting s in stack
mark s as visited.
while ( S is not empty):
//Pop a vertex from stack to visit next
v = S.top( )
S.pop( )
//Push all the neighbours of v in stack that are not visited
for all neighbours w of v in Graph G:
if w is not visited :
S.push( w )
mark w as visited
DFS-recursive(G, s):
mark s as visited
for all neighbours w of s in Graph G:
if w is not visited:
DFS-recursive(G, w)
Program:
#include<stdio.h>
void DFS(int);
int G[50][50],visited[50],n;
int main()
{
int i,j;
printf("Enter number of vertices:");
scanf("%d",&n);
//read the adjacency matrix
printf("\nEnter adjacency matrix of the graph:");
for(i=0;i<n;i++)
for(j=0;j<n;j++)
scanf("%d",&G[i][j]);
//visited is initialized to zero
for(i=0;i<n;i++)
visited[i]=0;
DFS(0);
}
void DFS(int i)
{
int j;
printf("\n%d",i);
visited[i]=1;
for(j=0;j<n;j++)
if(!visited[j]&&G[i][j]==1)
DFS(j);
}
OUTPUT:
1 2
3 4
Practical – 4
Objective: To implement the Tic-Tac-Toe using MiniMax algorithm.
Program:
#include <cstdio>

char board[10];          // cells 1..9; '_' marks an empty cell
int posmax, posmin;      // best top-level move found for X and for O

void print_board()
{
    int i;
    for (i = 1; i <= 9; i++)
    {
        printf("%c ", board[i]);
        if (i % 3 == 0)
            printf("\n");
    }
    printf("\n");
}

// Returns +1 if X has three in a row, -1 if O has, 0 otherwise.
int evaluate()
{
    static const int lines[8][3] = {
        {1,2,3}, {4,5,6}, {7,8,9}, {1,4,7},
        {2,5,8}, {3,6,9}, {1,5,9}, {3,5,7}};
    for (int k = 0; k < 8; k++)
    {
        char a = board[lines[k][0]];
        if (a != '_' && a == board[lines[k][1]] && a == board[lines[k][2]])
            return (a == 'X') ? 1 : -1;
    }
    return 0;
}

// 1 is X (the maximizer), 2 is O (the minimizer); n is the search depth.
int minimax(int player, char board[], int n)
{
    int score = evaluate();
    if (score != 0)
        return score;               // the game is already decided
    bool full = true;
    for (int i = 1; i <= 9; i++)
        if (board[i] == '_')
            full = false;
    if (full)
        return 0;                   // draw
    int max = -2, min = 2, res;
    for (int i = 1; i <= 9; i++)
    {
        if (board[i] != '_')
            continue;
        if (player == 2)
        {
            board[i] = 'O';
            res = minimax(1, board, n + 1);
            board[i] = '_';
            if (res < min)
            {
                min = res;
                if (n == 0)
                    posmin = i;
            }
        }
        else if (player == 1)
        {
            board[i] = 'X';
            res = minimax(2, board, n + 1);
            board[i] = '_';
            if (res > max)
            {
                max = res;
                if (n == 0)
                    posmax = i;
            }
        }
    }
    if (player == 1)
        return max;
    else return min;
}

int main()
{
    int i, input;
    for (i = 1; i <= 9; i++)
        board[i] = '_';
    printf("Board:\n");
    print_board();
    for (i = 1; i <= 9; i++)
    {
        if (i % 2 == 0)
        {
            // O is played by the computer, using the move minimax selects.
            minimax(2, board, 0);
            input = posmin;
            printf("Computer O plays %d\n", input);
        }
        else
        {
            printf("Player X give input:\n");
            scanf("%d", &input);
        }
        board[input] = (i % 2 != 0) ? 'X' : 'O';
        printf("Board:\n");
        print_board();
        if (evaluate() != 0)
            break;
    }
    return 0;
}
OUTPUT:
Practical – 5
Objective: To study facts, rules, and queries in Prolog knowledge bases.
KNOWLEDGE BASE 2 (KB 2)
FACT:
happy(yolanda).
listens2Music(mia).
RULES:
listens2Music(yolanda):- happy(yolanda).
playsAirGuitar(mia):- listens2Music(mia).
playsAirGuitar(yolanda):- listens2Music(yolanda).
OUTPUT:
There are two facts in KB2: listens2Music(mia). and happy(yolanda). The last three items it
contains are rules. Rules state information that is conditionally true of the situation of interest.
For example, the first rule says that Yolanda listens to music if she is happy. More generally,
:- is read as “if”, or “is implied by”. The part on the left-hand side of the :- is called the head
of the rule, and the part on the right-hand side is called the body. So, in general, rules say: if
the body of the rule is true, then the head of the rule is true too.
KNOWLEDGE BASE 3 (KB 3)
FACT:
happy(vincent).
listens2Music(butch).
RULE:
playsAirGuitar(vincent):- listens2Music(vincent), happy(vincent).
playsAirGuitar(butch):- happy(butch).
playsAirGuitar(butch):- listens2Music(butch).
KB3, like KB2, contains two facts and three rules. The body of the first rule contains two
goals; the head is true only if both goals are true. This is how logical conjunction is
expressed in Prolog.
playsAirGuitar(butch):- happy(butch).
playsAirGuitar(butch):- listens2Music(butch).
This is a way of stating that Butch plays air guitar either if he listens to music or if he is
happy. There is another way of representing logical disjunction in Prolog: we could replace
the pair of rules above with the single rule
playsAirGuitar(butch):- happy(butch); listens2Music(butch).
KNOWLEDGE BASE 4 (KB 4)
FACT:
woman(mia).
woman(jody).
woman(yolanda).
loves(vincent,mia).
loves(marsellus,mia).
loves(pumpkin,honey_bunny).
loves(honey_bunny,pumpkin).
In this knowledge base we have only facts and no rules. Here we will start using Prolog
variables (names beginning with a capital letter, such as X); this is why no capital letters
have been used in any fact or rule so far.
For Example:
?- woman(X).
Another Example:
?- loves(marsellus,X), woman(X).
We have seen that , means logical AND, so this query asks: is there an individual X such that
Marsellus loves X and X is a woman? Looking at KB4, mia is a woman and marsellus loves
mia, so Prolog answers X = mia.
KNOWLEDGE BASE 5:
Knowledge Base 5 (KB5) contains variables in Knowledge Base.
loves(vincent,mia).
loves(marsellus,mia).
loves(pumpkin,honey_bunny).
loves(honey_bunny,pumpkin).
jealous(X,Y):- loves(X,Z), loves(Y,Z).
KB5 contains four facts about the loves relation and one rule. The rule defines the concept
of jealousy: an individual X will be jealous of an individual Y if there is some individual Z
whom X loves, and Y loves that same individual Z too.
OUTPUT:
ATOM: -
An atom is either:
A string of characters made up of upper-case letters, lower-case letters, digits, and the
underscore character, that begins with a lower-case letter. For example: butch,
big_kahuna_burger, listens2Music and playsAirGuitar.
A string of special characters. Here are some examples: @= and ====> and ; and :- are all
atoms. Some of these atoms, such as ; and :-, have a pre-defined meaning.
NUMBERS: -
Integers (that is: …, -2, -1, 0, 1, 2, 3,…) are useful for such tasks as counting the elements
of a list.
Most Prolog implementations do support floating point numbers or floats (that is,
representations of real numbers such as 1657.3087 or π ).
VARIABLES: -
A variable is a string of upper-case letters, lower-case letters, digits and
underscore characters that starts either with an upper-case letter or with an
underscore. For example, X , Y , Variable , _tag , X_526 , List , List24 ,
_head, Tail , _input and Output are all Prolog variables.
COMPLEX TERMS: -
Complex terms are often called structures. Complex terms are built out of a functor followed by
a sequence of arguments. The arguments are put in ordinary parentheses, separated by commas,
and placed after the functor. The number of arguments that a complex term has is called its arity.
Prolog would treat a two-place love and a three-place love as different predicates.
PRACTICAL-6
PROLOG PROJECT
This program implements a BMI CALCULATOR in Prolog.
Knowledge Base :-
start :-
nl, write('BMI CALCULATOR'), nl,
nl, write('Enter your WEIGHT in Kilo Grams : '),
read(W),
nl, write('Enter your HEIGHT in Meters : '),
read(H),
BMI is W/H/H,
nl, write('Your BODY MASS INDEX is '),
write(BMI), nl,
process(BMI).
process(BMI) :-
BMI < 18.5, nl, write('This is considered UNDERWEIGHT');
BMI >= 18.5, BMI < 24.9, nl, write('This is considered HEALTHY WEIGHT');
BMI >= 24.9, BMI < 30, nl, write('This is considered OVERWEIGHT');
BMI >= 30, nl, write('This is considered OBESE').
OUTPUT :-
Practical - 7
Objective: Implementing various CLISP Programs.
Theory:
LISP - List processing
Various LISP Environment:
Emacs
LispWorks
Cygwin
Eclipse w/ plugin
LISP uses:
- Numeric literals
- Characters and strings
- Functions
- Macros
- Special operators
Syntax: -
<s-expression> ::= <atom> | <list>
<atom> ::= <number> | <identifier>
<list> ::= ( <s-expression>* )
<identifier> ::= a string of printable characters, not including parentheses
Example of a program:
Area of a circle:
(defconstant PI 3.1416)
(defun area-circle (rad)
  (terpri)
  (format t "Radius: ~5f" rad)
  (format t "~%Area: ~10f" (* PI rad rad)))
(area-circle 10)
CLISP PROGRAMS:
List Operations in LISP:
Loop:
PRACTICAL-9
LISP PROJECT
Code:
(terpri)
(terpri)
(format t "BMI CALCULATOR ~%")
(terpri)
(format t "Enter your Name : ")
(setq name (read))
(terpri)
(format t "Enter your Age : ")
(setq age (read))
(terpri)
(format t "Enter your Weight in Kilo Grams : ")
(setq weight (read))
(terpri)
(format t "Enter your Height in Meters : ")
(setq height (read))
(setq bmi (/ weight (* height height)))
(terpri)
(format t "Your BODY MASS INDEX is ~5f ~%" bmi)
TERMS
Prolog has three kinds of terms:
Constants like 42 (numbers) and franklin (atoms, i.e., lower-case words).
Variables like X and Person (words that start with upper-case).
Complex terms like parent(franklin, bo) and baz(X, quux(Y)).
Two terms unify if they can be matched. Two terms can be matched if:
1. They are the same atom or the same number.
2. One of them is a variable (which then becomes instantiated to the other term).
3. They are complex terms with the same functor and arity, and their corresponding
arguments unify pairwise.
woman(mia).
loves(vincent, angela).
loves(franklin, mia).
We'll use the = predicate to test whether two terms unify. Prolog will answer "Yes" if they do,
along with any variable assignments needed to make the unification work.
EXAMPLES
Do these two terms unify?
?- mia = mia.
?- mia = X.
?- X = Y.
Yes, from rule 2.
Yes, from rule 3 and, in the recursion, from rule 2 in two cases (X set to s(g) and Y set to t(k)).
No, because rule 3 fails in the recursion (in which rule 3 is invoked again, and the arity of s(g) does not
match s(g, X)).
3. The term 2*3+4 has principal functor +, and therefore unifies with X+Y, with X
instantiated to 2*3 and Y instantiated to 4.
Practical – 10
Objective: - Implementation of Best-First Search.
Best First Search: - Best-first search is a search algorithm which explores a graph by
expanding the most promising node chosen according to a specified rule. It selects the most
promising of the nodes generated so far, which can be achieved by applying an appropriate
heuristic function to each of them.
Heuristic function:
f(n) = h(n)
where,
h(n) - estimated straight line distance from node n to goal
To implement the graph search procedure, we will need to use two lists of nodes.
OPEN - nodes that have been generated but not yet visited.
CLOSED - nodes that have already been visited.
Algorithm: -
Best-First-Search( Maze m )
Insert( m.StartNode )
Until PriorityQueue is empty
c <- PriorityQueue.DeleteMin
If c is the goal
Exit
Else
Foreach neighbor n of c
If n "Unvisited"
Mark n "Visited"
Insert( n )
Mark c "Examined"
End procedure
Source Code: -
import java.util.*;
class Node {
String label;
ArrayList<Node> adjList = new ArrayList<Node>();
int Heuristic;
}
class SortbyHeuristic implements Comparator<Node>
{
public int compare(Node a, Node b) {
return a.Heuristic - b.Heuristic;
}
}
class bestfs {
ArrayList<Node> open = new ArrayList<Node>();
ArrayList<Node> closed = new ArrayList<Node>();
// Remove and return the node on OPEN with the smallest heuristic value.
Node bestOpen() {
Collections.sort(open, new SortbyHeuristic());
return open.remove(0);
}
// Expand nodes best-first until the goal (heuristic 0) is reached.
void funct(Node start) {
open.add(start);
while (!open.isEmpty()) {
Node c = bestOpen();
closed.add(c);
if (c.Heuristic == 0)
return;
for (Node n : c.adjList)
if (!open.contains(n) && !closed.contains(n))
open.add(n);
}
}
void printlist(){
System.out.print("OPEN LIST -> ");
for(int i=0;i<open.size();i++){
System.out.print(open.get(i).label);
}
System.out.print("\nCLOSED LIST -> ");
for(int i=0;i<closed.size();i++){
System.out.print(closed.get(i).label);
}
}
public static void main(String args[]) {
bestfs obj1 = new bestfs();
Node n1 = new Node();
Node n2 = new Node();
Node n3 = new Node();
Node n4 = new Node();
Node n5 = new Node();
n1.label = "A";
n2.label = "B";
n3.label = "C";
n4.label = "D";
n5.label = "E";
n1.Heuristic = 10;
n1.adjList.add(n2);
n1.adjList.add(n3);
n2.Heuristic = 10;
n2.adjList.add(n4);
n2.adjList.add(n5);
n3.Heuristic = 20;
n4.Heuristic = 30;
n5.Heuristic = 0;
Node n = n1;
obj1.funct(n);
obj1.printlist();
}
}
OUTPUT:
Practical – 11
Objective: - Implementation of Alpha-Beta Pruning.
Theory: -
Alpha-Beta pruning is not actually a new algorithm, rather an optimization technique for minimax
algorithm. It reduces the computation time by a huge factor. This allows us to search much faster and even
go into deeper levels in the game tree. It cuts off branches in the game tree which need not be searched
because there already exists a better move available. It is called Alpha-Beta pruning because it passes 2
extra parameters in the minimax function, namely alpha and beta.
Let’s define the parameters alpha and beta. Alpha is the best value that the maximizer currently can
guarantee at that level or above. Beta is the best value that the minimizer currently can guarantee at that
level or above.
The main condition required for alpha-beta pruning is α >= β: once alpha is at least beta, the
remaining branches at that node cannot affect the result and can be pruned.
Pseudo Code:
fun minimax (n: node, d: int, min: int, max: int): int =
if leaf(n) or depth=0 return evaluate(n)
if n is a max node
v := min
for each child of n
v' := minimax (child,d-1,v,max)
if v' > v, v:= v'
if v > max return max
return v
if n is a min node
v := max
for each child of n
v' := minimax (child,d-1,min,v)
if v' < v, v:= v'
if v < min return min
return v
Source Code:
// working of Alpha-Beta Pruning
#include<bits/stdc++.h>
using namespace std;
// Initial values of
// Alpha and Beta
const int MAX = 1000;
const int MIN = -1000;
// depth counts levels from the root; the 8 leaves sit at depth 3.
int minimax(int depth, int nodeIndex, bool maximizingPlayer,
            int values[], int alpha, int beta)
{
    if (depth == 3)
        return values[nodeIndex];
    if (maximizingPlayer)
    {
        int best = MIN;
        for (int i = 0; i < 2; i++) {
            best = max(best, minimax(depth + 1, nodeIndex * 2 + i,
                                     false, values, alpha, beta));
            alpha = max(alpha, best);
            if (beta <= alpha)
                break;              // prune the remaining children
        }
        return best;
    }
    int best = MAX;
    for (int i = 0; i < 2; i++) {
        best = min(best, minimax(depth + 1, nodeIndex * 2 + i,
                                 true, values, alpha, beta));
        beta = min(beta, best);
        if (beta <= alpha)
            break;                  // prune the remaining children
    }
    return best;
}
// Driver Code
int main()
{
    int values[8] = { 9, 11, 3, 7, 2, 5, 0, -5 };
    cout <<"The optimal value is : "<< minimax(0, 0, true, values, MIN, MAX);
    return 0;
}
Output:
Practical – 12
SEMANTIC WEB
Abstract
The Semantic Web is the knowledge graph formed by combining connected, Linked Data with
intelligent content to facilitate machine understanding and processing of content, metadata, and
other information objects at scale.
The Semantic Web is an extension of the current web in which information is given well-defined
meaning, better enabling computers and people to work in cooperation. The semantic web focuses
on data rather than on documents, making it a much more immersive and detailed way of accessing
information compared to the World Wide Web.
The Semantic Web takes the solution further. It involves publishing in languages specifically
designed for data: Resource Description Framework (RDF), Web Ontology Language (OWL), and
Extensible Markup Language (XML). HTML describes documents and the links between them.
RDF, OWL, and XML, by contrast, can describe arbitrary things such as people, meetings, or
airplane parts.
These technologies are combined in order to provide descriptions that supplement or replace the
content of Web documents. Thus, content may manifest itself as descriptive data stored in Web-
accessible databases,[11] or as markup within documents (particularly, in Extensible HTML
(XHTML) interspersed with XML, or, more often, purely in XML, with layout or rendering cues
stored separately). The machine-readable descriptions enable content managers to add meaning to
the content, i.e., to describe the structure of the knowledge we have about that content. In this way,
a machine can process knowledge itself, instead of text, using processes similar to human
deductive reasoning and inference, thereby obtaining more meaningful results and helping
computers to perform automated information gathering and research.
The Semantic Web helps us to enhance the usability and usefulness of the Web and its
interconnected resources. It is also important because it brings sentences to the Web, i.e.,
the subject, predicate, and (optionally) object of a sentence are identified using hyperlinks.
One of the key benefits of the semantic web is having large amounts of data, knowledge and
information made understandable and accessible to machines, especially artificially intelligent
bots, virtual assistants and agents.
The simplicity of the RDF data structure and the schema’s optional nature mean that it’s easy to
combine different sets of data. This is particularly useful for big data projects where the variety of
data within an organization can present a challenge.
Introduction
The Semantic Web is an initiative of Tim Berners-Lee, the very person who invented the
WWW in the late 1980s. It promotes common data formats on the World Wide Web. By encouraging the
inclusion of semantic content in web pages, the Semantic Web aims at converting the current web
dominated by unstructured and semi-structured documents into a “web of data”. The Semantic
Web stack builds on the W3C’s Resource Description Framework (RDF).
The concept is to offer people the information they’re looking for at the time they need it. One of
its key philosophies is that although the information presented on the internet is useful, it’s not
always needed at every point.
Because the majority of data is created using forms and then converted into HTML, there’s no way
all data can be managed by everyone at all times. The semantic web makes this information more
useful to everyone because it can be repurposed.
The semantic web essentially allows for the connection of information using a network that can be
easily read by machines – whether computers, IoT devices, mobile phones or other devices
commonly used to access information.
Schema.org has been formed by a number of organizations (notably Google, Bing and Yahoo) to
boost the extent of semantic metadata. The goal of this is to answer questions from the best sources
on the web, rather than serve up a search page full of document links.
The most important part of semantic web technologies is Resource Description Framework (RDF).
This is a common framework for describing resources. It can represent metadata that can be parsed
and processed by systems rather than just displayed to users.
The semantic web is very useful for solving many of the problems raised with the World Wide
Web.
For example, data silos can be pretty much eradicated, with links between data and the outside
world - or even more localized places such as within the organization - operating seamlessly. The
use of semantic metadata tags means the information can all reside in one place, searchable by the
tags to make it infinitely easier to uncover.
Technologies Used
Some important technologies for developing the Semantic Web that are already in use are:
Ontology Languages: -
Ontologies play a key role in the Semantic Web by providing vocabularies that applications can
use to understand shared information. DAML+OIL is an ontology language designed
specifically for use in the Semantic Web. It was produced by merging two ontology
languages, OIL and DAML. OIL integrates features from frame-based systems and
description logics (DLs), and has an RDF-based syntax. DAML is more tightly integrated
with RDF, enriching it with a larger set of ontological primitives. Because DAML+OIL is
based on description logic, a DL reasoner can be used to compare (semantically)
descriptions written in DAML+OIL. This provides a powerful framework for defining and
comparing e-commerce service descriptions.
WSDL: -
Web Services Description Language is an XML format for describing network services in
abstract terms derived from the specific data formats and protocols used for
implementation. As communication protocols and message formats are standardized in
the Web community, it becomes possible and important to describe communications
in a structured way. WSDL addresses this need by defining an XML grammar for
describing network services as collections of communication endpoints capable of
exchanging messages. WSDL service definitions provide documentation for distributed
systems. WSDL does not, however, support semantic description of services.
UDDI: -
UDDI (Universal Description, Discovery, and Integration) is another upcoming XML-
based standard for Web service description. It enables a business to describe its business
and services, discover other businesses that offer desired services, and integrate with
these other businesses, by providing a registry of businesses and Web services. UDDI
describes businesses by their physical attributes, such as name, address, and the services
they provide. UDDI descriptions are augmented by a set of attributes called tModels;
however, the tModels it uses only provide a tagging mechanism, and the search
performed is only done by string matching on some fields they have defined. Thus, it is of
no use for locating services based on a semantic specification of their functionality.
DAML-S: -
DAML-S provides Web service providers with a core set of markup language constructs
for describing the properties and capabilities of their Web services in unambiguous,
computer-interpretable form. DAML-S markup of Web services is intended to ease the
automation of Web service tasks, including automated Web service discovery,
execution, interoperation and execution monitoring. In DAML-S, service descriptions
are structured into three essential types of knowledge: a Service Profile, a Service
Model and a Service Grounding. The Service Profile is typically required in a
matchmaking process because it provides the information needed for an agent to
discover a service that meets its requirements.
The applications of the Semantic Web are endless, but we cannot take advantage of these
possibilities until we have a truly intelligent web of global knowledge. We must make our content
"semantic", or annotated with meaningful metadata and relationships, to transform dull and
dormant fixed-text into live and electric linked concepts. This transformation is making the web
much more dynamic, allowing not only content, but also data, to travel freely and seamlessly.
The spread of the Semantic Web and the technologies it brings to the table puts the analytical
powers of machines to work in the domains of content production, management, learning, support,
media, ecommerce, scientific research, knowledge management, and publishing in general.
Anywhere we express knowledge will become semantic. Content discovery and presentation on
Google and Bing is only the tip of the iceberg, although SEO and SERP-placement might be reason
enough. When it comes to the applications of intelligent content, semantic search and smart
devices, the emerging semantic web of content and data is a massive opportunity to tap into.
Careers, companies, and global innovation leaders will continue to be born on the Semantic Web.
One could say that the Semantic Web is the technological backbone of the evolving content order,
bringing together omnichannel content with semantics, structure, and shared standards.
Conclusion
The greatest advantage of the Semantic Web is that it abstracts away the tedious documents and
application layer to have a straight access to knowledge.
One of the key benefits of the semantic web is having large amounts of data, knowledge and
information made understandable and accessible to machines, especially artificially intelligent
bots, virtual assistants and agents.
Four important areas of research that need to be addressed to allow the Semantic Web to realize
its full potential have been described. Originally, hypertext research aimed to bring user
interaction with digitally stored information closer to the semantic relations implicit within the
information. Much of the more "hypertext-specific'' research, however, turned to system and
application-oriented topics, possibly through the lack of an available infrastructure to support
more explicit semantics. The introduction of the Web, as a highly distributed, but relatively
simple, hypermedia system has also influenced the character of hypermedia research. The
existence of XML and RDF, along with developments such as RDF Schema and
DAML+OIL, provides the impetus for realizing the Semantic Web. During these early stages of
its development, we want to ensure that the many hypertext lessons learned in the past will not
be lost, and that future research tackles the most urgent issues of the Semantic Web.
Practical - 13
Objective: - Study of Swarm Intelligence
Introduction
Swarm intelligence is an emerging field of biologically-inspired artificial intelligence
based on the behavioral models of social insects such as ants, bees, wasps, termites etc.
A swarm is a configuration of tens of thousands of individuals that have chosen, of
their own will, to converge on a common goal. Swarm Intelligence is the complex,
collective, self-organized, coordinated, flexible and robust behavior of a group
following simple rules.
SI systems are typically made up of a population of simple agents interacting locally with
one another and with their environment. The agents follow very simple rules, and although
there is no centralized control structure dictating how individual agents should behave,
local, and to a certain degree random, interactions between such agents lead to the
emergence of “intelligent” global behavior, unknown to the individual agents. Natural
examples of SI include ant colonies, bird flocking, animal herding, bacterial growth, and
fish schooling.
Ant-Based Routing
Swarm grammars
Swarm grammars are swarms of stochastic grammars that can be evolved to describe
complex properties such as found in art and architecture. These grammars interact as agents
behaving according to rules of swarm intelligence. Such behavior can also suggest deep
learning algorithms, in particular when mapping of such swarms to neural circuits is
considered.
Practical - 14
Objective: - Study of Artificial Neural Networks
Input Layer: This layer takes large volumes of input data in the form of text, numbers,
audio files, image pixels, etc. The input layer contains the artificial neurons (termed
units) that receive input from the outside world; this is the data from which the network
learns and on which recognition is performed.
Hidden Layers: Hidden layers are responsible for performing mathematical operations,
pattern analysis, feature extraction, etc. There can be multiple hidden layers in a neural
network. They are called hidden because they sit between the input and output layers. The
only job of a hidden layer is to transform the input into something meaningful that the
output layer/unit can use in some way.
Output Layer: This layer is responsible for generating the desired output. The output layer
contains units that respond to the information fed into the system and indicate whether the
task was learned.
A typical neural network has anything from a few dozen to hundreds, thousands, or even
millions of artificial neurons called units arranged in a series of layers, each of which
connects to the layers on either side. Some of them, known as input units, are designed to
receive various forms of information from the outside world that the network will attempt
to learn about, recognize, or otherwise process. Other units sit on the opposite side of the
network and signal how it responds to the information it's learned; those are known
as output units. In between the input units and output units are one or more layers of hidden
units, which, together, form the majority of the artificial brain.
Most neural networks are fully connected, which means each hidden unit and each output
unit is connected to every unit in the layers on either side. The connections between one unit
and another are represented by a number called a weight, which can be either positive (if
one unit excites another) or negative (if one unit suppresses or inhibits another). The higher
the weight, the more influence one unit has on another.
Working
An artificial neural network consists of several parameters and hyperparameters that drive
the output of a neural network model. Some of these parameters are weights, biases,
number of epochs, the learning rate, batch size, number of batches, etc. Each node in the
network has some weights assigned to it. A transfer function is used to calculate the
weighted sum of inputs and a bias is added.
Artificial Neural Networks can be best viewed as weighted directed graphs, where the
nodes are formed by the artificial neurons and the connection between the neuron outputs
and neuron inputs can be represented by the directed edges with weights. The Artificial
Neural Network receives the input signal from the external world in the form of a pattern
and image in the form of a vector. These inputs are then mathematically designated by the
notations x(n) for every n number of inputs.
Each input is then multiplied by its corresponding weight (these weights are the
details used by the artificial neural network to solve a certain problem). In general terms,
these weights typically represent the strength of the interconnection among the neurons
inside the artificial neural network. All the weighted inputs are summed inside the
computing unit (yet another artificial neuron).
If the weighted sum equates to zero, a bias is added to make the output non-zero, or
otherwise to scale up the system's response. The bias has a weight, and the input to it is
always equal to 1. Here the sum of weighted inputs can be in the range of 0 to positive
infinity. To keep the response within the limits of the desired value, a certain threshold
value is benchmarked, and the sum of weighted inputs is then passed through the
activation function.
The activation function, in general, is the set of transfer functions used to obtain the desired
output. There are various flavors of activation function, both linear and non-linear. Some
of the most commonly used activation functions are the binary (step), sigmoidal, and
hyperbolic tangent (tanh) sigmoidal activation functions.
Most artificial neural networks are fully interconnected, meaning that each hidden unit is
individually connected to every neuron in the layer before it and in the layer after it,
leaving nothing hanging in the air. This makes a complete learning process possible, and
learning is most effective when the weights inside the artificial neural network are updated
after each iteration.
Practical - 15
Objective: - Study of Genetic Algorithm
Introduction
A genetic algorithm is a search heuristic inspired by Charles Darwin’s theory of natural
evolution. Over many generations, natural populations evolve according to the principles of natural
selection, as stated by Charles Darwin in On the Origin of Species. Only the best-suited individuals in
a population are likely to survive and generate offspring, thus transmitting their biological heredity
to new generations.
Genetic algorithms are able to address complicated problems with many variables and a large
number of possible outcomes by simulating the evolutionary process of “survival of the fittest” to
reach a defined goal. They operate by generating many random answers to a problem, eliminating
the worst and cross-pollinating better answers. Repeating this elimination and regeneration process
gradually improves the quality of the answers to an optimal or near-optimal condition.
In computing terms, a genetic algorithm implements this model of computation by using arrays
of bits or characters (binary strings) to represent the chromosomes. Each string represents a
potential solution. The genetic algorithm then manipulates the most promising chromosomes
in search of improved solutions. A genetic algorithm operates through a cycle of the following stages:
1. Initial Population
The process begins with a set of individuals called a Population. Each individual is a
candidate solution to the problem you want to solve.
An individual is characterized by a set of parameters (variables) known as Genes. Genes are joined
into a string to form a Chromosome (solution).
In a genetic algorithm, the set of genes of an individual is represented using a string, in terms of
an alphabet. Usually, binary values are used (string of 1s and 0s). We say that we encode the genes
in a chromosome.
2. Fitness Function
The fitness function determines how fit an individual is (the ability of an individual to compete
with other individuals). It gives a fitness score to each individual. The probability that an
individual will be selected for reproduction is based on its fitness score.
3. Selection
The idea of the selection phase is to select the fittest individuals and let them pass their genes
on to the next generation.
Two pairs of individuals (parents) are selected based on their fitness scores. Individuals with high
fitness have more chances to be selected for reproduction.
4. Crossover
Crossover is the most significant phase in a genetic algorithm. For each pair of parents to be mated,
a crossover point is chosen at random from within the genes.
Offspring are created by exchanging the genes of parents among themselves until the crossover
point is reached.
5. Mutation
In certain new offspring formed, some of their genes can be subjected to a mutation with a low
random probability. This implies that some of the bits in the bit string can be flipped.
Mutation occurs to maintain diversity within the population and prevent premature convergence.
Termination
The algorithm terminates when the population has converged (it no longer produces offspring
that are significantly different from the previous generation). The genetic algorithm is then said
to have provided a set of solutions to our problem.
Algorithm
Initialize a random population of individuals
Compute the fitness of each individual
WHILE NOT finished DO
BEGIN /* produce new generation */
    FOR population size DO
    BEGIN /* reproductive cycle */
        Select two individuals from the old generation and
        recombine them to give two offspring
        Mutate the offspring by altering a random bit
        in their strings
        Insert the offspring into the new generation
    END
    IF population has converged THEN
        finished := TRUE
END
Practical – 16
Fuzzy Logic
Description: The term fuzzy refers to things which are not clear or are vague. In the real world
we often encounter situations in which we cannot determine whether a state is true or false;
here, fuzzy logic provides very valuable flexibility for reasoning. In this way, we can account
for the inaccuracies and uncertainties of any situation.
In a Boolean system, the truth value 1.0 represents absolute truth and 0.0 represents absolute
falsehood. Fuzzy logic, however, is not restricted to these absolute values: it also admits
intermediate values which are partially true and partially false.
ARCHITECTURE
Its Architecture contains four parts:
RULE BASE: It contains the set of rules and the IF-THEN conditions provided by the
experts to govern the decision-making system, on the basis of linguistic information. Recent
developments in fuzzy theory offer several effective methods for the design and tuning of
fuzzy controllers. Most of these developments reduce the number of fuzzy rules.
FUZZIFICATION: It is used to convert inputs, i.e. crisp numbers, into fuzzy sets. Crisp
inputs are basically the exact inputs measured by sensors and passed into the control system
for processing, such as temperature, pressure, RPM, etc.
INFERENCE ENGINE: It determines the matching degree of the current fuzzy input with
respect to each rule and decides which rules are to be fired according to the input field. Next,
the fired rules are combined to form the control actions.
DEFUZZIFICATION: It is used to convert the fuzzy sets obtained by inference engine into
a crisp value. There are several defuzzification methods available and the best suited one is
used with a specific expert system to reduce the error.
Membership function
Definition: A membership function is a graph that defines how each point in the input space
is mapped to a membership value between 0 and 1. The input space is often referred to as the
universe of discourse or universal set (U), which contains all the possible elements of concern
in each particular application.
There are largely three types of fuzzifiers:
Singleton fuzzifier
Gaussian fuzzifier
Trapezoidal or triangular fuzzifier
Applications
It is used in the aerospace field for altitude control of spacecraft and satellites.
It is used in automotive systems for speed control and traffic control.
It is used in decision-making support systems and for personnel evaluation in large
company businesses.
It has applications in the chemical industry for controlling pH, drying, and chemical
distillation processes.
Fuzzy logic is used in natural language processing and various intensive applications in
Artificial Intelligence.
Fuzzy logic is extensively used in modern control systems such as expert systems.
Fuzzy logic is used with neural networks as it mimics how a person would make decisions,
only much faster. This is done by aggregating data and turning it into more meaningful
information by forming partial truths as fuzzy sets.
Practical - 17
Objective: - Study of RuleML
Introduction
Rules have many uses, coming in a multitude of forms. RuleML is a unifying system of families
of languages for Web rules over Web documents and data. RuleML is specified syntactically
through schema languages (normatively, in Relax NG), originally developed for XML and
transferable to other formats such as JSON. Since Version 1.02, rather than assuming predefined
default semantics, RuleML allows partially constrained semantic profiles and fully-specified
semantics.
Algorithm:
Following is the backtracking algorithm for the Knight’s Tour problem.
a) Add one of the next moves to the solution vector and recursively check if this
move leads to a solution. (A knight can make a maximum of eight moves from a
given cell; we choose one of them.)
b) If the move chosen in the above step doesn’t lead to a solution, then remove
this move from the solution vector and try the other alternative moves.
c) If none of the alternatives works, then return false. (Returning false will
remove the previously added item in the recursion, and if false is returned by the
initial call of the recursion, then no solution exists.)
Program: -
#include <stdio.h>
#define N 8
int solveKTUtil(int x, int y, int movei, int sol[N][N], int xMove[], int yMove[]);
/* A utility function to check if (x, y) is a valid index on the N*N chessboard
and the cell is still unvisited */
int isSafe(int x, int y, int sol[N][N])
{
    return (x >= 0 && x < N && y >= 0 && y < N && sol[x][y] == -1);
}
/* A utility function to print the solution matrix sol[N][N] */
void printSolution(int sol[N][N])
{
    for (int x = 0; x < N; x++)
    {
        for (int y = 0; y < N; y++)
            printf(" %2d ", sol[x][y]);
        printf("\n");
    }
}
/* This function solves the Knight's Tour problem using backtracking. It mainly
uses solveKTUtil() to solve the problem. It returns false if no complete tour is
possible, otherwise it returns true and prints the tour. Please note that there
may be more than one solution; this function prints one of the feasible
solutions. */
int solveKT()
{
    int sol[N][N];
    /* Initialize the solution matrix: -1 marks an unvisited cell */
    for (int x = 0; x < N; x++)
        for (int y = 0; y < N; y++)
            sol[x][y] = -1;
    /* The eight possible moves of a knight */
    int xMove[8] = { 2, 1, -1, -2, -2, -1, 1, 2 };
    int yMove[8] = { 1, 2, 2, 1, -1, -2, -2, -1 };
    /* The knight starts from cell (0, 0) */
    sol[0][0] = 0;
    if (solveKTUtil(0, 0, 1, sol, xMove, yMove) == 0)
    {
        printf("Solution does not exist\n");
        return 0;
    }
    printSolution(sol);
    return 1;
}
/* A recursive utility function that tries all next moves of the knight from
cell (x, y) and backtracks whenever a move leads to a dead end */
int solveKTUtil(int x, int y, int movei, int sol[N][N], int xMove[], int yMove[])
{
    if (movei == N * N)
        return 1;
    for (int k = 0; k < 8; k++)
    {
        int next_x = x + xMove[k];
        int next_y = y + yMove[k];
        if (isSafe(next_x, next_y, sol))
        {
            sol[next_x][next_y] = movei;
            if (solveKTUtil(next_x, next_y, movei + 1, sol, xMove, yMove))
                return 1;
            sol[next_x][next_y] = -1; /* backtrack */
        }
    }
    return 0;
}
int main()
{
    solveKT();
    return 0;
}
Output: -