
CEC414: AI - Revision Questions

1) a) State and briefly explain the benefits of Intelligent Software Agents, and give two examples of
real-time agents.
(5 Marks)

b) Briefly outline the key components of an Intelligent BDI agent.

2) With an example of a software agent development environment, briefly explain, with the aid of
diagrams and/or code snippets, how you would design and demonstrate the operation of a software
agent system for learning and/or training support.

3) In an agent-supported learning system, trace the steps of a typical communication exchange
between two agents. Name and explain key communication protocols that might be involved.

4) As far as AI and neural networks are concerned, what do you understand by: a) Linear Separation; b)
Perceptrons?
c) State the use(s) of Linear Separation and Perceptrons.
5) Name three types of algorithms commonly used in AI systems. How would you apply them to
address an industry or day-to-day problem?
6) Identify which of the following apply to AI Agents:

a) have perceptors and effectors; b) may or may not have internal states; c) may be
humanlike or rational;

d) a simple reflex agent; e) has a finite memory; f) is goal-based; g) measures its utility (4 Marks)

8) State the basic concepts of ACL and KQML.

9) Distinguish between agent-oriented design/programming and object-oriented design/programming.

…………………..

ANN: Linear Separation Tutorials (see links attached to text)


What is linear separability of classes and how to determine it

Linear separability
In Euclidean geometry, linear separability is a geometric property of a pair of sets of points.
This is most easily visualized in two dimensions (the Euclidean plane) by thinking of one set of
points as being colored blue and the other set of points as being colored red. These two sets
are linearly separable if there exists at least one line in the plane with all of the blue points on
one side of the line and all the red points on the other side. This idea immediately generalizes to
higher-dimensional Euclidean spaces if line is replaced by hyperplane.
The problem of determining whether a pair of sets is linearly separable, and finding a separating
hyperplane if they are, arises in several areas. In statistics and machine learning, classifying
certain types of data is a problem for which good algorithms based on this concept exist.
Examples
Three non-collinear points in two classes ('+' and '-') are always linearly separable in two
dimensions. This is illustrated by the three examples in the following figure (the all '+' case is not
shown, but is similar to the all '-' case):

However, not all sets of four points, no three collinear, are linearly separable in two dimensions.
The following example would need two straight lines and thus is not linearly separable:

Notice that three points which are collinear and of the form "+ ⋅⋅⋅ — ⋅⋅⋅ +" are also not linearly
separable.
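A quick way to test such configurations in code is the perceptron convergence property: a perceptron trained on a finite point set eventually makes a mistake-free pass if and only if the two classes are linearly separable. The sketch below is a minimal plain-Python check (labels in {+1, −1}; the fixed epoch cap is an assumption used to detect the non-separable case) applied to the two situations described above:

```python
def perceptron_separable(points, labels, epochs=200):
    # Train a perceptron; if some full pass makes no mistakes,
    # a separating line exists (perceptron convergence theorem).
    w, b = [0.0] * len(points[0]), 0.0
    for _ in range(epochs):
        mistakes = 0
        for x, y in zip(points, labels):
            s = 1 if sum(wi * xi for wi, xi in zip(w, x)) + b > 0 else -1
            if s != y:
                mistakes += 1
                w = [wi + y * xi for wi, xi in zip(w, x)]
                b += y
        if mistakes == 0:
            return True
    return False  # no perfect pass within the cap: treat as not separable

# Three non-collinear points in two classes: always separable.
print(perceptron_separable([(0, 0), (1, 0), (0, 1)], [1, -1, -1]))   # True
# Four points in the XOR arrangement: would need two lines, so not separable.
print(perceptron_separable([(0, 0), (1, 1), (1, 0), (0, 1)], [1, 1, -1, -1]))  # False
```

For small point sets like these the epoch cap is generous; for large or nearly-separable data a linear-programming feasibility test is the more reliable approach.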

Linear separability of Boolean functions in n variables


A Boolean function in n variables can be thought of as an assignment of 0 or 1 to each vertex of
a Boolean hypercube in n dimensions. This gives a natural division of the vertices into two sets.
The Boolean function is said to be linearly separable provided these two sets of points are
linearly separable.

Number of linearly separable Boolean functions in each dimension [1] (sequence A000609 in the OEIS)

Number of variables    Linearly separable Boolean functions
2                      14
3                      104
4                      1882
5                      94572
6                      15028134
7                      8378070864
8                      17561539552946
9                      144130531453121108
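The smallest table entry can be verified by brute force: for n = 2 there are only 2^4 = 16 Boolean functions, and each can be tested for separability with a perceptron run over the four hypercube vertices. This is a sketch, not the method used to compute the table (the epoch cap is a practical assumption that is safe at this tiny scale):

```python
import itertools

def separable(points, labels, epochs=100):
    # Perceptron check with 0/1 labels: a mistake-free pass over the
    # labelled vertices exists iff the function is linearly separable.
    w, b = [0.0] * len(points[0]), 0.0
    for _ in range(epochs):
        mistakes = 0
        for x, y in zip(points, labels):
            pred = 1 if sum(wi * xi for wi, xi in zip(w, x)) + b > 0 else 0
            if pred != y:
                mistakes += 1
                w = [wi + (y - pred) * xi for wi, xi in zip(w, x)]
                b += y - pred
        if mistakes == 0:
            return True
    return False

vertices = list(itertools.product([0, 1], repeat=2))  # the 4 corners of the square
count = sum(separable(vertices, labels)
            for labels in itertools.product([0, 1], repeat=4))  # all 16 functions
print(count)  # 14 — only XOR and XNOR fail, matching the table's n = 2 entry
```

The same enumeration explodes quickly (2^512 functions at n = 9), which is why the larger table entries come from combinatorial results rather than exhaustive search.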
[Figure: H1 does not separate the sets. H2 does, but only with a small margin. H3 separates them with the
maximum margin.]
Classifying data is a common task in machine learning. Suppose some data points, each
belonging to one of two sets, are given and we wish to create a model that will decide which set
a new data point will be in. In the case of support vector machines, a data point is viewed as a p-
dimensional vector (a list of p numbers), and we want to know whether we can separate such
points with a (p − 1)-dimensional hyperplane. This is called a linear classifier. There are many
hyperplanes that might classify (separate) the data. One reasonable choice as the best
hyperplane is the one that represents the largest separation, or margin, between the two sets. So
we choose the hyperplane so that the distance from it to the nearest data point on each side is
maximized. If such a hyperplane exists, it is known as the maximum-margin hyperplane and the
linear classifier it defines is known as a maximum margin classifier.
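The margin of a candidate hyperplane can be computed directly as the smallest point-to-hyperplane distance. The sketch below uses hand-picked 2-D points and two hand-picked separating lines (all values are illustrative, not from the text) to show that both lines separate the data but one does so with a larger margin:

```python
import math

pos = [(2.0, 2.0), (3.0, 3.0)]   # one class (illustrative data)
neg = [(0.0, 0.0), (1.0, 0.0)]   # the other class

def separates(w, b):
    # All positive points on the positive side, all negatives on the negative side.
    return (all(w[0] * x + w[1] * y + b > 0 for x, y in pos)
            and all(w[0] * x + w[1] * y + b < 0 for x, y in neg))

def margin(w, b):
    # Distance from the line w·x + b = 0 to the nearest data point.
    norm = math.hypot(w[0], w[1])
    return min(abs(w[0] * x + w[1] * y + b) / norm for x, y in pos + neg)

for w, b in [((1.0, 1.0), -3.0), ((1.0, 1.0), -2.5)]:
    print(w, b, separates(w, b), round(margin(w, b), 3))
```

Both candidate lines classify every point correctly, but the second sits midway between the nearest points of the two classes and therefore has the larger margin; a maximum-margin classifier is the one that maximizes exactly this quantity over all separating hyperplanes.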

If the training data are linearly separable, we can select two parallel hyperplanes that separate
the data such that there are no points between them, and then try to maximize the distance between them.
………………………………………………………………………………………

Artificial Neural Networks/Hebbian Learning

Hebbian Learning
Hebbian learning is one of the oldest learning algorithms, and is based in large part on the
dynamics of biological systems. The Hebbian learning algorithm is performed locally, and doesn't take
into account the overall system input-output characteristic. This makes it a plausible theory for
biological learning methods, and also makes Hebbian learning processes ideal for VLSI hardware
implementations, where local signals are easier to obtain.
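That locality can be seen in the basic Hebbian update Δw_i = η · x_i · t: each weight changes using only its own input and the output, with no global error signal. A minimal sketch follows (bipolar ±1 patterns, a supervised target t, and the learning-rate value are all assumptions for illustration):

```python
eta = 1.0                      # learning rate (hypothetical value)
patterns = [([1, -1, 1], 1),   # (input pattern, desired output) — illustrative data
            ([-1, 1, 1], -1)]

w = [0.0, 0.0, 0.0]
for x, t in patterns:
    # Local Hebbian update: each weight uses only its own input x_i
    # and the output t — no global error term is propagated.
    w = [wi + eta * xi * t for wi, xi in zip(w, x)]

# The trained weights recall each stored pattern's sign correctly.
for x, t in patterns:
    y = sum(wi * xi for wi, xi in zip(w, x))
    print(t, 1 if y > 0 else -1)
```

After the two updates the weight vector is the signed sum of the patterns, which is exactly why each weight can be computed from purely local signals.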
…………………………….
