
1st Semester MSc – 5027 – 2020 Question Solutions

1
(a) Define artificial intelligence. List the criteria to measure the performance of different search strategies.

Artificial Intelligence (AI) refers to the simulation of human intelligence in machines that are programmed to think and learn like humans. It involves the development of computer systems or algorithms that can perform tasks that typically require human intelligence, such as speech recognition, visual perception, decision-making, and natural language processing.

When it comes to measuring the performance of different search strategies in AI, several criteria can be used. Here are some commonly used criteria:

Completeness: A search strategy is considered complete if it guarantees finding a solution if one exists. In other words, it will not get stuck in an infinite loop or fail to find a solution even when one exists.

Optimality: An optimal search strategy finds the best possible solution, typically
the one with the lowest cost or highest utility. It ensures that the solution found
is the most desirable among all possible solutions.

Time Complexity: Time complexity measures the computational efficiency of a search strategy. It quantifies the amount of time required by the strategy to find a solution or determine that no solution exists. Lower time complexity is generally preferred as it means faster execution.

Space Complexity: Space complexity measures the amount of memory required by a search strategy to execute. It quantifies the resources used by the strategy, such as the number of data structures and variables. Lower space complexity is desirable as it means less memory usage.

Optimality vs. Completeness Trade-off: There is often a trade-off between optimality and completeness. Some search strategies may sacrifice completeness to achieve optimality, while others may prioritize completeness at the expense of optimality. The choice depends on the specific problem domain and requirements.

Heuristics: Heuristics are problem-specific techniques or rules that guide the search process. The effectiveness of heuristics can be used as a measure of the quality of a search strategy. Good heuristics can significantly improve the efficiency and effectiveness of a search algorithm.

Scalability: Scalability refers to the ability of a search strategy to handle larger problem instances or larger search spaces efficiently. A scalable search strategy should be able to solve problems of increasing complexity without a significant increase in time or space requirements.

These criteria provide a basis for evaluating and comparing different search
strategies in the field of AI. The choice of which criteria to prioritize depends
on the specific problem being addressed and the available computational
resources.
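These criteria can also be checked empirically. The sketch below (an illustrative Python example with a hypothetical toy graph, not part of the original question) runs breadth-first search, which is complete and, with unit step costs, optimal, and counts nodes expanded and peak frontier size as rough proxies for time and space complexity:

```python
from collections import deque

def bfs(start, goal, neighbors):
    """Breadth-first search; returns (path, nodes_expanded, max_frontier).

    BFS is complete, and optimal when all step costs are equal; the two
    counters give rough empirical proxies for time and space complexity.
    """
    frontier = deque([[start]])
    visited = {start}
    nodes_expanded, max_frontier = 0, 1
    while frontier:
        max_frontier = max(max_frontier, len(frontier))
        path = frontier.popleft()
        nodes_expanded += 1
        if path[-1] == goal:
            return path, nodes_expanded, max_frontier
        for n in neighbors(path[-1]):
            if n not in visited:
                visited.add(n)
                frontier.append(path + [n])
    return None, nodes_expanded, max_frontier  # no solution exists

# Hypothetical toy graph used only for illustration.
graph = {"A": ["B", "C"], "B": ["D"], "C": ["D"], "D": []}
path, expanded, peak = bfs("A", "D", lambda s: graph[s])
# path is ["A", "B", "D"]; expanded and peak measure time and space use
```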

(b) Define an expert system. List the characteristic features of expert systems.

An expert system is a computer-based system that emulates the problem-solving abilities and knowledge of a human expert in a specific domain. It is designed to provide expert-level advice, analysis, and decision-making capabilities to users.

The characteristics and features of expert systems include:

Knowledge Base: Expert systems have a knowledge base, which contains the domain-specific information and expertise. This knowledge is typically represented in the form of rules, facts, heuristics, or other knowledge representation schemes.

Inference Engine: The inference engine is the reasoning component of an expert system. It applies logical and heuristic reasoning techniques to the knowledge base to draw conclusions, make inferences, and provide recommendations or solutions.

Knowledge Acquisition: Expert systems require a mechanism for acquiring knowledge from human experts or existing sources. This process involves extracting, organizing, and encoding the knowledge into a format suitable for the expert system's knowledge base.

Explanation Facility: Expert systems often include an explanation facility to provide users with explanations or justifications for the reasoning and recommendations made by the system. This helps users understand the system's decision-making process and builds trust.

Uncertainty Handling: Expert systems can handle uncertainty and incomplete information by incorporating techniques such as probabilistic reasoning, fuzzy logic, or rule-based certainty factors. These methods allow for reasoning and decision-making in situations where there is ambiguity or uncertainty.

User Interface: Expert systems typically have a user interface that allows
users to interact with the system, input data or problem descriptions, and
receive recommendations or solutions. The interface can vary from text-
based command-line interfaces to graphical interfaces.

Consistency and Reliability: Expert systems strive to provide consistent and reliable results by following predefined rules and logic. They aim to replicate the decision-making processes of human experts, providing accurate and dependable advice.

Knowledge Update and Maintenance: Expert systems should have mechanisms for updating and maintaining the knowledge base as new information becomes available or the domain evolves. This ensures that the system remains up-to-date and continues to provide relevant and accurate advice.

Limited Domain: Expert systems are typically designed for a specific domain or problem area. They excel in well-defined and narrow domains where human expertise can be captured and encoded effectively.

Decision Support: One of the primary purposes of expert systems is to support decision-making. They assist users by providing recommendations, analyzing complex problems, and suggesting solutions based on the knowledge and expertise embedded in the system.

These characteristics collectively define the nature and functionality of expert systems, enabling them to provide intelligent problem-solving capabilities in specific domains.
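The split between a knowledge base and an inference engine can be illustrated with a few lines of code. This is a minimal, hypothetical forward-chaining sketch (the rules and fact names are invented for the example), not a production expert-system shell:

```python
# Minimal forward-chaining inference sketch (hypothetical toy rules).
# Each rule is a pair: (set of premise facts, fact to conclude).
rules = [
    ({"has_fever", "has_cough"}, "suspect_flu"),
    ({"suspect_flu", "short_of_breath"}, "refer_to_doctor"),
]

def infer(facts, rules):
    """Apply rules repeatedly until no new fact can be derived."""
    facts = set(facts)
    changed = True
    while changed:
        changed = False
        for premises, conclusion in rules:
            if premises <= facts and conclusion not in facts:
                facts.add(conclusion)   # rule fires: add the conclusion
                changed = True
    return facts

derived = infer({"has_fever", "has_cough", "short_of_breath"}, rules)
# derived now also contains "suspect_flu" and "refer_to_doctor"
```

Here the `rules` list plays the role of the knowledge base, and `infer` plays the role of the inference engine that fires rules until no new facts follow.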
(c) Briefly describe the structure of neural networks. Point out the types of problems that can be solved by neural networks.

Neural networks are a type of computational model inspired by the structure and functioning of biological neural networks, such as the human brain. They consist of interconnected layers of artificial neurons, also known as nodes or units, that process and transmit information.

The structure of a neural network typically consists of three main types of layers:

Input Layer: This is the entry point of the network where the input data is
fed into the network. Each input neuron represents a feature or attribute
of the input data.

Hidden Layers: These layers are located between the input and output
layers. They are responsible for processing and transforming the input
data through a series of weighted connections and activation functions.
Neural networks can have multiple hidden layers, and the number of
neurons in each layer can vary.

Output Layer: This layer produces the final output or prediction of the
neural network. The number of neurons in the output layer depends on
the specific problem being solved. For instance, in a binary classification
problem, there might be a single output neuron representing the
probability of belonging to one of the two classes. In a multiclass
classification problem, there could be multiple output neurons, each
representing the probability of belonging to a different class.
The connections between neurons in a neural network are represented by
weights. Each connection has a weight associated with it, which
determines the strength of the connection. During training, these weights
are adjusted to optimize the network's performance.
Neural networks are known for their ability to solve a wide range of
problems, including:

Pattern Recognition and Classification: Neural networks excel at tasks such as image and speech recognition, handwriting recognition, and object detection. They can learn to identify and classify patterns in complex data.

Regression: Neural networks can be used for regression tasks, where the
goal is to predict a continuous numerical value based on input data. For
example, they can be used for predicting house prices based on features
like location, size, and number of rooms.

Natural Language Processing: Neural networks have been used for various natural language processing tasks, including language translation, sentiment analysis, text generation, and question answering.

Time Series Analysis: Neural networks can effectively model and predict time-dependent data, making them suitable for tasks such as stock market prediction, weather forecasting, and anomaly detection in sensor data.

Recommender Systems: Neural networks can be used to build personalized recommendation systems that suggest products, movies, or content based on a user's preferences and behavior.

Control Systems: Neural networks can be employed in control systems to learn optimal control policies for autonomous vehicles, robotics, and industrial processes.

These are just a few examples of the broad range of problems that neural
networks can tackle. Their versatility, ability to learn from data, and
capacity to handle complex relationships make them a powerful tool in
the field of artificial intelligence.
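As a concrete illustration of the layered structure described above, here is a minimal forward pass through a 2-3-1 network in plain Python. The weights are arbitrary illustrative values, not trained ones:

```python
import math

# A minimal forward pass through a 2-3-1 network (input, hidden, output).
def sigmoid(z):
    return 1.0 / (1.0 + math.exp(-z))

def layer(inputs, weights, biases):
    """One fully connected layer: weighted sum plus sigmoid activation."""
    return [sigmoid(sum(w * x for w, x in zip(ws, inputs)) + b)
            for ws, b in zip(weights, biases)]

x = [0.5, -1.0]                                      # input layer (2 features)
hidden = layer(x, [[0.1, 0.4], [-0.3, 0.2], [0.6, -0.1]], [0.0, 0.1, -0.2])
output = layer(hidden, [[0.5, -0.4, 0.3]], [0.05])   # single output neuron
# output[0] is a value in (0, 1), e.g. a class probability
```

During training the weight lists would be adjusted (for example by backpropagation) to reduce the error of `output` on labeled data.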

(a) Define heuristic search. Draw the state space graph of hill climbing search.

Heuristic search is a problem-solving technique used in artificial intelligence (AI) and computer science to find solutions by efficiently exploring a search space. It employs heuristic functions that estimate the desirability or quality of states in order to guide the search towards promising paths.
A state space graph is a visual representation of the possible states
and transitions in a problem-solving domain. For the purpose of
illustrating a hill climbing search, let's consider a simple example
of finding the highest peak in a landscape. The landscape can be
represented by a two-dimensional grid, where each cell represents
a state, and the elevation of that cell represents the value of the
state.
Here's an example of a state space graph for hill climbing search. The original figure is not reproduced here; the grid below is an illustrative stand-in, with states numbered 1-9 and each state's elevation shown in parentheses:

1(4)  2(7)  3(5)
4(3)  5(8)  6(6)
7(2)  8(5)  9(4)

In this example, the number in parentheses represents the elevation of a state. The goal is to find the highest peak (highest elevation) in this landscape using hill climbing search.
Hill climbing is a local search algorithm that starts from an initial state and
iteratively moves to the neighboring state with the highest heuristic value. The
algorithm terminates when it reaches a state where no neighbor has a higher
heuristic value.
In the state space graph, the algorithm would start from an initial state (e.g.,
state 1) and move to the neighboring state with the highest elevation. If there are
multiple options with the same highest elevation, the choice can be arbitrary.
For instance, if we start from state 1, the algorithm might move to state 2 since
it has the highest elevation among its neighbors (states 2 and 4). The process
continues until a local maximum is reached, where no neighboring state has a
higher elevation.
Note that hill climbing is a local search algorithm and can get stuck in
suboptimal solutions, known as local optima. It does not guarantee finding the
global maximum in a search space.
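The procedure just described can be sketched in code. The elevation values below are made up for the example (a 3x3 grid of states 1-9, with up/down/left/right neighbors):

```python
# Hill climbing on a 3x3 elevation grid (illustrative values).
elevation = {1: 4, 2: 7, 3: 5, 4: 3, 5: 8, 6: 6, 7: 2, 8: 5, 9: 4}
neighbors = {1: [2, 4], 2: [1, 3, 5], 3: [2, 6], 4: [1, 5, 7],
             5: [2, 4, 6, 8], 6: [3, 5, 9], 7: [4, 8], 8: [5, 7, 9],
             9: [6, 8]}

def hill_climb(state):
    """Greedily move to the best neighbor until no neighbor is higher."""
    while True:
        best = max(neighbors[state], key=lambda s: elevation[s])
        if elevation[best] <= elevation[state]:
            return state          # local (possibly global) maximum
        state = best

peak = hill_climb(1)   # climbs 1 -> 2 -> 5 and stops at elevation 8
```

Starting from state 1 the sketch climbs 1 -> 2 -> 5 and stops at state 5; with other elevation values it could just as easily stop at a local maximum.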

(b) Consider the block world problem with four blocks A, B,C,D with
the start and goal states given below-

Start    Goal
A        D
B        C
C        B
D        A

Block World

Assume the following two operations: pick up a block and put it on the table; pick up a block and put it on another block. Solve the problem using the hill climbing algorithm and a suitable heuristic function.

The block world problem is a classic AI problem that involves moving blocks from one
location to another. In this problem, we have four blocks, A, B, C, and D, and we want to
move them from the start state to the goal state.
The start state is:
A
B
C
D

The goal state is:

D
C
B
A
We can use a hill climbing algorithm to solve this problem. A hill climbing algorithm is a
simple algorithm that iteratively moves from one state to another, always choosing the state
that has the highest heuristic value. The heuristic value is a measure of how close the state is
to the goal state.
In this case, we can use the following heuristic function:

h(n) = the number of blocks in state n that sit on their correct support in the goal state g

where:
h(n) is the heuristic value of state n
n is the current state
g is the goal state
The hill climbing algorithm will start with the start state and then iteratively move to the state
that has the highest heuristic value. The algorithm will continue until it reaches the goal state.
The following is the hill climbing algorithm for solving the block world problem:

1. Start with the start state.
2. Generate all possible next states.
3. Calculate the heuristic value of each next state.
4. Choose the next state with the highest heuristic value.
5. Repeat steps 2-4 until the goal state is reached.
The following is the solution to the block world problem using the hill climbing algorithm. Each stack is written top to bottom; blocks on the table form their own one-block stacks.

Start state: one stack (A, B, C, D), with A on top.

Iteration 1: Move block A to the table.
Stacks: (B, C, D) and (A).

Iteration 2: Move block B onto block A.
Stacks: (C, D) and (B, A).

Iteration 3: Move block C onto block B.
Stacks: (D) and (C, B, A).

Iteration 4: Move block D onto block C.
Stacks: (D, C, B, A), which is the goal state (D on top, A at the bottom).

The hill climbing algorithm found the solution to the block world problem in four iterations.
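A hedged sketch of this search in Python (the state representation and helper names are my own choices; the heuristic counts blocks resting on their correct support, so the goal scores 4):

```python
# Hill climbing sketch for the 4-block problem (hypothetical helper names).
# A state is a tuple of stacks; each stack is a tuple listed bottom-to-top.
GOAL = (("A", "B", "C", "D"),)          # A at the bottom, D on top

def h(state):
    """Heuristic: +1 for every block resting on its correct support."""
    goal_below = {"A": "table", "B": "A", "C": "B", "D": "C"}
    score = 0
    for stack in state:
        for i, block in enumerate(stack):
            below = stack[i - 1] if i > 0 else "table"
            if goal_below[block] == below:
                score += 1
    return score

def moves(state):
    """Successors: move any top block onto the table or another stack."""
    for i, src in enumerate(state):
        if not src:
            continue
        block, rest = src[-1], src[:-1]
        others = [s for j, s in enumerate(state) if j != i]
        # move the top block onto the table
        yield tuple(sorted(s for s in others + [rest, (block,)] if s))
        # move the top block onto the top of another stack
        for j, dst in enumerate(others):
            remaining = [s for k, s in enumerate(others) if k != j]
            yield tuple(sorted(s for s in remaining + [rest, dst + (block,)] if s))

def hill_climb(state):
    while True:
        best = max(moves(state), key=h, default=state)
        if h(best) <= h(state):
            return state
        state = best

start = (("D", "C", "B", "A"),)          # D at the bottom, A on top
solved = hill_climb(start)               # reaches (("A", "B", "C", "D"),)
```

With this heuristic, every greedy step strictly improves the score, so plain hill climbing happens to reach the goal here; on other block configurations it could still stall at a local maximum.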

(c) Write the drawbacks of hill climbing search.

Drawbacks of Hill Climbing Search:

Local Optima: Hill climbing search can get stuck in a local optimum, which is a state that is not the global optimum but is better than all of its neighboring states. This can happen when the search space is rugged and there are many local optima.

Plateaus: Hill climbing search can also get stuck on a plateau, which is a region of the search
space where all of the states have the same heuristic value. This can happen when the
heuristic function is not able to distinguish between different states.

Slow Convergence: Hill climbing search can be slow to converge to the goal state, especially if the search space is large and the heuristic function is not very informative.

Sensitive to Initial State: Hill climbing search is sensitive to the initial state. If the initial state is far from the goal state, the search may take a long time to find the solution.

Additional Drawbacks:

No guarantee of finding the optimal solution: Hill climbing search is a greedy algorithm, which means that it always chooses the best option at each step. This can lead to the algorithm getting stuck in a local optimum and not finding the optimal solution.

Can be computationally expensive: Hill climbing search can be computationally expensive, especially for large search spaces. This is because the algorithm needs to evaluate the heuristic value of each neighbor at each step.

Overall, hill climbing search is a simple and easy-to-implement algorithm, but it has several
drawbacks that can make it less effective for certain problems.

How to overcome the drawbacks of hill climbing search:

Use a more informed heuristic function: A more informed heuristic function can help the search algorithm avoid local optima and plateaus.

Use a randomized search: A randomized search algorithm can help the search algorithm escape from local optima and plateaus.

Use a hybrid search algorithm: A hybrid search algorithm combines hill climbing search with another search algorithm, such as simulated annealing or genetic algorithms. This can help the search algorithm overcome the drawbacks of hill climbing search.
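Random restarts are the simplest of these remedies. A small sketch (with an invented one-dimensional objective that has one local maximum at x = -20 and the global maximum at x = 30):

```python
import random

# Random-restart hill climbing: rerun hill climbing from several random
# starting states and keep the best local optimum found. This is a common
# way to escape local optima; the objective below is illustrative.
def hill_climb(x, f, step=1):
    """Climb f over the integers by +/- step until no neighbor improves."""
    while True:
        best = max((x - step, x + step), key=f)
        if f(best) <= f(x):
            return x
        x = best

def random_restart(f, restarts=20, lo=-50, hi=50):
    starts = (random.randint(lo, hi) for _ in range(restarts))
    return max((hill_climb(s, f) for s in starts), key=f)

# Toy bimodal objective: local maximum f(-20) = 0, global maximum f(30) = 100.
f = lambda x: -min((x + 20) ** 2, (x - 30) ** 2 - 100)
found = random_restart(f)   # either -20 or 30; almost always 30
```

A single run of `hill_climb` that starts left of the valley gets stuck at -20; restarting from many random points makes finding the global peak at 30 very likely.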

(a) Compare (i) a fuzzy set and a crisp set, and (ii) a fuzzy relation and a crisp relation.
(b) Consider the following two fuzzy sets A and B defined over a universe of discourse [0, 5] of real numbers with their membership functions:

μ_A(x) = x / (1 + x)   and   μ_B(x) = 2^(-x)

Determine the membership functions of the following and draw them graphically:

i. A, B
ii. A ∪ B
iii. A ∩ B
iv. (A ∪ B)^c
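Using the standard max-min definitions of fuzzy union, intersection, and complement, the requested membership functions follow directly (the graphs are omitted here):

```latex
\mu_A(x) = \frac{x}{1+x}, \qquad \mu_B(x) = 2^{-x}, \qquad x \in [0,5]

\mu_{A \cup B}(x) = \max\!\left(\frac{x}{1+x},\; 2^{-x}\right)

\mu_{A \cap B}(x) = \min\!\left(\frac{x}{1+x},\; 2^{-x}\right)

\mu_{(A \cup B)^c}(x) = 1 - \max\!\left(\frac{x}{1+x},\; 2^{-x}\right)
```

Graphically, μ_A rises from 0 toward 1 while μ_B decays from 1 toward 0, so the union follows whichever curve is higher and the intersection whichever is lower.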

7
A) Briefly describe the different modes of communication in ants.

Ants communicate through chemical signals called pheromones, tactile interactions like
antennation, vibrational signals, and, in some cases, acoustic signals produced through
stridulation. These modes of communication allow ants to coordinate activities, share
information about food sources and nest conditions, and convey messages about danger
or alarm.

B) Write the decisions that need to be taken by an ant m in every iteration.

In each iteration, an ant m needs to make several decisions to navigate and forage
effectively. Here are some of the key decisions that an ant m might need to make:
Direction Selection: The ant m needs to decide which direction to move in search of food.
It can assess environmental cues, such as the presence of pheromone trails or visual
landmarks, to determine the most promising direction to explore.

Path Evaluation: As the ant m moves along its chosen path, it needs to evaluate the quality of the trail or substrate. It can assess the strength of pheromone signals left by other ants or evaluate the texture or scent of the surface to determine if it is a viable path to follow.

Food Source Assessment: When the ant m encounters a potential food source, it needs to decide whether the food is suitable for consumption and worth collecting. It can evaluate factors such as odor, taste, and nutritional content to determine if the food meets the colony's needs.

Risk Assessment: The ant m needs to assess potential risks and dangers in its
surroundings. It must decide whether to continue foraging in the current location or move
to a safer area. It can evaluate the presence of predators, competitors, or other threats and
make decisions accordingly.

Interaction with Nestmates: When encountering other ants, the ant m needs to decide how
to interact with them. It may need to communicate information about food sources or
share resources. The ant m can assess the behavior and signals of other ants to determine
the appropriate response.

Trail Marking: If the ant m discovers a new food source, it needs to decide whether to
mark a trail to guide other nestmates to the location. It can assess the abundance and
quality of the food source and make a decision based on the needs of the colony.
These decisions are made based on a combination of sensory input, chemical signals, and
previous experience. By making informed decisions at each iteration, the ant m can
navigate the environment effectively and contribute to the success of the colony's
foraging activities.
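In ant colony optimization, the direction-selection decision above is usually modeled probabilistically: the ant picks the next node with probability proportional to pheromone strength times heuristic desirability. A minimal sketch with illustrative numbers:

```python
import random

# ACO-style direction choice: the probability of moving to node j is
# proportional to tau[j]**alpha * eta[j]**beta, where tau is the pheromone
# level and eta the heuristic desirability (e.g. 1/distance). All values
# below are illustrative.
def choose_next(candidates, tau, eta, alpha=1.0, beta=2.0):
    weights = [tau[j] ** alpha * eta[j] ** beta for j in candidates]
    total = sum(weights)
    r = random.uniform(0, total)          # roulette-wheel selection
    for j, w in zip(candidates, weights):
        r -= w
        if r <= 0:
            return j
    return candidates[-1]                 # numerical edge case

tau = {"B": 0.8, "C": 0.2, "D": 0.5}      # pheromone toward each node
eta = {"B": 1.0, "C": 2.0, "D": 0.5}      # heuristic desirability
nxt = choose_next(["B", "C", "D"], tau, eta)
```

The exponents alpha and beta trade off trust in the pheromone trail against the ant's own heuristic judgment, mirroring the direction-selection and path-evaluation decisions described above.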

C) Explain the equation of a particle's velocity for the particle swarm algorithm.

The velocity update equation in PSO is as follows:

v(t+1) = w * v(t) + c1 * rand1 * (pbest - x(t)) + c2 * rand2 * (gbest - x(t))

In this equation:
v(t+1) represents the updated velocity of the particle in the next iteration.
v(t) is the current velocity of the particle.
w is the inertia weight that controls the impact of the particle's previous velocity on the
new velocity. It balances exploration and exploitation.
c1 and c2 are the acceleration coefficients that determine the influence of the particle's
personal best (pbest) and the global best (gbest) positions on its velocity, respectively.
rand1 and rand2 are random values between 0 and 1.
pbest is the best position achieved by the particle so far.
gbest is the best position found by any particle in the swarm so far.
x(t) is the current position of the particle.

The equation consists of three components:

Inertia term (w * v(t)): This term allows the particle to retain a portion of its previous velocity, ensuring its movement has some momentum.

Cognitive term (c1 * rand1 * (pbest - x(t))): This term directs the particle towards its
personal best position, encouraging individual exploration.

Social term (c2 * rand2 * (gbest - x(t))): This term attracts the particle towards the global
best position found by any particle in the swarm, promoting collective exploration.
By updating the velocity using these components, particles in the PSO algorithm can
navigate the search space, explore promising regions, and converge towards optimal
solutions over time. The interplay between inertia, cognitive, and social terms allows the
particles to balance between exploiting their personal best and exploring the global best
found by the swarm.
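A direct transcription of the update equation into Python (illustrative parameter values; one particle in one dimension):

```python
import random

# One PSO velocity/position update for a single particle in 1-D.
# w, c1, c2 are typical illustrative choices, not prescribed values.
def pso_update(x, v, pbest, gbest, w=0.7, c1=1.5, c2=1.5):
    r1, r2 = random.random(), random.random()
    v_new = (w * v                      # inertia term
             + c1 * r1 * (pbest - x)    # cognitive term
             + c2 * r2 * (gbest - x))   # social term
    x_new = x + v_new                   # position follows the new velocity
    return x_new, v_new

x, v = 2.0, 0.1
x, v = pso_update(x, v, pbest=1.5, gbest=0.0)
```

Each call pulls the particle toward a blend of its own best position and the swarm's best, while the inertia term keeps some of its previous momentum.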

D) Write the advantages and disadvantages of swarm intelligence.

Advantages of Swarm Intelligence:

*Adaptability to dynamic environments
*Robustness against individual failures
*Scalability to large group sizes
*Balance between exploration and exploitation

Disadvantages of Swarm Intelligence:

*Lack of centralized control
*Limited information sharing
*Sensitivity to initial conditions
*Potentially slower convergence speed

a) Define genetic programming. Draw the working flow diagram of a neuro genetic
hybrid system.

Genetic programming (GP) is an evolutionary computation technique in which the candidate solutions are computer programs, typically represented as trees. A population of programs is evolved by applying selection, crossover, and mutation to the trees, guided by a fitness function, until a sufficiently good program is found.

(B) Neuro Genetic Hybrid Systems:

A neuro genetic hybrid system combines a neural network, which is capable of learning various tasks from examples, classifying objects, and establishing relations between them, with a genetic algorithm, which serves as an important search and optimization technique. Genetic algorithms can be used to improve the performance of neural networks, for example by deciding the connection weights of the inputs. They can also be used for topology selection and for training networks.
Working Flow:

* The GA repeatedly modifies a population of individual solutions. It uses three main types of rules at each step to create the next generation from the current population:
  1. Selection: select the individuals, called parents, that contribute to the population at the next generation.
  2. Crossover: combine two parents to form children for the next generation.
  3. Mutation: apply random changes to individual parents in order to form children.
* The GA then sends the new child generation to the ANN model as a new input parameter.
* Finally, the fitness is calculated by the developed ANN model.
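The working flow above can be sketched end to end: a GA evolves the weight vector of a tiny neural network, and the ANN's error on a task supplies the fitness. Everything below (the 2-2-1 topology, XOR as the task, the GA settings) is an illustrative assumption, not a prescribed design:

```python
import math
import random

# Sketch of a neuro-genetic hybrid: a GA evolves the 9 weights of a tiny
# 2-2-1 neural network to fit XOR. All settings are illustrative.
def forward(w, x):
    """2-2-1 network; w is a flat list of 9 weights (6 hidden + 3 output)."""
    h = [math.tanh(w[0] * x[0] + w[1] * x[1] + w[2]),
         math.tanh(w[3] * x[0] + w[4] * x[1] + w[5])]
    return math.tanh(w[6] * h[0] + w[7] * h[1] + w[8])

DATA = [((0, 0), 0), ((0, 1), 1), ((1, 0), 1), ((1, 1), 0)]

def fitness(w):
    """Higher is better: negative squared error over the XOR table."""
    return -sum((forward(w, x) - y) ** 2 for x, y in DATA)

def evolve(pop_size=40, gens=60):
    pop = [[random.uniform(-2, 2) for _ in range(9)] for _ in range(pop_size)]
    for _ in range(gens):
        pop.sort(key=fitness, reverse=True)
        parents = pop[: pop_size // 2]             # selection (truncation)
        children = []
        while len(children) < pop_size - len(parents):
            a, b = random.sample(parents, 2)
            cut = random.randrange(1, 9)           # one-point crossover
            child = a[:cut] + b[cut:]
            i = random.randrange(9)                # point mutation
            child[i] += random.gauss(0, 0.3)
            children.append(child)
        pop = parents + children                   # next generation
    return max(pop, key=fitness)

best_weights = evolve()
```

Each generation follows the flow exactly: selection, crossover, and mutation produce the child generation, and the ANN model (via `fitness`) scores it.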

a) Name and describe the main features of genetic algorithms.

The main features of a genetic algorithm (GA) are as follows:

Population: The GA operates on a population of candidate solutions, called individuals or chromosomes. Each individual represents a potential solution to the problem.

Fitness Evaluation: Each individual in the population is evaluated using a fitness function that
quantifies how well it solves the problem. The fitness function determines the individuals'
quality and guides the search for better solutions.

Selection: Individuals with higher fitness values have a higher probability of being selected
for reproduction and passing their genetic material to the next generation. This selection
process mimics the principle of "survival of the fittest."

Reproduction: Selected individuals undergo genetic operations, such as crossover and mutation, to create offspring. Crossover involves combining genetic material from two parents, while mutation introduces random changes to the offspring's genetic makeup.

Iteration: The selection and reproduction process is repeated iteratively, creating new
generations of individuals. Over time, the population evolves towards better solutions as fitter
individuals are more likely to be selected and pass on their genetic material.

Termination Condition: The GA continues until a termination condition is met, such as reaching a maximum number of generations, achieving a desired fitness level, or running out of computational resources.

Genetic algorithms are known for their ability to explore large solution spaces and find near-
optimal solutions in various problem domains. They are particularly effective when dealing
with complex, multi-dimensional, and non-linear optimization problems where traditional
optimization methods may struggle.
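These features can all be seen in a minimal GA for the toy "one-max" problem (evolving a bit string of all 1s; the parameters are illustrative):

```python
import random

# Minimal GA on "one-max": evolve a bit string of all 1s. Demonstrates
# population, fitness evaluation, selection, crossover, mutation,
# iteration, and a termination condition.
N, POP, GENS = 20, 30, 100

def fitness(bits):
    return sum(bits)                  # number of 1s; the maximum is N

def ga():
    pop = [[random.randint(0, 1) for _ in range(N)] for _ in range(POP)]
    for _ in range(GENS):                          # iteration
        pop.sort(key=fitness, reverse=True)
        if fitness(pop[0]) == N:                   # termination condition
            break
        parents = pop[: POP // 2]                  # selection (truncation)
        children = []
        while len(parents) + len(children) < POP:
            a, b = random.sample(parents, 2)
            cut = random.randrange(1, N)
            child = a[:cut] + b[cut:]              # one-point crossover
            i = random.randrange(N)
            child[i] ^= 1                          # bit-flip mutation
            children.append(child)
        pop = parents + children
    return max(pop, key=fitness)

champion = ga()
```

Because the top half of each generation is carried over unchanged, the best fitness never decreases, and the run typically terminates early with a perfect string.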

b) Define uniform crossover.

Uniform crossover is a genetic operator used in genetic algorithms and genetic programming. It involves randomly selecting genes from two parent individuals and swapping them to create offspring individuals. In uniform crossover, each gene has an equal probability of being selected for exchange. This operator allows for the mixing of genetic material between the parents, promoting exploration of different genetic combinations and increasing the diversity of the offspring population.
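A sketch of the operator (the gene values and string length are illustrative):

```python
import random

# Uniform crossover: each gene position independently takes its value
# from one parent or the other with probability 0.5.
def uniform_crossover(parent_a, parent_b):
    child_a, child_b = [], []
    for ga, gb in zip(parent_a, parent_b):
        if random.random() < 0.5:     # keep the genes in place
            child_a.append(ga)
            child_b.append(gb)
        else:                         # swap the genes
            child_a.append(gb)
            child_b.append(ga)
    return child_a, child_b

a, b = [1, 1, 1, 1, 1, 1], [0, 0, 0, 0, 0, 0]
c1, c2 = uniform_crossover(a, b)
```

With complementary parents as here, the two children always partition the parents' genes between them, which is why uniform crossover mixes genetic material more aggressively than one-point crossover.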

4.
a) Write notes on fuzzy arithmetic operations.
Arithmetic Operations On Fuzzy Numbers

Definition:

Let A and B denote fuzzy numbers and let * denote any of the four basic arithmetic operations. We define a fuzzy set A * B on R through its α-cuts (writing αA for the α-cut of A):

α(A * B) = αA * αB for every α ∈ (0, 1]

so that

A * B = ∪ over α ∈ [0, 1] of α(A * B).

Since α(A * B) is a closed interval for each α ∈ [0, 1] and A, B are fuzzy numbers, A * B is also a fuzzy number.

Definition:

Equivalently, let * denote any of the four basic arithmetic operations and let A, B denote fuzzy numbers. Then we define a fuzzy set A * B on R by the equation

(A * B)(z) = sup over z = x * y of min[A(x), B(y)], for all z ∈ R.
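As a concrete illustration, interval arithmetic on α-cuts implements these definitions. The sketch below uses made-up triangular fuzzy numbers; for addition, the α-cut of A + B is the endpoint-wise sum of the α-cuts:

```python
# Fuzzy addition via alpha-cuts, for triangular fuzzy numbers (a, b, c)
# with peak b and support [a, c]. The numbers are illustrative.
def alpha_cut(tri, alpha):
    """Closed-interval alpha-cut of a triangular fuzzy number."""
    a, b, c = tri
    return (a + alpha * (b - a), c - alpha * (c - b))

def add_cut(A, B, alpha):
    """Alpha-cut of A + B: add the interval endpoints."""
    (a1, a2), (b1, b2) = alpha_cut(A, alpha), alpha_cut(B, alpha)
    return (a1 + b1, a2 + b2)

A = (1, 2, 3)        # "about 2"
B = (2, 4, 6)        # "about 4"
print(add_cut(A, B, 1.0))   # the peak: (6.0, 6.0)
print(add_cut(A, B, 0.0))   # the support: (3.0, 9.0)
```

Sweeping α from 0 to 1 and taking the union of the resulting intervals reconstructs the full membership function of A + B, exactly as the first definition prescribes.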
