Module I
Introduction
Definition of Computing, Conventional Computing vs. Intelligent Computing,
Necessity of Intelligent Computing, Current Trends in Intelligent Computing
AI Concepts
Introduction to AI, AI Problems and Solution Approaches,
Fundamentals of Problem Solving using Search and Heuristics,
Overview of Knowledge-Base Creation, Intelligent Agents, Classification of AI
Computing
Computing is any goal-oriented activity requiring, benefiting from,
or creating computing machinery.
It includes the study and experimentation of algorithmic processes,
and development of both hardware and software.
Computing has scientific, engineering, mathematical, technological and social
aspects.
Major computing disciplines include computer engineering, computer science,
cybersecurity, data science, information systems, information technology, digital
art and software engineering.
Computing
• Computing is the process of using computers and computer technology to
solve problems, perform tasks, and create new products and services.
• It encompasses a broad range of activities, including software development,
hardware design, data analysis, networking, and artificial intelligence.
• At its core, computing involves the use of algorithms and programming
languages to create instructions that computers can follow.
• These instructions are used to perform a wide range of tasks, from simple
calculations to complex data analysis and machine learning.
Computational Intelligence
• It is the study of the design of intelligent agents.
• An agent is something that acts in an environment; it does something.
• Agents include worms, dogs, thermostats, airplanes, humans, organizations, and societies.
• An intelligent agent is a system that acts intelligently:
Computational Intelligence
• its actions are appropriate for its goals and circumstances
• it is flexible to changing environments and goals
• it learns from experience
• it makes appropriate choices given perceptual limitations and finite
computation
Artificial or Computational Intelligence?
• Computational Intelligence is a subset of Artificial Intelligence
• CI is based on fuzzy logic, probabilistic mechanisms, natural swarm intelligence, neural networks, and evolutionary computing approaches
• The field is often called Artificial Intelligence.
• Scientific goal: to understand the principles that make intelligent
behavior possible, in natural or artificial systems.
• Engineering goal: to specify methods for the design of useful,
intelligent artifacts.
Generic Intelligence (Machine Intelligence)
Generic intelligence represents the ability of computers to solve complex problems across a wide range of areas, including image recognition, natural language processing, speech recognition, and target detection and tracking.
Biological intelligence can be transplanted to a computer on the following four levels:
Data Intelligence: math expression, calculation, storage
Perceptual Intelligence: vision, hearing, touch, taste
For example, automated driving uses light detection and ranging (LiDAR), other sensing devices, and AI algorithms to compute driving information.
Generic intelligence (Machine Intelligence)
Cognitive (Machine) Intelligence: learning, thinking, decision-making, reasoning
It denotes machines with human-like logical thinking and cognitive abilities, especially the ability to actively learn, think, understand, summarize, interpret, plan, and apply knowledge.
Autonomous Intelligence:
Autonomous intelligence implies that the machine can act like a human.
Why do we need highly efficient and intelligent computing devices?
Because of the rising demand from:
• the Internet of Things,
• big data,
• and artificial intelligence
Features of Intelligent Computing
• self-learning and evaluability
• high computing capability
• high energy efficiency in architecture
• security and reliability
• automation and precision in operation
• collaboration and ubiquity in serviceability
Self-learning and evaluability
• Self-learning refers to gaining experience by mining rules and knowledge from massive amounts of data and optimizing the calculation paths with the usable results.
• Evaluability represents a heuristic self-optimization ability that simulates the evolutionary process of organisms in nature,
• where machines learn from the environment and subsequently make self-adjustments to adapt to it.
High computing capability and high energy efficiency.
• Aiming to exceed the traditional von Neumann architecture, intelligent computing is evolving toward new computing architectures based on:
• processing-in-memory,
• heterogeneous integration,
• and wide-area collaboration.
• High computing power refers to the computing capability that meets
the needs of an intelligent society
• high energy efficiency is to maximize computing efficiency and reduce
energy consumption as much as possible to ensure efficient processing
of big data.
Security and reliability.
• High security covers network security, storage security, and content security.
• High trust refers to the trust of identity, data, computing process, and
computing environment through trusted hardware, operating system,
software, network, and private computing.
Automation and precision.
• Automation covers automatic resource management and scheduling, automatic service creation and provision, and automatic management of the task life cycle, availability, and service of intelligent computing.
• Precision means fast processing of computing tasks and timely matching of computing resources.
Collaboration and ubiquity.
• Collaboration between humans and machines improves intelligence
levels in intelligent tasks,
• and ubiquity enables computing to be conducted everywhere through
combining intelligent computing theoretical methods, architectural
systems, and technical approaches together.
Fusion of Intelligence and Computation
• The intelligence improves the performance and efficiency of computing
systems through intelligent technology.
• new computing mechanisms, such as hardware and software refactoring
and cooperative evolution, to deal with different types of tasks.
• New computing architectures, such as human-computer interaction, combine human perception and cognitive ability with the operation and storage abilities of computers.
• And such new architectures are effective in improving the sensing and
reasoning ability of the computers.
• Machines can have high computing speed and accuracy, and also efficiently
obtain information from the physical environment through various sensors.
Fusion of Intelligence and Computation
• New distributed computing architectures such as end-to-end cloud and
wide-area collaboration are adopted to effectively integrate
supercomputing, cloud computing, edge computing, and terminal
computing resources.
• A new secure and trusted intelligent computing system is established by
constructing secure methods and trusted computing mechanisms.
• It ensures the security and trust of the computing process, identity, data,
and results
Computing by Intelligence
Another critical point of intelligent computing is how to improve the intellectual level of computing.
Computation, like other fields, can learn from intelligent creatures in nature, as in the three classical intelligence methods:
artificial neural networks,
fuzzy systems,
and evolutionary computing.
Artificial Neural Network
ANN          | BNN
Input        | Dendrites
Weight       | Synapses
Output       | Axon
Hidden layer | Cell body
Water Jug Problem (4-gallon and 3-gallon jugs; goal: 2 gallons in the 4-gallon jug)
Sample production rules (x = water in the 4-gallon jug, y = water in the 3-gallon jug):
Rule 3: (x, y) if x > 0 -> (x - d, y)   Pour some water out of the 4-gallon jug
Rule 4: (x, y) if y > 0 -> (x, y - d)   Pour some water out of the 3-gallon jug
One solution path:
(0, 0)  Start state
(0, 3)  Rule 2: Fill the 3-gallon jug
(3, 0)  Rule 9: Pour all the water from the 3-gallon jug into the 4-gallon jug
(3, 3)  Rule 2: Fill the 3-gallon jug
(4, 2)  Rule 7: Pour water from the 3-gallon jug into the 4-gallon jug until the 4-gallon jug is full
(0, 2)  Rule 5 or 12: Empty the 4-gallon jug on the ground
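The state-space formulation above can be searched mechanically. The sketch below, a minimal illustration (function and variable names are my own), runs breadth-first search over (x, y) states until the 4-gallon jug holds 2 gallons:

```python
from collections import deque

def water_jug_bfs(capacity_a=4, capacity_b=3, goal=2):
    """Breadth-first search over (x, y) states, where x is the amount in the
    4-gallon jug and y the amount in the 3-gallon jug."""
    start = (0, 0)
    parent = {start: None}
    queue = deque([start])
    while queue:
        x, y = queue.popleft()
        if x == goal:
            # Reconstruct the path of states from start to goal.
            path, state = [], (x, y)
            while state is not None:
                path.append(state)
                state = parent[state]
            return path[::-1]
        successors = [
            (capacity_a, y),                  # fill the 4-gallon jug
            (x, capacity_b),                  # fill the 3-gallon jug
            (0, y),                           # empty the 4-gallon jug
            (x, 0),                           # empty the 3-gallon jug
            (min(x + y, capacity_a), max(0, x + y - capacity_a)),  # pour 3-gal into 4-gal
            (max(0, x + y - capacity_b), min(x + y, capacity_b)),  # pour 4-gal into 3-gal
        ]
        for s in successors:
            if s not in parent:
                parent[s] = (x, y)
                queue.append(s)
    return None

print(water_jug_bfs())
```

Because BFS explores layer by layer, the path it returns uses the minimum number of pours (six moves, seven states), matching the tabulated solution's length.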
The eight puzzle problem is also known as the N-puzzle problem or sliding puzzle problem.
An N-puzzle consists of N tiles (N + 1 positions, including an empty tile), where N can be 8, 15, 24, and so on.
In our example N = 8 (that is, sqrt(8 + 1) = 3 rows and 3 columns).
The puzzle is solved by moving the tiles one by one into the single empty space until the goal state is reached.
8 Puzzle Problem
The empty space can only move in four directions (Movement of empty space)
• Up
• Down
• Right or
• Left
The empty space cannot move diagonally and can take only one step at a time.
8 Puzzle Problem
Let's solve the problem without Heuristic Search that is Uninformed Search or
Blind Search ( Breadth First Search and Depth First Search)
Breadth First Search to solve the eight puzzle problem
Note: If we solve this problem with depth-first search, it will go deep instead of exploring nodes layer by layer.
Time complexity:
In the worst case, the time complexity of BFS is O(b^d), read as b raised to the power d.
In this particular case it is 3^20.
b - branching factor
d - depth factor
8 Puzzle Problem
[Figure: partial BFS tree for the 8-puzzle, with branches labeled by the empty-space moves D, L, R, U]
Comments:
• This problem requires a lot of space for saving the different trays (board states).
• Its time complexity is higher than that of many other problems.
• The user has to be very careful about the shifting of tiles in the trays.
• Very complex puzzle games can be solved by this technique.
8 Puzzle Problem
Travelling Salesman Problem
You are given-
• A set of some cities
• Distance between every pair of cities
[Figure: a weighted graph of four cities A, B, C, D with starting point A; the edge distances shown are 20, 30, 13, 22, 40, and 12.]
State Space: Initial State (State A), with successors B, C, and D.
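For four cities the whole state space can be enumerated. The sketch below is a brute-force solver; since the figure's edge-to-distance assignment is unclear, the distance table here is an assumption for illustration only:

```python
from itertools import permutations

# Hypothetical symmetric distance table for the four cities; the actual
# figure's edge assignments may differ.
dist = {
    ('A', 'B'): 20, ('A', 'C'): 40, ('A', 'D'): 30,
    ('B', 'C'): 22, ('B', 'D'): 13, ('C', 'D'): 12,
}

def d(u, v):
    """Look up a distance regardless of edge direction."""
    return dist.get((u, v)) or dist[(v, u)]

def tsp_brute_force(start='A', cities=('B', 'C', 'D')):
    """Try every ordering of the remaining cities; return the cheapest tour."""
    best_tour, best_cost = None, float('inf')
    for perm in permutations(cities):
        tour = (start,) + perm + (start,)
        cost = sum(d(tour[i], tour[i + 1]) for i in range(len(tour) - 1))
        if cost < best_cost:
            best_tour, best_cost = tour, cost
    return best_tour, best_cost

print(tsp_brute_force())
```

With n cities this checks (n - 1)! tours, which is why TSP motivates heuristic search for anything beyond toy instances.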
Monkey Banana Problem
Initially, the monkey is at location A,
the banana is at location B,
and the box is at location C.
The monkey and box have height "Low";
but if the monkey climbs onto the box, it will have height "High",
the same as the bananas.
Monkey Banana Problem
The actions available to the monkey include:
“GO” from one place to another.
“PUSH” an object from one place to another.
“Climb” onto an object.
“Grasp” an object.
Grasping results in holding the object if the monkey and the object
are in the same place at the same height.
Monkey Banana Problem
So a solution to the planning problem may be as follows:
• GO(A,C)
• PUSH (Box, C, B, Low)
• Climb Up(Box , B)
• Grasp(banana, B, High)
• Climb down(Box)
• Push(Box, B, C, Low)
Search Algorithm
• Search is the systematic examination of states to find path from the
start/root state to the goal state.
• Many traditional search algorithms are used in AI applications.
• For complex problems, the traditional algorithms are unable to find the
solution within some practical time and space limits.
• Consequently, many special techniques have been developed using heuristic functions. The algorithms that use heuristic functions are called heuristic algorithms.
• Heuristic algorithms are not really intelligent; they appear to be
intelligent because they achieve better performance.
• Heuristic algorithms are more efficient because they take advantage of
feedback from the data to direct the search path.
Search Algorithm
• A search algorithm takes a problem as input and returns the solution
in the form of an action sequence.
• Once the solution is found, the actions it recommends can be carried
out.
• This phase is called the execution phase.
• After formulating a goal and a problem to solve, the agent calls a search procedure to solve it.
Search Algorithm
• A problem can be defined by 5 components.
• The initial state: The state from which agent will start.
• The goal state: The state to be finally reached.
• The current state: The state at which the agent is present after
starting from the initial state.
• Successor function: It is the description of possible actions and their
outcomes.
• Path cost: It is a function that assigns a numeric cost to each path.
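The five components above can be captured in a small data structure. This is an illustrative sketch (the class and field names are my own, not a standard API); the current state is tracked by the search procedure rather than stored in the problem itself:

```python
from dataclasses import dataclass
from typing import Any, Callable, Iterable, Tuple

@dataclass
class Problem:
    """Illustrative container for the components of a search problem."""
    initial_state: Any
    goal_state: Any
    successors: Callable[[Any], Iterable[Tuple[str, Any]]]  # (action, result) pairs
    path_cost: Callable[[Any, str, Any], float]             # cost of one step

    def is_goal(self, state):
        return state == self.goal_state

# Tiny example: move along a number line from 0 to 3, one unit per step.
line = Problem(
    initial_state=0,
    goal_state=3,
    successors=lambda s: [('+1', s + 1), ('-1', s - 1)],
    path_cost=lambda s, a, s2: 1,
)
print(line.is_goal(3))
```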
Breadth-first search on a simple binary tree. At each stage, the node to be expanded next is
indicated by a marker.
Breadth-first search : Algorithm:
1. Create a variable called NODE-LIST and set it to the initial state
2. Until a goal state is found or NODE-LIST is empty do
a. Remove the first element from NODE-LIST and call it E.
If NODE-LIST was empty, quit
b. For each way that each rule can match the state described in E do:
i. Apply the rule to generate a new state
ii. If the new state is a goal state, quit and return this state
iii. Otherwise, add the new state to the end of NODE-LIST
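The NODE-LIST steps above translate directly into code. A minimal sketch (names are my own), using a FIFO queue for NODE-LIST and a visited set to avoid repeated states:

```python
from collections import deque

def breadth_first_search(start, is_goal, successors):
    """BFS following the NODE-LIST scheme: expand states in FIFO order."""
    node_list = deque([start])     # NODE-LIST, initialised with the start state
    visited = {start}
    while node_list:               # until a goal is found or NODE-LIST is empty
        e = node_list.popleft()    # remove the first element and call it E
        for new_state in successors(e):
            if is_goal(new_state):
                return new_state   # goal found: quit and return this state
            if new_state not in visited:
                visited.add(new_state)
                node_list.append(new_state)  # add to the end of NODE-LIST
    return None

# Example: search the integers for 6 starting from 1, doubling or adding 1.
found = breadth_first_search(1, lambda s: s == 6,
                             lambda s: [s * 2, s + 1])
print(found)
```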
Breadth-first search : Algorithm:
In the tree structure below, we show the traversal of the tree using the BFS algorithm from the root node S to the goal node K.
The BFS algorithm traverses in layers, so it follows the path shown by the dotted arrow, and the traversed path will be:
S ---> A ---> B ---> C ---> D ---> G ---> H ---> E ---> F ---> I ---> K
Breadth-first search : Algorithm:
For breadth-first search:
• Time complexity: O(b^d)
• Space complexity: O(b^d)
• Optimality: Yes
• b - branching factor (maximum number of successors of any node)
• d - depth of the shallowest goal node
• m - maximum length of any path in the search space
Breadth-first search : Algorithm:
Advantages:
• BFS will provide a solution if any solution exists.
• If there is more than one solution for a given problem, then BFS will provide the minimal solution, i.e., the one requiring the least number of steps.
• Disadvantages:
• It requires lots of memory since each level of the tree must be saved
into memory to expand the next level.
• BFS needs lots of time if the solution is far away from the root node.
Depth-first search : Algorithm:
Advantage:
• DFS requires much less memory, as it only needs to store a stack of the nodes on the path from the root node to the current node.
• It can take less time to reach the goal node than the BFS algorithm.
• Disadvantage:
• There is the possibility that many states keep re-occurring, and there is no guarantee of finding the solution.
• The DFS algorithm goes deep down in the search and may sometimes go into an infinite loop.
Depth-first search : Algorithm:
In the search tree below, we show the flow of depth-first search; it follows the order:
Root node ---> Left node ---> Right node.
Depth-limited Search (DLS):
Depth-limited search runs DFS with a predetermined depth limit to avoid infinite descent.
Completeness:
The DLS algorithm is complete if the solution is within the depth limit.
Uniform-cost Search Algorithm:
• Uniform-cost search is a searching algorithm used for traversing a weighted
tree or graph.
• This algorithm comes into play when a different cost is available for each
edge.
• The primary goal of the uniform-cost search is to find a path to the goal
node which has the lowest cumulative cost.
• Uniform-cost search expands nodes according to their path costs from the root node. It can be used on any graph/tree where the optimal cost is in demand.
• A uniform-cost search algorithm is implemented by the priority queue. It
gives maximum priority to the lowest cumulative cost.
• Uniform cost search is equivalent to BFS algorithm if the path cost of all
edges is the same.
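The priority-queue implementation described above can be sketched as follows (a minimal illustration; function names and the small example graph are my own assumptions):

```python
import heapq

def uniform_cost_search(start, is_goal, successors):
    """UCS: always expand the frontier node with the lowest cumulative path
    cost. `successors(state)` yields (next_state, step_cost) pairs."""
    frontier = [(0, start, [start])]   # priority queue keyed on path cost
    best_cost = {start: 0}
    while frontier:
        cost, state, path = heapq.heappop(frontier)
        if is_goal(state):
            return cost, path
        for nxt, step in successors(state):
            new_cost = cost + step
            if new_cost < best_cost.get(nxt, float('inf')):
                best_cost[nxt] = new_cost
                heapq.heappush(frontier, (new_cost, nxt, path + [nxt]))
    return None

# Small weighted graph (assumed for illustration).
graph = {'S': [('A', 1), ('B', 4)], 'A': [('B', 2), ('G', 6)],
         'B': [('G', 2)], 'G': []}
print(uniform_cost_search('S', lambda s: s == 'G', lambda s: graph[s]))
```

Note that with all step costs equal to 1, this behaves exactly like BFS, as the slide states.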
Uniform-cost Search Algorithm:
• Advantages:
• Uniform cost search is optimal because at every state the path with
the least cost is chosen.
• Disadvantages:
• It does not care about the number of steps involved in the search and is only concerned with path cost, due to which this algorithm may get stuck in an infinite loop.
Iterative deepening depth-first Search:
• The iterative deepening algorithm is a combination of DFS and BFS
algorithms.
• This search algorithm finds out the best depth limit and does it by gradually
increasing the limit until a goal is found.
• This algorithm performs depth-first search up to a certain "depth limit",
and it keeps increasing the depth limit after each iteration until the goal
node is found.
• This Search algorithm combines the benefits of Breadth-first search's fast
search and depth-first search's memory efficiency.
• The iterative deepening search algorithm is a useful uninformed search when the search space is large and the depth of the goal node is unknown.
Iterative deepening depth-first Search:
• Advantages:
• It combines the benefits of BFS and DFS search algorithm in terms of
fast search and memory efficiency.
• Disadvantages:
• The main drawback of IDDFS is that it repeats all the work of the
previous phase.
Iterative deepening depth-first Search:
• 1'st Iteration-----> A
2'nd Iteration----> A, B, C
3'rd Iteration------>A, B, D, E, C, F, G
4'th Iteration------>A, B, D, H, I, E, C, F, K, G
In the fourth iteration, the algorithm will
find the goal node.
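The iterations above can be reproduced with a depth-limited DFS wrapped in a loop over increasing limits. A minimal sketch (function names and the example tree encoding are my own):

```python
def depth_limited(state, is_goal, successors, limit, path):
    """DFS that refuses to descend below `limit` levels."""
    if is_goal(state):
        return path
    if limit == 0:
        return None
    for nxt in successors(state):
        result = depth_limited(nxt, is_goal, successors, limit - 1, path + [nxt])
        if result is not None:
            return result
    return None

def iterative_deepening(start, is_goal, successors, max_depth=50):
    """Run depth-limited DFS with limits 0, 1, 2, ... until the goal is found."""
    for limit in range(max_depth + 1):
        result = depth_limited(start, is_goal, successors, limit, [start])
        if result is not None:
            return result
    return None

# Tree matching the iterations above: A's children are B and C, etc.
tree = {'A': ['B', 'C'], 'B': ['D', 'E'], 'C': ['F', 'G'],
        'D': ['H', 'I'], 'F': ['K']}
print(iterative_deepening('A', lambda s: s == 'K',
                          lambda s: tree.get(s, [])))
```

Here the goal K is found in the fourth iteration (depth limit 3), matching the trace above.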
Bidirectional Search Algorithm:
• Advantages:
• Bidirectional search is fast.
• Bidirectional search requires less memory
• Disadvantages:
• Implementation of the bidirectional search tree is difficult.
• In bidirectional search, one should know the goal state in advance.
Informed (Heuristic) Search
An informed search strategy, one that uses problem-specific knowledge beyond the definition of the problem itself, can find solutions more efficiently than an uninformed strategy.
An informed search algorithm uses knowledge such as how far we are from the goal, the path cost, how to reach the goal node, etc.
This knowledge helps agents explore less of the search space and find the goal node more efficiently.
Informed search algorithms are more useful for large search spaces.
Because informed search uses the idea of a heuristic, it is also called heuristic search.
Informed(Heuristic) search
Heuristic function: a heuristic is a function used in informed search to find the most promising path. It takes the current state of the agent as input and produces an estimate of how close the agent is to the goal.
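A standard example of such a function is the Manhattan-distance heuristic for the 8-puzzle discussed earlier. This sketch (state encoding is my own choice) sums, over all tiles, the horizontal plus vertical distance between a tile's current and goal positions:

```python
def manhattan_distance(state, goal):
    """8-puzzle heuristic: sum over tiles of |dx| + |dy| between each tile's
    position in `state` and in `goal` (the blank, 0, is ignored).
    States are 3x3 tuples of tuples."""
    pos = {tile: (r, c)
           for r, row in enumerate(goal)
           for c, tile in enumerate(row)}
    total = 0
    for r, row in enumerate(state):
        for c, tile in enumerate(row):
            if tile != 0:
                gr, gc = pos[tile]
                total += abs(r - gr) + abs(c - gc)
    return total

goal = ((1, 2, 3), (4, 5, 6), (7, 8, 0))
start = ((1, 2, 3), (4, 5, 6), (7, 0, 8))   # one move away from the goal
print(manhattan_distance(start, goal))
```

The heuristic never overestimates the true number of moves, which is what makes it useful for guiding informed search.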
Generate and Test Search
• Generate and Test Search is a heuristic search technique based on depth-first search with backtracking, which is guaranteed to find a solution, if one exists, when done systematically. In this technique, solutions are generated and tested for the best solution. It ensures that the best solution is checked against all possible generated solutions.
• It is also known as the British Museum Search Algorithm, as it is like looking for an exhibit at random, or finding an object in the British Museum by wandering randomly.
Generate and Test Search
• Generate a possible solution. For example, generating a particular
point in the problem space or generating a path for a start state.
• Test to see if this is an actual solution by comparing the chosen point, or the endpoint of the chosen path, to the set of acceptable goal states.
• If a solution is found, quit. Otherwise go to Step 1
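The three steps above reduce to a very small loop. A minimal sketch (names and the toy example are my own):

```python
import itertools

def generate_and_test(generator, test):
    """Generate candidate solutions one at a time and return the first that
    passes the goal test, or None if the generator is exhausted."""
    for candidate in generator:   # Step 1: generate a possible solution
        if test(candidate):       # Step 2: test whether it is an actual solution
            return candidate      # solution found: quit
    return None                   # otherwise keep generating (Step 3)

# Toy example: find a pair of digits (x, y) with x * y == 12 and x < y.
candidates = itertools.product(range(10), repeat=2)
solution = generate_and_test(candidates,
                             lambda p: p[0] * p[1] == 12 and p[0] < p[1])
print(solution)
```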
Hill Climbing Algorithm
Simple hill climbing is the simplest way to implement a hill climbing algorithm.
It evaluates one neighbor node state at a time and selects the first one that improves the current cost, setting it as the current state.
It checks only one successor state; if that successor is better than the current state, it moves there, otherwise it stays in the same state.
This algorithm has the following features:
• Less time consuming
• A less optimal solution, and the solution is not guaranteed
Hill Climbing Algorithm
Step 1: Evaluate the starting state. If it is a goal state, then stop and return success.
Step 2: Else, continue with the starting state, considering it as the current state.
Step 3: Continue step 4 until a solution is found, i.e., until there are no new states left to be applied to the current state.
Step 4: a) Select a state that has not yet been applied to the current state and apply it to produce a new state.
b) Evaluate the new state:
i. If the new state is a goal state, then stop and return success.
ii. If it is better than the current state, then make it the current state and proceed further.
iii. If it is not better than the current state, then continue in the loop until a solution is found.
Step 5: Exit.
Hill Climbing Algorithm
Advantages:
It is helpful for solving pure optimization problems, where the objective is to find the best state according to the objective function.
It requires far fewer conditions than other search techniques.
Disadvantages:
Local Maxima
A local maximum is a state that is better than each of its neighbouring states, but not better than some other states farther away. Generally this state is lower than the global maximum. At this point, one cannot easily decide which direction to move in.
These difficulties can be addressed by backtracking, i.e., backtrack to an earlier node and try to go in a different direction.
To implement this strategy, maintain a list of paths almost taken; if the path taken leads to a dead end, go back to one of them.
Hill Climbing Algorithm
• Ridges: A ridge is a special form of the local maximum. It has an area
which is higher than its surrounding areas, but itself has a slope, and
cannot be reached in a single move.
• Solution: by using bidirectional search, or by moving in different directions, we can overcome this problem.
Hill Climbing Algorithm
Plateau:
A plateau is a flat area of the search space in which all the neighbor states of the current state contain the same value; because of this, the algorithm cannot find a best direction to move.
A hill-climbing search might get lost in the plateau area.
Solution: take big steps or very small steps while searching to solve the problem.
Steepest-Ascent hill climbing
• The steepest-Ascent algorithm is a variation of simple hill climbing
algorithm.
• This algorithm examines all the neighboring nodes of the current state and selects the neighbor node that is closest to the goal state. It consumes more time, as it searches multiple neighbors.
• Algorithm for Steepest-Ascent hill climbing:
• Step 1: Evaluate the initial state; if it is the goal state, then return success and stop, otherwise make the initial state the current state.
Steepest-Ascent hill climbing
• Step 2: Loop until a solution is found or the current state does not
change.
• a. Let SUCC be a state such that any successor of the current state will be better
than it.
• b. For each operator that applies to the current state:
a. Apply the new operator and generate a new state.
b. Evaluate the new state.
c. If it is goal state, then return it and quit, else compare it to the SUCC.
d. If it is better than SUCC, then set new state as SUCC.
e. If the SUCC is better than the current state, then set current state to SUCC.
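The SUCC loop above can be condensed into a few lines. A minimal sketch (names and the toy objective are my own assumptions):

```python
def steepest_ascent(start, value, neighbors):
    """Steepest-ascent hill climbing: at each step examine all neighbors and
    move to the best one (SUCC); stop when no neighbor improves on the
    current state, i.e., at a local (possibly global) maximum."""
    current = start
    while True:
        succ = max(neighbors(current), key=value, default=None)
        if succ is None or value(succ) <= value(current):
            return current        # no neighbor beats the current state
        current = succ            # SUCC is better: make it the current state

# Toy example: maximise f(x) = -(x - 7)^2 over the integers, moving +/- 1.
best = steepest_ascent(0, lambda x: -(x - 7) ** 2,
                       lambda x: [x - 1, x + 1])
print(best)
```

On this smooth unimodal objective the climb reaches the global maximum at x = 7; on a bumpy objective it would stop at whichever local maximum it reaches first, which is exactly the weakness the local-maxima discussion above describes.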
Stochastic hill climbing:
• Stochastic hill climbing does not examine all of its neighbors before moving.
• The search algorithm selects one neighbor node at random and decides whether to choose it as the current state or examine another state.
Simulated Annealing Algorithm
• SA is a technique for finding solutions to optimization problems.
• It is less likely to get stuck in a local minimum, where the solution is
not the best possible but is good enough
• Simulated annealing is not a guaranteed method of finding the best
solution to an optimization problem, but it is a powerful tool that can
be used to find good solutions in many cases.
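The idea can be sketched in a few lines. This is an illustrative implementation under my own parameter choices (starting temperature, geometric cooling schedule, step size), maximising a simple objective: a worse neighbor is accepted with probability exp(delta / T), so hot early iterations can escape local maxima, and as T cools the search behaves like plain hill climbing:

```python
import math
import random

def simulated_annealing(start, value, neighbor, t0=10.0, cooling=0.95, steps=2000):
    """Maximise `value` by simulated annealing. Worse moves are accepted with
    probability exp(delta / T); T shrinks geometrically each step."""
    current = start
    t = t0
    for _ in range(steps):
        nxt = neighbor(current)
        delta = value(nxt) - value(current)
        if delta > 0 or random.random() < math.exp(delta / t):
            current = nxt                 # accept: better, or lucky worse move
        t = max(t * cooling, 1e-6)        # cool down, but keep T positive
    return current

random.seed(1)   # fixed seed so the run is repeatable
result = simulated_annealing(0.0, lambda x: -(x - 3) ** 2,
                             lambda x: x + random.uniform(-0.5, 0.5))
print(round(result, 2))
```

With these settings the search settles near the maximum at x = 3; the point of the technique is that it would also have a chance to escape shallow local maxima on a bumpier objective.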
AO* Algorithm
The AO* method divides any given difficult problem into a smaller
group of problems that are then resolved using the AND-OR graph
concept.
AND OR graphs are specialized graphs that are used in problems that can
be divided into smaller problems.
The AND side of the graph represents a set of tasks that must be
completed to achieve the main goal, while the OR side of the graph
represents different methods for accomplishing the same main goal.
AO* Algorithm
The evaluation function in AO* looks like this:
f(n) = g(n) + h(n)
where:
f(n) = the estimated total cost of a solution through node n,
g(n) = the cost from the initial node to the current node,
h(n) = the estimated cost from the current node to the goal state.
AO* Algorithm
We have two ways to go from A: to D, or to B-C (because of the AND condition).
Cost:
f(A-D) = 1 + 10 = 11
f(A-BC) = 1 + 1 + 6 + 12 = 20
Based on functionality
• Reactive Machines
• The most basic type of Artificial Intelligence
• These machines do not store memories or past experiences for future actions.
• They only focus on current scenarios and react to them with the best possible action.
• Example
• IBM's Deep Blue system
• Google's AlphaGo
Based on functionality
• Limited Memory
• These machines can store past experiences or some data for a short period of time.
• Example
• Self-driving cars
• These cars can store the recent speed of nearby cars, the distance of other cars, the speed limit, and other information to navigate the road.
Based on functionality
• Theory of Mind
• Theory of Mind AI should understand the human emotions, people,
beliefs, and be able to interact socially like humans.
• Example
• This type of AI machine has not yet been developed, but researchers are making many efforts and improvements toward developing such machines.
Based on functionality
• Self-Awareness
• Self-awareness AI is the future of Artificial Intelligence.
• These machines will be super intelligent, and will have their own
consciousness, sentiments, and self-awareness.
• These machines will be smarter than the human mind.
[Figure: Partial tabulation of a simple agent function for the vacuum-cleaner world]
Agent
A rational agent is one that does the right thing.
What is rational at any given time depends on four things:
• The performance measure that defines the criterion of success.
• The agent’s prior knowledge of the environment.
• The actions that the agent can perform.
• The agent’s percept sequence to date.
For each possible percept sequence, a rational agent should select an
action that is expected to maximize its performance measure, given the
evidence provided by the percept sequence and whatever built-in
knowledge the agent has.
Task environment
PEAS
(Performance, Environment, Actuators, Sensors)
Goal-based agents
It keeps track of the world state as well as a set of goals it is trying to achieve, and chooses an action that will (eventually) lead to the achievement of its goals.
• An expansion of the model-based reflex agent
• A desirable situation (goal)
• Searching and planning
Goal-based agents
A goal-based agent, in principle, could reason that if the car in front has its
brake lights on, it will slow down.
Given the way the world usually evolves, the only action that will achieve
the goal of not hitting other cars is to brake.
If it starts to rain, the agent can update its knowledge of how effectively its
brakes will operate;
this will automatically cause all of the relevant behaviors to be altered to
suit the new conditions.
For the reflex agent, on the other hand, we would have to rewrite many
condition–action rules.
The goal-based agent’s behavior can easily be changed to go to a different
destination, simply by specifying that destination as the goal.
Utility-based agents
• Focuses on utility, not just the goal
• Utility function
• Deals with "happy" and "unhappy" states
Utility-based agents
• Goals alone are not enough to generate high-quality behavior in most
environments.
• For example, many action sequences will get the taxi to its destination
(thereby achieving the goal) but some are quicker, safer, more
reliable, or cheaper than others.
• Goals just provide a crude binary distinction between “happy” and
“unhappy” states.
• A more general performance measure should allow a comparison of
different world states according to exactly how happy they would
make the agent.
• Because “happy” does not sound very scientific, economists and
computer scientists use the term utility instead
Utility-based agents
• An agent’s utility function is essentially an internalization of the
performance measure.
• If the internal utility function and the external performance measure
are in agreement, then an agent that chooses actions to maximize its
utility will be rational according to the external performance measure.
Utility-based agents
[Figure: A model-based, utility-based agent. It uses a model of the world, along with a utility function that measures its preferences among states of the world. Then it chooses the action that leads to the best expected utility, where expected utility is computed by averaging over all possible outcome states, weighted by the probability of the outcome.]
Learning agents
The learning element is responsible for making improvements, and the performance element is responsible for selecting external actions: it takes in percepts and decides on actions.
The learning element uses feedback from the critic on how the agent is doing and determines how the performance element should be modified to do better in the future.
The design of the learning element depends very much on the design of the
performance element.
When trying to design an agent that learns a certain capability, the first question is not
“How am I going to get it to learn this?” but “What kind of performance element will
my agent need to do this once it has learned how?” Given an agent design, learning
mechanisms can be constructed to improve every part of the agent.
Learning agents
The critic tells the learning element how well the agent is doing with
respect to a fixed performance standard.
The critic is necessary because the percepts themselves provide no
indication of the agent’s success.
For example, a chess program could receive a percept indicating that it has
checkmated its opponent, but it needs a performance standard to know
that this is a good thing;
The last component of the learning agent is the problem generator. It is
responsible for suggesting actions that will lead to new and informative
experiences
Learning agents
The critic observes the world and passes information along to the learning
element. For example, after the taxi makes a quick left turn across three
lanes of traffic, the critic observes the shocking language used by other
drivers. From this experience, the learning element is able to formulate a
rule saying this was a bad action, and the performance element is
modified by installation of the new rule.
Knowledge Base
• A knowledge base is a database used for knowledge sharing and
management.
• It promotes the collection, organization and retrieval of knowledge.
Many knowledge bases are structured around artificial intelligence
and not only store data but find solutions for further problems using
data from previous experience stored as part of the knowledge base.
• Knowledge management systems depend on data management
technologies ranging from relational databases to data warehouses.
Some knowledge bases are little more than indexed encyclopedic
information; others are interactive and behave/respond according to
the input prompted from the user.
Knowledge Base
• A knowledge base is not merely a space for data storage, but can be
an artificial intelligence tool for delivering intelligent decisions.
Various knowledge representation techniques, including frames and
scripts, represent knowledge. The services offered are explanation,
reasoning and intelligent decision support.
• Knowledge-based computer-aided systems engineering (KB-CASE)
tools assist designers by providing suggestions and solutions, thereby
helping to investigate the results of design decisions. The knowledge
base analysis and design allows users to frame knowledge bases, from
which informative decisions are made.
Knowledge Base
• The two major types of knowledge bases are human readable and machine
readable.
• Human-readable knowledge bases enable people to access and use the knowledge. They store help documents, manuals, troubleshooting information, and frequently asked questions.
• They can be interactive and lead users to solutions to problems they have, but
rely on the user providing information to guide the process.
• Machine readable knowledge bases store knowledge, but only in system readable
forms.
• Solutions are offered based on automated deductive reasoning and are less
interactive; they rely on query software that works against the knowledge base
to narrow down a solution.
• This means that machine-readable knowledge-base information shared with other
machines is usually linear and limited in interactivity, unlike the query-based
human interaction.
Knowledge-based Agent
The central component of a knowledge-based agent is its knowledge
base, or KB.
A knowledge base is a set of sentences. (Here, "sentence" is a technical
term; it is related but not identical to the sentences of English and
other natural languages.)
Each sentence is expressed in a language called a knowledge
representation language and represents some assertion about the
world.
There must be a way to add new sentences to the knowledge base and
a way to query what is known. The standard names for these
operations are TELL and ASK, respectively
Knowledge-based Agent
Both operations may involve inference—that is, deriving new sentences
from old.
Inference must obey the requirement that when one ASKs a question
of the knowledge base, the answer should follow from what has been
told (or TELLed) to the knowledge base previously.
Like all our agents, it takes a percept as input and returns an action.
The agent maintains a knowledge base, KB, which may initially contain
some background knowledge
Knowledge-based Agent
Each time the agent program is called, it does three things.
First, it TELLs the knowledge base what it perceives.
Second, it ASKs the knowledge base what action it should perform.
In the process of answering this query, extensive reasoning may be
done about the current state of the world, about the outcomes of
possible action sequences, and so on.
Third, the agent program TELLs the knowledge base which action was
chosen, and the agent executes the action
Knowledge-based Agent
function KB-AGENT(percept) returns an action
  persistent: KB, a knowledge base
              t, a counter, initially 0, indicating time

  TELL(KB, MAKE-PERCEPT-SENTENCE(percept, t))
  action ← ASK(KB, MAKE-ACTION-QUERY(t))
  TELL(KB, MAKE-ACTION-SENTENCE(action, t))
  t ← t + 1
  return action
A generic knowledge-based agent. Given a percept, the agent adds the percept
to its knowledge base, asks the knowledge base for the best action, and tells
the knowledge base that it has in fact taken that action
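The pseudocode above can be sketched as a running Python step. `SimpleKB` below is a deliberately naive stand-in for a real knowledge base (its ASK just looks up a stored `query -> answer` sentence), and the sentence formats are illustrative assumptions, not a standard knowledge representation language.

```python
# A minimal sketch of one call of the generic knowledge-based agent.
# The knowledge base is just a list of sentence strings with a naive ASK;
# a real agent would use a logic with proper inference.

class SimpleKB:
    def __init__(self, background=None):
        # KB may initially contain some background knowledge.
        self.sentences = list(background or [])

    def tell(self, sentence):
        # Add a new sentence to the knowledge base.
        self.sentences.append(sentence)

    def ask(self, query):
        # Placeholder inference: return a stored answer for the query, if any.
        for s in self.sentences:
            if s.startswith(query + " -> "):
                return s.split(" -> ", 1)[1]
        return "NoOp"

def kb_agent_step(kb, percept, t):
    """One call of KB-AGENT: TELL the percept, ASK for an action, TELL the action."""
    kb.tell(f"Percept({percept}, {t})")          # 1. tell the KB what we perceive
    action = kb.ask(f"BestAction({t})")          # 2. ask the KB what to do
    kb.tell(f"Action({action}, {t})")            # 3. record the chosen action
    return action
```

The three TELL/ASK calls mirror the three steps of the agent program; only the internals of the knowledge base would change for a real representation language.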
Knowledge-based Architecture
By observing the environment, the knowledge-based agent (KBA) receives
input from it.
The input is taken by the agent's inference engine, which also
communicates with the KB to make decisions based on the knowledge
stored in the KB.
The KBA's learning component keeps the KB up to date by learning new
information.
Why use a knowledge base?
• For an agent to learn from experiences and take action based on the
knowledge, a knowledge base is required.
• Inference system
• Inference is the process of creating new sentences from existing ones.
We can add a new sentence to the knowledge base using the
inference mechanism.
• A sentence is a proposition about the world. The inference system
uses logical rules to deduce new information from the KB.
• The inference system generates new facts for an agent to update the
knowledge base.
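As a hedged sketch of such an inference system, simple forward chaining over propositional if-then rules derives new sentences from existing ones and adds them back to the KB until nothing new can be concluded:

```python
# Forward chaining over propositional if-then rules: repeatedly fire any
# rule whose premises are all in the KB, adding its conclusion as a new fact.

def forward_chain(facts, rules):
    """facts: set of known propositions; rules: list of (premises, conclusion)."""
    kb = set(facts)
    changed = True
    while changed:
        changed = False
        for premises, conclusion in rules:
            if conclusion not in kb and all(p in kb for p in premises):
                kb.add(conclusion)   # inference creates a new sentence
                changed = True
    return kb

# Example: from cat(Jerry) and the rules "cats are mammals" and
# "mammals are animals", the system deduces animal(Jerry).
derived = forward_chain(
    {"cat(Jerry)"},
    [(["cat(Jerry)"], "mammal(Jerry)"),
     (["mammal(Jerry)"], "animal(Jerry)")],
)
```

This is only one inference mechanism; backward chaining or resolution would serve the same role of answering ASK queries from TELLed sentences.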
Various levels of knowledge-based agent:
• Knowledge level
• where we must explain what the agent knows and what the agent's goals are.
• Let's say an automated taxi agent needs to get from station A to station B, and it
knows how to get there; this is the knowledge-level description of the agent.
• Logical level:
• The level at which knowledge is represented and stored: sentences are
encoded into logical statements in various logics.
• At the logical level, we can expect the automated taxi agent to reason its way to destination B.
• Implementation level:
• The physical representation of logic and knowledge. Agents at the
implementation level take actions based on their logical and knowledge levels. At this
phase, an automated taxi agent puts its knowledge and logic into action in order to
reach its destination.
Approaches to designing a knowledge-based agent:
• Building a knowledge-based agent can be done in one of two ways:
• 1. Declarative approach:
• A knowledge-based agent is created by starting with an empty
knowledge base and TELLing the agent all the sentences we wish to
start with. This method is called the declarative approach.
• 2. Procedural approach:
• We directly encode the desired behavior as program code in the
procedural approach, developing a program that already has the
intended behavior of the agent built in.
Techniques of knowledge representation
• Logical Representation
• It is made up of well-defined syntax and semantics that facilitate sound
inference. Each sentence can be translated into logic using this syntax
and semantics.
• Semantic Network Representation
• We can express our knowledge in Semantic Networks as graphical
networks.
• This network is made up of nodes that represent things and arcs that
describe their relationships.
• Semantic networks may classify objects in a variety of ways and link them
together.
Techniques of knowledge representation
• Jerry is a cat.
• Jerry is a mammal.
• Jerry is owned by Priya.
• Jerry is brown colored.
• All Mammals are animal.
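The Jerry facts above can be stored as a small semantic network: nodes linked by labelled arcs, here written as (subject, relation, object) triples. The relation names are illustrative choices, and "Jerry is a mammal" follows by chaining the is-a arcs rather than being stored directly.

```python
# A semantic network as (subject, relation, object) triples.
triples = [
    ("Jerry", "is-a", "Cat"),
    ("Cat", "is-a", "Mammal"),
    ("Mammal", "is-a", "Animal"),
    ("Jerry", "owned-by", "Priya"),
    ("Jerry", "has-color", "Brown"),
]

def related(subject, relation):
    """Follow arcs labelled `relation` out of `subject`."""
    return [o for s, r, o in triples if s == subject and r == relation]

def is_a_closure(node):
    """Transitively follow is-a arcs, so Jerry is also a Mammal and an Animal."""
    result = set()
    frontier = [node]
    while frontier:
        n = frontier.pop()
        for parent in related(n, "is-a"):
            if parent not in result:
                result.add(parent)
                frontier.append(parent)
    return result
```

Classification queries then become graph traversals: `is_a_closure("Jerry")` links Jerry to Cat, Mammal, and Animal through the network.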
Frame Representation
• A frame is a record-like structure that contains a set of properties and
their values to describe a physical thing. Frames are a sort of artificial
intelligence data structure that splits knowledge into substructures.
• It is made up of a set of slots and slot values
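A minimal sketch of a frame as a set of slots and slot values, using a plain dictionary; the slot names below are illustrative, not part of any standard frame system.

```python
# A frame is a record-like structure: a set of slots and their values
# describing one physical thing.
cat_frame = {
    "name":  "Jerry",
    "is_a":  "Cat",
    "color": "Brown",
    "owner": "Priya",
    "legs":  4,
}

def get_slot(frame, slot, default=None):
    """Read a slot value, falling back to a default when the slot is absent."""
    return frame.get(slot, default)
```

Real frame systems add inheritance between frames and procedural attachments to slots, but the slot/value core is as simple as this.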
Production Rules
• A production rules system consists of (condition, action) pairs, which
mean, "If condition then action".
Example:
IF (at bus stop AND bus arrives) THEN action (get into the bus)
• Working memory stores a description of the present state of
problem-solving, and rules can be used to write knowledge to it.
Other rules may be triggered by this knowledge.
Online:
https://en.wikipedia.org/wiki/Computing
https://www.cs.ubc.ca/~poole/ci/slides/ch1/html.html
http://www.cs.mun.ca/~oram/cs3754/AI6.pdf
https://courses.engr.illinois.edu/cs440/fa2018/Lectures/Bonnie-Dorr-4queens.pdf
https://www.javatpoint.com/ai-uninformed-search-algorithms
https://www.javatpoint.com/ai-informed-search-algorithms
https://www.brainkart.com/article/Hill-Climbing-Search-Algorithm--Concept,-Algorithm,-Advantages,-Disadvantages_8885/
https://www.aiforanyone.org/glossary/simulated-annealing
https://www.javatpoint.com/types-of-artificial-intelligence
https://www.youtube.com/watch?v=mtSn_Lh750g
https://www.techopedia.com/definition/2511/knowledge-base-klog
https://tutorialforbeginner.com/knowledge-based-agent-in-ai