
What is Artificial Intelligence?

Artificial Intelligence (AI) refers to the simulation of human intelligence in machines that are programmed to think and learn like humans. It encompasses a wide range of technologies and techniques that enable computers to perform tasks that typically require human intelligence, such as problem-solving, reasoning, learning, perception, and language understanding. Artificial intelligence exists when a machine has human-like skills such as learning, reasoning, and problem-solving. With artificial intelligence, we do not need to pre-program a machine to do some work; instead, we can create a machine with programmed algorithms that works with its own intelligence, and that is the power of AI. AI is not an entirely new idea: according to Greek myth, there were mechanical men in early days that could work and behave like humans. According to the father of Artificial Intelligence, John McCarthy, AI is "the science and engineering of making intelligent machines, especially intelligent computer programs". Artificial intelligence is a way of making a computer, a computer-controlled robot, or software think intelligently, in a manner similar to how intelligent humans think. AI is accomplished by studying how the human brain thinks and how humans learn, decide, and work while trying to solve a problem, and then using the outcomes of this study as a basis for developing intelligent software and systems.
Forms of AI

i. Narrow or Weak AI: This form of AI is designed for specific tasks and lacks general intelligence. Examples include virtual personal assistants like Siri, recommendation systems, and chatbots.
ii. General or Strong AI: General AI possesses human-level intelligence and can perform any intellectual task that a human can. This form of AI is still largely theoretical and has not yet been achieved. It is designed to learn, think, and perform at levels similar to humans.
iii. Artificial Superintelligence: This hypothetical form of AI would surpass human intelligence in every aspect and could potentially outperform humans in any domain; it would exceed the knowledge and capabilities of humans.
Above are the main forms of AI. Other forms of AI include:
i. Reactive Machines: AI capable of responding to external stimuli in real time, but unable to build memory or store information for the future.
ii. Limited Memory: AI that can store knowledge and use it to learn and train for future tasks.
iii. Theory of Mind: AI that can sense and respond to human emotions, in addition to performing the tasks of limited-memory machines.
iv. Self-aware: AI that can recognize others' emotions and also has a sense of self; this type does not yet exist.
History of Artificial Intelligence
The term "artificial intelligence" was coined in 1955 by John McCarthy. Early AI research focused on symbolic reasoning and expert systems. The 1956 Dartmouth Workshop is considered the birth of AI as a field. AI saw both periods of optimism (AI booms) and "AI winters" (periods of reduced funding and progress). Breakthroughs in machine learning, neural networks, and deep learning have fueled recent AI advancements.

Risks and Benefits in AI
As the world witnesses unprecedented growth in Artificial Intelligence (AI) technologies, it is essential to consider the potential risks and challenges associated with their widespread adoption.
Risks in AI
i. Lack of Transparency: Lack of transparency in AI systems, particularly in deep learning models that can be complex and difficult to interpret, is a pressing issue. This opaqueness obscures the decision-making processes and underlying logic of these technologies. When people cannot comprehend how an AI system arrives at its conclusions, it can lead to distrust and resistance to adopting these technologies.
ii. Bias and Discrimination: AI systems can inadvertently perpetuate or amplify societal biases due to biased training data or algorithmic design. To minimize discrimination and ensure fairness, it is crucial to invest in the development of unbiased algorithms and diverse training data sets.
iii. Privacy Concerns: AI technologies often collect and analyse large amounts of personal data, raising issues related to data privacy and security. To mitigate privacy risks, we must advocate for strict data protection regulations and safe data handling practices.
Characteristics of Intelligent Agents
i. Sensing: Agents perceive their environment through sensors.
ii. Reasoning: Agents use logic or algorithms to make decisions.
iii. Acting: Agents take actions to achieve their goals.
iv. Learning: Agents can adapt and improve their performance over time.
v. Autonomy: Agents operate independently and make decisions without human intervention.
Structure of Agents
The task of AI is to design an agent program which implements the agent function. The structure of an intelligent agent is a combination of architecture and agent program. It can be viewed as: Agent = Architecture + Agent Program
Following are the three main terms involved in the structure of an AI agent:
i. Architecture: The machinery that an AI agent executes on.
ii. Agent Function: The agent function maps a percept to an action.
iii. Agent Program: An implementation of the agent function. The agent program executes on the physical architecture to produce the agent function.
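The split between agent program and architecture can be sketched in a few lines of Python. This is a minimal illustration, not a standard library API: the two-location vacuum world and the percept format are invented for the example.

```python
# A minimal sketch of "Agent = Architecture + Agent Program".
# The vacuum-world percepts below are a made-up example.

def agent_program(percept):
    """Agent function: maps a percept to an action (simple reflex here)."""
    location, is_dirty = percept
    if is_dirty:
        return "Suck"
    return "Right" if location == "A" else "Left"

# The "architecture" is whatever machinery runs the program and feeds it
# percepts; here, a plain Python loop stands in for it.
percepts = [("A", True), ("A", False), ("B", True)]
actions = [agent_program(p) for p in percepts]
print(actions)  # ['Suck', 'Right', 'Suck']
```

The same agent program could run on a different architecture (a robot, a simulator) unchanged, which is the point of the separation.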
Types of Agents
Simple Reflex Agent: i. Simple reflex agents are the simplest agents. These agents take decisions on the basis of the current percepts and ignore the rest of the percept history. ii. These agents only succeed in a fully observable environment.
Model-Based Reflex Agent: A model-based agent can work in a partially observable environment and track the situation. A model-based agent has two important factors: i. Model: knowledge about "how things happen in the world", which is why it is called a model-based agent. ii. Internal State: a representation of the current state based on percept history.

Goal-Based Agents: Knowledge of the current state of the environment is not always sufficient for an agent to decide what to do. The agent needs to know its goal, which describes desirable situations. Goal-based agents expand the capabilities of the model-based agent by having the "goal" information. They choose an action so that they can achieve the goal.
Utility-Based Agents: These agents are similar to goal-based agents but add an extra component of utility measurement, which distinguishes them by providing a measure of success at a given state. Utility-based agents act based not only on goals but also on the best way to achieve them. A utility-based agent is useful when there are multiple possible alternatives and the agent has to choose the best action.
Learning Agents: A learning agent in AI is a type of agent that can learn from its past experiences; that is, it has learning capabilities. It starts by acting with basic knowledge and is then able to act and adapt automatically through learning.

Problem Solving Methods / Techniques
The process of problem-solving is frequently used to achieve objectives or resolve particular situations. In computer science, the term "problem-solving" refers to artificial intelligence methods, which may include formulating problems, choosing appropriate representations, applying algorithms, and conducting root-cause analyses that identify reasonable solutions. Artificial Intelligence (AI) problem-solving often involves investigating potential solutions through reasoning techniques, using mathematical tools such as polynomial and differential equations, and applying modelling frameworks. The same problem may have many solutions, each accomplished using a different algorithm, and certain problems have unique remedies; everything depends on how the particular situation is framed. Examples: i. Chess ii. N-Queens Problem iii. Tower of Hanoi Problem iv. Travelling Salesman Problem v. Water-Jug Problem
Problem Solving Agents
Problem-solving in artificial intelligence is the process of finding a solution to a problem. There are many different types of problems that can be solved, and the methods used depend on the specific problem. A common type of problem is finding a solution to a maze or navigation puzzle.
Search Algorithm Terminologies
Search: Searching is a step-by-step procedure to solve a search problem in a given search space. A search problem can have three main factors:
i. Search Space: The set of possible solutions which a system may have.
ii. Start State: The state from which the agent begins the search.
iii. Goal Test: A function which observes the current state and returns whether the goal state has been achieved.
Search Tree: A tree representation of a search problem is called a search tree. The root of the search tree is the root node, which corresponds to the initial state.
Actions: A description of all the actions available to the agent.
Transition Model: A description of what each action does.
Path Cost: A function which assigns a numeric cost to each path.
Solution: An action sequence which leads from the start node to the goal node.
Optimal Solution: A solution that has the lowest cost among all solutions.
Tree Structure
i. A tree is a way of organizing objects related in a hierarchical fashion. It is a type of data structure in which each element is attached to one or more elements directly beneath it.
ii. The connections between elements are called branches.
iii. A tree is often called an inverted tree because it is drawn with the root at the top.
iv. The elements that have no elements below them are called leaves.
v. A binary tree is a special type of tree in which each element has at most two branches below it.
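The terms above (branches, root, leaves) can be made concrete with a small binary tree in Python. The node values are arbitrary labels chosen for the example.

```python
# A small binary tree built from nested nodes; the values are illustrative.
class Node:
    def __init__(self, value, left=None, right=None):
        self.value = value
        self.left = left    # branch to the element beneath, on the left
        self.right = right  # branch to the element beneath, on the right

# Root at the top, leaves at the bottom (the "inverted tree"):
root = Node("A", Node("B", Node("D"), Node("E")), Node("C"))

def leaves(node):
    """Collect the elements with no elements below them (the leaves)."""
    if node is None:
        return []
    if node.left is None and node.right is None:
        return [node.value]
    return leaves(node.left) + leaves(node.right)

print(leaves(root))  # ['D', 'E', 'C']
```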
Generative AI
Generative Artificial Intelligence (generative AI or GenAI) is artificial intelligence capable of generating text, images, or other media using generative models. Generative AI models learn the patterns and structure of their input training data and then generate new data that has similar characteristics. They have applications in content generation, art, and entertainment.

Breadth-First Search (BFS)
A search strategy in which the highest layer of a decision tree is searched completely before proceeding to the next layer is called Breadth-First Search (BFS). In this strategy, no viable solutions are omitted, and therefore it is guaranteed that an optimal solution is found. This strategy is often not feasible when the search space is large. Breadth-first search is the most common search strategy for traversing a tree or graph; the algorithm searches breadthwise, hence the name. BFS starts searching from the root node of the tree and expands all successor nodes at the current level before moving to nodes of the next level. The breadth-first search algorithm is an example of a general-graph search algorithm. Breadth-first search is implemented using a FIFO queue data structure.
Algorithm:
1. Create a variable called LIST and set it to contain the starting state.
2. Loop until a goal state is found or LIST is empty:
a) Remove the first element from LIST and call it E. If LIST is empty, quit.
b) For every rule that can match the state E:
- Apply the rule to generate a new state.
- If the new state is a goal state, quit and return this state.
- Otherwise, add the new state to the end of LIST.
Advantages: i. Guaranteed to find an optimal solution (in terms of the shortest number of steps to reach the goal). ii. Can always find a goal node if one exists (complete).
Disadvantage: High storage requirement: exponential with tree depth.
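The algorithm above maps directly onto a few lines of Python using a FIFO queue. This is a sketch over an explicit adjacency-list graph; the graph itself is a made-up example, not from the text.

```python
from collections import deque

# BFS using the FIFO queue (the LIST of the algorithm above).
# The example graph is invented for illustration.
def bfs(graph, start, goal):
    frontier = deque([[start]])           # queue of paths; LIST in the text
    visited = {start}
    while frontier:                       # loop until goal found or LIST empty
        path = frontier.popleft()         # remove the first element (E)
        node = path[-1]
        if node == goal:
            return path                   # goal state: quit and return
        for successor in graph.get(node, []):
            if successor not in visited:
                visited.add(successor)
                frontier.append(path + [successor])  # add to the end of LIST
    return None

graph = {"A": ["B", "C"], "B": ["D"], "C": ["D", "E"], "D": ["F"], "E": ["F"]}
print(bfs(graph, "A", "F"))  # ['A', 'B', 'D', 'F']
```

Because whole levels are expanded in order, the first path that reaches the goal is also the shortest in number of steps, which is the optimality guarantee mentioned above.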
Depth-First Search (DFS)
A search strategy that extends the current path as far as possible before backtracking to the last choice point and trying the next alternative path is called Depth-First Search (DFS).
i. This strategy does not guarantee that the optimal solution has been found.
ii. In this strategy, search reaches a satisfactory solution more rapidly than breadth-first search, an advantage when the search space is large.
iii. Depth-first search is a recursive algorithm for traversing a tree or graph data structure.
iv. It is called depth-first search because it starts from the root node and follows each path to its deepest node before moving to the next path.
v. DFS uses a stack data structure for its implementation.
vi. The process of the DFS algorithm is otherwise similar to the BFS algorithm.
Algorithm: Depth-first search applies operators to each newly generated state, trying to drive directly toward the goal.
i. If the starting state is a goal state, quit and return success.
ii. Otherwise, do the following until success or failure is signalled:
1. Generate a successor E to the starting state. If there are no more successors, signal failure.
2. Call Depth-First Search with E as the starting state.
3. If success is returned, signal success; otherwise, continue in the loop.
Advantages: 1. Low storage requirement: linear with tree depth. 2. Easily programmed: the function call stack does most of the work of maintaining the state of the search.
Disadvantages: 1. May find a sub-optimal solution (one that is deeper or more costly than the best solution). 2. Incomplete: without a depth bound, it may not find a solution even if one exists.
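The recursive algorithm above can be sketched in Python, with the function call stack playing the role of the DFS stack, as the advantages note. The example graph is invented for illustration.

```python
# Recursive DFS matching the algorithm above: follow each path as deep as
# possible, backtracking when a branch fails. The graph is a made-up example.
def dfs(graph, state, goal, path=None):
    path = (path or []) + [state]
    if state == goal:                        # i. state is a goal: success
        return path
    for successor in graph.get(state, []):   # 1. generate a successor E
        if successor not in path:            # avoid cycling back along the path
            result = dfs(graph, successor, goal, path)  # 2. recurse with E
            if result:                       # 3. success returned: propagate it
                return result
    return None                              # no more successors: failure

graph = {"A": ["B", "C"], "B": ["D"], "C": ["E"], "E": ["F"]}
print(dfs(graph, "A", "F"))  # ['A', 'C', 'E', 'F']
```

Note that DFS first exhausts the dead-end branch through B before backtracking and finding the goal via C, and the path it returns need not be the shortest one.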

Depth-Limited Search
A depth-limited search algorithm is similar to depth-first search with a predetermined limit. Depth-limited search overcomes the drawback of infinite paths in depth-first search. In this algorithm, a node at the depth limit is treated as if it has no successor nodes. Depth-limited search can terminate with two kinds of failure:
i. Standard failure value: indicates that the problem has no solution.
ii. Cutoff failure value: indicates that there is no solution within the given depth limit.
Advantage: Depth-limited search is memory efficient.
Disadvantages: i. Depth-limited search also has the disadvantage of incompleteness. ii. It may not be optimal if the problem has more than one solution.
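The two failure values can be made explicit in code: `None` for standard failure (no solution at all) and the string `"cutoff"` when the limit was the reason nothing was found. The graph is a made-up example.

```python
# Depth-limited search: DFS that treats nodes at the limit as if they had
# no successors. Returns a path, "cutoff", or None (standard failure).
def dls(graph, state, goal, limit):
    if state == goal:
        return [state]
    if limit == 0:
        return "cutoff"                  # cut off here: limit reached
    cutoff_occurred = False
    for successor in graph.get(state, []):
        result = dls(graph, successor, goal, limit - 1)
        if result == "cutoff":
            cutoff_occurred = True
        elif result is not None:
            return [state] + result
    # "cutoff" = no solution within this limit; None = no solution at all
    return "cutoff" if cutoff_occurred else None

graph = {"A": ["B", "C"], "B": ["D"], "D": ["G"]}
print(dls(graph, "A", "G", 2))  # 'cutoff'  (G sits at depth 3)
print(dls(graph, "A", "G", 3))  # ['A', 'B', 'D', 'G']
```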
Depth-First Iterative Deepening (DFID)
The iterative deepening algorithm is a combination of the DFS and BFS algorithms. This search algorithm finds the best depth limit by gradually increasing the limit until a goal is found. It performs depth-first search up to a certain "depth limit" and keeps increasing the depth limit after each iteration until the goal node is found. This search algorithm combines the benefit of breadth-first search's completeness with depth-first search's memory efficiency. Iterative deepening is a useful uninformed search when the search space is large and the depth of the goal node is unknown.
Advantage: It combines the benefits of the BFS and DFS search algorithms in terms of fast search and memory efficiency.
Disadvantage: The main drawback of DFID is that it repeats all the work of the previous phase.
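Iterative deepening is just a loop over depth-limited searches with a growing limit. The sketch below includes its own small depth-limited helper; the graph and the `max_depth` cap are invented for the example.

```python
# Iterative deepening: run depth-limited DFS with limit 0, 1, 2, ...
# until the goal is found. The example graph is made up.
def depth_limited(graph, state, goal, limit, path):
    if state == goal:
        return path + [state]
    if limit == 0:
        return None                      # cut off at this depth
    for successor in graph.get(state, []):
        result = depth_limited(graph, successor, goal, limit - 1,
                               path + [state])
        if result:
            return result
    return None

def iddfs(graph, start, goal, max_depth=20):
    for limit in range(max_depth + 1):   # gradually increase the depth limit
        result = depth_limited(graph, start, goal, limit, [])
        if result:
            return result                # found at the shallowest possible depth
    return None

graph = {"A": ["B", "C"], "B": ["D"], "C": ["G"]}
print(iddfs(graph, "A", "G"))  # ['A', 'C', 'G']
```

Each iteration uses only DFS-sized memory, yet the goal is found at the shallowest depth at which it exists, which is the BFS-like guarantee; the cost is that shallower levels are re-expanded on every iteration, the drawback noted above.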
A* Search Algorithm
A* search is the most commonly known form of best-first search. It uses a heuristic function h(n) and the cost to reach node n from the start state, g(n). It combines features of uniform-cost search (UCS) and greedy best-first search, by which it solves problems efficiently. The A* search algorithm finds the shortest path through the search space using the heuristic function; it expands a smaller search tree and provides an optimal result faster. The A* algorithm is similar to UCS except that it uses g(n) + h(n) instead of g(n). In the A* search algorithm, we use the search heuristic as well as the cost to reach the node, so we can combine both costs as follows; this sum is called the fitness number:
f(n) = g(n) + h(n)
Advantages: i. The A* search algorithm performs better than other search algorithms. ii. The A* search algorithm is optimal and complete. iii. It can solve very complex problems.
Disadvantages: i. It does not always produce the shortest path, as it relies on heuristics and approximation (optimality requires an admissible heuristic). ii. The A* search algorithm has some complexity issues.
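The f(n) = g(n) + h(n) ordering can be sketched with a priority queue. The weighted graph and heuristic values below are invented for illustration; with an admissible heuristic like this one, the first goal popped is optimal.

```python
import heapq

# A* over a weighted graph, ordering the frontier by f(n) = g(n) + h(n).
# The graph and heuristic values are a made-up example.
def a_star(graph, h, start, goal):
    frontier = [(h[start], 0, start, [start])]   # (f, g, node, path)
    best_g = {start: 0}
    while frontier:
        f, g, node, path = heapq.heappop(frontier)  # lowest fitness number
        if node == goal:
            return path, g
        for succ, cost in graph.get(node, []):
            g2 = g + cost
            if g2 < best_g.get(succ, float("inf")):  # found a cheaper route
                best_g[succ] = g2
                heapq.heappush(frontier,
                               (g2 + h[succ], g2, succ, path + [succ]))
    return None, float("inf")

graph = {"S": [("A", 1), ("B", 4)], "A": [("B", 2), ("G", 12)],
         "B": [("G", 3)]}
h = {"S": 7, "A": 6, "B": 2, "G": 0}
print(a_star(graph, h, "S", "G"))  # (['S', 'A', 'B', 'G'], 6)
```

Setting h(n) = 0 everywhere turns this into uniform-cost search, which is exactly the "A* is UCS with g(n) + h(n) instead of g(n)" relationship described above.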
Constraint Satisfaction Problem (CSP) A Constraint Satisfaction Problem in artificial intelligence involves
a set of variables, each of which has a domain of possible values and a set of constraints that define the
allowable combinations of values for the variables. The goal is to find a value for each variable such that
all the constraints are satisfied.
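The definition above (variables, domains, constraints) can be made concrete with a tiny backtracking solver. The map-colouring instance, its region names, and the not-equal constraint are invented for the example.

```python
# A tiny CSP sketch: variables with domains plus a not-equal constraint
# between neighbouring regions (map colouring), solved by backtracking.
def solve_csp(variables, domains, neighbours, assignment=None):
    assignment = assignment or {}
    if len(assignment) == len(variables):
        return assignment                  # all variables set, constraints hold
    var = next(v for v in variables if v not in assignment)
    for value in domains[var]:
        # constraint check: adjacent regions must not share a colour
        if all(assignment.get(n) != value for n in neighbours[var]):
            result = solve_csp(variables, domains, neighbours,
                               {**assignment, var: value})
            if result:
                return result
    return None                            # dead end: backtrack

variables = ["WA", "NT", "SA"]
domains = {v: ["red", "green", "blue"] for v in variables}
neighbours = {"WA": ["NT", "SA"], "NT": ["WA", "SA"], "SA": ["WA", "NT"]}
print(solve_csp(variables, domains, neighbours))
# {'WA': 'red', 'NT': 'green', 'SA': 'blue'}
```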
AO* Search Algorithm
The AO* algorithm performs best-first search on AND-OR graphs. It divides a given difficult problem into a smaller group of problems that are then resolved using the AND-OR graph concept. AND-OR graphs are specialized graphs used for problems that can be divided into smaller sub-problems. The AND side of the graph represents a set of tasks that must all be completed to achieve the main goal, while the OR side represents alternative methods for accomplishing the same goal.
Hill Climbing Search
The hill climbing algorithm is a local search algorithm which continuously moves in the direction of increasing elevation/value to find the peak of the mountain, i.e. the best solution to the problem. It terminates when it reaches a peak where no neighbour has a higher value. Hill climbing is a technique used for optimizing mathematical problems. One widely discussed example of the hill climbing algorithm is the Travelling Salesman Problem, in which we need to minimize the distance travelled by the salesman.
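The "keep moving uphill until no neighbour is higher" loop fits in a few lines. The one-dimensional objective function and integer neighbourhood below are invented for illustration; on real problems the state and neighbour function would encode, say, tour swaps for the TSP.

```python
# Hill climbing: repeatedly move to the best neighbour until no neighbour
# improves the objective. The objective and neighbourhood are made up.
def hill_climb(start, objective, neighbours):
    current = start
    while True:
        best = max(neighbours(current), key=objective, default=current)
        if objective(best) <= objective(current):
            return current               # peak reached: no neighbour is higher
        current = best

objective = lambda x: -(x - 3) ** 2      # single peak at x = 3
neighbours = lambda x: [x - 1, x + 1]
print(hill_climb(0, objective, neighbours))  # 3
```

On this single-peak objective the climb always reaches 3; on multi-peak landscapes the same loop can stop at a local maximum, which is the classic weakness of plain hill climbing.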
Optimal Decisions in Game Theory Optimal decision-making in games is a fundamental concept in game
theory and artificial intelligence. It refers to the process of selecting the best possible move or strategy
in a game to maximize one's chances of winning or achieving a desired outcome. Here are some key
aspects of optimal decision-making in games: i. Objective Function: In most games, there is a clear
objective or goal, such as winning the game, maximizing the score, or achieving a specific outcome.
Optimal decisions are those that help progress toward this objective. ii.
Game Theory: Game theory is the study of how rational players make decisions in strategic situations,
like games. It involves analyzing the choices and strategies of all players to determine the optimal course
of action. iii. Search Algorithms: Many games, especially board games like chess and Go, involve
extensive search spaces with a large number of possible moves. Search algorithms, such as minimax
with alpha-beta pruning or Monte Carlo Tree Search (MCTS), are used to explore these spaces efficiently
and find optimal moves. iv. Heuristics: In situations where an exhaustive search is not feasible, heuristics
or rule- based approaches can be used to estimate the quality of different moves. These heuristics guide
decision-making by evaluating the current state of the game. v. Partial Information: Some games
involve hidden or imperfect information, making it challenging to determine the optimal move. In such
cases, players may use probabilistic reasoning and adapt their strategies based on available information.
vi. Nash Equilibrium: In games involving multiple players and strategies, the Nash equilibrium
represents a situation where no player has an incentive to change their strategy unilaterally. Optimal
decisions can lead to achieving or disrupting Nash equilibria, depending on the game's context. vii.
Mixed Strategies: In certain games, players may employ mixed strategies, where they choose moves
with specific probabilities rather than always following a deterministic strategy. These mixed strategies
can optimize outcomes when facing opponents with unpredictable behaviors. viii. Simulation and
Learning: In complex and dynamic games, agents can use simulations or machine learning techniques to
estimate the value of different moves. This is common in video games and AI-driven decision-making. ix.
Adaptation: Optimal decisions may change as the game progresses or as the opponent's strategy
becomes clearer. Adapting and reacting to the opponent's moves are crucial aspects of optimal
decision-making. x. Time Constraints: Real-time games and scenarios with time constraints require fast
decision-making. Algorithms like alpha-beta pruning are adapted to these situations to find good moves
within limited time frames. xi. Psychological Aspects: In games involving human players, psychology and
bluffing can be part of optimal decision-making. Predicting the opponent's intentions and reactions
becomes important.
xii. Risk and Uncertainty: Evaluating risks and uncertainties is integral to making optimal decisions,
especially in games of chance or games with imperfect information. Optimal decision-making in games is
a rich and evolving field, with various techniques and strategies depending on the specific game and
context. It often involves a combination of mathematical analysis, computational algorithms, strategic
thinking, and adaptability to achieve the best possible outcomes.
Heuristic Alpha-Beta Tree Search
Alpha-beta pruning is a popular algorithm used in game theory and artificial intelligence for minimizing the number of nodes evaluated in the search tree. It works by keeping track of two values, alpha and beta, which represent the minimum score the maximizing player is assured of and the maximum score the minimizing player is assured of, respectively. Nodes in the search tree are pruned when it is determined that they cannot affect the final decision, which significantly reduces the number of nodes evaluated. Alpha-beta pruning is a modified version of the minimax algorithm; it is an optimization technique for minimax.
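The alpha/beta bookkeeping described above can be sketched over an explicit game tree, where nested lists are internal nodes and numbers are leaf payoffs for the maximizing player. The tree itself is a made-up example.

```python
# Minimax with alpha-beta pruning. alpha = best score MAX is assured of so
# far; beta = best score MIN is assured of. The example tree is invented.
def alphabeta(node, maximizing, alpha=float("-inf"), beta=float("inf")):
    if isinstance(node, (int, float)):       # leaf: return its payoff
        return node
    if maximizing:
        value = float("-inf")
        for child in node:
            value = max(value, alphabeta(child, False, alpha, beta))
            alpha = max(alpha, value)        # tighten MAX's assurance
            if alpha >= beta:
                break                        # prune: MIN will never allow this
        return value
    value = float("inf")
    for child in node:
        value = min(value, alphabeta(child, True, alpha, beta))
        beta = min(beta, value)              # tighten MIN's assurance
        if alpha >= beta:
            break                            # prune: MAX will never allow this
    return value

tree = [[3, 5], [2, 9], [0, 1]]              # MAX root over three MIN nodes
print(alphabeta(tree, True))  # 3
```

In this tree the leaf 9 is never evaluated: once MIN finds 2 in the second branch, that branch can no longer beat the 3 MAX is already assured of, so it is cut off; the result is identical to plain minimax, only cheaper.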
Stochastic Games
Stochastic games are games in which uncertainty, often in the form of randomness or chance, plays a role in determining the outcome. These games are more challenging to analyze and solve than deterministic games. Common examples include card games with shuffled decks and games with random events, like dice rolls. Stochastic games, first introduced by Shapley, model dynamic interactions in which the environment changes in response to the behaviour of the players.
Various Terminologies Involved
i. A stochastic game: A repeated interaction between several participants in which the underlying state of the environment changes stochastically and depends on the decisions of the participants.
ii. A strategy: A rule that dictates how a participant in an interaction makes decisions as a function of the observed behaviour of the other participants and of the evolution of the environment.
iii. Evaluation of stage payoffs: The way a participant in a repeated interaction evaluates the stream of stage payoffs received (or stage costs paid) along the interaction.
iv. An equilibrium: A collection of strategies, one for each player, such that each player maximizes (or minimizes, in the case of stage costs) their evaluation of stage payoffs given the strategies of the other players.
v. A correlated equilibrium: An equilibrium in an extended game in which, at the outset of the game, each player receives a private signal, and the vector of private signals is chosen according to a known joint probability distribution. In the extended game, the strategy of a player depends, in addition to past play, on the signal received.
Partially Observable Games i. In some games and decision-making scenarios, not all information is
available to the player. ii. Partial observability refers to situations where a player cannot fully perceive
the current state of the game or environment. iii. Strategies in partially observable games often involve
making probabilistic inferences based on available information.
Introduction to Explainable AI
Explainable AI aims to make AI models more transparent and interpretable, addressing concerns about bias and trust. It helps users understand why AI systems make specific decisions. Explainable Artificial Intelligence (XAI) is a set of processes and methods that allows human users to comprehend and trust the results and output created by machine learning algorithms. Explainable AI is used to describe an AI model, its expected impact, and potential biases. It helps characterize model accuracy, fairness, transparency, and outcomes in AI-powered decision-making. Explainable AI is crucial for an organization to build trust and confidence when putting AI models into production. AI explainability also helps an organization adopt a responsible approach to AI development.

Limitations of Game Search Algorithms
i. Game search algorithms like alpha-beta pruning and MCTS have limitations.
ii. They can become computationally expensive when dealing with large search spaces or deep game trees.
iii. They may not perform well in games with complex, non-linear dynamics or when dealing with imperfect information.
iv. Finding optimal strategies in certain games, like poker, is particularly challenging due to the massive state space and the role of hidden information.
v. They rely on the assumption that the game is fully observable, deterministic, and has perfect information.
vi. Their effectiveness is highly dependent on the complexity of the game and the branching factor of the game tree.
vii. They can be computationally expensive and time-consuming to run.
viii. The assumption that players are rational may not always hold, leading to suboptimal decisions.
ix. Game search algorithms cannot solve games beyond their rule-based definitions and cannot cope with games that incorporate random events.
x. They may not always produce an optimal or even satisfactory solution, as they do not take into account human behaviour or intuition, which can sometimes lead to unexpected outcomes.
Knowledge Representation
Knowledge representation is a fundamental concept in Artificial Intelligence (AI) that involves creating models and structures to represent information and knowledge in a way that intelligent systems can use: the process of capturing knowledge in a form suitable for reasoning. Various methods such as logic, graphs, frames, and semantic networks are used. The goal of knowledge representation is to enable machines to reason about the world like humans by capturing and encoding knowledge in a format that can be easily processed and utilized by AI systems.
Types of Knowledge Representation
i. Logical representation: Representing knowledge in a symbolic logic or rule-based system, which uses formal languages to express and infer new knowledge.
ii. Semantic networks: Representing knowledge through nodes and links, where nodes represent concepts or objects and links represent their relationships.
iii. Frames: Representing knowledge in structures called frames, which capture the properties and attributes of objects or concepts and the relationships between them.
iv. Ontologies: Representing knowledge as a formal, explicit specification of the concepts, properties, and relationships within a particular domain.
v. Neural networks: Representing knowledge as patterns of connections between nodes in a network, which can be used to learn and infer new knowledge from data.
Representations and Mappings
i. Objects: The AI needs to know all the facts about the objects in our world domain. Example: a keyboard has keys, a guitar has strings, etc.
ii. Events: The actions which occur in our world are called events.
iii. Performance: Describes behaviour involving knowledge about how to do things.
iv. Meta-knowledge: The knowledge about what we know is called meta-knowledge.
v. Facts: The things in the real world that are known and proven true.
vi. Knowledge Base: A knowledge base in artificial intelligence aims to capture human expert knowledge to support decision-making, problem-solving, and more.
Approaches to Knowledge Representation
There are various approaches to knowledge representation in AI, including:
i. Simple relational knowledge:
a. It is the simplest way of storing facts. It uses the relational method, in which each fact about a set of objects is set out systematically in columns.
b. This approach to knowledge representation is common in database systems, where the relationships between different entities are represented.
c. This approach offers little opportunity for inference.

Logical Agents
These are agents that use logic to represent and reason about knowledge, using propositions and inference rules.
Knowledge-Based Agents
These are agents that use knowledge to make intelligent decisions, combining knowledge representation and reasoning. An intelligent agent needs knowledge about the real world for taking decisions and reasoning in order to act efficiently. Knowledge-based agents are agents that have the capability of maintaining an internal state of knowledge, reasoning over that knowledge, updating their knowledge after observations, and taking actions. These agents can represent the world with some formal representation and act intelligently.
Knowledge-based agents are composed of two main parts: i. the knowledge base and ii. the inference system.
A knowledge-based agent must be able to do the following:
i. Represent states, actions, etc.
ii. Incorporate new percepts.
iii. Update the internal representation of the world.
iv. Deduce the internal representation of the world.
v. Deduce appropriate actions.
Propositional Logic
Propositional logic in Artificial Intelligence is one of the many methods by which knowledge is represented to a machine so that its automatic learning capacity can be enhanced. Machine learning and knowledge representation and reasoning (KR&R) are imperative for building smart machines that can perform tasks that typically require human intelligence.
Inference in First-Order Logic
Inference in First-Order Logic is used to deduce new facts or sentences from existing sentences. Before looking at FOL inference rules, let us review some basic terminology.
Propositional vs. First-Order Inference
Propositional Logic (PL): A proposition is an analytical statement which is either true or false. Propositional logic is a technique that represents knowledge in logical and mathematical form. There are two types of propositions: atomic and compound.
Facts about Propositional Logic:
i. Since propositional logic works on 0 and 1, it is also known as "Boolean logic".
ii. A proposition can be either true or false; it can never be both.
iii. In this type of logic, symbolic variables are used to represent the logic, and any symbol can be used to represent a variable. First-order logic, by contrast, comprises objects, relations, functions, and logical connectives.
iv. A propositional formula which is always false is called a "contradiction", whereas a propositional formula which is always true is called a "tautology".
Key differences between Propositional Logic and First-Order Logic:
i. Propositional logic converts a complete sentence into a symbol and makes it logical, whereas in first-order logic a sentence is represented through relations, constants, and functions.
ii. The limitation of PL is that it cannot represent individual entities, whereas FOL can easily represent individuals; a single sentence about an individual can be easily represented in FOL.
iii. PL cannot express generalization, specialization, or patterns; for example, quantifiers cannot be used in PL, but in FOL users can easily use quantifiers to express generalization, specialization, and patterns.
Unification and First-Order Inference Unification is a process of making two different logical atomic
expressions identical by finding a substitution. Unification depends on the substitution process. It takes
two literals as input and makes them identical using substitution. Let, and ; be two atomic sentences
and a be a unifier such that, Ψσ = Ψσ, then it can be expressed as UNIFY(Ψ), Ψ₂).
Example: Find the MGU for Unify(King(x), King(John)). The most general unifier is the substitution {John/x}: substituting John for x makes the two expressions identical.
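The substitution process can be made concrete with a small unification routine. The sketch below is a simplified version of the standard algorithm (it omits the occurs check); by convention here, compound terms are tuples, lowercase strings are variables, and capitalized strings are constants:

```python
def is_variable(t):
    # Convention for this sketch: lowercase strings are variables.
    return isinstance(t, str) and t[:1].islower()

def unify(x, y, subst=None):
    """Return the most general unifier (a dict) of terms x and y, or None."""
    if subst is None:
        subst = {}
    if x == y:
        return subst
    if is_variable(x):
        return unify_var(x, y, subst)
    if is_variable(y):
        return unify_var(y, x, subst)
    if isinstance(x, tuple) and isinstance(y, tuple) and len(x) == len(y):
        for xi, yi in zip(x, y):       # unify argument lists element-wise
            subst = unify(xi, yi, subst)
            if subst is None:
                return None
        return subst
    return None

def unify_var(var, term, subst):
    if var in subst:                   # follow an existing binding
        return unify(subst[var], term, subst)
    return {**subst, var: term}

# Unify(King(x), King(John)) yields the MGU {x: 'John'}.
print(unify(('King', 'x'), ('King', 'John')))  # {'x': 'John'}
```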

Forward Chaining, Backward Chaining: In artificial intelligence, forward and backward chaining are two important inference strategies. Forward chaining starts with the known facts and derives conclusions by applying production rules until no more can be inferred; it is commonly used in rule-based systems. Backward chaining begins with a query and works backward, applying rules to find a path from known facts to the query; it is commonly used in expert systems and goal-driven reasoning.
Properties of forward chaining: i. It is a bottom-up approach, as it moves from facts to conclusions.
ii. It is a process of drawing conclusions from known facts or data, starting from the initial state
and reaching the goal state. iii. The forward-chaining approach is also called data-driven, as we reach
the goal using the available data.
Properties of backward chaining: i. It is known as a top-down approach. ii. Backward chaining is based
on the modus ponens inference rule. iii. In backward chaining, the goal is broken into sub-goals
to prove the facts true. iv. It is called a goal-driven approach, as a list of goals decides which rules are
selected and used.
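The two strategies can be sketched side by side on a tiny propositional rule base (the rules and facts are invented for illustration; the backward chainer is simplified and omits cycle detection):

```python
def forward_chain(facts, rules):
    """Data-driven: start from known facts, apply rules until no new
    conclusion can be derived (a fixpoint)."""
    facts = set(facts)
    changed = True
    while changed:
        changed = False
        for premises, conclusion in rules:
            if set(premises) <= facts and conclusion not in facts:
                facts.add(conclusion)
                changed = True
    return facts

def backward_chain(goal, facts, rules):
    """Goal-driven: break the goal into sub-goals until known facts
    are reached (no cycle detection in this sketch)."""
    if goal in facts:
        return True
    return any(all(backward_chain(p, facts, rules) for p in premises)
               for premises, conclusion in rules if conclusion == goal)

# Invented rule base: rain + outside -> wet, wet -> cold.
rules = [(["rain", "outside"], "wet"), (["wet"], "cold")]
facts = {"rain", "outside"}

print(forward_chain(facts, rules))           # derives 'wet' and 'cold'
print(backward_chain("cold", facts, rules))  # True
```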
Categories and Objects: The knowledge that needs to be represented in AI can be classified into several
categories, including objects, events, performance, facts, meta-knowledge, and knowledge bases.
i. Objects: Objects refer to things in the world that have physical properties and can be observed,
touched, or manipulated. Examples of objects include cars, buildings, and people. Object-oriented
programming is an example of a technique that uses objects to represent knowledge in AI.
ii. Events: Events refer to actions or occurrences that take place in the world. Examples of events include
driving a car, cooking food, or attending a concert. Event-based systems use events to represent
knowledge in AI.
iii. Performance: Performance refers to the behavior of agents or systems that perform a task. It
includes the goals and objectives of the task and the criteria used to evaluate performance.
Performance-based systems use performance to represent knowledge in AI.
iv. Facts: Facts refer to propositions that are either true or false. They are statements that can be
verified using evidence or logical deduction. Examples of facts include "the sky is blue," "the earth
revolves around the sun," and "water boils at 100 degrees Celsius." Knowledge-based systems use facts
to represent knowledge in AI.
Mental Objects and Modal Logic: A mental object is the range of what one has perceived, discovered, or learned; the term relates to the cognition of the human brain. Mental objects formed in response to the same stimuli, or in the same surroundings, vary widely, because every brain perceives and learns information in its own way. Modal logic is an extension of traditional propositional logic. It adds operators such as "it is possible that," "it is necessary that," "it will always be the case that," "an agent believes that," etc. Modal logic extends classical logic to reason about necessity, possibility, belief, and knowledge. It is used for modeling agents' beliefs and for reasoning about what is true in different possible worlds.
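A toy possible-worlds (Kripke) model makes the modal operators concrete: "necessarily p" holds at a world if p holds in every accessible world, and "possibly p" if it holds in at least one. The worlds, accessibility relation, and valuation below are invented for illustration:

```python
# Three worlds, an accessibility relation, and which propositions hold where.
worlds = {"w1", "w2", "w3"}
access = {"w1": {"w2", "w3"}, "w2": {"w2"}, "w3": set()}
holds = {"w1": {"p"}, "w2": {"p"}, "w3": {"p", "q"}}

def necessarily(prop, world):
    # True if prop holds in every world accessible from this one.
    return all(prop in holds[w] for w in access[world])

def possibly(prop, world):
    # True if prop holds in at least one accessible world.
    return any(prop in holds[w] for w in access[world])

print(necessarily("p", "w1"))  # True: p holds in both w2 and w3
print(possibly("q", "w1"))     # True: q holds in w3
print(necessarily("q", "w1"))  # False: q fails in w2
```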
Reasoning Systems for Categories: Reasoning systems can be designed to classify objects into
categories, often using rules, decision trees, or machine learning algorithms. These systems are used in
applications like natural language processing, image recognition, and recommendation systems.
Types of Reasoning: In artificial intelligence, reasoning can be divided into the following categories: deductive reasoning, inductive reasoning, abductive reasoning, common-sense reasoning, monotonic reasoning, and non-monotonic reasoning.

Reasoning with Default Information: Default reasoning is a form of nonmonotonic reasoning where
plausible conclusions are inferred based on general rules which may have exceptions (defaults).
When giving information, we don't want to enumerate all of the exceptions, even if we could think of
them all. In default reasoning, we specify general knowledge and modularly add exceptions. The general
knowledge is used for cases we don't know are exceptional.
Classical logic is monotonic: if a sentence g logically follows from a set of premises A, it also follows from any superset of A.
Default reasoning is nonmonotonic: when we add the fact that something is exceptional, we can no longer conclude what we could before.
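The nonmonotonic behaviour can be shown in a few lines: a default conclusion is withdrawn once an exceptional fact is added. The birds/penguins example is the classic illustration; the predicate encoding below is ours:

```python
def flies(animal, facts):
    """Default reasoning sketch: birds fly by default, unless an
    exception (being a penguin) is known."""
    if ("penguin", animal) in facts:   # the exception defeats the default
        return False
    return ("bird", animal) in facts   # the default rule: birds fly

facts = {("bird", "tweety")}
print(flies("tweety", facts))          # True under the default

facts.add(("penguin", "tweety"))       # learning more retracts the conclusion
print(flies("tweety", facts))          # False: nonmonotonic behaviour
```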
Applications of AI: AI is being applied across various domains, from healthcare and finance to education
and entertainment. It is used for tasks such as recommendation systems, fraud detection, and customer
support automation. AI is becoming essential because it can solve complex problems efficiently in
multiple industries, such as healthcare, entertainment, finance, and education, and it is making our
daily lives more comfortable and fast.
Language Models: A simple definition of a language model is an AI model that has been trained to
predict the next word or words in a text based on the preceding words; it is part of the technology that
predicts the next word you want to type on your mobile phone, allowing you to complete the message
faster. The task of predicting the next word(s) is referred to as self-supervised learning: it does not need
labels, just lots of text, because the training process derives its own labels from the text itself. A language
model can be monolingual or multilingual. Wikipedia suggests that there should be separate language
models for each document collection; however, Jeremy and Sebastian found that the Wikipedia sets have
sufficient overlap that this is not necessary.
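A bigram model is about the smallest possible next-word predictor. The sketch below counts adjacent word pairs in a toy corpus (invented for illustration) and predicts the most frequent follower of a given word:

```python
from collections import Counter, defaultdict

# Toy corpus; a real language model trains on vastly more text.
corpus = "the cat sat on the mat the cat ate the fish".split()

# Count, for each word, how often each next word follows it.
followers = defaultdict(Counter)
for prev, nxt in zip(corpus, corpus[1:]):
    followers[prev][nxt] += 1

def predict_next(word):
    """Predict the most frequent word seen after `word`, or None."""
    counts = followers.get(word)
    return counts.most_common(1)[0][0] if counts else None

print(predict_next("the"))  # 'cat' ('cat' follows 'the' twice, others once)
```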
Information Retrieval: AI-powered information retrieval systems help users find relevant information in vast datasets; search engines and recommendation systems are examples. Information Retrieval (IR) may be defined as a software program that deals with the organization, storage, retrieval, and evaluation of information from document repositories, particularly textual information. The system assists users in finding the information they require, but it does not explicitly return the answers to their questions; instead, it informs them of the existence and location of documents that might contain the required information. The documents that satisfy the user's requirement are called relevant documents. A perfect IR system would retrieve only relevant documents.

Information Extraction: Information extraction techniques automatically extract structured
information from unstructured text data, enhancing data analysis. Information extraction is the process
of extracting information from unstructured textual sources so that entities can be found, classified,
and stored in a database. Semantically enhanced information extraction (also known as semantic
annotation) couples those entities with their semantic descriptions and connections from a knowledge
graph; by adding metadata to the extracted concepts, this technology solves many challenges in
enterprise content management and knowledge discovery. Typically, the information to be extracted is
specific and pre-specified. One of the simplest examples is when your email client extracts just the
relevant data from a message for you to add to your calendar.
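The calendar example can be approximated with regular expressions: pre-specified patterns pull a date and time field out of free text. The email text and field names below are invented for illustration:

```python
import re

# Invented example message.
email = "Hi team, the project review is scheduled for 2024-03-15 at 14:30."

# Pre-specified patterns for the structured fields we want to extract.
date = re.search(r"\d{4}-\d{2}-\d{2}", email)
time = re.search(r"\b\d{2}:\d{2}\b", email)

event = {"date": date.group() if date else None,
         "time": time.group() if time else None}
print(event)  # {'date': '2024-03-15', 'time': '14:30'}
```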
Introduction to Natural Language Processing (NLP): NLP focuses on enabling computers to understand, interpret, and generate human language; applications include sentiment analysis, chatbots, and language translation. NLP stands for Natural Language Processing, a field at the intersection of computer science, human language, and artificial intelligence. It is the technology used by machines to understand, analyse, manipulate, and interpret human languages. It helps developers organize knowledge for tasks such as translation, automatic summarization, Named Entity Recognition (NER), speech recognition, relationship extraction, and topic segmentation.
Advantages of NLP
i. NLP helps users ask questions about any subject and get a direct response within seconds.
ii. NLP offers exact answers to the question; it does not offer unnecessary or unwanted information.
iii. NLP helps computers communicate with humans in their own languages.
iv. It is very time-efficient.
v. Most companies use NLP to improve the efficiency and accuracy of documentation processes and to identify information in large databases.
Disadvantages of NLP
i. NLP may not capture context.
ii. NLP is unpredictable.
iii. NLP may require more keystrokes.
iv. NLP systems often cannot adapt to a new domain and have limited functionality, which is why an NLP system is typically built for a single, specific task.
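Sentiment analysis, one of the applications mentioned above, can be sketched with a simple lexicon-based approach. The word lists here are tiny and hand-made; real systems use learned models:

```python
# Hand-made sentiment lexicons (illustrative only).
POSITIVE = {"good", "great", "excellent", "love", "fast"}
NEGATIVE = {"bad", "poor", "slow", "hate", "terrible"}

def sentiment(text):
    """Tokenize crudely, then score positive minus negative lexicon hits."""
    tokens = text.lower().replace(".", " ").replace(",", " ").split()
    score = sum(t in POSITIVE for t in tokens) - sum(t in NEGATIVE for t in tokens)
    return "positive" if score > 0 else "negative" if score < 0 else "neutral"

print(sentiment("The support was great and the response was fast."))  # positive
print(sentiment("Terrible documentation and slow replies."))          # negative
```

This kind of rule-based sketch illustrates the first disadvantage above: it has no notion of context, so negation ("not great") would be mis-scored.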
Reinforcement Learning and Robotics: Reinforcement learning is being used to train robots and
autonomous systems to perform tasks in dynamic environments; applications range from autonomous
vehicles to warehouse automation. Reinforcement learning is a feedback-based machine learning
technique in which an agent learns to behave in an environment by performing actions and seeing
the results of those actions. For each good action, the agent gets positive feedback, and for each bad
action, the agent gets negative feedback or a penalty. In reinforcement learning, the agent learns
automatically from this feedback without any labeled data, unlike supervised learning.
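The feedback loop can be illustrated with tabular Q-learning on a toy environment: an agent on a five-cell line earns a reward of +1 only at the rightmost cell and learns to walk right. All parameters and the environment are invented for illustration:

```python
import random

# Toy 5-cell line world: start at cell 0, reward +1 at cell 4 (the goal).
N_STATES, GOAL = 5, 4
ACTIONS = (-1, +1)                       # step left or step right
Q = {(s, a): 0.0 for s in range(N_STATES) for a in ACTIONS}
alpha, gamma, epsilon = 0.5, 0.9, 0.2    # learning rate, discount, exploration

random.seed(0)

def choose(s):
    # Epsilon-greedy action selection with random tie-breaking.
    if random.random() < epsilon:
        return random.choice(ACTIONS)
    return max(ACTIONS, key=lambda a: (Q[(s, a)], random.random()))

for _ in range(200):                     # training episodes
    s = 0
    while s != GOAL:
        a = choose(s)
        s2 = min(max(s + a, 0), N_STATES - 1)
        r = 1.0 if s2 == GOAL else 0.0   # positive feedback only at the goal
        # Q-learning update: move toward reward + discounted best future value.
        Q[(s, a)] += alpha * (r + gamma * max(Q[(s2, b)] for b in ACTIONS) - Q[(s, a)])
        s = s2

policy = [max(ACTIONS, key=lambda a: Q[(s, a)]) for s in range(GOAL)]
print(policy)  # learned greedy action per non-goal state (+1 = step right)
```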

Computer Vision Breakthroughs: Advances in computer vision have led to improved image recognition, object detection, and facial recognition; applications include autonomous vehicles and surveillance systems. Computer vision is a sub-field of AI and machine learning that enables machines to see, understand, and interpret visuals such as images and video, and to extract useful information from them that can help in the decision-making of AI applications. It can be considered the eyes of an AI application. With the help of computer vision technology, tasks can be done that would be impossible without it, such as self-driving cars.
AI in Healthcare: AI is used in medical imaging for disease diagnosis and in drug discovery, while chatbots and
virtual assistants help with patient engagement and the dissemination of healthcare information.
AI in healthcare is used to analyze treatment techniques for various diseases and to help prevent them.
It is applied in areas such as diagnosis, drug research, medicine, and patient monitoring. In the
healthcare industry, AI helps gather past data through electronic health records for disease prevention
and diagnosis. Various medical institutes have developed their own AI algorithms for their departments,
such as Memorial Sloan Kettering Cancer Center and the Mayo Clinic.
AI in Finance: In finance, AI is used for algorithmic trading, fraud detection, and risk assessment;
chatbots and robo-advisors are also becoming prevalent in the financial industry. AI in finance can help
in five general areas: personalizing services and products, creating opportunities, managing risk and fraud,
enabling transparency and compliance, and automating operations to reduce costs.
Autonomous Systems: AI-driven autonomous systems, such as drones and self-driving cars, are becoming
more capable and safer; they rely on sensor data and machine learning algorithms for decision-making.
(Note that the term has an unrelated meaning in computer networking: there, an Autonomous System (AS)
consists of several connected Internet Protocol routing prefixes, managed by one or more network
operators on behalf of a single administrative entity or organization. These IP prefixes have a clearly
defined routing policy that states how the autonomous system will exchange routing data with other
systems. Each autonomous system is assigned a unique Autonomous System Number (ASN) for use in
Border Gateway Protocol (BGP) routing. Regional internet registries provide ASNs to Local Internet
Registries (LIRs) and end-user organizations, receiving blocks of ASNs from the Internet Assigned
Numbers Authority.)

