
IIS

Components of Expert System


An expert system mainly consists of three components:
• User Interface
• Inference Engine
• Knowledge Base

1. User Interface
Through the user interface, the expert system interacts with the
user: it takes queries as input in a readable format and passes them
to the inference engine. After getting the response from the inference
engine, it displays the output to the user. In other words, it is the
interface that lets a non-expert user communicate with the expert
system to find a solution.
2. Inference Engine (Rule Engine)
• The inference engine is known as the brain of the expert system,
as it is the main processing unit of the system. It applies
inference rules to the knowledge base to derive a conclusion or
deduce new information. It helps in deriving an error-free
solution to the queries asked by the user.
• With the help of the inference engine, the system extracts
knowledge from the knowledge base.

3. Knowledge Base
• The knowledge base is a store of knowledge acquired from
different experts in a particular domain. The larger and more
accurate the knowledge base, the more precise the expert system.
• It is similar to a database that contains the information and
rules of a particular domain or subject.

The Inductive Learning Algorithm (ILA) is an iterative, inductive
machine learning algorithm for generating a set of classification
rules. It produces rules of the form “IF-THEN” from a set of examples,
generating new rules at each iteration and appending them to the rule
set.

THE ILA ALGORITHM:


General requirements at the start of the algorithm:
• List the examples in the form of a table ‘T’, where each row
corresponds to an example and each column contains an
attribute value.
• Create a set of m training examples, each composed of k
attributes and a class attribute with n possible decisions.
• Create a rule set R, having the initial value false (i.e., empty).
• Initially, all rows in the table are unmarked.
Steps in the algorithm:
Step 1:
Divide the table ‘T’ containing m examples into n sub-tables (t1,
t2, ..., tn), one sub-table for each possible value of the class
attribute. (Repeat Steps 2-8 for each sub-table.)
Step 2:
Initialize the attribute combination count j = 1.
Step 3:
For the sub-table currently being worked on, divide the attribute list
into distinct combinations, each combination containing j distinct
attributes.
Step 4:
For each combination of attributes, count the number of occurrences
of attribute values that appear under the same combination of
attributes in unmarked rows of the sub-table under consideration and,
at the same time, do not appear under the same combination of
attributes in the other sub-tables. Call the first combination with
the maximum number of occurrences the max-combination, MAX.
Step 5:
If MAX == null, increase j by 1 and go to Step 3.
Step 6:
Mark all rows of the sub-table being worked on in which the values of
MAX appear as classified.
Step 7:
Add a rule (IF attribute = “XYZ” THEN decision is YES/NO) to R, whose
left-hand side contains the attribute names of MAX with their values,
separated by AND, and whose right-hand side contains the decision
attribute value associated with the sub-table.
Step 8:
If all rows are marked as classified, then move on to process another
sub-table and go to Step 2; else, go to Step 4. If no sub-tables
remain, exit with the set of rules obtained so far.
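
The steps above translate fairly directly into code. Below is a minimal Python sketch of ILA under stated assumptions: examples are represented as dictionaries mapping attribute names to values, and the function name and sample data are illustrative, not part of the original algorithm description.

```python
# A minimal sketch of the ILA steps above. The dict-based example
# representation and the function name are illustrative assumptions.
from itertools import combinations

def ila(examples, class_attr):
    """Generate IF-THEN classification rules from a list of example dicts."""
    attrs = [a for a in examples[0] if a != class_attr]
    rules = []
    # Step 1: one sub-table per value of the class attribute.
    for c in sorted({e[class_attr] for e in examples}, key=str):
        sub = [e for e in examples if e[class_attr] == c]
        others = [e for e in examples if e[class_attr] != c]
        unmarked = list(sub)
        j = 1  # Step 2: attribute combination count
        while unmarked and j <= len(attrs):
            # Steps 3-4: find the value combination occurring most often in
            # unmarked rows here and never under the same attributes elsewhere.
            best, best_count = None, 0
            for combo in combinations(attrs, j):
                counts = {}
                for row in unmarked:
                    vals = tuple(row[a] for a in combo)
                    if all(tuple(o[a] for a in combo) != vals for o in others):
                        counts[vals] = counts.get(vals, 0) + 1
                for vals, n in counts.items():
                    if n > best_count:
                        best, best_count = (combo, vals), n
            if best is None:  # Step 5: no MAX found, widen the combinations
                j += 1
                continue
            combo, vals = best
            # Step 6: mark matching rows as classified (drop from unmarked).
            unmarked = [r for r in unmarked
                        if tuple(r[a] for a in combo) != vals]
            # Step 7: emit the rule for this MAX combination.
            lhs = " AND ".join(f"{a} = {v}" for a, v in zip(combo, vals))
            rules.append(f"IF {lhs} THEN {class_attr} = {c}")
        # Step 8: all rows classified, move on to the next sub-table.
    return rules

examples = [
    {"size": "medium", "colour": "blue",  "shape": "brick",  "decision": "yes"},
    {"size": "small",  "colour": "red",   "shape": "wedge",  "decision": "no"},
    {"size": "small",  "colour": "red",   "shape": "sphere", "decision": "yes"},
    {"size": "large",  "colour": "red",   "shape": "wedge",  "decision": "no"},
    {"size": "large",  "colour": "green", "shape": "pillar", "decision": "yes"},
]
for rule in ila(examples, "decision"):
    print(rule)
```

Run on the five-example table above, the sketch emits one rule per discovered max-combination, e.g. IF shape = wedge THEN decision = no.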

Inductive learning has also been used in education. For example,
because global education allows students to obtain education from
multiple providers through various study exchange programs, it becomes
necessary to compare the study courses available at foreign
institutions with the courses in the curriculum of the degree-issuing
institution.

What is Human-Computer Interaction (HCI)?


Human-computer interaction (HCI) is a multidisciplinary field of study
focusing on the design of computer technology and, in particular, the
interaction between humans (the users) and computers. While
initially concerned with computers, HCI has since expanded to cover
almost all forms of information technology design.

The intention of this subject is to learn ways of designing
user-friendly interfaces or interactions. To that end, we will learn
the following −
• Ways to design and assess interactive systems.
• Ways to reduce design time through cognitive system and task
models.
• Procedures and heuristics for interactive system design.

HCI is a broad field which overlaps with areas such as user-centered
design (UCD), user interface (UI) design and user experience (UX)
design. In many ways, HCI was the forerunner to UX design.

Despite that, some differences remain between HCI and UX design.
Practitioners of HCI tend to be more academically focused: they are
involved in scientific research and in developing empirical
understandings of users. Conversely, UX designers are almost
invariably industry-focused and involved in building products or
services, e.g., smartphone apps and websites.
HCI can be used in all disciplines wherever there is a possibility of
computer installation. Some of the areas where HCI can be
implemented with distinctive importance are mentioned below −
• Computer Science − For application design and engineering.
• Psychology − For applying theories and for analytical purposes.
• Sociology − For the interaction between technology and
organization.
• Industrial Design − For interactive products like mobile
phones, microwave ovens, etc.

What is natural language processing?


Natural language processing (NLP) is the ability of a computer
program to understand human language as it is spoken and written --
referred to as natural language. It is a component of artificial
intelligence (AI).
NLP has existed for more than 50 years and has roots in the field of
linguistics. It has a variety of real-world applications in a number of
fields, including medical research, search engines and business
intelligence. There are two main phases to natural language
processing: data preprocessing and algorithm development.
Data preprocessing involves preparing and "cleaning" text data so that
machines are able to analyze it. Preprocessing puts data in workable
form and highlights features in the text that an algorithm can work
with. There are several ways this can be done (a brief sketch follows
the list below), including:
• Tokenization. This is when text is broken down into smaller
units to work with.
• Stop word removal. This is when common words are removed
from text so unique words that offer the most information about
the text remain.
• Lemmatization and stemming. This is when words are
reduced to their root forms for processing.
• Part-of-speech tagging. This is when words are marked based
on the part of speech they are -- such as nouns, verbs and
adjectives.
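
As a concrete illustration of these four steps, here is a minimal sketch using the NLTK library; NLTK is an assumed choice (the document does not prescribe a tool), and the sample sentence and resource names are illustrative.

```python
# A brief sketch of the four preprocessing steps using NLTK, an assumed
# library choice. First run may need nltk.download() for the tokenizer,
# stopword, WordNet and tagger resources (names vary across NLTK versions).
import nltk
from nltk.corpus import stopwords
from nltk.stem import PorterStemmer, WordNetLemmatizer

text = "The cats were chasing the mice across the garden"

tokens = nltk.word_tokenize(text)                             # Tokenization
no_stop = [t for t in tokens
           if t.lower() not in stopwords.words("english")]    # Stop word removal
lemmas = [WordNetLemmatizer().lemmatize(t) for t in no_stop]  # Lemmatization
stems = [PorterStemmer().stem(t) for t in no_stop]            # Stemming
tagged = nltk.pos_tag(tokens)                                 # Part-of-speech tagging

print(no_stop)  # e.g. ['cats', 'chasing', 'mice', 'garden']
print(lemmas)   # e.g. ['cat', 'chasing', 'mouse', 'garden']
print(stems)    # e.g. ['cat', 'chase', 'mice', 'garden']
print(tagged)   # e.g. [('The', 'DT'), ('cats', 'NNS'), ('were', 'VBD'), ...]
```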
Once the data has been preprocessed, an algorithm is developed to
process it. There are many different natural language processing
algorithms, but two main types are commonly used:
• Rules-based system. This system uses carefully designed
linguistic rules. This approach was used early on in the
development of natural language processing, and is still used.
• Machine learning-based system. Machine learning algorithms
use statistical methods. They learn to perform tasks based on
the training data they are fed, and adjust their methods as more
data is processed. Using a combination of machine learning, deep
learning and neural networks, natural language processing
algorithms hone their own rules through repeated processing
and learning. (A toy contrast of the two styles appears after
this list.)
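
As a hedged illustration of the rules-based style, the toy sketch below uses a single hand-written pattern; the function name, pattern, and sample text are assumptions for illustration only.

```python
# A toy rules-based extractor: one hand-crafted linguistic rule (a regular
# expression), designed by a person rather than learned from data.
import re

def extract_dates(text):
    """Find dd/mm/yyyy-style dates using a hand-written rule."""
    return re.findall(r"\b\d{1,2}/\d{1,2}/\d{4}\b", text)

print(extract_dates("Invoices were issued on 03/01/2024 and 15/02/2024."))
# -> ['03/01/2024', '15/02/2024']
```

A machine learning-based system would instead induce such patterns statistically from a corpus of annotated examples.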
Steps in NLP
There are, in general, five steps (illustrated in the sketch after the list) −
• Lexical Analysis − It involves identifying and analyzing the
structure of words. Lexicon of a language means the collection of
words and phrases in a language. Lexical analysis is dividing the
whole chunk of text into paragraphs, sentences, and words.
• Syntactic Analysis (Parsing) − It involves analysis of the words in
the sentence for grammar, and arranging the words in a manner that
shows the relationships among them. A sentence such as
“The school goes to boy” is rejected by an English syntactic
analyzer.

• Semantic Analysis − It draws the exact meaning, or the dictionary
meaning, from the text. The text is checked for meaningfulness.
This is done by mapping syntactic structures to objects in the task
domain. The semantic analyzer disregards sentences such as “hot
ice-cream”.
• Discourse Integration − The meaning of any sentence depends
upon the meaning of the sentence just before it. In addition, it
also influences the meaning of the sentence that immediately
follows it.
• Pragmatic Analysis − During this step, what was said is
re-interpreted in terms of what it actually meant. It involves
deriving those aspects of language which require real-world
knowledge.
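
The first two steps, lexical and syntactic analysis, can be observed directly with an off-the-shelf parser. The sketch below uses spaCy as an assumed library choice; the small English model must be installed separately.

```python
# A sketch of lexical and syntactic analysis using spaCy, an assumed library
# choice; requires `python -m spacy download en_core_web_sm` beforehand.
import spacy

nlp = spacy.load("en_core_web_sm")
doc = nlp("The boy goes to school.")

for token in doc:
    # token.text is the lexical unit, token.pos_ its part of speech, and
    # token.dep_ its syntactic relation to its head in the parse tree.
    print(f"{token.text:8} {token.pos_:6} {token.dep_:8} head={token.head.text}")
```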

Components of Intelligence
Reasoning: It is the set of processes that enables us to provide a
basis for judgment, making decisions, and prediction. There are
broadly two types −
Inductive Reasoning − It conducts specific observations to make broad
general statements. Example − "Nita is a teacher. Nita is studious.
Therefore, all teachers are studious."
Deductive Reasoning − It starts with a general statement and examines
the possibilities to reach a specific, logical conclusion. Example −
"All women above 60 years of age are grandmothers. Shalini is 65 years
old. Therefore, Shalini is a grandmother."
Learning − It is the activity of gaining knowledge or skill by studying,
practicing, being taught, or experiencing something.
Learning is categorized as −
• Auditory Learning − It is learning by listening and hearing. For
example, students listening to recorded audio lectures.
• Episodic Learning − To learn by remembering sequences of
events that one has witnessed or experienced. This is linear and
orderly.
• Motor Learning − It is learning by precise movement of muscles.
For example, picking up objects, writing, etc.
• Observational Learning − To learn by watching and imitating
others. For example, a child tries to learn by mimicking parents.
• Perceptual Learning − It is learning to recognize stimuli that one
has seen before. For example, identifying and classifying objects
and situations.
• Relational Learning − It involves learning to differentiate among
various stimuli on the basis of relational properties rather than
absolute properties. For example, if the food turned out salty
when salt was added last time, we add less salt the next time.
• Spatial Learning − It is learning through visual stimuli such as
images, colors, maps, etc. For example, a person can create a
road map in the mind before actually following the road.
• Stimulus-Response Learning − It is learning to perform a
particular behavior when a certain stimulus is present. For
example, a dog raises its ears on hearing the doorbell.
Problem Solving: (1) It is the process in which one perceives and tries
to arrive at a desired solution from a present situation by taking some
path. (2) It includes decision making, which is the process of selecting
the best suitable alternative out of multiple alternatives to reach the
desired goal.
Perception: It is the process of acquiring, interpreting, selecting,
and organizing sensory information (sensing). In humans, perception is
aided by the sensory organs. In the domain of AI, the perception
mechanism puts the data acquired by the sensors together in
a meaningful manner.
Linguistic Intelligence − It is one’s ability to use, comprehend, speak,
and write the verbal and written language. It is important in
interpersonal communication.

Types of Artificial Intelligence:

1. Weak AI or Narrow AI:


• Narrow AI is a type of AI which is able to perform a dedicated
task with intelligence. The most common and currently available
AI in the world of Artificial Intelligence is Narrow AI.
• Narrow AI cannot perform beyond its field or limitations, as it is
only trained for one specific task. Hence it is also termed weak
AI. Narrow AI can fail in unpredictable ways if it goes beyond its
limits.
• Apple Siri is a good example of Narrow AI, though it operates with
a limited, pre-defined range of functions.

2. General AI:
• General AI is a type of intelligence which could perform any
intellectual task with efficiency, like a human.
• The idea behind general AI is to make a system which could be
smarter and think like a human on its own.
• Currently, no such system exists which could come under general
AI and perform any task as perfectly as a human.

3. Super AI:
• Super AI is a level of system intelligence at which machines
could surpass human intelligence and perform any task better
than a human, with cognitive properties. It is an outcome of
general AI.
• Super AI is still a hypothetical concept of Artificial Intelligence.
Developing such systems in reality is still a world-changing task.

Artificial Intelligence type-2: Based on functionality
1. Reactive Machines
• Purely reactive machines are the most basic types of Artificial
Intelligence.
• Such AI systems do not store memories or past experiences for
future actions.
• These machines focus only on current scenarios and react to them
with the best possible action.
• IBM's Deep Blue system is an example of a reactive machine.
• Google's AlphaGo is also an example of a reactive machine.
2. Limited Memory
• Limited memory machines can store past experiences or some
data for a short period of time.
• These machines can use stored data for a limited time period
only.
• Self-driving cars are one of the best examples of Limited Memory
systems. These cars can store the recent speed of nearby cars, the
distance of other cars, speed limits, and other information needed
to navigate the road.
3. Theory of Mind
• Theory of Mind AI should understand human emotions, people, and
beliefs, and be able to interact socially like humans.
• This type of AI machine has not yet been developed, but
researchers are making many efforts and improvements toward
developing such machines.
4. Self-Awareness
• Self-aware AI is the future of Artificial Intelligence. These
machines will be super intelligent and will have their own
consciousness, sentiments, and self-awareness.
• These machines will be smarter than the human mind.
• Self-aware AI does not yet exist in reality; it is a
hypothetical concept.

Architecture of an Expert System

The knowledge base contains the specific domain knowledge that is
used by an expert to derive conclusions from facts.
In the case of a rule-based expert system, this domain knowledge is
expressed in the form of a series of rules.
The explanation system provides information to the user about how
the inference engine arrived at its conclusions. This can often be
essential, particularly if the advice being given is of a critical nature,
such as with a medical diagnosis system.

If the system has used faulty reasoning to arrive at its conclusions,
then the user may be able to see this by examining the data given by
the explanation system.
The fact database contains the case-specific data that are to be used in
a particular case to derive a conclusion.
In the case of a medical expert system, this would contain information
that had been obtained about the patient’s condition.
The user of the expert system interfaces with it through a user
interface, which provides access to the inference engine, the
explanation system, and the knowledge-base editor.
The inference engine is the part of the system that uses the rules and
facts to derive conclusions. The inference engine will use forward
chaining, backward chaining, or a combination of the two to make
inferences from the data that are available to it.
The knowledge-base editor allows the user to edit the information
that is contained in the knowledge base.
1. User Interface
Through the user interface, the expert system interacts with the
user: it takes queries as input in a readable format and passes them
to the inference engine. After getting the response from the inference
engine, it displays the output to the user. In other words, it is the
interface that lets a non-expert user communicate with the expert
system to find a solution.
2. Inference Engine (Rule Engine)
• The inference engine is known as the brain of the expert system,
as it is the main processing unit of the system. It applies
inference rules to the knowledge base to derive a conclusion or
deduce new information. It helps in deriving an error-free
solution to the queries asked by the user.
• With the help of the inference engine, the system extracts
knowledge from the knowledge base.
• There are two types of inference engine:
• Deterministic Inference engine: The conclusions drawn from this
type of inference engine are assumed to be true. It is based on
facts and rules.
• Probabilistic Inference engine: This type of inference engine
contains uncertainty in its conclusions, as it is based on
probability.
The inference engine uses the following modes to derive solutions (a
small forward-chaining sketch follows this list):
• Forward Chaining: It starts from the known facts and rules, and
applies the inference rules to add their conclusions to the known
facts.
• Backward Chaining: It is a backward reasoning method that
starts from the goal and works backward through the rules to
determine which known facts would prove it.
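
To make the forward-chaining mode concrete, here is a minimal sketch; the fact and rule representations (sets of strings with antecedent/conclusion pairs) are illustrative assumptions rather than a standard expert-system format.

```python
# A minimal sketch of forward chaining. The representation of facts (a set of
# strings) and rules (antecedents, conclusion) is an illustrative assumption.
rules = [
    ({"has_fur", "gives_milk"}, "mammal"),
    ({"mammal", "eats_meat"}, "carnivore"),
    ({"carnivore", "has_mane"}, "lion"),
]

def forward_chain(facts, rules):
    """Apply rules to the known facts until no new conclusion can be added."""
    facts = set(facts)
    changed = True
    while changed:
        changed = False
        for antecedents, conclusion in rules:
            if antecedents <= facts and conclusion not in facts:
                facts.add(conclusion)   # fire the rule, extending known facts
                changed = True
    return facts

print(forward_chain({"has_fur", "gives_milk", "eats_meat", "has_mane"}, rules))
# -> the derived facts include 'mammal', 'carnivore' and 'lion'
```

Backward chaining would instead start from the goal (e.g. 'lion') and recursively check which rules could conclude it from the known facts.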
3. Knowledge Base
• The knowledge base is a store of knowledge acquired from
different experts in a particular domain. It is considered a big
store of knowledge. The larger the knowledge base, the more
precise the expert system.
• It is similar to a database that contains the information and
rules of a particular domain or subject.
• One can also view the knowledge base as a collection of objects
and their attributes. For example, a lion is an object, and its
attributes are that it is a mammal, it is not a domestic animal,
etc.
Components of Knowledge Base
• Factual Knowledge: The knowledge which is based on facts and
accepted by knowledge engineers comes under factual
knowledge.
• Heuristic Knowledge: This knowledge is based on practice, the
ability to guess, evaluation, and experiences.

Broadly, there are four categories of machine learning algorithms, as
shown below −
• Supervised learning algorithm
• Unsupervised learning algorithm
• Semi-supervised learning algorithm
• Reinforcement learning algorithm
However, the most commonly used ones are supervised and
unsupervised learning.
Supervised Learning
Supervised learning is commonly used in real world applications, such
as face and speech recognition, products or movie recommendations,
and sales forecasting. Supervised learning can be further classified
into two types - Regression and Classification.
Regression trains on and predicts a continuous-valued response, for
example predicting real estate prices.
Classification attempts to find the appropriate class label, such as
analyzing positive/negative sentiment, male and female persons,
benign and malignant tumors, secure and unsecure loans etc.
In supervised learning, learning data comes with description, labels,
targets or desired outputs and the objective is to find a general rule
that maps inputs to outputs. This kind of learning data is called
labeled data. The learned rule is then used to label new data with
unknown outputs.
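
As a brief sketch of supervised learning in code, the example below fits a classifier on labeled data and scores it on held-out examples; scikit-learn and the iris dataset are assumed, illustrative choices.

```python
# A supervised-learning sketch: learn a mapping from labeled data, then score
# it on held-out examples (scikit-learn and the iris data are assumed choices).
from sklearn.datasets import load_iris
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

X, y = load_iris(return_X_y=True)             # inputs with known target labels
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

model = LogisticRegression(max_iter=1000).fit(X_train, y_train)
print("held-out accuracy:", model.score(X_test, y_test))
```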

Unsupervised Learning
Unsupervised learning is used to detect anomalies and outliers, such
as fraud or defective equipment, or to group customers with similar
behaviors for a sales campaign. It is the opposite of supervised
learning. There is no labeled data here.
When learning data contains only some indications without any
description or labels, it is up to the coder or to the algorithm to find
the structure of the underlying data, to discover hidden patterns, or to
determine how to describe the data. This kind of learning data is
called unlabeled data.
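
A matching unsupervised sketch: k-means groups unlabeled points without any targets. The library, data points, and cluster count are illustrative assumptions.

```python
# An unsupervised-learning sketch: k-means discovers groups in unlabeled data
# (scikit-learn assumed; the points and cluster count are illustrative).
import numpy as np
from sklearn.cluster import KMeans

X = np.array([[1, 2], [1, 4], [1, 0],
              [10, 2], [10, 4], [10, 0]])    # no labels, only points
kmeans = KMeans(n_clusters=2, n_init=10, random_state=0).fit(X)
print(kmeans.labels_)                        # e.g. [1 1 1 0 0 0]
```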

Semi-supervised Learning
If some learning samples are labeled but others are not, then it is
semi-supervised learning. It makes use of a small amount of labeled
data together with a large amount of unlabeled data during training.
Semi-supervised learning is applied in cases where it is expensive to
acquire a fully labeled dataset and more practical to label only a
small subset.
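
A small semi-supervised sketch, assuming scikit-learn's LabelSpreading as the tool: a handful of labeled samples propagate their labels to the unlabeled majority.

```python
# A semi-supervised sketch with scikit-learn's LabelSpreading: unlabeled
# samples are marked -1, as that module expects (library choice assumed).
import numpy as np
from sklearn.semi_supervised import LabelSpreading

X = np.array([[1.0], [1.1], [0.9], [5.0], [5.1], [4.9]])
y = np.array([0, -1, -1, 1, -1, -1])         # only two samples are labeled
model = LabelSpreading().fit(X, y)
print(model.transduction_)                   # inferred labels, e.g. [0 0 0 1 1 1]
```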

Reinforcement Learning
Here, learning data gives feedback so that the system adjusts to
dynamic conditions in order to achieve a certain objective. The system
evaluates its performance based on the feedback responses and reacts
accordingly. The best-known instances include self-driving cars and
the Go-playing program AlphaGo.
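
As a toy sketch of the feedback loop described here, the epsilon-greedy agent below learns from reward signals alone which of two actions is better; all numbers are illustrative.

```python
# A toy reinforcement-learning sketch: an epsilon-greedy agent learns from
# reward feedback which of two actions is better (all numbers illustrative).
import random

random.seed(0)
q = [0.0, 0.0]                   # estimated value of each action
counts = [0, 0]
p_reward = [0.2, 0.8]            # hidden environment: action 1 pays off more

for _ in range(2000):
    # explore occasionally, otherwise exploit the current best estimate
    a = random.randrange(2) if random.random() < 0.1 else q.index(max(q))
    r = 1.0 if random.random() < p_reward[a] else 0.0      # feedback signal
    counts[a] += 1
    q[a] += (r - q[a]) / counts[a]           # incremental average update

print("learned action values:", q)           # q[1] should approach 0.8
```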
