
COMP325 : Artificial Intelligence Programming

Name: Owen Wanjohi Wacira


Reg No: COM/054/17
Task: Assignment 1

Topic 1
i. What is Artificial Intelligence? Artificial intelligence is a branch of computer
science concerned with the design of computer systems that exhibit human
intelligence.
ii. Real Life application areas that AI has revolutionized.
 Banking: Many banks all over the world have adopted AI-based systems to
provide customer support and to detect anomalies and credit card fraud. For example,
HDFC Bank developed an AI-based chatbot called EVA. Since its launch, EVA
has addressed over 3 million customer queries, interacted with over half a
million unique users, and held over a million conversations. EVA can collect
knowledge from thousands of sources and provide simple answers in less
than 0.4 seconds.
 Gaming: AI has become an integral part of the gaming industry.
DeepMind’s AlphaGo software is famous for defeating Lee Sedol, the world
champion in the game of Go. Deep Blue also defeated world chess
champion Garry Kasparov in 1997. Other examples include AlphaGo Zero and F.E.A.R (a
first-person shooter video game).
 Space Exploration: Space expeditions and discoveries always require
analyzing vast amounts of data. After rigorous research, astronomers used
AI to sift through years of data obtained by the Kepler telescope in order to
identify a distant eight-planet solar system. AI is also being used for NASA’s
next rover mission to Mars, the Mars 2020 Rover.
 Autonomous Vehicles: Companies like Waymo conducted several test
drives in Phoenix before deploying their first AI-based public ride-hailing
service. The AI system collects data from the vehicle’s radar, cameras, GPS
and cloud services to produce control signals that operate the vehicle.
Another famous example is Tesla’s self-driving car.
 Social Media: On social media platforms like Facebook, AI is used for face
verification. They use Machine Learning and deep learning concepts to
detect facial features and tag your friends. Twitter also uses AI to identify
hate speech and terrorist language in tweets.
 Creativity: Wordsmith is a content automation tool used by Yahoo,
Microsoft and Tableau to generate around 1.5 billion pieces of content every
year. Wordsmith is a natural language generation platform that can
transform your data into insightful narratives. Another application is
MuseNet, a deep neural network capable of generating 4-minute
musical compositions with 10 different instruments.
 Agriculture: Issues such as climate change, population growth and food
security concerns have encouraged the use of innovative ideas in farming.
PEAT (a Berlin-based agricultural tech company) has developed an application
called Plantix that identifies potential defects and nutrient deficiencies in
the soil through images.
 Commuting: Google Maps can analyze the speed of traffic at any given time,
and its access to vast amounts of data fed into proprietary algorithms means
Maps can reduce commutes by suggesting the fastest routes. Ride-sharing
apps like Uber and Lyft also use Machine Learning. Uber uses ML to estimate
ETAs for rides and meal delivery times on UberEATS, to compute optimal
pickup locations, and for fraud detection.
 Email: Spam filters must continuously learn from a variety of signals and
further personalize their results based on your own definition of what
constitutes spam. Through the use of machine learning algorithms, Gmail
successfully filters 99.9% of spam. It uses a similar approach to
categorize your emails into Primary, Social and Promotions inboxes, as well as
to label emails as important.
 Mobile Use: Voice-to-text, a standard feature in smartphones, converts what
you say (audio) into text. Google uses artificial neural networks to power
voice search. This has led to the development of more sophisticated smart
personal assistants like Google Assistant, Alexa, Echo and Cortana, all of
which are AI powered. Google Assistant can perform internet searches, set
reminders and integrate with your calendar.

Topic 2
i. Rational Agent: An agent that acts in a way that is expected to maximize its
performance measure, given the evidence provided by what it has perceived and
whatever built-in knowledge it has.
ii. Simple reflex agent: An agent that can only work if the environment is fully
observable, i.e. the correct action can be chosen from what it currently perceives. It
selects an action based on the current percept only, ignoring the history of percepts
(a code sketch appears at the end of this topic).
iii. Goal based agents: An improvement over model based agents, used in cases where
knowing the current state of the environment is not enough. They combine the provided
goal information with the environment model to choose the actions which achieve
that goal.
iv. Utility based agents: An improvement over goal based agents, used in cases where
achieving the desired goal is not enough and we might also need to consider cost. A utility
based agent will choose the action that maximizes the expected utility.
v. Model based reflex agents: Unlike simple reflex agents, they keep track of a
partially observable environment. They have an internal state that depends on the
percept history. The environment is modelled based on how it evolves
independently of the agent and how the agent’s actions affect the environment.
(Agents with Memory)
vi. What is PEAS? Given a part picking robot demonstrate PEAS: PEAS
stands for Performance (The output which we get from the agent), Environment (All
the surrounding things and conditions of an agent), Actuators (The devices,
hardware or software through which the agent performs any actions or processes
any information to produce a result) and Sensors (Devices through which the
agent observes and perceives its environment). They are the properties which
define an agent.
Demonstration using a part picking robot:
Performance measure: Percentage of parts in the correct bins.
Environment: Conveyor belt with parts, bins.
Actuators: Jointed arm and hand.
Sensors: Camera, joint angle sensors.
vii. Problem solving agent and planning agent.
A problem solving agent is an agent that tries to come up with a sequence of
actions that will bring the environment into a desired state. It is a goal driven
agent that focuses on satisfying the goal; planning is just one stage of its work.
It works towards solving the given problem to reach a goal.
A planning agent is an agent that designs a course of action that, when executed,
will result in the achievement of some desired goal. It is similar to a problem solving
agent in that both construct plans to achieve goals, but it differs in the
representation of goals, states and actions, and in the way it searches for solutions.
Planning can be seen as a part of the problem solving process, which is completed by the
execution of the plan.
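A minimal Python sketch of a simple reflex agent, using the classic two-square vacuum world purely as an illustration (the environment, rules and function name are assumptions, not part of the assignment):

```python
# Simple reflex agent for a hypothetical two-square vacuum world (squares "A" and "B").
# The action depends only on the current percept, never on percept history.
def simple_reflex_vacuum_agent(percept):
    location, status = percept          # e.g. ("A", "Dirty")
    if status == "Dirty":
        return "Suck"                   # condition-action rule: dirty -> clean it
    return "Right" if location == "A" else "Left"   # otherwise move to the other square

for percept in [("A", "Dirty"), ("A", "Clean"), ("B", "Dirty"), ("B", "Clean")]:
    print(percept, "->", simple_reflex_vacuum_agent(percept))
```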
Topic 3
i. Breadth First Search: It is a simple search strategy which starts at the root
node (or an arbitrary node of a graph, the search key) and expands all successor nodes at
the current level before moving to the nodes of the next level. It expands the shallowest
node first using FIFO (queue) order. Thus, new nodes go to the back of the queue and old
unexpanded nodes, which are shallower than the new nodes, get expanded first.
The goal test is applied to each node at the time of its generation rather than when it
is selected for expansion.
Time complexity is O(b^d) where d = depth of the shallowest solution and b = branching
factor (the maximum number of successors of any node).
Space complexity is given by the memory size of the frontier, which is O(b^d).
BFS is complete which means if the shallowest goal node is at some finite depth
then BFS will find a solution.
BFS is optimal if path cost is a non-decreasing function of the depth of the node.
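A minimal Python sketch of BFS over an adjacency-list graph; the graph, node names and function name are illustrative assumptions, not part of the assignment:

```python
from collections import deque

def bfs(graph, start, goal):
    """Expand the shallowest unexpanded node first using a FIFO queue.
    Returns a start-to-goal path as a list of nodes, or None if no path exists."""
    frontier = deque([[start]])             # queue of partial paths
    visited = {start}
    while frontier:
        path = frontier.popleft()           # shallowest path comes out first
        for successor in graph.get(path[-1], []):
            if successor in visited:
                continue
            new_path = path + [successor]
            if successor == goal:           # goal test applied at generation time
                return new_path
            visited.add(successor)
            frontier.append(new_path)
    return None

# Hypothetical example graph:
graph = {"A": ["B", "C"], "B": ["D"], "C": ["D", "E"], "D": ["F"], "E": ["F"]}
print(bfs(graph, "A", "F"))                 # ['A', 'B', 'D', 'F']
```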

ii. Depth First Search: Unlike BFS, it explores a branch as deep as possible
before being forced to backtrack and expand another node. It is a recursive
algorithm. It starts from the root node and follows each path to its greatest depth
node before moving to the next path. It uses a stack data structure for its
implementation.
Time complexity is O(b^m) where m = maximum depth of any node, which can be
much larger than d (the shallowest solution depth); it is equivalent to the number of
nodes traversed by the algorithm.
Space complexity: DFS needs to store only a single path from the root node, hence it
will be equal to the size of the fringe set, which is O(bm).
DFS is non-optimal as it may take a large number of steps or incur a high cost to
reach the goal node.
DFS is complete within a finite state space as it will expand every node within a
limited search tree.
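A matching recursive Python sketch of DFS on the same kind of adjacency-list graph (again an illustrative assumption, not the assignment’s graph):

```python
def dfs(graph, node, goal, path=None, visited=None):
    """Follow one branch to its greatest depth before backtracking.
    Returns the first start-to-goal path found, or None if no path exists."""
    if path is None:
        path, visited = [node], {node}
    if node == goal:
        return path
    for successor in graph.get(node, []):
        if successor not in visited:
            visited.add(successor)
            result = dfs(graph, successor, goal, path + [successor], visited)
            if result is not None:
                return result               # first path found, not necessarily the cheapest
    return None

graph = {"A": ["B", "C"], "B": ["D"], "C": ["D", "E"], "D": ["F"], "E": ["F"]}
print(dfs(graph, "A", "F"))                 # e.g. ['A', 'B', 'D', 'F']
```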
iii. Search tree to identify all the possible unique paths to the goal of the graph:

[Search tree rooted at Frankfurt: level 1 expands Mannheim (85km), Wurzburg (217km) and Kassel (173km); level 2 expands Karlsruhe (80km from Mannheim), Erfurt (186km) and Nurnberg (103km from Wurzburg), and Munchen (502km from Kassel); level 3 expands Augsburg (250km from Karlsruhe), and Munchen (167km) and Stuttgart (183km from Nurnberg); level 4 expands Munchen (84km from Augsburg).]

iv. Calculate the shortest path:


1. Frankfurt → Mannheim → Karlsruhe → Augsburg → Munchen = 499km
2. Frankfurt → Wurzburg → Nurnberg → Munchen = 487km (shortest path)
3. Frankfurt → Kassel → Munchen = 675km

Path 2 (Frankfurt → Wurzburg → Nurnberg → Munchen) is the shortest path with
a distance of 487km.
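A small Python check of the three path costs, assuming the edge distances shown in the graph above:

```python
# Edge distances taken from the graph in the question.
edges = {
    ("Frankfurt", "Mannheim"): 85,  ("Mannheim", "Karlsruhe"): 80,
    ("Karlsruhe", "Augsburg"): 250, ("Augsburg", "Munchen"): 84,
    ("Frankfurt", "Wurzburg"): 217, ("Wurzburg", "Nurnberg"): 103,
    ("Nurnberg", "Munchen"): 167,   ("Frankfurt", "Kassel"): 173,
    ("Kassel", "Munchen"): 502,
}

def path_cost(path):
    """Sum the edge distances along a path given as a list of city names."""
    return sum(edges[(a, b)] for a, b in zip(path, path[1:]))

paths = [
    ["Frankfurt", "Mannheim", "Karlsruhe", "Augsburg", "Munchen"],   # 499km
    ["Frankfurt", "Wurzburg", "Nurnberg", "Munchen"],                # 487km
    ["Frankfurt", "Kassel", "Munchen"],                              # 675km
]
for p in paths:
    print(" -> ".join(p), "=", path_cost(p), "km")
print("Shortest:", " -> ".join(min(paths, key=path_cost)))           # the 487km path
```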
v. Ways of evaluating the performance of a search strategy:
Completeness: Is the strategy guaranteed to find a solution when there is one?
Time complexity: how long does it take to find a solution?
Space complexity: how much memory does the search strategy need?
Optimality: does the strategy find the highest quality solution when there are
several different solutions?

Topic 4
i. What is an expert system: An information system that is capable of mimicking
human thinking and judgement during the process of decision making.
ii. With the aid of a diagram explain what an expert system is:
An Expert System is an information system that merges knowledge, facts and reasoning
techniques in producing a decision. In order to do that it has several components, shown in
the diagram below.
[Diagram: the User interacts through the User Interface with the Inference Engine, which draws on the Knowledge Base and a case-specific database (working memory); the Explanation Facility explains the reasoning, and the Expert and Knowledge Engineer build the Knowledge Base through the Knowledge Acquisition subsystem via the Developer’s Interface.]
1. Knowledge base: A repository of special heuristics or rules that direct the use of
knowledge and facts. It contains the knowledge necessary for understanding, formulating and
solving problems.
2. Working Memory: Contains the facts about the problem that arise during the
consultation process, i.e. information supplied by the user or inferred
by the ES itself.
3. User Interface: Interfaces with the user through Natural Language Processing, or menus
and graphics. It acts as a language processor for friendly, problem-oriented
communication.
4. Explanation Facility: Helps the user understand how the Expert System reaches a
certain decision or conclusion about the problem being solved, by answering
questions such as Why?, How?, What?, Where?, When? and Who?
5. Inference Engine: The deduction system used to infer results from the user input and
the Knowledge Base. It is the brain of the Expert System and its control structure; it
provides the methodology for reasoning.
6. Knowledge Acquisition: The process of gathering and transferring problem-solving
expertise from all sources of knowledge into a computer program.

Human elements in an Expert System are:


1. Expert: Has the special knowledge, judgement, experience and methods to give
advice and solve problems in a particular domain.
2. Knowledge Engineer: Helps the expert(s) structure the problem area by interpreting
and integrating human answers to questions, drawing analogies, posing counterexamples
and bringing to light conceptual difficulties.
3. User: He/She can be a non-expert client seeking direct advice (the Expert System acts as
a consultant or advisor), a student who wants to learn (the Expert System acts as an
instructor), an Expert System builder improving or increasing the knowledge base (the Expert
System acts as a partner) or an expert (the Expert System acts as a colleague or an
assistant).

Topic 5
i. Backward chaining and forward chaining:
Backward chaining: The inference engine starts from a decision and moves
`backward` to obtain supporting facts for the decision made. If there are no
matching facts that support the chosen decision, the decision is rejected and
another decision is selected. The process continues until a suitable decision
and the facts that support it are obtained.
Forward chaining: The inference engine starts reasoning from the facts provided
and moves on until it reaches a decision. It is guided by the facts in the memory
space and the premises it can obtain from them. The inference engine will try
to match the required premises (IF parts) of all rules in the knowledge
base. If several rules match, conflict resolution procedures are used to choose one.
The inference engine repeatedly matches the rules in the knowledge base against the data
stored in its memory.
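A minimal Python sketch of forward chaining over IF-THEN rules; the rules and facts are invented for illustration:

```python
# Each rule is (set of premises, conclusion): IF all premises hold THEN add the conclusion.
rules = [
    ({"has_fever", "has_cough"}, "has_flu"),
    ({"has_flu"}, "needs_rest"),
]

def forward_chain(facts, rules):
    """Repeatedly fire any rule whose premises are all in working memory,
    adding its conclusion as a new fact, until nothing new can be derived."""
    facts = set(facts)
    changed = True
    while changed:
        changed = False
        for premises, conclusion in rules:
            if premises <= facts and conclusion not in facts:
                facts.add(conclusion)       # the rule fires
                changed = True
    return facts

print(forward_chain({"has_fever", "has_cough"}, rules))
# {'has_fever', 'has_cough', 'has_flu', 'needs_rest'}
```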

ii. Predicate and proposition logic:


Proposition logic (Boolean logic) is a simple form of logic in which statements have
truth values (0 or 1), meaning a statement can have only one of two values, i.e. true or
false. It is used in AI for planning, problem-solving, intelligent control and decision
making. It is a useful tool for reasoning, but it has a limitation: it cannot see
inside propositions and take advantage of relationships among them.
Predicate logic (First-Order Logic) is a collection of formal systems which uses
quantified variables over non-logical objects and allows the use of sentences
which contain variables.
Proposition logic deals with simple declarative propositions, while first-order logic
additionally covers predicates and quantification.
A proposition is a declarative statement that has either the truth value ‘true’ or the
truth value ‘false’, while a predicate is an expression of one or more variables defined
on some specific domain.
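A toy Python illustration of the difference, using an invented domain: the propositional formula has a fixed truth value once its atoms are assigned, while the first-order formula quantifies over the objects in a domain:

```python
# Propositional logic: P AND (NOT Q) over truth-valued atoms.
P, Q = True, False
print(P and not Q)                          # True

# Predicate (first-order) logic: "for all x, Cat(x) implies Mammal(x)" over a toy domain.
domain = ["tom", "rex", "tweety"]
cat    = {"tom"}
mammal = {"tom", "rex"}
print(all((x not in cat) or (x in mammal) for x in domain))   # True
```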
iii. Conflict resolution strategies developed for forward chaining:
Recency: When two or more rules could be chosen, favor the one that matches
the most recently added facts, as these are most likely to describe the current
situation.
Specificity: If all of the conditions of two or more rules are satisfied, choose the rule
according to how specific its conditions are. It is possible to favor either the more
general or the more specific case. The most specific rule may be identified roughly as the
one having the greatest number of preconditions. This usually catches exceptions and
other specific cases before firing the more general (default) rules.
Order: The rules are arranged in a suitable order in the knowledge base; each rule is
given a priority and the one with the highest priority is selected to fire first.
Refractoriness (not previously used): If a rule’s conditions are satisfied, but
the same rule has previously been satisfied by the same facts, ignore the rule. This
helps to prevent the system from entering infinite loops. Once fired, a rule should
be removed from the conflict set.
Arbitrary choice: Pick a rule at random. This has the merit of being simple to
compute.
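A toy Python sketch of one of these strategies, specificity: among the rules whose conditions all match working memory, the one with the most preconditions is chosen (the rules and facts are invented):

```python
facts = {"bird", "penguin"}                               # working memory

# Each rule is (name, set of preconditions, conclusion).
rules = [
    ("general",  {"bird"},            "can_fly"),         # default rule
    ("specific", {"bird", "penguin"}, "cannot_fly"),      # exception with more preconditions
]

conflict_set = [r for r in rules if r[1] <= facts]        # every rule whose conditions match
name, _, conclusion = max(conflict_set, key=lambda r: len(r[1]))   # favour specificity
print(name, "->", conclusion)                             # specific -> cannot_fly
```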

Topic 6
i. Unsupervised and supervised algorithms:
Supervised algorithms are algorithms that learn under the presence of a
supervisor. When training a machine using a supervised learning algorithm, we
use data with labels (labelled data). Supervised algorithms require input
variables (say X) and an output variable (say Y) to learn the mapping function f,
where Y = f(X). The goal is to approximate the mapping function f so that
we can predict the output variable when we have new input data.
Unsupervised algorithms are algorithms trained on an unlabelled dataset and
allowed to act on that data without any supervision. They are not guided by
labelled training data; instead, the algorithm finds hidden patterns and insights
in the given data by itself. The goal is to find the underlying structure of the
dataset, group the data according to similarities, and represent the dataset
in a compressed format.
ii. With appropriate examples distinguish between the two techniques.
Supervised algorithms example: House prices
How can a model for predicting house prices be trained? First, we need data
about the houses: square footage, number of rooms, features, whether a house
has a garden or not, and so on. We then need to know the prices of these houses,
i.e. the corresponding labels. By leveraging data coming from thousands of
houses, their features and prices, we can now train a supervised machine learning
model to predict a new house’s price based on the examples it has observed.
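A minimal supervised-learning sketch for the house-price example using scikit-learn; the feature values and prices are invented toy data, not a real dataset:

```python
from sklearn.linear_model import LinearRegression

# Features X: [square footage, number of rooms]; labels y: known selling prices.
X = [[1000, 2], [1500, 3], [2000, 3], [2500, 4]]
y = [150_000, 200_000, 260_000, 320_000]

model = LinearRegression().fit(X, y)        # learn the mapping f: X -> y from labelled data
print(model.predict([[1800, 3]]))           # predicted price of an unseen house
```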
Unsupervised algorithm example: Suppose the unsupervised learning algorithm is
given an input dataset containing images of different types of cats and dogs. The
algorithm is never trained upon the given dataset, which means it does not have
any idea about the features of the dataset. The task of the unsupervised learning
algorithm is to identify the image features on its own. The unsupervised learning
algorithm will perform this task by clustering the image dataset into groups
according to the similarities between images.
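A minimal unsupervised-learning sketch with scikit-learn: k-means groups unlabelled points by similarity, standing in for the cats-and-dogs image example (toy 2-D data, invented for illustration):

```python
from sklearn.cluster import KMeans

X = [[1.0, 1.1], [0.9, 1.0], [1.2, 0.8],    # points that happen to form one cluster
     [8.0, 8.2], [7.9, 8.1], [8.3, 7.8]]    # and another cluster

kmeans = KMeans(n_clusters=2, n_init=10, random_state=0).fit(X)
print(kmeans.labels_)                       # e.g. [0 0 0 1 1 1] -- no labels were ever given
```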
iii. Dataset: It is a collection of data in which the data is arranged in some order. It can
contain anything from an array to a database table, where every column
of the table represents a particular variable and each row corresponds to a given
member of the dataset in question.
iv. Training set: It is the actual dataset used to train the model to perform
various actions. It is the data from which an algorithm (for example, a neural
network) learns in order to produce results. It makes up the majority of the total
data.
v. Test set: The data used to evaluate how well your algorithm was trained with the
training set.
vi. Cross-validation: A statistical method used to estimate the skill of machine
learning models. It is commonly used in applied machine learning to compare and
select a model for a given predictive modelling problem. The idea of cross-validation is to
split the training set into two: a set of examples to train with and a validation set.
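A minimal scikit-learn sketch of a train/test split plus k-fold cross-validation; the data are toy numbers invented for illustration:

```python
from sklearn.model_selection import train_test_split, cross_val_score
from sklearn.linear_model import LinearRegression

X = [[i] for i in range(20)]
y = [2 * i + 1 for i in range(20)]

# Hold out a test set for the final evaluation of the trained model.
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.25, random_state=0)
model = LinearRegression().fit(X_train, y_train)
print("test R^2:", model.score(X_test, y_test))

# 5-fold cross-validation on the training data to estimate model skill.
print("CV scores:", cross_val_score(LinearRegression(), X_train, y_train, cv=5))
```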
vii. Confusion Matrix: An N x N matrix used for evaluating the performance of a
classification model, where N is the number of target classes. The matrix
compares the actual target values with those predicted. The columns represent
actual values of the target variables. The rows represent the predicted values of
the target variable.
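A minimal confusion-matrix sketch with scikit-learn on invented labels. Note that scikit-learn’s convention is rows = actual classes and columns = predicted classes, i.e. the transpose of the layout described above:

```python
from sklearn.metrics import confusion_matrix

y_actual    = ["cat", "cat", "dog", "dog", "dog", "cat"]
y_predicted = ["cat", "dog", "dog", "dog", "cat", "cat"]

print(confusion_matrix(y_actual, y_predicted, labels=["cat", "dog"]))
# [[2 1]
#  [1 2]]
```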
