
EMERGING TECHNOLOGIES

(CPE0051)
Module 8
Artificial Intelligence
1. Introduce the concept of artificial intelligence.
2. Identify the different categories of AI.
3. Identify the different types of AI.
4. Identify the applications of AI.
5. Orient the students on issues regarding AI.
What is Artificial Intelligence?

Artificial intelligence (AI) is a wide-ranging branch of computer science concerned with building smart machines capable of performing tasks that typically require human intelligence. AI is an interdisciplinary science with multiple approaches, but advancements in machine learning and deep learning are creating a paradigm shift in virtually every sector of the tech industry.
HOW DOES ARTIFICIAL INTELLIGENCE WORK?

Less than a decade after breaking the Nazi encryption machine Enigma and helping the Allied Forces win World War II, mathematician Alan Turing changed history a second time with a simple question: "Can machines think?"
Turing's 1950 paper "Computing Machinery and Intelligence," and its subsequent Turing Test, established the fundamental goal and vision of artificial intelligence.

At its core, AI is the branch of computer science that aims to answer Turing's question in the affirmative. It is the endeavor to replicate or simulate human intelligence in machines.
The expansive goal of artificial intelligence has given rise to many questions and debates, so much so that no singular definition of the field is universally accepted.

The major limitation in defining AI as simply "building machines that are intelligent" is that it doesn't actually explain what artificial intelligence is. What makes a machine intelligent?
In their groundbreaking textbook Artificial Intelligence: A
Modern Approach, authors Stuart Russell and Peter Norvig
approach the question by unifying their work around the
theme of intelligent agents in machines. With this in mind, AI
is "the study of agents that receive percepts from the
environment and perform actions." (Russell and Norvig viii)
Norvig and Russell go on to explore four different approaches
that have historically defined the field of AI:

Thinking humanly
Thinking rationally
Acting humanly
Acting rationally
The first two ideas concern thought processes and
reasoning, while the others deal with behavior. Norvig and
Russell focus particularly on rational agents that act to
achieve the best outcome, noting "all the skills needed for the
Turing Test also allow an agent to act rationally" (Russell and Norvig 4).
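To make the agent idea concrete, here is a minimal sketch in Python (not from Russell and Norvig; the thermostat scenario and thresholds are invented purely for illustration) of an agent that receives percepts from its environment and performs actions:

```python
# A minimal sketch of an "agent": it receives percepts from its environment
# and performs actions. The environment, percepts and the reflex rule here
# are invented for illustration only.

class ThermostatAgent:
    """A simple reflex agent: it maps each percept directly to an action."""

    def __init__(self, target_temp=21.0):
        self.target_temp = target_temp

    def act(self, percept):
        # percept: the current room temperature in degrees Celsius
        if percept < self.target_temp - 0.5:
            return "turn_heater_on"
        elif percept > self.target_temp + 0.5:
            return "turn_heater_off"
        return "do_nothing"

agent = ThermostatAgent()
for temperature in [18.0, 20.9, 23.5]:        # simulated percept sequence
    print(temperature, "->", agent.act(temperature))
```

A reflex agent like this maps each percept directly to an action; more capable agents add internal state, goals and learning on top of the same percept-action loop.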
Patrick Winston, the Ford professor of artificial intelligence
and computer science at MIT, defines AI as "algorithms
enabled by constraints, exposed by representations that
support models targeted at loops that tie thinking, perception
and action together."
While these definitions may seem abstract to the average
person, they help focus the field as an area of computer
science and provide a blueprint for infusing machines and
programs with machine learning and other subsets of artificial
intelligence.
While addressing a crowd at the Japan AI Experience in
2017, DataRobot CEO Jeremy Achin began his speech by
offering the following definition of how AI is used today:
"AI is a computer system able to perform tasks that ordinarily
require human intelligence... Many of these artificial
intelligence systems are powered by machine learning, some
of them are powered by deep learning and some of them are
powered by very boring things like rules."
Categories of AI
Narrow AI: Sometimes referred to as "Weak AI," this kind of artificial intelligence operates within a limited context and is a simulation of human intelligence. Narrow AI is often focused on performing a single task extremely well, and while these machines may seem intelligent, they are operating under far more constraints and limitations than even the most basic human intelligence.
Artificial General Intelligence (AGI): AGI, sometimes referred to as "Strong AI," is the kind of artificial intelligence we see in the movies, like the robots from Westworld or Data from Star Trek: The Next Generation. AGI is a machine with general intelligence and, much like a human being, it can apply that intelligence to solve any problem.
Narrow Artificial Intelligence
Narrow AI is all around us and is easily the most successful
realization of artificial intelligence to date. With its focus on
performing specific tasks, Narrow AI has experienced
numerous breakthroughs in the last decade that have had
"significant societal benefits and have contributed to the
economic vitality of the nation," according to "Preparing for
the Future of Artificial Intelligence," a 2016 report released by
the Obama Administration.
A few examples of Narrow AI include:

Google search
Image recognition software
Siri, Alexa and other personal assistants
Self-driving cars
IBM's Watson
Machine Learning & Deep Learning
Much of Narrow AI is powered by breakthroughs in machine
learning and deep learning. Understanding the difference
between artificial intelligence, machine learning and deep
learning can be confusing. Venture capitalist Frank
Chen provides a good overview of how to distinguish
between them, noting:
"Artificial intelligence is a set of algorithms and intelligence to
try to mimic human intelligence. Machine learning is one of
them, and deep learning is one of those machine learning
techniques."
Simply put, machine learning feeds a computer data and
uses statistical techniques to help it "learn" how to get
progressively better at a task, without having been
specifically programmed for that task, eliminating the need for
millions of lines of written code. Machine learning consists of
both supervised learning (using labeled data sets) and
unsupervised learning (using unlabeled data sets).
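As a rough illustration of these two flavors of machine learning, the following sketch uses scikit-learn (an assumption; the module does not name a specific toolkit) to fit a supervised classifier on labeled data and an unsupervised clustering model on the same data with the labels removed:

```python
# A minimal sketch, assuming scikit-learn is installed, contrasting
# supervised learning (labeled data) with unsupervised learning (unlabeled
# data). The Iris dataset and the chosen models are illustrative only.

from sklearn.datasets import load_iris
from sklearn.model_selection import train_test_split
from sklearn.linear_model import LogisticRegression
from sklearn.cluster import KMeans

X, y = load_iris(return_X_y=True)

# Supervised: the model "learns" from examples that already carry labels.
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)
clf = LogisticRegression(max_iter=1000).fit(X_train, y_train)
print("supervised accuracy:", clf.score(X_test, y_test))

# Unsupervised: no labels are given; the model looks for structure on its own.
clusters = KMeans(n_clusters=3, n_init=10, random_state=0).fit_predict(X)
print("cluster sizes:", [int((clusters == k).sum()) for k in range(3)])
```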
Deep learning is a type of machine learning that runs inputs
through a biologically-inspired neural network architecture.
The neural networks contain a number of hidden layers
through which the data is processed, allowing the machine to
go "deep" in its learning, making connections and weighting
input for the best results.
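A minimal sketch of this idea, again assuming scikit-learn, is a small network whose inputs pass through several hidden layers; real deep learning platforms such as TensorFlow or PyTorch apply the same principle to far larger networks and data sets:

```python
# A minimal sketch of a neural network with multiple hidden layers.
# scikit-learn's MLPClassifier is used here only as an illustration of
# "depth"; it is not a full deep learning platform.

from sklearn.datasets import load_digits
from sklearn.model_selection import train_test_split
from sklearn.neural_network import MLPClassifier

X, y = load_digits(return_X_y=True)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

# Three hidden layers of 64, 32 and 16 units form the "deep" part of the net.
net = MLPClassifier(hidden_layer_sizes=(64, 32, 16), max_iter=500,
                    random_state=0)
net.fit(X_train, y_train)
print("test accuracy:", net.score(X_test, y_test))
```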
Artificial General Intelligence
The creation of a machine with human-level intelligence that
can be applied to any task is the Holy Grail for many AI
researchers, but the quest for AGI has been fraught with
difficulty.
The search for a "universal algorithm for learning and acting
in any environment" (Russell and Norvig 27) isn't new, but
time hasn't eased the difficulty of essentially creating a
machine with a full set of cognitive abilities.
AGI has long been the muse of dystopian science fiction, in
which super-intelligent robots overrun humanity, but experts
agree it's not something we need to worry about anytime
soon.
Types of AI
AI or artificial intelligence is the simulation of human
intelligence processes by machines, especially computer
systems. These processes include learning, reasoning and
self-correction. Some of the applications of AI include expert
systems, speech recognition and machine vision. Artificial
Intelligence is advancing dramatically. It
is already transforming our world socially, economically and
politically.
The term AI was coined by John McCarthy, an American computer scientist, in 1956 at the Dartmouth Conference, where the discipline was born. Today, it is an umbrella term that encompasses everything from robotic process automation to actual robotics. AI can perform tasks such as identifying patterns in data more efficiently than humans, enabling businesses to gain more insight from their data.
With the help of AI, massive amounts of data can be analyzed to map poverty and climate change, automate agricultural practices and irrigation, individualize healthcare and learning, predict consumption patterns, and streamline energy usage and waste management.
Types of Artificial Intelligence:
Artificial Intelligence can be classified in several ways. The first classifies AI as either weak AI or strong AI. Weak AI, also known as narrow AI, is an AI system that is designed and trained for a specific type of task. Strong AI, also known as artificial general intelligence, is an AI system with generalized human cognitive abilities, so that when presented with an unfamiliar task it has enough intelligence to find a solution.
The Turing Test, developed by mathematician Alan Turing in 1950, is a method used to determine whether a computer can think like a human, although the method is controversial. A second classification comes from Arend Hintze, an assistant professor of integrative biology and computer science and engineering at Michigan State University, who categorized AI into the following four types:
Type 1: Reactive Machines. An example is Deep Blue, the IBM chess program that can identify the pieces on the chess board and make predictions accordingly, analyzing possible moves for itself and its opponent. Its major limitation is that it has no memory and cannot use past experiences to inform future decisions. Deep Blue and AlphaGo were designed for narrow purposes and cannot easily be applied to any other situation.
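The sketch below illustrates, on a deliberately tiny game rather than chess, what a reactive machine does: it exhaustively evaluates the moves available from the current position and picks the best one, but it retains no memory between games. The take-away game and the code are illustrative inventions, not Deep Blue's actual algorithm:

```python
# A minimal sketch of a "reactive machine": it searches the moves available
# from the current position and their consequences, but keeps no memory of
# past games. The game: take 1-3 sticks per turn; whoever takes the last
# stick wins. It is chosen only because it is small enough to search fully.

def best_move(sticks):
    """Return (move, can_win) for the player to move, by exhaustive search."""
    for move in (1, 2, 3):
        if move == sticks:
            return move, True                  # taking the last stick wins
        if move < sticks:
            _, opponent_wins = best_move(sticks - move)
            if not opponent_wins:              # leave the opponent a losing position
                return move, True
    return 1, False                            # every move loses; take 1 by default

print(best_move(7))   # -> (3, True): take 3, leaving 4, a losing position
```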
Type 2: Limited Memory. These AI systems can use past experiences to inform future decisions. Most of the decision-making functions in autonomous vehicles are designed this way.
Type 3: Theory of Mind. This is a psychology term referring to the understanding that others have their own beliefs, desires and intentions that impact the decisions they make. At present, this kind of artificial intelligence does not exist.
Type 4: Self-Awareness. In this category, AI systems have a sense of self and consciousness. Machines with self-awareness understand their current state and can use that information to infer what others are feeling. This type of AI does not yet exist.
AI Technologies

The market for artificial intelligence technologies is flourishing. Artificial intelligence involves a variety of technologies and tools; some of the more recent ones are as follows:
Natural Language Generation: A tool that produces text from computer data. Currently used in customer service, report generation, and summarizing business intelligence insights.
Speech Recognition: Transcribes and transforms human
speech into a format useful for computer applications.
Presently used in interactive voice response systems and
mobile applications.
Virtual Agent: A virtual agent is a computer-generated, animated, artificially intelligent virtual character (usually with an anthropomorphic appearance) that serves as an online customer service representative. It conducts intelligent conversations with users, responds to their questions and performs appropriate non-verbal behavior. An example of a typical virtual agent is Louise, the virtual agent of eBay, created by the French/American developer VirtuOz.
Machine Learning: Provides algorithms, APIs (application program interfaces), development and training toolkits, data, and computing power to design, train, and deploy models into applications, processes, and other machines. Currently used in a wide range of enterprise applications, mostly involving prediction or classification.
Deep Learning Platforms: A special type of machine
learning consisting of artificial neural networks with multiple
abstraction layers. Currently used in pattern recognition and
classification applications supported by very large data sets.
Biometrics: Biometrics uses methods for the unique recognition of humans based upon one or more intrinsic physical or behavioral traits. In computer science in particular, biometrics is used as a form of identity access management and access control. It is also used to identify individuals in groups that are under surveillance. Currently used in market research.
Robotic Process Automation: Uses scripts and other methods to automate human actions in support of efficient business processes. Currently used where it is inefficient for humans to execute a task.
Text Analytics and NLP: Natural language processing (NLP)
uses and supports text analytics by facilitating the
understanding of sentence structure and meaning, sentiment,
and intent through statistical and machine learning methods.
Currently used in fraud detection and security, a wide range
of automated assistants, and applications for mining
unstructured data.
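As a small illustration of the sentiment side of text analytics, the following sketch uses NLTK's VADER sentiment analyzer (one possible toolkit; the module does not prescribe a specific library) to score the sentiment of short pieces of text:

```python
# A minimal sketch of sentiment analysis, one piece of text analytics and NLP.
# NLTK's VADER model is an assumed choice of library for illustration.

import nltk
from nltk.sentiment import SentimentIntensityAnalyzer

nltk.download("vader_lexicon", quiet=True)     # one-time lexicon download
sia = SentimentIntensityAnalyzer()

for sentence in ["The support agent solved my problem quickly!",
                 "This is the worst service I have ever used."]:
    # compound ranges from -1 (most negative) to +1 (most positive)
    print(sia.polarity_scores(sentence)["compound"], "-", sentence)
```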
Applications for AI
Artificial Intelligence in Healthcare: Companies are
applying machine learning to make better and faster
diagnoses than humans. One of the best-known technologies
is IBM’s Watson. It understands natural language and can
respond to questions asked of it. The system mines patient
data and other available data sources to form a hypothesis,
which it then presents with a confidence scoring schema.
AI is a field of study that seeks to emulate human intelligence in computer technology and could assist both doctors and patients in the following ways:
- By providing a laboratory for the examination, representation and cataloguing of medical information
- By devising novel tools to support decision making and research
- By integrating activities in the medical, software and cognitive sciences
- By offering a content-rich discipline for future scientific medical communities.
Artificial Intelligence in business: Robotic process
automation is being applied to highly repetitive tasks normally
performed by humans. Machine learning algorithms are being
integrated into analytics and CRM (Customer relationship
management) platforms to uncover information on how to
better serve customers.
Chatbots have already been incorporated into websites and e-commerce companies to provide immediate service to customers.
Automation of job positions has also become a talking point
among academics and IT consultancies.
AI in education: It automates grading, giving educators
more time. It can also assess students and adapt to their
needs, helping them work at their own pace.
AI in Autonomous vehicles: Just like humans, self-driving cars need sensors to understand the world around them and a brain to collect and process that information and choose specific actions based on it. Autonomous vehicles are equipped with advanced tools to gather information, including long-range radar, cameras, and LIDAR. Each of these technologies is used in a different capacity and each collects different information.
This information is useless unless it is processed and some form of action is taken based on it. This is where artificial intelligence comes into play; it can be compared to the human brain. AI has several applications for these vehicles, and among them the more immediate ones are as follows:
- Directing the car to a gas station or recharging station when it is running low on fuel.
- Adjusting the trip's directions based on known traffic conditions to find the quickest route.
- Incorporating speech recognition for advanced communication with passengers.
- Providing natural language interfaces and virtual assistant technologies.
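The following sketch is an invented, rule-based simplification of the "gather information, then decide" loop described above; production self-driving systems fuse radar, camera and LIDAR data with learned models rather than a handful of hand-written rules:

```python
# A minimal, invented sketch of turning raw sensor readings into an action.
# Real systems fuse radar, camera and LIDAR data with learned models; this
# rule-based version only illustrates the collect-then-decide loop.

def decide(radar_distance_m, camera_sees_pedestrian, lidar_lane_clear):
    """Pick a driving action from (simplified) sensor inputs."""
    if camera_sees_pedestrian and radar_distance_m < 30:
        return "emergency_brake"
    if radar_distance_m < 60:
        return "change_lane" if lidar_lane_clear else "slow_down"
    return "keep_speed"

print(decide(25, True, False))    # -> emergency_brake
print(decide(50, False, True))    # -> change_lane
print(decide(120, False, True))   # -> keep_speed
```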
AI for robotics will allow us to address the challenges of caring for an aging population and enable much longer independence. It could drastically reduce, and perhaps even eliminate, traffic accidents and deaths, and enable disaster response in dangerous situations such as the nuclear meltdown at the Fukushima power plant.
Cyborg Technology: One of the main limitations of being
human is simply our own bodies and brains. Researcher
Shimon Whiteson thinks that in the future, we will be able to
augment ourselves with computers and enhance many of our
own natural abilities.
Though many of these possible cyborg enhancements would
be added for convenience, others may serve a more
practical purpose. Yoky Matsuoka of Nest believes that AI will
become useful for people with amputated limbs, as the brain
will be able to communicate with a robotic limb to give the
patient more control. This kind of cyborg technology would
significantly reduce the limitations that amputees deal with
daily.
Seven Most Pressing Ethical Issues in AI
1. Job Loss and Wealth Inequality

One of the primary concerns people have with AI is the future loss of jobs. Should we strive to fully develop and integrate AI into society if it means many people will lose their jobs — and quite possibly their livelihood?
According to a McKinsey Global Institute report, by the year 2030 about 800 million people could lose their jobs to AI-driven robots. Some would argue that if their jobs are taken by robots, perhaps those jobs are too menial for humans, and that AI can be responsible for creating better jobs that take advantage of uniquely human abilities involving higher cognitive functions, analysis and synthesis.
Another point is that AI may create more jobs — after all,
people will be tasked with creating these robots to begin with
and then manage them in the future.
One issue related to job loss is wealth inequality. Consider
that most modern economic systems require workers to
produce a product or service with their compensation based
on an hourly wage. The company pays wages, taxes and
other expenses, with left-over profits often being injected
back into production, training and/or creating more business
to further increase profits. In this scenario, the economy
continues to grow.
But what happens if we introduce AI into the economic flow?
Robots do not get paid hourly nor do they pay taxes. They
can contribute at a level of 100% with low ongoing cost to
keep them operable and useful. This opens the door for
CEOs and stakeholders to keep more company profits
generated by their AI workforce, leading to greater wealth
inequality.
Perhaps this could lead to a case of “the rich” — those
individuals and companies who have the means to pay for
AIs — getting richer.
2. AI is Imperfect — What if it Makes a Mistake?

AIs are not immune to making mistakes, and machine learning takes time to become useful. If trained well, using good data, AIs can perform well. However, if we feed AIs bad data or make errors with internal programming, the AIs can be harmful. Take Microsoft's AI chatbot, Tay, which was released on Twitter in 2016.
In less than one day, due to the information it was receiving
and learning from other Twitter users, the robot learned
to spew racist slurs and Nazi propaganda. Microsoft shut the
chatbot down immediately since allowing it to live would have
obviously damaged the company’s reputation.
Yes, AIs make mistakes. But do they make greater or fewer
mistakes than humans? How many lives have humans taken
with mistaken decisions? Is it better or worse when an AI
makes the same mistake?
3. Should AI Systems Be Allowed to Kill?
In this TEDx speech, Jay Tuck describes AIs as software that
writes its own updates and renews itself. This means that, as
programmed, the machine is not created to do what we want
it to do — it does what it learns to do. Jay goes on to
describe an incident with a robot called Tallon. Its
computerized gun was jammed and open fired uncontrollably
after an explosion killing 9 people and wounding 14 more.
Predator drones, such as the General Atomics MQ-1
Predator, have been in existence for over a decade. These
remotely piloted aircraft can fire missiles, although US law
requires that humans make the actual kill decisions. But with
drones playing more of a role in aerial military defense, we
need to further examine their role and how they are used.
Is it better to use AIs to kill than to put humans in the line of
fire? What if we only use robots for deterrence rather than
actual violence?
The Campaign to Stop Killer Robots is a non-profit organized
to ban fully-autonomous weapons that can decide who lives
and dies without human intervention. “Fully autonomous
weapons would lack the human judgment necessary to
evaluate the proportionality of an attack, distinguish civilian
from combatant, and abide by other core principles of the
laws of war. History shows their use would not be limited to
certain circumstances.”
4. Rogue AIs

If there is a chance that intelligent machines can make mistakes, then it is within the realm of possibility that an AI can go rogue, or create unintended consequences from its actions in pursuing seemingly harmless goals.
One scenario of an AI going rogue is what we’ve already
seen in movies like The Terminator and TV shows where a
super-intelligent centralized AI computer becomes self-aware
and decides it doesn’t want human control anymore.

Right now, experts say that current AI technology is not yet capable of achieving this extremely dangerous feat of self-awareness; however, future AI supercomputers might.
The other scenario is one in which an AI is tasked, for instance, with studying the genetic structure of a virus in order to create a vaccine to neutralize it. After making lengthy calculations, the AI might instead formulate a solution that weaponizes the virus rather than producing a vaccine. It would be like opening a modern-day Pandora's box, and again ethics comes into play: legitimate concerns need to be addressed in order to prevent a scenario like this.
5. Singularity and Keeping Control Over AIs

Will AIs evolve to surpass human beings? What if they become smarter than humans and then try to control us? Will computers make humans obsolete? The point at which technology growth surpasses human intelligence is referred to as the "technological singularity."
Some believe this will signal the end of the human era and
that it could occur as early as 2030 based on the pace of
technological innovation. AIs leading to human extinction —
it’s easy to understand why the advancement of AI is scary to
many people.
6. How Should We Treat AIs?

Should robots be granted human rights or citizenship? If we evolve robots to the point that they are capable of "feeling," does that entitle them to rights similar to those of humans or animals? If robots are granted rights, then how do we rank their social status? This is one of the primary issues in "roboethics," a topic that was first raised by Isaac Asimov in 1942.
In 2017, the Hanson Robotics humanoid robot, Sophia, was
granted citizenship in Saudi Arabia. While some consider this
to be more of a PR stunt than actual legal recognition, it does
set an example of the type of rights AIs may be granted in the
future.
7. AI Bias

AI has become increasingly prevalent in facial and voice recognition systems, some of which have real business implications and directly impact people. These systems are vulnerable to biases and errors introduced by their human makers. In addition, the data used to train these AI systems can itself contain biases.
For instance, facial recognition algorithms made by Microsoft,
IBM and Megvii all had biases when detecting people’s
gender.

These AI systems were able to detect the gender of white men more accurately than the gender of darker-skinned men. Similarly, Amazon.com's termination of its AI hiring and recruitment tool is another example showing that AI cannot be guaranteed to be fair: the algorithm preferred male candidates over female ones. This was because Amazon's system was trained with data collected over a 10-year period that came mostly from male candidates.
https://www.valluriorg.com/blog/artificial-intelligence-and-its-applications/
https://builtin.com/artificial-intelligence
https://kambria.io/blog/the-7-most-pressing-ethical-issues-in-artificial-intelligence/
