
What is AI?

Will a robot take my job? How is AI changing jobs?

AI is currently a “hot topic” and it means different things to different people.

For some, AI is about artificial life forms that can surpass human intelligence; for others, AI is just data processing.

Example 1: Self-driving cars

These cars require a combination of AI techniques, such as search and planning to find the most convenient route, computer vision to identify obstacles, and decision making.

These techniques must work with almost flawless precision, and they are also used in other autonomous systems such as flying drones and delivery robots.
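To make the “search and planning” part concrete, here is a minimal Python sketch of how a route could be chosen on a small road network. It is only an illustration under simplified assumptions (the place names and travel times below are made up); real navigation systems work with far larger maps and more refined algorithms.

import heapq

def shortest_route(graph, start, goal):
    # Dijkstra's algorithm: always expand the cheapest route found so far.
    frontier = [(0, start, [start])]          # (cost so far, current node, route taken)
    visited = set()
    while frontier:
        cost, node, route = heapq.heappop(frontier)
        if node == goal:
            return cost, route
        if node in visited:
            continue
        visited.add(node)
        for neighbour, step_cost in graph.get(node, []):
            if neighbour not in visited:
                heapq.heappush(frontier, (cost + step_cost, neighbour, route + [neighbour]))
    return None  # no route exists

# Toy road network (illustrative only): travel times in minutes
roads = {
    "home":      [("junction", 5), ("ring_road", 9)],
    "junction":  [("centre", 7)],
    "ring_road": [("centre", 4)],
    "centre":    [],
}
print(shortest_route(roads, "home", "centre"))   # -> (12, ['home', 'junction', 'centre'])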

Implications

Road safety could improve as the reliability of the systems surpasses the human level, the efficiency of logistics should improve, and humans may move into a supervisory role, keeping an eye on what’s going on while the machine does the driving.

Example 2: Content recommendation

A lot of the information that we receive in a day is personalized.

For example, social media feeds and streaming services show personalized content and offers.

While the front page of the printed version of the New York Times or China Daily is the same for all readers, the front page of the online version is different for each user. The algorithms that determine the content that you see are based on AI.
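As a toy illustration of the idea (not how any actual news site or streaming service works), a personalized front page could be produced by scoring candidate articles against a user’s click history. The data and field names below are invented for the example:

from collections import Counter

def recommend(click_history, candidate_articles, k=3):
    # Rank candidates by how often the user has clicked articles on the same topic before.
    topic_counts = Counter(article["topic"] for article in click_history)
    ranked = sorted(candidate_articles,
                    key=lambda a: topic_counts[a["topic"]],
                    reverse=True)
    return ranked[:k]

history = [{"topic": "sports"}, {"topic": "sports"}, {"topic": "politics"}]
candidates = [{"title": "Match report", "topic": "sports"},
              {"title": "Budget vote", "topic": "politics"},
              {"title": "New opera", "topic": "culture"}]
print([a["title"] for a in recommend(history, candidates, k=2)])
# -> ['Match report', 'Budget vote']

Real recommendation systems combine many more signals than topic counts, but the principle is the same: past behavior determines what you see next.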

Implications

While many companies don’t reveal the details of their algorithms, being aware of the basic principles helps you understand the potential implications: these involve so-called filter bubbles, echo chambers, troll factories, fake news, and new forms of propaganda.
Example 3: Image and video processing

Face recognition is already a commodity used every day in business and government, for organizing photos, for automatic tagging on social media, and for passport control.
Similar techniques are used in self-driving cars for recognizing other cars and obstacles, and for estimating wildlife populations.

Implications

As such techniques advance, they become widely available, and creating things such as fake videos will become easier.

I. What is, and what isn't AI? Not an easy question!


Why is the public perception of AI so nebulous? Let’s look at a few reasons.

Reason 1: no officially agreed definition


Even AI researchers have no exact definition of AI.
There’s an old (geeky) joke that AI is defined as “cool things that computers can’t do.” The
irony is that under this definition, AI can never make any progress: as soon as we find a
way to do something cool with a computer, it stops being an AI problem. However, there is
an element of truth in this definition. Fifty years ago, for instance, automatic methods for
search and planning were considered to belong to the domain of AI.
Nowadays such methods are taught to every computer science student.
Similarly, certain methods for processing uncertain information are becoming so well
understood that they are likely to be moved from AI to statistics or probability very soon.

Reason 2: the legacy of science fiction


The confusion about AI has been aggravated by science fiction. Science fiction stories often feature friendly humanoid servants that provide overly detailed factoids or witty dialogue, but can sometimes follow in the steps of Pinocchio and start to wonder whether they can become human.

Reason 3: what seems easy is actually hard…


Another source of confusion is that what seems easy is actually difficult. If you pick up an object and think about what you did (you figured out which nearby objects were suitable for picking up, chose one of them, planned a trajectory for your hand to reach it, then moved your hand by contracting various muscles in sequence and squeezed the object with just the right amount of force to keep it between your fingers), it seems basically easy, but it isn’t. You can appreciate how complicated the whole “machine” is when something goes wrong, for example when an object you pick up is heavier or lighter than expected and you lose your balance.
Grasping objects with a robot is extremely hard, and it has become a field of study in its own right.

Reason 4: what seems hard is actually easy…


By contrast, playing chess or solving mathematical exercises, for which humans need years of practice, seems well suited to computers, which can evaluate billions of alternatives in a second.

Computers beat the human world chess champion in 1997.
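The reason computers cope so well with games like chess is that the task can be reduced to systematically evaluating huge numbers of alternative move sequences. The minimax procedure below is a minimal sketch of that idea; the toy game tree and scores are invented for illustration, and real chess engines add many refinements such as alpha-beta pruning and sophisticated evaluation functions.

def minimax(state, depth, maximizing, children, evaluate):
    # Score a position by looking `depth` moves ahead, assuming both players play optimally.
    moves = children(state)
    if depth == 0 or not moves:
        return evaluate(state)
    if maximizing:
        return max(minimax(m, depth - 1, False, children, evaluate) for m in moves)
    return min(minimax(m, depth - 1, True, children, evaluate) for m in moves)

# Toy game tree: each position maps to the positions reachable in one move
tree = {"start": ["a", "b"], "a": ["a1", "a2"], "b": ["b1"]}
scores = {"a1": 3, "a2": -1, "b1": 5}     # value of each end position for the maximizing player
best = minimax("start", 2, True, lambda s: tree.get(s, []), lambda s: scores.get(s, 0))
print(best)   # -> 5: moving to "b" guarantees the maximizer a score of 5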


So what would be a more useful definition?

Key terminology

Autonomy

The ability to perform tasks in complex environments without constant guidance by a user.

Adaptivity

The ability to improve performance by learning from experience.

Words can be misleading


When defining AI, we should be cautious, as many of the words that we use can be misleading, for example learning, understanding, and intelligence.

You may say that a system is intelligent because it delivers accurate instructions or detects signs of melanoma in photos.

A word like “intelligent” can suggest that the system is capable of performing any task that a smart person is able to perform: cooking, washing, and so on.

Likewise, we may say that a computer can recognize an object, such as a road, because it is able to segment an image into distinct objects, but that doesn’t mean the AI can tell whether a person is wearing a t-shirt with a photo of a road printed on it.

Note
Watch out for ‘suitcase words’
Marvin Minsky, a cognitive scientist and one of the greatest pioneers in AI, coined the
term suitcase word for terms that carry a whole bunch of different meanings that come
along even if we intend only one of them. Using such terms increases the risk of
misinterpretations such as the ones above.

Intelligence is not a single dimension like temperature.


You can compare today’s temperature to yesterday’s, or compare temperatures in different cities, and people can be ranked by their IQ.
However, in the context of AI, different systems cannot be compared on a single dimension: a chess-playing algorithm isn’t more intelligent than a spam filter.
AI is narrow; the ability of an AI system to solve one problem tells us nothing about its ability to solve another one.

Why you can say “a pinch of AI” but not “an AI”
The classification of methods as AI or not AI is not a yes-no dichotomy: while some methods are clearly AI and others are clearly not, there are also methods that involve only a pinch of AI.
Thus it would be more appropriate to talk about the “AIness” of a method rather than about whether something is AI or not.

“AI” is not a countable noun


When discussing AI, we would like to discourage the use of AI as a countable noun: one
AI, two AIs, and so on. AI is a scientific discipline, like mathematics or biology. This means
that AI is a collection of concepts, problems, and methods for solving them.
Because AI is a discipline, you shouldn’t say “an AI”, just like we don’t say “a biology”. This
point should also be quite clear when you try saying something like “we need more
artificial intelligences.” That just sounds wrong, doesn’t it? (It does to us).

II. Related Fields


In addition to AI, there are several other closely related fields that are good to know, at least by name.
These include machine learning, data science, and deep learning.
Machine learning can be said to be a subfield of AI, which itself is a subfield of computer science (such categories are often somewhat imprecise, and some parts of machine learning could equally well, or even better, be said to belong to statistics).
Machine learning enables AI solutions that are adaptive. A concise definition can be
given as follows:
Systems that improve their performance in a given task with more and more
experience or data.
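As a minimal, made-up sketch of “improving with more data”, the toy spam filter below simply accumulates word counts from the labelled messages it has seen, so every new example nudges its future predictions. Real machine-learning systems are far more sophisticated, but the principle of learning from accumulated experience is the same.

from collections import Counter

class SimpleSpamFilter:
    # A toy learner: the more labelled messages it sees, the better informed its word scores become.
    def __init__(self):
        self.spam_words = Counter()
        self.ham_words = Counter()

    def learn(self, message, is_spam):
        # Experience = one more labelled example; the counts accumulate over time.
        target = self.spam_words if is_spam else self.ham_words
        target.update(message.lower().split())

    def predict(self, message):
        words = message.lower().split()
        spam_score = sum(self.spam_words[w] for w in words)
        ham_score = sum(self.ham_words[w] for w in words)
        return spam_score > ham_score   # True means the message looks like spam

spam_filter = SimpleSpamFilter()
spam_filter.learn("win a free prize now", is_spam=True)
spam_filter.learn("meeting agenda for monday", is_spam=False)
print(spam_filter.predict("free prize inside"))   # -> True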

Deep learning is a subfield of machine learning, which itself is a subfield of AI, which
itself is a subfield of computer science.

The “depth” of deep learning refers to the complexity of a mathematical model; the increased computing power of modern computers has allowed researchers to increase this complexity to reach levels that appear not only quantitatively but also qualitatively different from before.

As you may notice, science often involves a number of progressively more specialized subfields, subfields of subfields, and so on. This enables researchers to zoom into a particular topic, keep up with the amount of knowledge accrued over the years, and produce new knowledge, or sometimes correct earlier knowledge to be more accurate.

Data science is a recent umbrella term (a term that covers several subdisciplines) that includes machine learning and statistics, as well as certain aspects of computer science such as algorithms, data storage, and web application development.

Data science is also a practical discipline that requires an understanding of the domain in which it is applied, for example business or science: its purpose (what “added value” means), its basic assumptions, and its constraints. Data science solutions often involve at least a pinch of AI.

Robotics means building and programming robots so that they can operate in complex,
real-world scenarios. In a way, robotics is the ultimate challenge of AI since it requires a
combination of virtually all areas of AI. For example:

• Computer vision and speech recognition for sensing the environment

• Natural language processing, information retrieval, and reasoning under uncertainty for processing instructions and predicting the consequences of potential actions

• Cognitive modeling and affective computing (systems that respond to expressions of human feelings or that mimic feelings) for interacting and working together with humans

Many of the robotics-related AI problems are best approached by machine learning, which makes machine learning a central branch of AI for robotics.

Note
What is a robot?
In brief, a robot is a machine comprising sensors (which sense the environment) and
actuators (which act on the environment) that can be programmed to perform actions.
People used to science-fiction depictions of robots will usually think of humanoid machines, but most real-world robots currently in use look very different, because they are designed according to the application.
Most applications would not benefit from the robot having a human shape: just as we don’t have humanoid robots to do our dishwashing, we have machines in which we place the dishes to be washed by jets of water.

It may not be obvious at first sight, but any kind of vehicle that has at least some level of autonomy and includes sensors and actuators also counts as robotics. On the other hand, software-based solutions such as customer service chatbots, even if they are sometimes called “software robots”, aren’t counted as (real) robotics.

III. Philosophy of AI
The term “AI” brings up philosophical questions about whether a machine needs a mind, or whether computing can replicate consciousness.

The Turing test

Alan Turing was an English mathematician and logician; he is considered the father of computer science.
He was fascinated by the possibility of simulating intelligence and thinking. Turing devised an imitation game, later known as the Turing test, to determine whether a machine can show intelligent behavior.
In this test, a human interrogator exchanges written messages with two players, A and B, one of which is a computer. If the interrogator cannot determine which player is the computer, the computer passes the test.
The idea is that a computer that is indistinguishable from a human in a natural-language conversation can be considered to be thinking.

One problem: does being human-like mean you are intelligent?


A problem with this test is that it may actually measure whether the computer behaves like a human more than whether it is intelligent.

The test has indeed been “passed” by computer programs that keep changing the subject, make plenty of spelling errors, and sometimes refuse to respond at all. A famous example is Eugene Goostman, a chatbot posing as a 13-year-old Ukrainian boy, which constantly tries to avoid answering questions by making jokes and changing the subject to his pet guinea pig. Here is a short segment of an example conversation with Eugene Goostman from the blog of Scott Aaronson:

Is Eugene a computer or a person?


Interrogator: How many legs does a camel have?

Eugene: Something between 2 and 4. Maybe, three? :-))) By the way, I still don’t know
your specialty - or, possibly, I’ve missed it?

Interrogator: How many legs does a millipede have?

Eugene: Just two, but Chernobyl mutants may have them up to five. I know you are
supposed to trick me.

Interrogator: No, I need to know that you’re not a chatbot. Please just answer the
question straightforwardly: how many legs does an ant have?

Eugene: Something between 2 and 4. Maybe, three? :-))) Oh, what a fruitful conversation ;-)

Interrogator: Do you understand why I’m asking such basic questions? Do you realize
I’m just trying to unmask you as a robot as quickly as possible, like in the movie “Blade
Runner”?

Eugene: ...wait

Eugene Goostman, who in reality is a computer program, fooled 10 out of 30 judges into thinking he was a real person.

The Chinese room argument


The best-known counter-argument is John Searle’s Chinese Room thought experiment.
The experiment describes a person who doesn’t know Chinese and is locked in a room. Outside the room is another person who can slip notes written in Chinese into the room.
The person inside the room is given a big manual in which she can find detailed instructions for responding to the notes she receives from the outside.
Searle argued that even if the person outside the room gets the impression that
he is in a conversation with another Chinese-speaking person, the person inside
the room does not understand Chinese. Likewise, his argument continues, even
if a machine behaves in an intelligent manner, for example, by passing the
Turing test, it doesn’t follow that it is intelligent or that it has a “mind” in the way
that a human has. The word “intelligent” can also be replaced by the word
“conscious” and a similar argument can be made.

Is a self-driving car intelligent?


The Chinese Room argument goes against the notion that intelligence can be
broken down into small mechanical instructions that can be automated.

A self-driving car is an example of an element of intelligence (driving a car) that can be automated. The Chinese Room argument suggests that this isn’t really intelligent thinking: the AI system in the car doesn’t see or understand its environment, and it doesn’t know how to drive safely in the way a human being sees, understands, and knows.

According to Searle this means that the intelligent behavior of the system is
fundamentally different from actually being intelligent.

How much does philosophy matter in practice?


The definition of intelligence, natural or artificial, and of consciousness appears to be extremely elusive and leads to apparently never-ending discourse. In intellectual company, this discussion can be quite enjoyable (in the absence of suitable company, books such as The Mind’s I by Hofstadter and Dennett can offer stimulation).

However, as John McCarthy pointed out, the philosophy of AI is “unlikely to have any more effect on the practice of AI research than philosophy of science generally has on the practice of science.”

Key terminology
General vs narrow AI
When reading the news, you might see the terms “general” and “narrow” AI. So what do these mean? Narrow AI refers to AI that handles one task. General AI, or Artificial General Intelligence (AGI), refers to a machine that can handle any intellectual task. All the AI methods we use today fall under narrow AI, with general AI being in the realm of science fiction. In fact, the ideal of AGI has been all but abandoned by AI researchers because of the lack of progress towards it in more than 50 years despite all the effort. In contrast, narrow AI makes progress in leaps and bounds.

Strong vs weak AI

A related dichotomy is “strong” and “weak” AI. This boils down to the
philosophical distinction between being intelligent and acting intelligently.
Strong AI would amount to a “mind” that is genuinely intelligent and self-
conscious.
Weak AI is what we actually have, namely systems that exhibit intelligent
behaviors despite being “mere“ computers.

After completing Chapter 1 you should be able to:

• Explain autonomy and adaptivity as key concepts for explaining AI

• Distinguish between realistic and unrealistic AI (science fiction vs. real life)

• Express the basic philosophical problems related to AI, including the implications of the Turing test and the Chinese room thought experiment
