
Republic of the Philippines

BOHOL ISLAND STATE UNIVERSITY


Calape, Bohol

Name: Francis Caboverde Instructor: Ms. Elrolen L. Asombrado


Program/Year/Section: BSCS 3A Date: April 28, 2022

Human Computer Interaction | MIDTERM EXAM (Topic Covered: Introduction)

1.
The Turing Test, proposed by Alan Turing (1950), was designed to provide a satisfactory operational
definition of intelligence. A computer passes the test if a human interrogator, after posing some written questions,
cannot tell whether the written responses come from a person or from a computer.
The Turing Test is a deceptively simple method of determining whether a machine can demonstrate human
intelligence: If a machine can engage in a conversation with a human without being detected as a machine, it has
demonstrated human intelligence. The Turing Test has become a fundamental motivator in the theory and
development of artificial Intelligence (AI).
• The Turing Test judges the conversational skills of a bot.
• According to the test, a computer program can think if its responses can fool a human into
believing it, too, is human.
• Not everyone accepts the validity of the Turing Test, but passing it remains a major challenge to
developers of artificial intelligence.
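The conversational trick behind many early chatbot entrants can be sketched in a few lines. This is a toy illustration only, in the spirit of ELIZA-style pattern matching; the word list and phrasing are invented, not taken from any actual Turing-test program:

```python
# Toy ELIZA-style responder: reflect first-person words back at the
# user and rephrase the statement as a question. A hypothetical
# sketch, not any real Turing-test entrant.
import re

REFLECTIONS = {"i": "you", "my": "your", "am": "are", "me": "you"}

def reply(utterance):
    """Swap first-person words for second-person ones and ask back."""
    words = [REFLECTIONS.get(w, w)
             for w in re.findall(r"[a-z']+", utterance.lower())]
    return "Why do you say " + " ".join(words) + "?"

print(reply("I am worried about my exam"))
# "Why do you say you are worried about your exam?"
```

Tricks of this kind can sustain a short conversation without any understanding, which is exactly why critics question what passing the test really demonstrates.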

The Turing Test Today


The Turing Test has its detractors, but it remains a measure of the success of artificial intelligence projects.
An updated version of the Turing Test has more than one human judge interrogating and chatting with both
subjects. The project is considered a success if more than 30% of the judges, after five minutes of conversation,
conclude that the computer is a human.
The Loebner Prize is an annual Turing Test competition that was launched in 1991 by Hugh Loebner, an
American inventor and activist. Loebner created additional rules requiring the human and the computer program
to have 25-minute conversations with each of four judges. The winner is the computer whose program receives
the most votes and the highest ranking from the judges.

Chatting with Eugene


Alan Turing predicted that a machine would pass the Turing Test by 2000. He was close.
In 2014, Kevin Warwick of the University of Reading organized a Turing Test competition to mark
the 60th anniversary of Alan Turing’s death. A computer chatbot called Eugene Goostman, which had the
persona of a 13-year-old boy, passed the Turing Test in that event. It secured the votes of 33% of the
judges, who were convinced that it was human. The vote is, not surprisingly, controversial, and not everybody
accepts Eugene Goostman's achievement.
Critics of the Turing Test
Critics of the Turing Test argue that a computer can be built that has the ability to think, but not to have
a mind of its own. They believe that the complexity of the human thought process cannot be coded.
Regardless of the differences in opinion, the Turing Test has arguably opened doors for more innovation
in the technology sphere.
In his 1950 paper, Turing discussed several objections to his proposed enterprise and his test for intelligence,
namely:
(1) The Theological Objection,
(2) The ‘Heads in the Sand’ Objection,
(3) The Mathematical Objection,
(4) The Argument from Consciousness,
(5) Arguments from Various Disabilities,
(6) Lady Lovelace’s Objection,
(7) The Argument from Continuity in the Nervous System,
(8) The Argument from Informality of Behavior, and
(9) The Argument from Extra-Sensory Perception.

Some of Turing's objections, such as the theological objection and the argument from consciousness, still carry
some weight today, since a computer's responses are programmed rather than arising from a mind of its own.

The other or new objections arising from developments since he wrote the paper are:

• The chimpanzee’s objection:


According to the first objection, the test is too conservative. Few would deny that chimpanzees can think,
yet no chimpanzee can pass the Turing test. If thinking animals can fail it, then presumably a thinking computer
can also fail the Turing test.

• The sense organs objection:


This test focuses on the computer’s ability to make verbal responses. It does not respond to objects that are
seen and touched the way a human does.

• Simulation objection:
Suppose a computer passes the Turing test. How can we say that it thinks? Success in the test means only that it
has simulated thinking.

• The black box objection:


A black box is a device whose inner workings are allowed to remain a mystery. The computer involved is treated as
a black box: the judgment of whether it thinks or not is based on outward behavior alone.

2.
The picture of problem solving that had arisen during the first decade of AI research was of a general-purpose
search mechanism trying to string together elementary reasoning steps to find complete solutions. Such
approaches have been called weak methods because, although general, they do not scale up to large or difficult
problem instances. The alternative to weak methods is to use more powerful, domain-specific knowledge that
allows larger reasoning steps and can more easily handle typically occurring cases in narrow areas of expertise.
One might say that to solve a hard problem, you have to almost know the answer already.
By 1970, most government funding for AI projects was cancelled, since AI was still a relatively new field,
academic in nature, with few practical applications apart from playing games. At that time, no AI system could
manage real-world problems. Many of the problems AI attempted to solve were too broad and too difficult.
Problems/difficulties:
a) Most early programs knew nothing of their subject matter.
b) Several problems that AI was attempting to solve were intractable (“combinatorial explosion”).
c) There were some fundamental limitations on the basic structure being used to generate intelligent
behavior.
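The intractability in (b) can be made concrete with a small sketch. Assuming a hypothetical problem in which each state offers a fixed number of possible moves, a blind general-purpose search must visit a number of nodes that grows exponentially with search depth, which is the combinatorial explosion in question:

```python
# Illustrative sketch (toy numbers, not from any specific AI program):
# a brute-force search over a tree with branching factor b explores
# roughly b**d nodes to reach depth d -- the "combinatorial explosion".

def states_explored(branching_factor, depth):
    """Total nodes visited by an exhaustive search of a uniform tree
    with the given branching factor, down to the given depth."""
    return sum(branching_factor ** d for d in range(depth + 1))

# Even modest branching blows up quickly with depth:
for depth in (5, 10, 20):
    print(depth, states_explored(10, depth))
```

This is why weak methods, although general, fail on large problem instances, and why restricting the domain (as expert systems later did) made progress possible.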
3.
EXPERT SYSTEMS
Expert Systems resulted from the failure of weak methods to solve broad and difficult problems through
general methods. Expert Systems somehow inverted what weak methods did by narrowing the problem to be
solved and making large reasoning steps.
Expert systems solve problems by reasoning through a body of knowledge (“expertise”). In expert
systems, researchers usually team up with an expert like in the case of DENDRAL project where Edward
Feigenbaum, Bruce Buchanan (a computer scientist) and Joshua Lederberg (a Nobel Prize winner in genetics)
formed a team.
The development of expert/knowledge systems began in the early 1970s, drawing on more powerful, domain-specific
knowledge. The domain for intelligent machines had to be sufficiently restricted, narrowed to small areas of
expertise, to deliver practical results.
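The reasoning style of such systems can be sketched in a few lines. The rules below are invented toy examples for illustration only; real systems such as MYCIN encoded hundreds of expert-supplied rules, typically with certainty factors attached:

```python
# Minimal sketch of a forward-chaining, rule-based expert system.
# Each rule is (set of required facts, conclusion). The toy medical
# rules here are hypothetical, not taken from any real system.

RULES = [
    ({"fever", "cough"}, "flu_suspected"),
    ({"flu_suspected", "fatigue"}, "rest_advised"),
]

def forward_chain(initial_facts):
    """Repeatedly fire any rule whose conditions are all satisfied,
    adding its conclusion, until no new facts can be derived."""
    facts = set(initial_facts)
    changed = True
    while changed:
        changed = False
        for conditions, conclusion in RULES:
            if conditions <= facts and conclusion not in facts:
                facts.add(conclusion)
                changed = True
    return facts

print(forward_chain({"fever", "cough", "fatigue"}))
```

The key contrast with weak methods is that each rule encodes a large, expert-sized reasoning step within a narrow domain, instead of stringing together elementary steps by blind search.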

4.
The common characteristics of early expert systems such as DENDRAL, MYCIN, and PROSPECTOR
are the following:

• They have high-performance levels
• They are easy to understand
• They are completely reliable
• They are highly responsive

5.
a) Supermarket bar code scanners
This is not AI because, while a scanner can read bar codes, it is not capable of machine learning.
It simply scans and displays the code. Therefore, it is not an instance of AI.

b) Web search engines


These are AI because they are capable of machine learning, such as being able to tailor search
results. They are also capable of natural language processing, able to understand what users are
asking for and to fix spelling errors. They constantly optimize searches for quick retrieval of information on
the internet.
c) Voice-activated telephone menus
These are not AI because they are incapable of adapting to new inputs, even though they are capable
of natural language processing. They can only interact in a particular way and do not understand
anything that falls outside their rule book.

d) Internet routing algorithms that respond dynamically to the state of the network.
These are AI because they are capable of adapting to new situations. They are an instance of AI
because they make decisions based on the state of the network.

6.
a. Playing a decent game of table tennis (Ping-pong).
Yes. A reasonable level of proficiency was achieved by Andersson’s robot (Andersson, 1988).

b. Driving in the center of Cairo, Egypt.


This is currently being worked on by various companies such as Waymo, Uber, and Tesla, which have
introduced autonomous vehicles. Autonomous vehicles use various hardware sensors to capture
information about the environment, which the software analyzes using AI technology to carry out
actions. Driving in the center of Cairo, Egypt using computers or AI could be possible, but it will
take time, since the process of creating a self-driving car and allowing it on public roads is not
easy and requires extensive testing.

c. Driving in IT Park, Cebu City.

If research on autonomous vehicles succeeds and they are someday approved for public roads, then it
could be possible, but it will take considerable time before that happens.

d. Buying a week’s worth of grocery at the market.

As of now, no robot can currently put together the tasks of moving in a crowded environment, using
vision to identify a wide variety of objects, and grasping the objects (including squishable vegetables)
without damaging them. The component pieces are nearly able to handle the individual tasks, but it
would take a major integration effort to put it all together.

e. Buying a week’s worth of grocery on the Web.


Yes. Software robots are capable of handling such tasks, particularly if the design of the web grocery
shopping site does not change radically over time.

f. Playing a decent game of bridge at a competitive level.


Yes, it is possible for AI software to play bridge at a competitive level if it is trained with a good number
of datasets. We have seen AI software beat the best players in other games such as Go and chess.
Programs such as GIB now play at a solid level.
g. Discovering and proving new mathematical theorems.
It is possible for AI to prove existing theorems, but discovering new theorems and proving them is not yet possible.

h. Writing an intentionally funny story.


No. It may be possible for AI to generate random stories based on a dataset it is trained on, but it
cannot intentionally generate a funny story because it does not know what would be considered funny.

i. Giving competent legal advice in a specialized area of law.


It is possible for AI to assist a legal advisor by generating a probable set of options that may be
helpful for the case, but it cannot give competent legal advice because AI cannot understand every aspect
of a case.

j. Translating spoken English into spoken Swedish in real time.


Yes, it is possible for AI to translate languages, and many companies such as Apple and Google have made
advances in this area.

k. Performing a complex surgical operation.


No, it cannot be implemented by AI, as performing a complex surgical operation involves many
critical factors.

For the currently infeasible tasks, identify what the difficulties are and propose solutions on how
these can be solved.

Performing any type of surgical operation remains a nearly impossible task because:

1.) AI can come to wrong conclusions when identifying the issue with the patient’s health.
2.) If there is any kind of misstep, AI may not be able to rectify it immediately.
3.) AI cannot be held liable if it errs in an operation or commits malpractice. This can be solved
only by not allowing AI to perform any type of surgical operation.
References:
Russell, S. J., and Norvig, P. (2010). Artificial Intelligence: A Modern Approach.
Frankenfield, J. (2022). Turing Test. Retrieved from
https://www.investopedia.com/terms/t/turingtest.asp#:~:text=The%20Turing%20Test%20judges%20the,to%20developers%20of%20artificial%20intelligence.
Turing, A. M. (1950). I.—Computing Machinery and Intelligence.
Negnevitsky, M. (2005). Artificial Intelligence.
Rosos, J. M. (2018). Creating Intelligence. Retrieved from
https://creatingintelligence.net/2018/08/31/weak-methods-and-expert-systems-prelude-toai/#:~:text=Expert%20Systems%20resulted%20from%20the,and%20making%20large%20reasoning%20steps.
