
In his article 'The Dark Secret at the Heart of AI', Will Knight (2017) argues that artificial intelligence cannot explain its own decision-making process. The author points out several areas where AI is already used: driving, banking, and medical diagnosis.

On the one hand, Nvidia's car project features an algorithm that teaches itself how to drive, using a decision-making process based on an artificial neural network. The author reports that there is no way to obtain an explanation of the system's behaviour or reactions. He insists that understanding deep learning is essential before deploying these techniques because, otherwise, failures and their possible consequences cannot be foreseen.

In addition, mathematical models are nowadays used for everyday tasks, such as banking operations or parts of hiring processes, and these techniques are understandable to their users. However, the author notes that several scientists, such as Tommi Jaakkola, argue against using machine-learning techniques in the military or in banking because they are 'black boxes'. Since the problem lies in our inability to understand how these methods behave, the author questions whether people should trust something they do not understand.

On the other hand, the author highlights the use of deep learning for medical diagnosis. He describes 'Deep Patient', a program created at Mount Sinai Hospital. Fed data about patients, test results and doctor visits, it found hidden patterns that allowed it to predict diseases. Although the program can foresee mental disorders such as schizophrenia, which are difficult to identify, Joel Dudley, the team's leader at Mount Sinai, admits that they do not know how it works.

The author explains that these methods developed along two paths. On the one hand, one school avoided the 'black box' issue by building machines based on logic, whose behaviour anyone could understand simply by inspecting the code.

On the other hand, another school argued that artificial intelligence should evolve by itself, basing its knowledge on experience and taking inspiration from biology. Such a machine essentially programs itself, and this approach has benefited society, for example by digitizing handwritten characters or improving machine translation. In contrast with the first school, however, these techniques are opaque, even to computer scientists. The technology is based on interconnected layers composed of thousands of simulated neurons, with each layer recognizing a different level of abstraction.
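The layered structure described here can be sketched as a minimal feed-forward network. This is only an illustrative toy, not any of the systems the article discusses; the layer sizes, random weights, and ReLU activation are my own arbitrary assumptions:

```python
import numpy as np

def relu(x):
    # Simple non-linearity applied between layers
    return np.maximum(0.0, x)

rng = np.random.default_rng(0)

# Three stacked layers of simulated "neurons": each layer is just a
# weight matrix. Deeper layers recombine the previous layer's outputs,
# which is what lets each layer capture a higher level of abstraction.
layer_sizes = [8, 16, 16, 4]  # input -> hidden -> hidden -> output
weights = [rng.normal(size=(m, n))
           for m, n in zip(layer_sizes, layer_sizes[1:])]

def forward(x):
    for w in weights[:-1]:
        x = relu(x @ w)       # hidden layers: linear step + non-linearity
    return x @ weights[-1]    # final layer: raw output scores

x = rng.normal(size=8)        # a toy input vector
print(forward(x).shape)       # -> (4,): one score per output
```

Even in this tiny example the 'black box' problem is visible: the output is a long chain of weighted sums, and no individual weight has an obvious human-readable meaning.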

The author argues that, aware of the 'black box' problem, researchers have applied several strategies to explain how deep learning works: for example, Google's Deep Dream project and Jeff Clune's work, which use images to reveal what the networks respond to, or Regina Barzilay's project, which focuses on mammogram images to identify breast cancer.
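The image-based strategies mentioned here roughly amount to asking which inputs a model's output is most sensitive to. A minimal sketch of that general idea, using a made-up linear model and finite-difference sensitivity (my own illustration, not the method of any of the cited projects):

```python
import numpy as np

rng = np.random.default_rng(1)
W = rng.normal(size=(6,))  # a toy "model": one linear score over 6 inputs

def score(x):
    return float(x @ W)

def sensitivity(x, eps=1e-4):
    # Perturb each input slightly and measure how much the score moves;
    # large values mark the inputs the model relies on most.
    base = score(x)
    return np.array([(score(x + eps * np.eye(6)[i]) - base) / eps
                     for i in range(6)])

x = rng.normal(size=6)
s = sensitivity(x)
# For a linear model the sensitivity simply recovers the weights.
print(np.allclose(s, W, atol=1e-3))  # -> True
```

Applied to an image classifier, the same idea produces a heat map over pixels, which is one way such projects try to make an opaque model's attention visible.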

In addition, the U.S. military, through DARPA (the Defense Advanced Research Projects Agency), has also invested money in this type of program. The author explains that the technology is used to identify patterns in large amounts of data, and that soldiers and analysts alike do not feel comfortable using technology whose reasoning they cannot understand.

The author insists that knowing the reasoning behind a machine's actions, and understanding its behaviour, is a vital need if it is going to be used in our daily lives. However, he qualifies this idea, maintaining that some behaviour is impossible to explain, not only in machines but also in human beings, because it is simply instinctual or subconscious. Therefore, some scientists insist on being cautious with artificial intelligence, grounding its decision-making in our consistent ethical judgments and recommending that we not rely on these systems if they cannot give a good explanation.

In my opinion, we should work on improving AI so that we can understand the decisions taken by these systems. At the same time, we should not stop using the benefits provided by machine learning, deep learning and artificial neural networks, since by understanding how they work we will recognize patterns that we cannot see by ourselves. These solutions have proved effective, so if we understand the way these systems think, we will be able to get all the benefits they provide.
