January 20, 2017

The Future Economy

Tom Simonite's Technology Review article, "AI Software Learns to Make AI Software," reveals high-level information about the current artificial intelligence sector that leaves the reader either excited or terrified, depending on their perspective [1]. At the center of the field is a technique known as machine learning, in which programs are developed with the intent that they emulate abstract thinking in their decision making [3].

To give an example of this, AI developers have created programs designed to outsmart humans at complex board games as a benchmark for machine intelligence. One might think that the best way to go about this is to make the computer calculate every possible move and pick the best one every time. That way of thinking might work well in a game like tic-tac-toe, where the number of possible moves is relatively small. In a game like chess, however, it is not an adequate method: Claude Shannon famously estimated that there are more possible games of chess than there are atoms in the observable universe [2]. That breathtaking fact led programmers to a new way of thinking about artificial intelligence. Instead of creating programs that attempt to search through these unrealistically large numbers of possibilities, programmers decided to create algorithms (plans or sets of instructions) that allow computers to pick less-than-perfect moves and essentially record and learn from their outcomes.
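The idea of picking imperfect moves and recording what happens can be sketched in a few lines of Python. This is a minimal, hypothetical illustration of "learn from the outcomes" (the hidden move qualities, episode count, and exploration rate are all invented for the example), not code from Simonite's article:

```python
import random

def learn_move_values(win_prob, episodes=5000, epsilon=0.1, seed=0):
    """Epsilon-greedy learner: usually plays the move that has worked best
    so far, but sometimes tries a random (possibly imperfect) move and
    records the outcome, so its estimates improve over time."""
    rng = random.Random(seed)
    wins = [0] * len(win_prob)    # recorded wins per candidate move
    plays = [0] * len(win_prob)   # recorded attempts per candidate move
    for _ in range(episodes):
        if rng.random() < epsilon or not any(plays):
            move = rng.randrange(len(win_prob))              # explore
        else:                                                # exploit
            move = max(range(len(win_prob)),
                       key=lambda m: wins[m] / plays[m] if plays[m] else 0.0)
        # Play the move out; the learner never sees win_prob directly.
        outcome = 1 if rng.random() < win_prob[move] else 0
        plays[move] += 1
        wins[move] += outcome                                # learn from it
    return max(range(len(win_prob)),
               key=lambda m: wins[m] / plays[m] if plays[m] else 0.0)

# Three candidate moves with hidden win rates; the learner discovers
# the strongest one purely from recorded outcomes.
best = learn_move_values([0.2, 0.8, 0.5])
```

After a few thousand trial games the recorded win rates single out the middle move, even though no move was ever evaluated exhaustively.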

That explanation should help one understand, in layman's terms, that machine learning is exactly what it sounds like. What was groundbreaking about the research described in Simonite's article, though, was that various prestigious tech firms and academic institutions (Google, MIT, and Berkeley, to name a few) had performed experiments using machine learning to teach programs how to use machine learning [1]. Basically, the programs themselves are learning how to make programs that learn. The article goes on to explain that in spite of positive results, this technology still isn't practical due to the massive amount of computational power required for it to run efficiently, but the concept and its irony are what I found most interesting.
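The "programs that make programs that learn" idea can likewise be sketched as a toy two-level loop: an outer program proposes configurations for an inner learner, trains each candidate, and keeps whichever learned best. Every name and number below is invented for illustration; the systems in the article search over neural-network designs at vastly greater computational cost:

```python
import random

def train_predictor(lr, steps, data):
    """The 'inner' learner: fits w in y = w * x by gradient descent."""
    w = 0.0
    for _ in range(steps):
        for x, y in data:
            w -= lr * 2 * (w * x - y) * x   # gradient of squared error
    return w

def search_for_learner(trials=20, seed=0):
    """The 'outer' program: proposes learner configurations, trains each
    one, and keeps the configuration that produced the best learner."""
    rng = random.Random(seed)
    data = [(x, 3.0 * x) for x in [0.1, 0.5, 1.0, 2.0]]  # target: w = 3
    best_cfg, best_err = None, float("inf")
    for _ in range(trials):
        cfg = {"lr": 10 ** rng.uniform(-4, -1),   # random learning rate
               "steps": rng.randrange(1, 50)}     # random training length
        w = train_predictor(cfg["lr"], cfg["steps"], data)
        err = sum((w * x - y) ** 2 for x, y in data)
        if err < best_err:
            best_cfg, best_err = cfg, err
    return best_cfg, best_err

cfg, err = search_for_learner()
```

The outer loop never fits the data itself; it only judges how well each proposed learner learned, which is the irony the essay describes in miniature.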

Simonite's comments at the beginning show that he sees the same irony. He starts by saying, "Progress in artificial intelligence causes some people to worry that software will take jobs such as driving trucks away from humans. Now leading researchers are finding that they can make software that can learn to do one of the trickiest parts of their own jobs--the task of designing machine-learning software" [1]. People tend to think that the primary threat of advancing AI is to the working class. The possibility that people who take orders and move boxes might be replaced by future technologies appears more real every day. What is mind-blowing is that this threat is not limited to the working class but extends to every job in the workforce. If machines can learn how to program at what is essentially the PhD level, what jobs could they not learn to do in the future? The biggest shock may not be the development itself, that machines can write excellent code, but that they are doing it and it is only 2017. The not-so-distant future seems to be arriving faster than the imagination can predict, so the next question becomes: how should people adapt to this new world where we are no longer indispensable to the functioning of the economy?


1. Simonite, Tom. "Google's AI Software Is Learning to Make AI Software." MIT Technology Review, 19 Jan. 2017. Web. 20 Jan. 2017.
2. "How Many Unique Games of Chess Can Be Played? - The Shannon Number." 12 Nov. 2009. Web. 20 Jan. 2017.
3. Rouse, Margaret. "What Is Machine Learning? - Definition from," 23 Oct. 2016. Web. 20 Jan. 2017.