

Published in DataDrivenInvestor

Samiksha Rastogi

Oct 22, 2018 · 5 min read


Facebook’s Artificial Intelligent Robots shut down after they started talking to each other

By Samiksha Rastogi

In 2017, Facebook had to stop one of its experiments when two artificially intelligent bots started talking to each other in a language that only they understood. The bots had created the language themselves to simplify negotiation, but it was incomprehensible to humans. They had been set up to negotiate an experimental trade of ordinary objects such as hats and balls, and they were instructed to improve during the process, which made them better negotiators than they were when the experiment started.

The bots had a learning algorithm that let them adapt to anything new that happened during the experiment. For instance, imagine an MMA fight between a human and a robot with a fixed set of moves. That robot is a machine that can only follow the instructions coded into it, so it has little or no chance of beating a trained fighter. But if the robot is a learning robot, it can study its opponent and copy the opponent’s moves. Seen this way, what happened at Facebook is not surprising: learning agents are designed to find or invent ways to optimise their task, which is exactly what these bots did. Moreover, even after switching to their “alien” language, the bots were able to complete a successful negotiation.

The bots were named Bob and Alice. Here is an excerpt showing how they actually interacted and what is so “alien” about it:

Bob: i can i i everything

Alice: balls have a ball to me to me to me

The statements are beyond human understanding, yet they use only words from the English language, which suggests that the bots created a shorthand, just as humans do. The chatbots also negotiated in a surprisingly human way: for instance, they would pretend to be interested in one object and then give it up later, creating the impression that they were making a sacrifice.

A similar incident happened in 2016 with Microsoft’s chatbot Tay, which was exposed to Twitter and the social web. It was a machine-learning project designed for human engagement. Tay began posting racist comments on Twitter, and Microsoft ultimately had to shut it down, stating: “As it learns, some of its responses are inappropriate. We are making some adjustments.” What likely happened is that, once exposed to social media, Tay repeated statements made by other users in order to engage them in conversation, and because the company had not implemented automated filters on specific terms, the bot ended up using racist labels and other common expletives.
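Microsoft never published Tay’s internals, so the exact mechanics are a guess, but the failure mode described above (a bot that stores and replays user messages with no term filter) can be sketched in a few lines of Python. The `EchoBot` class and its messages are invented for illustration:

```python
import random

class EchoBot:
    """Toy chatbot that 'learns' by storing user messages verbatim
    and replaying them later, with no content filter at all."""

    def __init__(self):
        self.memory = []

    def observe(self, message):
        # Every user message is stored unconditionally:
        # nothing screens out slurs or expletives.
        self.memory.append(message)

    def reply(self):
        # The bot's output is just a replay of something users said.
        return random.choice(self.memory) if self.memory else "Hi!"

bot = EchoBot()
bot.observe("hello there")
bot.observe("some offensive slogan")  # nothing stops this being stored
print(bot.reply())                    # may replay the offensive line verbatim
```

With enough hostile users feeding it hostile text, a bot like this will inevitably echo that text back, which is consistent with what was observed with Tay.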

Readers may also recall related incidents at Google and Nikon: Google’s photo software identified African people as gorillas, and Nikon’s face detection, when used on Asian subjects, showed the message “are they blinking?” All of this shows that machine learning is a field that still has many problems to solve.

What is machine learning, and what problems can arise in it?

Machine learning is a branch of artificial intelligence that lets a machine learn from data without being explicitly programmed: an algorithm makes a program increasingly accurate at predicting outcomes without requiring further programming. Some typical machine-learning tasks are:

Visual Object Detection: Given natural photographs (images from the web) and a target object class such as “person” or “car”, we want to build a system that identifies objects of that type in the photographs and gives their approximate locations. We consider the case where the training data is given as pictures annotated with bounding boxes for objects of the desired class.

Open-Domain Continuous Speech Recognition: Given a sound wave of human speech, recover the sequence of words that were uttered. The training data consists of sound waves paired with transcriptions, such as closed-caption television, together with a large corpus of text not associated with any sound waves.

Natural Language Translation: Given a sentence in one language, translate it into another language. The training data consists of a set of translation pairs, where each pair consists of a sentence and its translation.

The Netflix Challenge: Given an individual’s previous movie ratings, predict the rating they would give to a movie they have not yet rated. The training data consists of a large set of ratings, where each rating is a person identifier, a movie identifier, and a rating.
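To make the last task concrete, here is a minimal sketch of a rating predictor over (person, movie, rating) triples. It is not the method used in the actual Netflix Prize; it is a hypothetical baseline that combines a global mean with per-user and per-movie offsets, and the ratings data is made up:

```python
from collections import defaultdict

# Training data: (person identifier, movie identifier, rating) triples.
ratings = [
    ("alice", "matrix", 5), ("alice", "titanic", 2),
    ("bob",   "matrix", 4), ("bob",   "titanic", 1),
    ("carol", "titanic", 3),
]

global_mean = sum(r for _, _, r in ratings) / len(ratings)

def mean_offset(key_index):
    # Average deviation from the global mean, keyed by user (0) or movie (1).
    sums, counts = defaultdict(float), defaultdict(int)
    for triple in ratings:
        key, r = triple[key_index], triple[2]
        sums[key] += r - global_mean
        counts[key] += 1
    return {k: sums[k] / counts[k] for k in sums}

user_off, movie_off = mean_offset(0), mean_offset(1)

def predict(user, movie):
    # Unseen users or movies fall back to a zero offset.
    return global_mean + user_off.get(user, 0.0) + movie_off.get(movie, 0.0)

print(predict("carol", "matrix"))  # carol has not rated matrix yet -> 4.5
```

Real recommenders go much further (for example, matrix factorisation), but this captures the shape of the task: learn from past ratings, predict an unseen (person, movie) pair.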

Any number of problems can occur along the way. Some are listed below:

Societal bias: An artificially intelligent system reflects the biases of its creators. Prejudice against people or groups with distinct traits is a stubborn problem that has troubled humans since the dawn of civilisation, and software can inherit it.

Sparse text data: Machines can work on and understand small, meaningful text data, but when sparse, scattered data is given to a machine, the results are not as accurate as they can be with small meaningful data. For example, language modelling of a short text such as a tweet is easier than language modelling of a full document.

Difficulty interpreting the semantics and syntax of a language: It is difficult for a bot to tell apart the syntax and semantics of a language, because the rules and conventions of every language are different, and it becomes hard for the bot to decide which rules to follow when translating. Moreover, bots cannot interpret sarcasm, because the intended meaning differs so much from the literal words. For example, take the quote: “Have you ever listened to someone for a while and wondered.. ‘Who ties your shoelaces for you?’” Here the sarcasm might be lost on a human too, so we cannot expect a machine to recognise it as sarcasm; instead, the machine would simply answer “yes” or “no” to the question in the quote.

Going out of context: Bots keep learning from previous messages and texts, but in some cases they drift away from the real context. For instance, when we type a sentence into Google Translate, it may or may not produce an exact conversion of the given sentence, because of the difficulty of learning the full set of rules for every language.
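The sparse-data problem above can be illustrated with a toy maximum-likelihood bigram language model. The corpus and test words here are made up, and real systems add smoothing precisely to avoid the zero probabilities this sketch produces:

```python
from collections import Counter

# A tiny training corpus: almost every possible word pair is unseen.
corpus = "the cat sat on the mat . the dog sat on the rug .".split()
bigrams = Counter(zip(corpus, corpus[1:]))
unigrams = Counter(corpus[:-1])

def bigram_prob(w1, w2):
    # Maximum-likelihood estimate: count(w1, w2) / count(w1).
    # Any pair never seen in training gets probability zero.
    return bigrams[(w1, w2)] / unigrams[w1] if unigrams[w1] else 0.0

print(bigram_prob("the", "cat"))   # seen pair -> 0.25
print(bigram_prob("the", "bird"))  # unseen pair -> 0.0
```

A sentence containing even one unseen pair gets probability zero under this model, which is why sparse training data makes accurate language modelling so hard.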

Tags: Machine Learning, Artificial Intelligence, Robots, Computer Science, Programming
