Published in DataDrivenInvestor
In 2017, Facebook had to stop one of its experiments when two artificially intelligent
bots started talking to each other in a language that only they understood. The bots
had created the language themselves to simplify the exchange, but it was
incomprehensible to humans. The bots had been set up to negotiate an experimental
trade of ordinary objects such as hats, balls and so on, and they were instructed to
improve during the process, which made them better negotiators than they were
before the experiment started.
The bots ran a learning algorithm that let them adapt to anything unexpected that
happened during the experiment. For instance, imagine an MMA fight between a
human and a robot with a fixed set of moves. That robot is a machine that only
follows the instructions coded into it, so it has little or no chance of winning against
a trained fighter. A learning robot, however, can observe its opponent and learn to
copy the opponent's moves. Seen this way, what happened at Facebook is not
surprising: learning agents are designed to find or generate ways to optimize the
task they are given, which explains exactly what happened. Moreover, even after
switching to their alien language, the bots were able to complete a successful
negotiation.
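The contrast between a fixed-instruction bot and a learning bot can be sketched with a toy example (hypothetical code, not Facebook's actual system): a "fixed" bot always plays a hard-coded action, while a simple Q-learning bot estimates the payoff of each action from experience and drifts toward whatever optimizes its reward.

```python
import random

random.seed(0)

# Toy illustration: hidden payoff probabilities for three trade objects.
# These numbers are invented for the example.
REWARDS = {"hat": 0.2, "ball": 0.5, "book": 0.8}

def pull(action):
    # Stochastic reward: 1 with the action's hidden probability, else 0.
    return 1.0 if random.random() < REWARDS[action] else 0.0

def fixed_bot(steps=1000):
    # Always follows its coded instruction: play "hat", no matter what.
    return sum(pull("hat") for _ in range(steps)) / steps

def learning_bot(steps=1000, eps=0.1, lr=0.1):
    q = {a: 0.0 for a in REWARDS}   # running value estimate per action
    total = 0.0
    for _ in range(steps):
        if random.random() < eps:            # occasionally explore
            a = random.choice(list(q))
        else:                                # otherwise exploit best guess
            a = max(q, key=q.get)
        r = pull(a)
        q[a] += lr * (r - q[a])              # incremental value update
        total += r
    return total / steps

print(fixed_bot(), learning_bot())
```

The learning bot discovers on its own which action pays best, just as the article describes learning agents generating their own ways to optimize a task.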
https://medium.datadriveninvestor.com/facebooks-artificial-intelligent-robots-shut-down-after-they-started-talking-to-each-other-f5f6966e1931 1/5
18/1/23, 10:55 Facebook’s Artificial Intelligent Robots shut down after they started talking to each other | by Samiksha Rastogi | DataDrivenInvestor
The bots were named Bob and Alice. Below are a few instances to understand how
they actually interacted and what is so "alien" about it: the statements are beyond
human understanding even though they use only English words, which suggests that
the bots created a shorthand, much as humans do. The chatbots also negotiated in a
way that was close to a human approach; for instance, they would pretend to show
interest in one object and then give it up later, creating the impression that they
were making a sacrifice.
A similar incident happened in 2016 with Microsoft's chatbot Tay, which was exposed
to Twitter and the social web. It was a machine-learning project designed for human
engagement. Tay started to post racist comments on Twitter, and Microsoft
ultimately had to shut it down, stating: "As it learns, some of its responses are
inappropriate. We are making some adjustments." What likely happened is that, once
exposed to social media, Tay repeated statements made by other users in order to
keep them engaged in conversation; because the company had not implemented any
automated filters on specific terms, the bot ended up using racist labels and other
common expletives.
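The kind of "automated filter on specific terms" described above can be sketched very naively (the blocklist terms here are placeholders, and real moderation systems are far more sophisticated than a word list):

```python
# Naive illustration of a term filter: refuse to echo any reply that
# contains a blocklisted term, after basic punctuation/case cleanup.
BLOCKLIST = {"slur1", "slur2"}  # placeholder terms, not a real list

def is_safe(reply):
    # Normalize each word: strip surrounding punctuation, lowercase.
    words = {w.strip(".,!?").lower() for w in reply.split()}
    return not (words & BLOCKLIST)

print(is_safe("hello there"))      # allowed
print(is_safe("you are a slur1"))  # blocked
```

Even this trivial check would have stopped a bot from parroting a flagged term verbatim, which is the failure mode the Tay incident illustrates.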
Readers may also find related articles on incidents at Google and Nikon: Google's
image-recognition software labelled photos of Black people as gorillas, and Nikon's
face-detection feature, when used on Asian subjects, displayed the message "are
they blinking?". All of this shows that machine learning is a field with many gaps
still to cover.
What is machine learning, and what are the possible problems that arise in it?
Broadly, machine learning is about building systems that learn from examples rather
than following only hand-coded rules. A classic example is object detection: given a
class of objects, build a system that will identify objects of that type in
photographs and give their approximate locations, where the training data is given
as pictures annotated with bounding boxes for the objects of the desired class.
A number of problems can occur. Some are listed below:
Societal bias: Artificially intelligent software reflects the biases of its creators.
Bias against people or groups with distinct traits is a stubborn problem that has
troubled humans since the dawn of civilization, and models trained on
human-generated data inherit it.
Sparse text data: Machines can work with and understand small, well-formed text,
but when scattered, sparse data is given to a machine, the results are not as
accurate as they are with small, meaningful data. For example, language modelling
of a short text such as a tweet is easier than language modelling of a whole
document.
Sarcasm: Machines struggle with sarcasm. Consider a sarcastic question such as
"Who ties your shoelaces for you?". The sarcasm here can be hard even for a human
brain to catch, so we cannot expect a machine to recognize it as sarcasm; instead,
it would simply answer "yes" or "no" to the question as literally asked.
Going out of context: Bots keep learning from previous messages or texts, but in
some cases they may drift away from the real context. For instance, when we type
a sentence into Google Translate, it may or may not produce an exact conversion of
the given sentence, the reason being the difficulty of capturing the full set of rules
for every language.
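To make "language modelling" concrete, here is a minimal bigram model (an illustrative toy, not what production chatbots use): it estimates the probability of each word given the previous word by counting adjacent word pairs in a training corpus. With very little data, most pair counts are zero, which is exactly the sparsity problem described above.

```python
from collections import Counter, defaultdict

# Minimal bigram language model: P(word | previous word), estimated by
# counting adjacent word pairs in a tiny, made-up training corpus.

def train_bigram(corpus):
    pair_counts = defaultdict(Counter)
    for sentence in corpus:
        tokens = ["<s>"] + sentence.lower().split() + ["</s>"]
        for prev, cur in zip(tokens, tokens[1:]):
            pair_counts[prev][cur] += 1
    return pair_counts

def prob(model, prev, cur):
    counts = model[prev]
    total = sum(counts.values())
    return counts[cur] / total if total else 0.0

corpus = ["i want the ball", "i want the hat", "you want the ball"]
model = train_bigram(corpus)

print(prob(model, "want", "the"))  # "the" always follows "want" here
print(prob(model, "the", "ball"))  # "ball" follows "the" in 2 of 3 cases
```

Any pair the model has never seen gets probability zero, which is why a model trained only on tweet-sized snippets assigns no probability at all to most of a longer document.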