

The Real Bias Built In at Facebook


By Zeynep Tufekci

May 19, 2016

FACEBOOK is biased. That’s true. But not in the way conservative critics say it is.

The social network’s powerful newsfeed is programmed to be viral, clicky, upbeat or quarrelsome. That’s how its algorithm works, and how it determines what more than a billion people see every day.

The root of this bias is in algorithms, a much misunderstood but increasingly powerful
method of decision making that is spreading to fields from news to health care to hiring and
even to war.

Algorithms in human affairs are generally complex computer programs that crunch data
and perform computations to optimize outcomes chosen by programmers. Such an
algorithm isn’t some pure sifting mechanism, spitting out objective answers in response to
scientific calculations. Nor is it a mere reflection of the desires of the programmers.

We use these algorithms to explore questions that have no right answer to begin with, so we
don’t even have a straightforward way to calibrate or correct them.

The current discussion of bias and Facebook started this month, after some former
Facebook contractors claimed that the “trending topics” section on Facebook highlighted
stories that were vetted by a small team of editors who had a prejudice against right-wing
news sources.

This suggestion set off a flurry of reactions, and even a letter from the chairman of the
Senate Commerce Committee. However, the trending topics box is a trivial part of the site,
and almost invisible on mobile, where most people use Facebook. And it is not the newsfeed,
which is controlled by an algorithm.

To defend itself against the charges of bias stemming from the “trending topics” revelation,
Facebook said that the process was neutral, that the stories were first “surfaced by an
algorithm.” Mark Zuckerberg, the chief executive, then invited the radio host Glenn Beck
and other conservatives to meet with him on Wednesday.

But “surfaced by an algorithm” is not a defense of neutrality, because algorithms aren’t neutral.

Algorithms are often presented as an extension of natural sciences like physics or biology.
While these algorithms also use data, math and computation, they are a fountain of bias and
slants — of a new kind.

If a bridge sways and falls, we can diagnose that as a failure, fault the engineering, and try
to do better next time. If Google shows you these 11 results instead of those 11, or if a hiring
algorithm puts this person’s résumé at the top of a file and not that one, who is to definitively
say what is correct, and what is wrong? Without laws of nature to anchor them, algorithms
used in such subjective decision making can never be truly neutral, objective or scientific.

Programmers do not, and often cannot, predict what their complex programs will do.
Google’s Internet services are billions of lines of code. Once these algorithms with an
enormous number of moving parts are set loose, they then interact with the world, and learn
and react. The consequences aren’t easily predictable.

Our computational methods are also getting more enigmatic. Machine learning is a rapidly
spreading technique that allows computers to independently learn to learn — almost as we
do as humans — by churning through the copious disorganized data, including data we
generate in digital environments.

However, while we now know how to make machines learn, we don’t really know what exact
knowledge they have gained. If we did, we wouldn’t need them to learn things themselves:
We’d just program the method directly.
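To make that opacity concrete, here is a minimal sketch in Python. Everything in it is invented for illustration: the toy data, the signals, the labels. It trains a small logistic-regression model by gradient descent, and the point is what comes out the other end: the machine’s entire “knowledge” is a handful of numbers that nobody typed in and that resist being read back as an explanation.

```python
import numpy as np

# Invented toy data: each row describes a post by three per-view rates
# (likes, comments, shares per impression); the label says whether it "went viral".
X = np.array([[0.12, 0.03, 0.01],
              [0.01, 0.00, 0.00],
              [0.20, 0.08, 0.04],
              [0.02, 0.00, 0.00]])
y = np.array([1.0, 0.0, 1.0, 0.0])

# Logistic regression trained by plain gradient descent.
w = np.zeros(X.shape[1])
b = 0.0
learning_rate = 0.5
for _ in range(10_000):
    p = 1.0 / (1.0 + np.exp(-(X @ w + b)))   # predicted probability of "viral"
    w -= learning_rate * X.T @ (p - y) / len(y)
    b -= learning_rate * np.mean(p - y)

# The "knowledge" the machine has gained is just these numbers.
# Nobody wrote them down, and reading an explanation back out of them is hard.
print("weights:", w, "bias:", b)
```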

With algorithms, we don’t have an engineering breakthrough that’s making life more
precise, but billions of semi-savant mini-Frankensteins, often with narrow but deep
expertise that we no longer understand, spitting out answers here and there to questions we
can’t judge just by numbers, all under the cloak of objectivity and science.

If these algorithms are not scientifically computing answers to questions with objective
right answers, what are they doing? Mostly, they “optimize” output to parameters the
company chooses, crucially, under conditions also shaped by the company. On Facebook the goal is to maximize the amount of engagement you have with the site and keep the site ad-friendly. You can easily click on “like,” for example, but there is not yet a “this was a challenging but important story” button.
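As an illustration only (Facebook’s real ranking system is proprietary and vastly more complex; the signals and weights below are invented), an engagement-optimizing feed can be reduced to something like the following sketch. The company decides what counts as engagement and how much each signal is worth, and the ranking follows from those choices. Nothing here is hidden or dishonest; the bias lives in which signals exist and how they are weighted.

```python
from dataclasses import dataclass

@dataclass
class Post:
    headline: str
    predicted_likes: float      # signals an engagement model might estimate
    predicted_comments: float
    predicted_shares: float

# The "parameters the company chooses": how much each kind of engagement is worth.
# Note what is missing: there is no weight for "challenging but important,"
# because there is no button for it.
WEIGHTS = {"likes": 1.0, "comments": 2.0, "shares": 3.0}

def engagement_score(post: Post) -> float:
    return (WEIGHTS["likes"] * post.predicted_likes
            + WEIGHTS["comments"] * post.predicted_comments
            + WEIGHTS["shares"] * post.predicted_shares)

def rank_feed(posts: list[Post]) -> list[Post]:
    # "Optimizing" the feed amounts to sorting by the company's chosen objective.
    return sorted(posts, key=engagement_score, reverse=True)
```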

This setup, rather than the hidden personal beliefs of programmers, is where the thorny
biases creep into algorithms, and that’s why it’s perfectly plausible for Facebook’s work
force to be liberal, and yet for the site to be a powerful conduit for conservative ideas as well
as conspiracy theories and hoaxes — along with upbeat stories and weighty debates.
Indeed, on Facebook, Donald J. Trump fares better than any other candidate, and anti-
vaccination theories like those peddled by Mr. Beck easily go viral.

The newsfeed algorithm also values comments and sharing. All this suits content designed to generate either a sense of oversize delight or righteous outrage and go viral: hoaxes and conspiracies as well as baby pictures, happy announcements (that can be liked) and important news and discussions. Facebook’s own research shows that the choices its
algorithm makes can influence people’s mood and even affect elections by shaping turnout.

For example, in August 2014, my analysis found that Facebook’s newsfeed algorithm largely
buried news of protests over the killing of Michael Brown by a police officer in Ferguson,
Mo., probably because the story was certainly not “like”-able and even hard to comment on.
Without likes or comments, the algorithm showed Ferguson posts to fewer people,
generating even fewer likes in a spiral of algorithmic silence. The story seemed to break
through only after many people expressed outrage on the algorithmically unfiltered Twitter
platform, finally forcing the news to national prominence.
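That “spiral of algorithmic silence” is a feedback loop, and its shape is easy to see in a toy simulation (all of the numbers below are invented; this is not a model of Facebook’s actual system). A story that earns few likes per view is shown to fewer people in the next round, which produces even fewer likes in absolute terms, which shrinks its reach again.

```python
def simulate_reach(initial_audience: float, like_rate: float, rounds: int = 5) -> list[float]:
    """Toy feedback loop: each round's audience is proportional to the likes earned last round."""
    boost_per_like = 30.0            # invented figure: impressions each like "buys"
    audience = initial_audience
    history = [audience]
    for _ in range(rounds):
        likes = audience * like_rate
        audience = likes * boost_per_like
        history.append(audience)
    return history

# A feel-good post (5% like rate) versus a hard-to-"like" news story (0.5%).
print(simulate_reach(1000, like_rate=0.05))    # reach compounds upward
print(simulate_reach(1000, like_rate=0.005))   # reach withers round after round
```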

Software giants would like us to believe their algorithms are objective and neutral, so they
can avoid responsibility for their enormous power as gatekeepers while maintaining as large
an audience as possible. Of course, traditional media organizations face similar pressures to
grow audiences and host ads. At least, though, consumers know that the news media is not
produced in some “neutral” way or above criticism, and a whole network — from media
watchdogs to public editors — tries to hold those institutions accountable.

The first step forward is for Facebook, and anyone who uses algorithms in subjective
decision making, to drop the pretense that they are neutral. Even Google, whose powerful
ranking algorithm can decide the fate of companies, or politicians, by changing search
results, defines its search algorithms as “computer programs that look for clues to give you
back exactly what you want.”

But this is not just about what we want. What we are shown is shaped by these algorithms,
which are shaped by what the companies want from us, and there is nothing neutral about
that.
Zeynep Tufekci is an assistant professor at the School of Information and Library Science at the University of North Carolina
and a contributing opinion writer.


A version of this article appears in print on May 18, 2016, Section A, Page 27 of the New York edition with the headline: The Real Bias Built In at
Facebook


