9-720-400
REV: JANUARY 21, 2020

DAVID YOFFIE

DANIEL FISHER

Fixing Facebook: Fake News, Privacy, and Platform Governance
It’s not enough to just build tools. We need to make sure they are used for good.

— Mark Zuckerberg, testifying before the U.S. Senate on April 10, 2018

During Facebook Inc.’s results conference call for its second quarter of 2019, Mark Zuckerberg—the
company’s CEO, chairman, and majority shareholder—announced that the number of people using at
least one of Facebook’s services each day had risen to 2.1 billion, more than a quarter of the global
population. 1 Zuckerberg also confirmed that Facebook had agreed to pay a record-setting $5 billion
fine to the Federal Trade Commission (FTC) for violating consumers’ privacy. The FTC settlement also
required Facebook to become more accountable for how it handled users’ data, which Zuckerberg said
would help Facebook set “a new standard” for transparency in its industry. He also welcomed further
regulation on election safety, harmful content, and data portability. “I don’t believe it’s sustainable for
private companies to be making so many decisions on social issues,” he said. 2

Zuckerberg’s calls for regulation stood in stark contrast to his optimism in Facebook’s early days.
He long espoused the belief that connecting people was a way to foster social harmony. Yet a growing
number of critics accused Facebook of doing the exact opposite by taking advantage of users’ privacy
and allowing bad actors to use its platform to cause real-world harm. Facebook tried to mitigate these
problems: it hired more content moderators and changed its privacy policy to give users greater clarity
and control. But numerous critics remained skeptical.

In March 2019, Zuckerberg announced that the Facebook platform would be shifting from a “public
square” model, which focused on sharing public posts across the social graph, to a “digital living room”
model, which would focus on smaller, more intimate conversations. However, many questions
remained. First, what changes could Facebook make to ease its users’ privacy concerns and keep bad
actors out of the public square? Second, should Facebook govern more (be more directly engaged in
censorship and curation), govern less, or do something in between? And finally, how could Zuckerberg
monetize the digital living room? Over the past three years, Facebook’s public square model had
earned over $120 billion in revenue and a net income of $47 billion, mostly from advertising. On an
encrypted platform focused on private communications, could an advertising model work? Or were
financial success and good governance mutually exclusive?

Professor David Yoffie and Research Associate Daniel Fisher prepared this case. This case was developed from published sources. Funding for the
development of this case was provided by Harvard Business School and not by the company. HBS cases are developed solely as the basis for class
discussion. Cases are not intended to serve as endorsements, sources of primary data, or illustrations of effective or ineffective management.

Copyright © 2019, 2020 President and Fellows of Harvard College. To order copies or request permission to reproduce materials, call
1-800-545-7685, write Harvard Business School Publishing, Boston, MA 02163, or go to www.hbsp.harvard.edu. This publication may
not be digitized, photocopied, or otherwise reproduced, posted, or transmitted, without the permission of Harvard Business School.


Facebook—A Brief Overview


Mark Zuckerberg was a sophomore at Harvard when he launched thefacebook.com (“Facebook”
hereafter) on February 4, 2004. 3 Zuckerberg said that the idea was to make an online version of Harvard’s
class directory, or “face book.” “I think it’s kind of silly that it would take the University a couple of
years to get around to it,” Zuckerberg told the Harvard Crimson. “I can do it better than they can, and I
can do it in a week.” 4

The original Facebook site was simple: Users could create profiles with personal information like
their course schedule, they could search for other profiles by looking up courses or social organizations,
and they could link their profiles to their friends’ profiles. Within 24 hours of its launch, more than
1,000 Harvard students registered to join the site. 5 Despite its simplicity, or perhaps because of it, many
early users found Facebook addictive and regularly checked the site to keep up with their friends.
Having lots of friends on Facebook became something of a status symbol, and students began
competing to add as many as possible. In the process, Facebook became, in Zuckerberg’s words, a
complex “social graph” that mirrored social connections in the real world. 6

Facebook quickly expanded to other universities, and by May 2005, the site had 2.8 million
registered users and a $13 million investment from Accel Partners, a Palo Alto venture firm. 7 In
September 2006, Facebook made the site available to everyone over the age of 13, the age limit set by
United States federal law for collecting personal information online. 8 Over the next decade, Facebook’s
growth was roughly linear, with the site adding around 50 million monthly active users per quarter. 9

As Facebook grew, it added new features to the site. Among the first big changes was the addition
of the News Feed in 2006, a feature that aggregated content posted by users’ friends. In 2007, it launched
the Facebook Platform, a set of tools and services that enabled third parties to develop applications
(apps) like games and personality quizzes for Facebook, as well as gather data from users who granted
access to their profile information. 10 Afterwards, Facebook added instant chat, video, livestreaming,
classified ads, and the capability to interact with content by posting “likes” and “reactions.” Facebook
received several acquisition offers, including a $75 million offer from Viacom in 2005 and a $1 billion
offer from Yahoo in 2006, but Zuckerberg rejected all of them. 11 By 2017, its number of monthly active
users surpassed two billion. For 2018, it reported earning $55.8 billion in revenue and net income of
$22.1 billion (see Exhibit 1). 12

Facebook’s Business Model


Facebook earned revenue primarily through targeted advertising. It first offered advertisements in
2004 through a service called Flyers, which allowed advertisers to send advertisements to particular
college campuses; later versions allowed advertisers to target more specific audiences by specifying an
age range or key words to look for in user profiles. 13 As technology evolved, Facebook enabled
advertisers to target based on personal, location, and/or behavioral data, which made Facebook one of
the most valuable resources for advertisers on the web (see Exhibit 8). As of 2019, Facebook had more
than 7 million advertisers on its platform, and its average revenue per user (ARPU) in the U.S. had
risen from less than $10 in 2015 to more than $33 (see Exhibit 7). 14

Over the years, Facebook acquired dozens of companies and integrated some of their products into
its site. Facebook’s three largest purchases were Instagram, a photo-sharing mobile application, for $1
billion in 2012; Oculus VR, a virtual reality (VR) hardware and software developer, for $2 billion in
2014; and WhatsApp, a mobile messaging application, for $19 billion in 2014. Facebook also acquired
Onavo, a data security application, to track what applications were becoming popular with users. 15 If
it felt a new application posed a threat, it would make an acquisition, as it did in the case of WhatsApp,
or it would copy the application’s core feature, as it did with Snapchat by adding 24-hour “stories” to
Facebook and Instagram. 16 For a time, “Don’t be too proud to copy” was an informal motto at
Facebook. 17 Just before going public in 2012, Zuckerberg warned Facebook employees: “If we don’t
create the thing that kills Facebook, someone else will . . . The internet is not a friendly place. Things
that don’t stay relevant don’t even get the luxury of leaving ruins. They disappear.” 18

Criticism of Facebook
From its earliest days, Facebook received criticism for its curation of content and for violating users’
privacy. Critics assailed Facebook for removing and/or failing to remove certain content. In recent
years, the critique focused on “fake news,” a a nebulous category of content commonly associated with
political misinformation masquerading as legitimate news stories. In the case of privacy, Facebook
received criticism for making it too easy for advertisers, third-party application developers, or other
users to access users’ personal information.

Concern over these two issues was global: critics accused Facebook of being complicit in the spread
of misinformation and hateful rumors that sparked deadly violence in several different countries,
including alleged acts of genocide committed by the Myanmar government. 19 In addition, Facebook
routinely faced lawsuits concerning violations of privacy in several different countries, as well as
regulatory scrutiny from governmental agencies. These issues got everyone’s attention following the
2016 U.S. presidential election. Leading up to the election, Facebook pages that shared politicized
misinformation became extremely popular, and journalists revealed that some of it was produced by
the Russian government to support candidate Donald Trump. 20 To the same end, Russians were also
buying targeted political ads. 21 Then, in March 2018, the Guardian and the New York Times published
articles about Cambridge Analytica, a small data analytics firm that had gained access to the data of
tens of millions of Facebook users and was later hired by Donald Trump’s presidential campaign. One
month after the Cambridge Analytica stories broke, Zuckerberg appeared before the U.S. Senate.
During his testimony, he publicly apologized:

It’s not enough to just give people control over their information. We need to make
sure that the developers they share it with protect their information, too . . . We didn’t take
a broad enough view of our responsibility, and that was a big mistake, and it was my
mistake, and I’m sorry. 22

One year later, in May 2019, the New York Times published an op-ed by Chris Hughes, a Facebook
co-founder, titled “It’s Time to Break Up Facebook.” According to Hughes, “Mark is a good, kind
person. But I’m angry that his focus on growth led him to sacrifice security and civility for clicks . . . An
era of accountability for Facebook and other monopolies may be beginning.” 23

The Issues: Curating Content


Like many other online platforms, Facebook curated the content that its users posted to make its
platform more welcoming. According to its Community Standards, its goal was to “err on the side of
giving people a voice while preventing real world harm and ensuring that people feel safe in our
community.” 24 To meet this goal, Facebook banned content falling into nine categories: (1) Adult
Nudity and Sexual Activity, (2) Bullying and Harassment, (3) Child Nudity and the Sexual Exploitation
of Children, (4) Fake Accounts, (5) Hate Speech, (6) the Sale of Drugs or Firearms, (7) Spam, (8) Terrorist
Propaganda, and (9) Violence and Graphic Content (see Exhibit 4).

a Facebook used the term “false news.”

Originally, Facebook’s guidelines for moderators—the people who screened posts for inappropriate
content—were brief and vague. According to former Safety Manager Charlotte Willner, the
instructions were essentially: “if it makes you feel bad in your gut, then go ahead and take it down.” 25
Facebook adopted fixed guidelines in 2009 for two reasons: it needed the members of its growing
moderation team to remove content in a consistent and predictable manner, and it needed to be able
to explain its curation decisions in response to increasingly frequent criticism. 26 But Facebook
struggled to translate the subtleties of its norms into
hard-and-fast rules. Occasional leaks to the press revealed that Facebook’s internal guidelines were a
patchwork of thousands of PDFs, PowerPoint slides, and Excel sheets, reflecting a long history of ad
hoc responses. 27 Hate speech was among the most challenging categories, and Facebook’s attempts to
define the concept often produced bizarre results. For example, one rule for content curation mandated
that moderators remove hateful speech directed at “white men,” but not “female drivers” or “black
children,” because the latter two groups were too specific. 28

The inherent ambiguities of categories of content like hate speech also impeded the development of
algorithms for detecting violations. Algorithms required large, consistently labeled datasets to “learn”
how to identify different kinds of content. Categories like hate speech presented two challenges. First,
examples of hate speech were numerous, diverse, and highly context-dependent. This meant that the
amount of data required to train algorithms was enormous. The second challenge was that people often
disagreed about what constituted hate speech, making it difficult to label data consistently. Algorithms
for detecting hate speech were often inaccurate, which meant most decisions about hate speech fell to
human moderators (see Exhibit 3). Although algorithms struggled less with other categories of
violations, moderators often made the final call on whether to remove content. By 2019, Facebook
employed some 30,000 moderators worldwide, and the average moderator viewed hundreds
of pieces of content every day. 29 Because much of the content was shocking and violent, many
moderators were reportedly traumatized, leading to high turnover. 30
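
To see why data volume and label consistency mattered, consider a minimal sketch of the kind of
supervised text classifier that underpinned automated detection. The toy dataset, the 0.9 confidence
threshold, and the routing logic below are illustrative assumptions, not Facebook’s actual system:

    # A minimal sketch of a supervised text classifier for flagging
    # policy-violating posts. Production systems trained on millions of
    # consistently labeled examples; inconsistent labels (annotators
    # disagreeing on the same kind of post) directly degrade the model.
    from sklearn.feature_extraction.text import TfidfVectorizer
    from sklearn.linear_model import LogisticRegression

    posts = [
        "you people are vermin and should disappear",  # labeled violating
        "this referee is garbage",                     # labeled acceptable
        "go back where you came from",                 # labeled violating
        "great game last night everyone",              # labeled acceptable
    ]
    labels = [1, 0, 1, 0]  # 1 = violates policy, 0 = acceptable

    vectorizer = TfidfVectorizer(ngram_range=(1, 2))
    model = LogisticRegression().fit(vectorizer.fit_transform(posts), labels)

    def route(post: str) -> str:
        """Auto-remove only high-confidence violations; everything else
        goes to a human moderator, who made most final calls in practice."""
        p_violation = model.predict_proba(vectorizer.transform([post]))[0][1]
        return "auto-remove" if p_violation > 0.9 else "human review"

The high threshold in this sketch reflects why moderators remained essential: for ambiguous
categories like hate speech, few predictions clear a high confidence bar, so most decisions fell to humans.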

Misinformation and Fake News


Zuckerberg’s philosophy had been to refrain from making content decisions based upon “the
truth.” Because Facebook was meant to be a platform where people could share opinions, it could not
be “in the business of . . . deciding what is true and what is not true,” as Zuckerberg put it in 2018. 31
This first became an issue in 2009, when Jewish groups criticized Facebook for refusing to remove
content promoting Holocaust denial. 32 Facebook’s response was that it could not determine whether
the posts were anti-Semitic or merely ignorant, and it needed to err on the side of free expression. 33

However, Facebook was never able to avoid such decisions completely, particularly after
misinformation on the site caused real-world violence. Starting in 2011, rumors of child abductors spread on
Facebook and WhatsApp sparked mob violence in countries like Mexico and India, claiming dozens of
lives. 34 In Myanmar, the government used Facebook to spread rumors about the Rohingya Muslim
minority. 35 After more than 650,000 Rohingya Muslims fled the country, the chairman of the UN
Independent International Fact-Finding Mission on Myanmar stated flatly in 2018 that Facebook had
played a “determining role” in the crisis. 36

The issue that prompted the most concern was political misinformation. Pages sharing sensational
and misleading political content became extremely popular during the 2016 U.S. presidential election. 37
Some of these pages were run by people with earnest political convictions, but not all. Some popular
political pages were run by Macedonian teenagers, and some were run by organizations funded by
Yevgeny Prigozhin, a Russian businessman with strong ties to the Russian government. 38

Trump’s victory in 2016 was a turning point for Facebook. Critics argued that Facebook enabled
foreign powers to unfairly influence the election by spreading pro-Trump propaganda. Zuckerberg’s
first response was to call the theory a “pretty crazy idea,” citing internal investigations that had
concluded a “very small volume” of the content on Facebook was fake news. 39 Facebook also found
that almost every Facebook user in the U.S. had one or more friends that were members of a different
political party, cutting against the popular theory that social media platforms like Facebook produced
an “echo chamber” by isolating users from opposing viewpoints and reinforcing their biases. Instead,
Zuckerberg claimed the problem was users, who were ultimately responsible for clicking on harmful
content. “I don’t know what to do about that,” he said. 40

However, following the 2016 election, Facebook instituted new measures to limit the spread of
misinformation on its platforms. On Facebook, it partnered with fact-checkers, who affixed warnings
to dubious content, and its News Feed algorithms deprioritized content Facebook had identified as
inaccurate. On WhatsApp, where data was encrypted, Facebook could not observe or moderate the
content. Instead, Facebook imposed a hard limit on the number of groups a message could be forwarded
to, hoping to reduce the spread of misinformation indirectly. 41 Facebook never banned misinformation
outright on any of its platforms, and it would only remove instances of misinformation for reasons
besides its factual inaccuracy. For example, in August 2018, it removed hundreds of pages, groups, and
accounts originating from Russia and Iran for “coordinated inauthentic behavior,” which it defined as
“working together to mislead others about who they are or what they are doing.” 42
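
Because WhatsApp’s content was encrypted, the forwarding cap had to operate on metadata alone. The
sketch below illustrates the idea; the Message class and the cap of five chats (the figure WhatsApp
announced in 2019) are assumptions for illustration, not WhatsApp’s actual implementation:

    # A minimal sketch of a metadata-only forwarding cap. The platform
    # never reads the ciphertext; it only tracks how often a message has
    # been forwarded, slowing broadcast of any content, true or false.
    FORWARD_CAP = 5  # illustrative; mirrors the widely reported limit

    class Message:
        def __init__(self, ciphertext: bytes, forward_count: int = 0):
            self.ciphertext = ciphertext  # opaque to the platform
            self.forward_count = forward_count

    def forward(message: Message, recipient_chats: list) -> Message:
        """Refuse the forward once the cap is reached."""
        total = message.forward_count + len(recipient_chats)
        if total > FORWARD_CAP:
            raise PermissionError("forwarding limit reached")
        return Message(message.ciphertext, forward_count=total)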

One especially challenging form of misinformation was “deepfakes.” Deepfakes were videos
altered by machine learning tools to make the people in the videos do and say things they had not. One
deepfake that went viral on Instagram was a video of Zuckerberg himself, in which he appeared to lay
bare his plan for total global domination. 43 Facebook decided to keep the video up, but Zuckerberg
said that it was “currently evaluating what the policy [on deepfakes] needs to be,” and noted that there
was a “very good case” for treating them differently from other kinds of misinformation. 44

Another challenge was misinformation in political advertisements. Facebook received criticism in
September 2019 for refusing to take down an ad run by Trump’s reelection campaign that made false
claims about Joe Biden. Following the controversy, Twitter banned political advertisements on its own
platform, and Google limited the forms of targeting that political advertisers could use. 45 Under
pressure, Facebook hinted at the possibility of changing its approach to political ads, but Zuckerberg
remained steadfast. “What I believe is that in a democracy, it’s really important that people can see
for themselves what politicians are saying, so they can make judgments. And, you know, I don’t think
that a private company should be censoring politicians or news,” he said. 46

Legal Concerns: Fake News Laws, Free Speech, and Section 230
The 2016 U.S. presidential election was not the only election affected by misinformation: elections
in Mexico, Brazil, Indonesia, France, and elsewhere suffered from similar fake news attacks. 47
Consequently, many countries passed “anti–fake news” laws that allowed government agencies to
demand that platforms, including Facebook, remove misinformation, although free speech activists
generally panned these laws as thinly veiled attempts to quash dissent. 48

Within the U.S., conservative politicians regularly criticized Facebook, Twitter, and others for
failing to fulfill the ideal of free expression enshrined in the First Amendment of the United States
Constitution. b Republican Senators Ted Cruz and Josh Hawley were among the most outspoken on
this issue. According to them, Facebook actively censored content to impose an anti-conservative bias
on its users. “What makes the threat of political censorship so problematic is the lack of transparency,
the invisibility, the ability for a handful of giant tech companies to decide if a particular speaker is
disfavored,” said Cruz. 49

To address the issue, several Republicans advocated revising Section 230 of the
Communications Decency Act of 1996 (CDA), which protected user-generated content platforms from
the legal responsibilities of publishers, including liability for potentially tortious content (see Exhibit
5). 50 The section was a response to a New York Supreme Court decision that found an online forum
was liable for libelous posts because it exercised “editorial control” by enforcing comment guidelines.
Fearing that legal threats could stifle the growth of the internet, lawmakers added Section 230 to specify
that online service providers could not be classified as publishers, even if they moderated content.
According to Ron Wyden, a Democratic Senator from Oregon who wrote portions of Section 230:

I wanted to make sure that internet companies could moderate their websites without
getting clobbered by lawsuits. I think everybody can agree that’s a better scenario than
the alternative, which means websites hiding their heads in the sand out of fear of being
weighed down with liability. 51

In June 2019, Hawley introduced legislation to revise Section 230. To enjoy legal protection in the
future, Hawley wanted tech companies to provide “clear and convincing” evidence that they did not
moderate content in a “politically biased manner.” 52 Democratic lawmakers, including Wyden and
House Speaker Nancy Pelosi, also called for revisions of Section 230. 53 Wyden argued that companies
like Facebook were not doing a good enough job keeping “slime” off their platforms. “[If] you don’t
use the sword, there are going to be people coming for your shield,” he warned. 54, c

The Content Curation Paradox


Common wisdom was that platforms needed to enforce community guidelines; otherwise, they ran
the risk of shocking, graphic, or risqué content driving the majority of users away. (In the words of
Micah Schaffer, the technology advisor who wrote YouTube’s first community guidelines, “Bikinis and
Nazism have a chilling effect.” 55) But Facebook also found the opposite. According to Zuckerberg: “our
research suggests that no matter where we draw the lines . . . [as] content gets close to that line, people
will engage with it more on average—even if they tell us afterwards they don’t like the content . . . [This
pattern] applies . . . to almost every category of content.” 56

According to the results of a study released in 2018, content that contained false information spread
farther and faster than accurate news stories, often by an order of magnitude. 57 In an analysis
performed on partisan news pages that were popular in 2016, BuzzFeed News similarly found that
false information received the most engagement from users. 58 These observations cut against the
optimistic “self-cleaning oven” theory of social media, which held that users would act as a check on
one another, keeping each other civil and calling out one another’s ignorance and biases. 59

b Amendment I of the United States Constitution stated that “Congress shall make no law . . . abridging the freedom of speech,
or of the press.” The Supreme Court of the United States held that the Amendment prohibited only governmental, not private
abridgement of speech, unless the private entity exercised “powers traditionally exclusively reserved to the state” (see Jackson
v. Metropolitan Edison Co.).
c Like Republicans, Democrats also received criticism for muddying the waters of the discussion around Section 230. The main
objection was that Section 230 did not provide Facebook or others cover for hate speech; the First Amendment did.

The most shocking example of this conundrum occurred on March 15, 2019, when a gunman
livestreamed to Facebook the first of two deadly terrorist attacks on mosques in Christchurch, New
Zealand. In an attempt to get as much attention as possible, the gunman brandished weapons painted
with the names of white supremacists, and he made joking references to various online communities,
including fans of the popular YouTuber PewDiePie. The livestream concluded without being removed
because not a single user reported it. In the 24 hours following the attack, users re-uploaded the video
over 1.5 million times. 60

The Issues: Privacy


In 2006, Facebook made the site available to everyone over the age of 13. 61 According to its privacy
policy at the time, the site collected two types of information: personal information that users
knowingly submitted (a name, for example) and aggregated browsing data like browser types and IP
addresses. Facebook said that it would share users’ personal information only with groups they had
specifically identified in their privacy settings, but it also reserved the right to share personal
information with business partners. In addition, Facebook allowed third-party advertisers to download
“cookies” to users’ computers to track their browsing behavior. 62

Although Facebook was no longer the exclusive, insular community it once was, Zuckerberg
wanted to preserve the same sense of security that motivated users to share personal information. He
said, “The problem Facebook is solving is this one paradox. People want access to all the information
around them, but they also want complete control over their own information.” 63

Facebook faced its first controversy over privacy in 2006, when it launched the News Feed. Within
24 hours, hundreds of thousands of Facebook users joined anti–News Feed groups on Facebook to
protest the new feature, which they felt robbed them of autonomy over how they shared information
about themselves. 64 Zuckerberg responded to the controversy in a blog post titled “Calm down.
Breathe. We hear you.” “[We] agree, stalking isn’t cool,” he said, “but being able to know what’s going
on in your friends’ lives is.” 65 The controversy died out soon after, and user growth remained strong. 66

Facebook continued to receive criticism for its management of users’ privacy. Typically, the
criticism was that it robbed users of agency by failing to ask for meaningful consent to collect and share
information, as well as by providing confusing or incomplete information about how their information
was being shared under specific privacy settings. Zuckerberg was comfortable subverting users’
privacy expectations because he believed that users did not care as much about privacy as critics
thought, and that they would care even less about privacy in the future. “We view it as our role in the
system to constantly be innovating and be updating what our system is to reflect what the current
social norms are,” he said. 67 However, these updates did not reflect many countries’ privacy
regulations: between 2009 and 2019, regulators in Canada, the U.S., and several countries in Europe
concluded that Facebook had violated its users’ right to privacy. 68

One of Facebook’s most consequential run-ins with regulators was its settlement with the FTC in
2011. The FTC began its investigation after organizations like the Electronic Frontier Foundation
(EFF) submitted complaints that Facebook was tricking users into sharing more personal information
than they might otherwise have by presenting them with misleading privacy settings. 69 After
investigating, the FTC accused Facebook of making misleading statements about users’ data privacy.
In Facebook’s 2011 settlement with the FTC, the company was barred from making further
“misrepresentations” about users’ privacy in the future. 70 In a blog post, Zuckerberg said, “We will
continue to improve the service, build new ways for you to share and offer new ways to protect you
and your information better than any other company in the world.” 71 Five years later, though, a scandal
over Cambridge Analytica led U.S. government officials to question Facebook’s compliance with its
privacy commitments.

Cambridge Analytica
In 2014, a University of Cambridge lecturer named Alexander Kogan paid around 270,000 Facebook
users through Amazon Mechanical Turk to take a personality quiz on a Facebook application he had
developed. 72 To access the application, the users first needed to agree to share their personal data,
along with the personal data of their friends. d In this way, Kogan gained access to the profile data of
roughly 87 million Facebook users. He shared a large portion of that data with political consulting
group Cambridge Analytica—a violation of Facebook’s terms of service with third-party app
developers.

In 2015, a reporter for the Guardian obtained documents revealing that Cambridge had data from
millions of Facebook profiles and was working with Ted Cruz’s presidential campaign to target
advertisements. 73 When asked to comment, a Facebook spokesman said that the company was
“carefully investigating the situation.” 74 Eventually, Facebook discovered that Kogan had also shared
data with research colleagues and Eunoia, a data analytics company run by former Cambridge
employee Chris Wylie. 75 Facebook demanded that both companies delete the data, and both certified
that they did. “[L]iterally all I had to do was tick a box and sign it and send it back, and that was it,”
said Wylie. “Facebook made zero effort to get the data back.” 76

After Cruz dropped out of the race, Cambridge began working for the Trump campaign. Both
Cambridge and the Trump campaign denied ever using Kogan’s Facebook data to target ads. 77
Following Trump’s election, several publications reported that Cambridge had played a decisive role
in Trump’s victorious 2016 campaign. 78 Then, in March 2018, the New York Times and the Guardian both
ran stories featuring Wylie, who called the company a “psychological warfare mindf**k tool.” 79 The
New York Times also reported that Cambridge Analytica retained hundreds of gigabytes of Facebook
data on its servers. 80

A few days later, Zuckerberg announced that Facebook would perform a “thorough audit” of all
apps that might have access to large amounts of user data. 81 At the end of March, it closed down a
feature called Partner Categories, which used data from third-party data brokers to target
advertisements. 82 And in April, it began blocking developers from accessing the data of users who had
not used their apps for more than 90 days. 83 Facebook also introduced new privacy tools for users. For
example, in July 2019, Facebook launched a tool that explained to users why particular advertisements
were targeted at them and what third-party data brokers were involved. 84 Users could also opt out of
particular ad campaigns. 85

It was unclear what effect the news of Cambridge’s actions had on users. According to a survey
conducted by The Atlantic, 42% of users changed their behavior on Facebook after hearing about
Cambridge Analytica, and 25% of those who changed their behavior became more careful about what
they posted. 86 However, after conducting their own survey, Piper Jaffray analysts found that most
Facebook and Instagram users had been “unfazed by the negative news flow.” 87

d At the time, Facebook’s API for third-party apps allowed users to volunteer their friends’ profile data along with
their own. Users could prevent their friends from sharing their personal data with third-party apps by changing their privacy
settings. In 2014, Facebook changed its API for third-party apps and prevented all new apps from requesting and accessing the
profile data of friends of users. It phased out the feature entirely in 2015.

The response from legislators and regulators in the U.S. was less ambiguous: many state legislatures
introduced data privacy bills, and the FTC began another investigation into Facebook. 88 In July 2019,
the FTC announced that it had reached a settlement with Facebook, which would pay a fine of $5 billion,
the largest privacy or data security penalty ever imposed, for “deceiving users about
their ability to control the privacy of their personal information.” 89

Legal Concerns: The General Data Protection Regulation (GDPR) and Beyond
Perhaps Facebook’s most important legal challenge related to privacy was the General Data
Protection Regulation (GDPR), an expansive regulation passed by the European Union (EU) in April
2016 and implemented in May 2018. The new EU regulation granted EU citizens the right to be
forgotten, the right to download all of their data, as well as other privacy guarantees. EU regulators
could issue substantial fines to companies that violated these policies. In the years leading up to the
GDPR’s implementation, Facebook made numerous efforts to prepare. For example, it sent out a notice
to inform users of their privacy settings and how to opt out of various features. Nevertheless, on the
day the GDPR became enforceable, privacy advocates filed complaints that sought to fine Facebook up
to 3.9 billion euros. 90 If a company was found in violation of the GDPR, the EU could levy a maximum
fine of 4% of a company’s global revenue from the previous year—$2.23 billion in Facebook’s case, or
4% of its $55.8 billion in 2018 revenue (see Exhibit 1). 91

Facebook also faced a number of lawsuits in the U.S. One of the largest was filed in 2015 by residents
of Illinois, who accused Facebook of violating a state privacy law by allowing its “Tag Suggestions”
feature to use facial recognition tools on photos of them without asking for explicit consent. In August
2019, a federal appeals court upheld a district judge’s decision to deny Facebook’s motion to dismiss,
exposing the company to damages that could total more than $7 billion. 92

Competitors and Digital Governance


Facebook was not the only social network that faced challenges on how to govern its platform.
Twitter, YouTube, WeChat, LinkedIn, and others also struggled with governance issues.

Twitter
Debuting in 2007, Twitter was a “microblogging” social media platform that allowed registered
users to post and share text posts, or “tweets,” which were originally limited to a length of 140
characters (see Exhibit 2). 93 In 2017, Twitter expanded the limit to 280. 94 Twitter’s original purpose was
to allow users to share small updates about their lives, but as it grew more popular, journalists began
using the service to give real-time news updates and political candidates used it to campaign. By 2019,
Twitter had around 330 million monthly active users. 95

Twitter first unveiled the “Twitter Rules” in 2009. 96 The Rules outlined 10 violations, including
impersonation, publishing private information without permission, threats, spam, and pornographic
profile pictures. 97, e Over the years, it added rules and eventually grouped them into three
categories: Safety, Privacy, and Authenticity. Twitter’s Safety rules banned threats of violence, the
promotion of terrorism, harassment, and other forms of harmful content; its Privacy rules banned
posting private information and intimate photographs without permission; and its Authenticity rules
banned spam, impersonating other users, and spreading misinformation to influence elections. 98

e Strictly speaking, pornography was allowed on Twitter. Rather than an outright ban, Twitter’s policy for adult content was
that users could not include it where it would be “highly visible” on Twitter, including profile images and live video, but they
could share adult content in tweets, provided they marked it as “sensitive.”

From the very beginning, Twitter received criticism for its failure to enforce its rules against abuse
and harassment. Twitter’s founders were strongly committed to freedom of expression, so they were
hesitant to actively curate tweets, and they set a high bar for removing tweets and users. 99 Many
Twitter critics were women subjected to misogynistic harassment campaigns on the site. After one
woman published an article in 2015 about being harassed on Twitter, CEO Dick Costolo admitted that
Twitter “suck[ed] at dealing with abuse and trolls on the platform and we’ve sucked at it for years.” 100
But Twitter struggled to improve. According to an Amnesty International study in 2018, the average
female journalist or politician on Twitter received an abusive tweet every 30 seconds, and black women
were disproportionately targeted. 101 Twitter was also criticized for failing to remove conspiracy
theorists and white supremacists from its platform. 102 In June 2019, Twitter announced that it would
not remove tweets that violated its rules if they were of “public interest,” such as tweets by Trump. 103
Instead, such tweets were hidden under warning labels explaining that they were abusive.

YouTube
Google’s YouTube was a video-sharing website that launched in 2005. After explicit content began
flooding the site in 2006, YouTube put together a small team to screen videos. 104 One decade later,
YouTube’s moderation team included thousands of Google employees, and with the help of algorithms
that flagged inappropriate content, it removed millions of videos every year. 105 YouTube first created
rules for users in 2007, banning pornography, criminal acts, violence, threats, spam, and hate speech. 106
It later added rules banning impersonation, sharing private information about someone without their
consent, and encouraging harmful or dangerous acts. 107 Like Facebook and Twitter, YouTube was
regularly criticized for how it curated content, particularly for failing to remove hateful content.
YouTube also received criticism for spreading misinformation. Some theorized that its
recommendation algorithms, which drove 70% of viewing time on the site, were inadvertently
indoctrinating viewers in far-right conspiracy theories by regularly recommending videos espousing them. 108 In
response to the criticism, YouTube changed its algorithms to stop recommending conspiracy theory
videos, and it began adding links to relevant Wikipedia pages under videos concerning hot-button
issues like climate change and school shootings. 109

One somewhat unique challenge that YouTube faced was how to properly curate content for young
children. Although YouTube insisted that it had “never been for kids under 13,” 12 of the 20 most-
watched videos on YouTube were aimed primarily at children. 110 Seeing the opportunity, many
content creators specifically targeted children, producing videos featuring nursery rhymes, bright
colors, and popular characters like Spiderman. The vast majority of these videos were of extremely
poor quality, with little educational value, and many featured violence and sexual imagery. 111 To
address the problem, YouTube introduced YouTube Kids, an app that featured content screened by
algorithms to remove non-child-friendly content, but inappropriate videos continued to slip through
the cracks. 112 In April 2018, YouTube introduced a new option that allowed parents to limit YouTube
Kids to showing only videos that had been approved by human moderators. 113

Microsoft
Famous for its computer operating systems, software tools, and internet browsers, Microsoft was
the world’s largest software company. 114 Over its decades-long history, critics repeatedly attacked
Microsoft for violating users’ privacy. For example, the Irish Data Protection Commissioner and the
Dutch Ministry of Justice & Security were alarmed by Microsoft’s practice of storing subject lines from

emails and sentences checked by its spelling software. 115 In each case, Microsoft responded quickly to
the concerns raised, and it never experienced a privacy scandal on the scale of Cambridge Analytica. 116
Microsoft also received little criticism for how it governed LinkedIn, a professional social network it
bought in 2016. 117 With almost 700 million users, LinkedIn had community guidelines that were largely
similar to Facebook’s, and it employed filters and “machine-assisted” detection systems to identify
inappropriate content. 118 According to LinkedIn’s Editor-in-Chief Dan Roth, “You talk on LinkedIn the
same way you talk in the office. There are certain boundaries around what is acceptable . . . This is
something that your boss sees, your future boss, people you want to work with in the future.” 119

WeChat
WeChat was a multipurpose social media application that was used primarily in China. It could be
used to message friends, make phone calls, hold videoconferences, stream live video, and pay for goods
and services. In 2019, it had more than one billion users, and its users spent an average of one-third of
their day interacting with the service. 120 Unlike Facebook, it did not rely on advertising: it generated
revenue primarily through commissions paid by services using its mobile payments system. 121 But like
Facebook, WeChat actively screened and removed content. Chinese law dictated that all digital
platforms remove “sensitive” content. 122 For posts to WeChat Moments, the WeChat equivalent of the
Facebook News Feed, it implemented real-time image filtering to scan for sensitive text and to compare
images to a blacklist. Because the process was computationally expensive, it could take up to several
seconds for content to be censored. For person-to-person chats, WeChat used a hash index, which
stored shortened “fingerprints” of images, to screen and censor content more quickly. According to
one study conducted by Citizen Lab, a research group at the University of Toronto, WeChat’s method
of curating content often resulted in “over-censorship.” 123 For example, in certain cases, it would
remove not only negative references to specific government policies, but also neutral references,
including screenshots of official announcements from government websites.
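
The hash index Citizen Lab described can be sketched briefly: each blacklisted image is reduced to a
short fingerprint, so an incoming image can be checked with a single set lookup rather than the
pixel-level comparison used for Moments. The MD5 fingerprint and in-memory set below are simplifying
assumptions; a production filter would need fingerprints tolerant of resizing and re-encoding:

    import hashlib

    # Hypothetical blacklist of banned image bytes.
    banned_images = [b"example-banned-image-1", b"example-banned-image-2"]

    def fingerprint(image_bytes: bytes) -> str:
        # Exact cryptographic hashing is a simplification; slightly
        # altered copies of a banned image would evade it.
        return hashlib.md5(image_bytes).hexdigest()

    # Built once; each entry is a short fixed-length string, not an image.
    blacklist_index = {fingerprint(img) for img in banned_images}

    def should_censor(image_bytes: bytes) -> bool:
        """One cheap set lookup per message -- fast enough to screen
        person-to-person chats in real time."""
        return fingerprint(image_bytes) in blacklist_index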

Facebook’s Future
In 2019, Facebook announced two significant changes. The first was that it was shifting to a “digital
living room” model that would prioritize privacy. The second was that it would establish an
independent board to make decisions about how Facebook curated content.

The Digital Living Room


During Facebook’s second quarter of 2018 earnings call, Zuckerberg introduced a new “family-
wide” metric that counted individual users of at least one of Facebook’s apps (Facebook, WhatsApp,
Instagram, and Messenger). As of June 2018, that number stood at 2.5 billion. According to Facebook
CFO David Wehner, this family-wide number better reflected the size of the Facebook community than
Facebook’s daily active user count, which had grown by only 1.54% during the quarter, or less than
half its usual quarterly rate of growth. 124 Investors were unmoved by this argument: on the day
Facebook released its report for the second quarter of 2018, its stock fell by 19%, and its market
capitalization shrank by $119 billion. It was the largest single-day drop in market value in U.S. stock
market history. 125

The introduction of the family-wide metric was a harbinger of greater changes to come: in March
2019, Zuckerberg announced that Facebook planned to integrate Instagram, WhatsApp, and
Messenger to create a combined platform for private, encrypted messaging. Zuckerberg explained that
Facebook was responding to users’ shift away from sharing and communicating openly in the “town
square” toward more personal, impermanent “living room” discussions:

Today we already see that private messaging, ephemeral stories, and small groups are
by far the fastest growing areas of online communication . . . Many people prefer the
intimacy of communicating one-on-one or with just a few friends . . . In a few years, I
expect future versions of Messenger and WhatsApp to become the main ways people
communicate on the Facebook network. 126

One month later, during its annual developer conference, Facebook unveiled its redesign of the site,
which heavily featured groups and private messaging. Almost all of the presenters at the conference
repeated the phrase: “the future is private.” 127

Facebook’s new living room model presented two significant challenges. The first was that it was
unclear how advertising would work, given that the News Feed was both the venue and an important
source of data for targeted advertisements. Zuckerberg admitted that having access to less information
would make targeted advertisements “somewhat less effective,” but he was confident that Facebook
would be able to work out a system for targeting advertisements using a fraction of the data. 128 As for
the advertisements themselves, Facebook was exploring a number of options, including inbox ads and
sponsored messages for Messenger. It was also possible that Facebook would go the way of WeChat
and begin earning revenue from transactions on the platform. This possibility became more
plausible after Facebook proposed its own cryptocurrency, Libra, in June 2019. 129

The second challenge the digital living room presented was content curation: if users increasingly
shifted to encrypted messaging, Facebook needed new tools to stop the spread of false and harmful
content. In an interview, Zuckerberg said he was “much more worried” about the tradeoffs between
privacy and safety than anything else, but he provided no specifics when asked how Facebook would
address them. 130 Facebook was already addressing the problem on WhatsApp—for example, by
limiting the number of times a message could be forwarded—but it was unclear how successful its
efforts had been. According to an internal study, the forwarding limit had reduced the total number of
forwarded messages on WhatsApp by 25%, and another study found that it delayed the spread of
content by up to two orders of magnitude; however, the second study also found that content designed
to be viral (e.g., alarming conspiracy theories) was largely unaffected by the forwarding limit, and the
authors recommended that WhatsApp develop a “quarantine approach” that directly limited the
forwarding of specific messages or accounts. 131

There was also reason to be concerned about Instagram. According to one study published in
December 2018, Russian efforts to influence the 2016 U.S. election had actually been more successful
on Instagram than on Facebook. Although Russian posts reached fewer people on Instagram,
Instagram users interacted with these posts almost 2.5 times more than Facebook users did. 132 Ahead
of the 2020 election, Facebook began making changes to Instagram to limit the spread of false
information, giving users the ability to flag false information, implementing image-detection
algorithms to catch previously debunked content, rolling out more fact-checkers in the U.S., and
removing debunked content from its “Explore” tab and search results. 133

The Oversight Board


During an interview in April 2018, Vox founder Ezra Klein asked Zuckerberg if Facebook had “just
become too big and too vast and too consequential for normal corporate governance structures.” 134 In
response, Zuckerberg said that he wanted “to create a governance structure around the content and
the community that reflects more what people in the community want than what short-term-oriented
shareholders might want,” and that a key component of that governance structure would be a system
for appealing content decisions, possibly with an independent board “almost like a Supreme Court”
giving the final say. 135 Later, in November, Zuckerberg announced that Facebook was planning to
create an independent board for reviewing its content decisions, and in January 2019, Facebook
released a draft charter for the board to be reviewed by academics, activists, and Facebook users. 136 In
September 2019, Facebook released the final draft of the board’s charter and announced that it planned
to have an appeals system up and running by 2020. 137

According to its charter, the Oversight Board would be a body of no fewer than 11 members chosen
by an independent trust to serve a maximum of three three-year terms. 138 The Board would have the
authority to review Facebook’s content decisions and to instruct Facebook to allow, remove, or take
other actions on content, as well as the responsibility to explain its decisions in plain language by
interpreting Facebook’s Community Standards in light of Facebook’s principles (see Exhibit 6) and
international human rights norms. According to Zuckerberg, “The board will be an advocate for our
community—supporting people’s rights to free expression and making sure we fulfill our
responsibility to keep people safe.” 139

Experts and advocates were cautiously optimistic. According to legal scholar Kate Klonick:

[I]t just seems that making sure this Oversight Board actually works is deeply in
Facebook’s best interests . . . And it might not work! [I describe it] as trying to retro-fit a
skeletal system for a jellyfish. A private transnational company voluntarily creating an
independent body and process to oversee a fundamental human right. It's really a very
daunting idea that no one has ever tackled before. 140

Facebook’s Choices
At the highest level, Facebook’s options ranged from governing less to governing more. Governing
less meant becoming more like WhatsApp. As Facebook told its WhatsApp users:

Your messages are yours . . . We’ve built privacy, end-to-end encryption, and other
security features into WhatsApp. We don’t store your messages once they’ve been
delivered. When they are end-to-end encrypted, we and third parties can’t read them. 141

In an encrypted world, the platform would have no way of knowing whether a user was sending
harmful content to others. Governance options would be technically limited: the platform could do
little more than receive reports from users and/or monitor publicly available information, like users’
profile photos, for known examples of harmful content. 142 And even if Facebook only moderated
content reported by users, it would still need to make tough decisions. Historically, the vast majority
of content that users reported did not violate Facebook’s Community Standards. Instead,
users reported content to show disagreement or disapproval. 143

At the other extreme, Facebook could monitor the digital living room and curate all content, like
WeChat. It could ramp up investment in AI and machine learning, and dramatically expand (maybe
by 5–10 times) the number of human moderators around the world. Such an approach would raise
questions about privacy, free speech, and censorship. Despite Zuckerberg’s consternation over
Facebook becoming an arbiter of the truth, this strategy would require third-party fact-checkers
monitoring and marking posts to quell the spread of misinformation, especially during elections.

In between these two options, there were probably hundreds of small and large steps Facebook
could take to govern the platform. Some external commentators recommended small changes in how
to curate content, while others called for a wholesale reinvention of the company. In May 2018, for
example, Facebook chartered an independent Data Transparency Advisory Group (DTAG) to assess
and provide recommendations for how Facebook measured and reported its effectiveness in enforcing
its Community Standards. The DTAG report emphasized the importance of “procedural justice”:

Over the past four decades, a large volume of social psychological research has shown
that people are more likely to respect authorities and rules, and to follow those rules and
cooperate with those authorities, if they perceive them as being legitimate . . . Perhaps
counterintuitively, this research also shows that peoples’ judgments about legitimacy do
not depend primarily on whether authorities give them favorable outcomes . . . Rather,
judgments about legitimacy are more strongly swayed by the processes and procedures
by which authorities use their authority. 144

To address the spread of misinformation, some experts recommended that social media companies
introduce more “friction” into their platforms—for example, put a 10-minute gap between when a user
submitted content and when it appeared online, or prioritize local content. 145 This would give
Facebook more time to catch problematic content and give users the chance to think twice about
posting. As one analyst commented, it would be better to “put some cognitive space between stimulus
and response when you are in a hot hedonic state—or, as everyone’s mom used to put it, ‘When you’re
mad, count to ten before you answer.’” 146
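
A delay of this kind amounts to a timed release queue: posts are accepted immediately but published
only after the friction window elapses, leaving time for screening and for the author to reconsider.
The sketch below is hypothetical; no platform’s actual queueing code is public:

    import heapq
    import time

    FRICTION_SECONDS = 600  # the proposed 10-minute gap

    pending = []  # min-heap of (release_time, post)

    def submit(post: str) -> None:
        heapq.heappush(pending, (time.time() + FRICTION_SECONDS, post))

    def withdraw(post: str) -> None:
        """The author thought twice; drop the post before it goes live."""
        pending[:] = [entry for entry in pending if entry[1] != post]
        heapq.heapify(pending)

    def release_due_posts() -> list:
        """Called periodically; returns posts whose window has elapsed."""
        live = []
        while pending and pending[0][0] <= time.time():
            live.append(heapq.heappop(pending)[1])
        return live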

At the same time, several critics argued that nothing short of a breakup would solve Facebook’s
troubles. When Facebook co-founder Chris Hughes made this argument in the New York Times on May
9, 2019, he created a stir. Hughes wrote: “Mark may never have a boss, but he needs to have some check
on his power. The American government needs to do two things: break up Facebook’s monopoly and
regulate the company to make it more accountable to the American people.” 147

Zuckerberg’s Choice?
Mark Zuckerberg controlled a majority of the voting stock at Facebook, which meant that he was
responsible for making the choice. On July 24, 2019, Zuckerberg posted a message to his Facebook page:

Our top priority has been addressing the important social issues facing the internet
and our company. With our privacy-focused vision for building the digital living room
and with today’s FTC settlement, delivering world-class privacy protections will be even
more central to our vision of the future. But I also believe we have a responsibility to keep
innovating and building qualitatively new experiences for people to come together in new
ways. So I’ve been focused on making sure we can keep executing our proactive roadmap
while we work hard to address important social issues . . . Our mission to bring the world
closer together is difficult but important, and I’m grateful for the role every one of you
plays to help make this happen. 148

Ultimately, Zuckerberg had to decide what kind of company he was building. Since the economics
of the current business were spectacular, should he only tinker around the edges and avoid threatening
the “golden goose”? Should he make a dramatic move to shift towards a privacy-driven, encrypted
platform? Should he consider breaking up the company, as his co-founder proposed? Or should he
invest heavily in building a deeply curated platform? The future of his company, and maybe the future
of large swathes of the world, were at stake.

Exhibit 1 Facebook Financial Summary 2014–2018 (in $ millions)

2014 2015 2016 2017 2018

Net sales 12,466 17,928 27,638 40,653 55,838
COGS 2,153 2,867 3,789 5,454 9,355
R&D 2,666 4,816 5,919 7,754 10,273
SG&A 2,653 4,020 5,503 7,242 11,297
Operating income (loss) 4,994 6,225 12,427 20,203 24,913
Net income 2,940 3,688 10,217 15,934 22,112
EBITDA 6,237 8,170 14,769 23,228 29,228

Cash and ST investments 11,199 18,434 29,449 41,441 41,114
Total assets 40,184 49,407 64,961 84,524 97,334
Total liabilities 4,088 5,189 5,767 10,177 13,207
Total shareholders’ equity 36,096 44,218 59,194 74,347 84,127

Gross margin 83% 84% 86% 87% 83%
Return on equity 17% 18% 25% 27% 30%
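
Note: Gross margin = (Net sales - COGS) / Net sales; e.g., 2018: (55,838 - 9,355) / 55,838 ≈ 83%. Total
shareholders’ equity = Total assets - Total liabilities.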

Source: Created by casewriter using Facebook’s annual reports and Thomson One.

Exhibit 2 Twitter Financial Summary 2014–2018 (in $ millions)

2014 2015 2016 2017 2018

Net sales 1,403 2,218 2,529 2,443 3,042
COGS 446 729 932 861 965
R&D 692 807 713 542 554
SG&A 804 1,132 1,251 1,001 1,070
Income (loss) before income taxes (578) (533) (441) (95) 423
Net income (loss) (578) (521) (457) (108) 1,205
EBITDA (370) (208) (55) 289 1,676

Cash and ST investments 3,622 3,495 3,775 4,403 6,209
Total assets 5,583 6,442 6,870 7,412 10,163
Total liabilities 1,957 2,074 2,265 2,365 3,357
Total shareholders’ equity 3,626 4,368 4,605 5,047 6,806

Gross margin 68% 67% 63% 65% 68%
Return on equity -18% -13% -10% -2% 20%

Source: Created by casewriter using Twitter’s annual reports and Thomson One.

Exhibit 3 Flowchart of Facebook’s Standards Enforcement Process

Source: Created by casewriter using information from The Justice Collaboratory, Yale Law School, “Report of the Facebook
Data Transparency Advisory Group,” April 2019, pp. 11-12.

Exhibit 4 Content Violations upon Which Facebook, YouTube, and Twitter Took Action (July–
December 2018)

Source: Created by casewriter using data from transparency reports released by:

Facebook (https://transparency.facebook.com/community-standards-enforcement);

YouTube (https://transparencyreport.google.com/youtube-policy/removals?hl=en); and,

Twitter (https://transparency.twitter.com/en/twitter-rules-enforcement.html), accessed November 2019.

Exhibit 5 Excerpts from Section 230 of Title V of the Telecommunications Act of 1996
(Communication Decency Act)

(a) Findings The Congress finds the following:

(1) The rapidly developing array of Internet and other interactive computer services
available to individual Americans represent an extraordinary advance in the availability of
educational and informational resources to our citizens.

(2) These services offer users a great degree of control over the information that they receive,
as well as the potential for even greater control in the future as technology develops.

(3) The Internet and other interactive computer services offer a forum for a true diversity of
political discourse, unique opportunities for cultural development, and myriad avenues for
intellectual activity…

(b) Policy It is the policy of the United States—

(1) to promote the continued development of the Internet and other interactive computer
services and other interactive media;

(2) to preserve the vibrant and competitive free market that presently exists for the Internet
and other interactive computer services, unfettered by Federal or State regulation…

(c) Protection for “Good Samaritan” Blocking and Screening of Offensive Material

(1) Treatment of Publisher or Speaker No provider or user of an interactive computer
service shall be treated as the publisher or speaker of any information provided by another
information content provider.

(2) Civil Liability No provider or user of an interactive computer service shall be held liable
on account of—

(A) any action voluntarily taken in good faith to restrict access to or availability of
material that the provider or user considers to be obscene, lewd, lascivious, filthy,
excessively violent, harassing, or otherwise objectionable, whether or not such
material is constitutionally protected; or

(B) any action taken to enable or make available to information content providers or
others the technical means to restrict access to material described in paragraph (1)…

Source: “47 U.S. Code § 230. Protection for private blocking and screening of offensive material,” Cornell Law School Legal
Information Institute, https://www.law.cornell.edu/uscode/text/47/230#fn002008, accessed August 23, 2019.

Exhibit 6 Facebook’s Updated Community Values

The goal of our Community Standards is to create a place for expression and give people voice.
Building community and bringing the world closer together depends on people’s ability to share
diverse views, experiences, ideas and information. We want people to be able to talk openly about the
issues that matter to them, even if some may disagree or find them objectionable. In some cases, we
allow content which would otherwise go against our Community Standards – if it is newsworthy and
in the public interest. We do this only after weighing the public interest value against the risk of harm,
and we look to international human rights standards to make these judgments.

A commitment to expression is paramount, but we recognize the internet creates new and increased
opportunities for abuse. For these reasons, when we limit expression we do it in service of one or more
of the following values:

• Authenticity: We want to make sure the content people are seeing on Facebook is authentic.
We believe that authenticity creates a better environment for sharing, and that’s why we don’t
want people using Facebook to misrepresent who they are or what they’re doing.

• Safety: We are committed to making Facebook a safe place. Expression that threatens people
has the potential to intimidate, exclude or silence others and isn’t allowed on Facebook.

• Privacy: We are committed to protecting personal privacy and information. Privacy gives
people the freedom to be themselves, and to choose how and when to share on Facebook and
to connect more easily.

• Dignity: We believe that all people are equal in dignity and rights. We expect that people will
respect the dignity of others and not harass or degrade others.

Our Community Standards apply to everyone around the world, and to all types of content. They’re
designed to be comprehensive – for example, content that might not be considered hateful may still be
removed for violating a different policy.

We recognize that words mean different things or affect people differently depending on their local
community, language or background. We work hard to account for these nuances while also applying
our policies consistently and fairly to people and their expression.

Source: Monika Bickert, “Updating the Values That Inform Our Community Standards,” September 12, 2019,
https://newsroom.fb.com/news/2019/09/updating-the-values-that-inform-our-community-standards/, accessed
September 24, 2019.

Exhibit 7 Average Revenue per Facebook User (2015–2019)

Source: Created by casewriter using data from Facebook’s quarterly earnings presentations.

Exhibit 8 Cost per 1,000 Impressions (CPM) Comparison (January 2019)

Source: Created by casewriter using data from: Maxwell Gollin, “How Much Do Ads Cost on Facebook,
Instagram, Twitter, and LinkedIn in 2019,” Falcon.io, January 7, 2019, accessed September 26, 2019.

Endnotes

1 Facebook, Inc., Second Quarter 2019 Results Conference Call,
https://s21.q4cdn.com/399680738/files/doc_financials/2019/Q2/Q2'19-Earnings-Call-Transcript.pdf, accessed July 26, 2019.
2 Facebook, Inc., Second Quarter 2019 Results Conference Call,
https://s21.q4cdn.com/399680738/files/doc_financials/2019/Q2/Q2'19-Earnings-Call-Transcript.pdf, accessed July 26, 2019.
3 Alan J. Tabak, “Hundreds Register for New Facebook Website,” Harvard Crimson, February 9, 2004,
https://www.thecrimson.com/article/2004/2/9/hundreds-register-for-new-facebook-website/, accessed April 30, 2019.
4 Alan J. Tabak, “Hundreds Register for New Facebook Website,” Harvard Crimson, February 9, 2004,
https://www.thecrimson.com/article/2004/2/9/hundreds-register-for-new-facebook-website/, accessed April 30, 2019.
5 John Cassidy, “Me Media,” New Yorker, May 7, 2006, https://www.newyorker.com/magazine/2006/05/15/me-media,
accessed April 30, 2019.
6 Fred Vogelstein, “How Mark Zuckerberg Turned Facebook Into the Web’s Hottest Platform,” Wired, September 6, 2007,
https://www.wired.com/2007/09/ff-facebook/, accessed April 30, 2019.
7 Ellen Rosen, “Student’s start-up draws Attention and $13 million,” New York Times, May 26, 2005, https://
www.nytimes.com/2005/05/26/business/students-startup-draws-attention-and-13-million.html, accessed April 30, 2019.
8 “Welcome to Facebook, everyone,” Facebook blog, September 26, 2006, https://www.facebook.com/notes/facebook/
welcome-to-facebook-everyone/2210227130/, accessed May 20, 2019; “Terms of Use,” Facebook, via WayBack Machine
(archived March 1, 2006), http://web.archive.org/web/20060301120239/https://www.facebook.com/terms.php, accessed
May 20, 2019.
9 Calculated using data from “Number of monthly active Facebook users worldwide as of 1st quarter 2019 (in millions),”
Statista, https://www.statista.com/statistics/264810/number-of-monthly-active-facebook-users-worldwide/, accessed June
26, 2019.
10 “Platform is here,” Facebook note, June 1, 2007, https://www.facebook.com/notes/facebook/platform-is-
here/2437282130/, accessed August 23, 2019.
11 Nicholas Carlson (Business Insider), “11 Companies That Tried to Acquire Facebook,” Inc., January 12, 2016,
https://www.inc.com/business-insider/11-companies-that-almost-acquired-facebook.html, accessed June 26, 2019.
12 Facebook Form 10-K, p. 33.
13 “Ad Evolution: The History of Facebook,” Adtaxi, March 8, 2018, https://www.adtaxi.com/blog-roll/2018/03/08/ad-
evolution-history-facebook, accessed June 4, 2019.
14 Kerry Flynn, “Cheatsheet: Facebook now has 7m advertisers,” Digiday, January 30, 2019,
https://digiday.com/marketing/facebook-earnings-q4-2018/, accessed June 26, 2019.
15 Elizabeth Dwoskin, “Facebook’s willingness to copy rivals’ apps seen as hurting innovation,” Washington Post, August 10,
2017, https://www.washingtonpost.com/business/economy/facebooks-willingness-to-copy-rivals-apps-seen-as-hurting-
innovation/2017/08/10/ea7188ea-7df6-11e7-a669-b400c5c7e1cc_story.html, accessed September 4, 2019.
16 Billy Gallagher, “How Facebook Tried to Squash Snapchat,” Wired, February 16, 2018,
https://www.wired.com/story/copycat-how-facebook-tried-to-squash-snapchat/, accessed September 16, 2019.
17 Deepa Seetharaman and Betsy Morris, “The New Copycats: How Facebook Squashes Competition From Startups,” Wall
Street Journal, August 9, 2017, via Factiva, accessed September 4, 2019.
18 Billy Gallagher, “How Facebook Tried to Squash Snapchat,” Wired, February 16, 2018,
https://www.wired.com/story/copycat-how-facebook-tried-to-squash-snapchat/, accessed September 16, 2019.
19 Paul Mozur, “A Genocide Incited on Facebook, With Posts From Myanmar’s Military,” New York Times, October 15, 2018,
https://www.nytimes.com/2018/10/15/technology/myanmar-facebook-genocide.html, accessed May 7, 2019.
20 Andrew Weisburd, “How Russia Dominates Your Twitter Feed to Promote Lies (And, Trump, Too),” Daily Beast, August 6,
2016, https://www.thedailybeast.com/how-russia-dominates-your-twitter-feed-to-promote-lies-and-trump-too, accessed July
16, 2019.

21 Issie Lapowsky, “How Russian Facebook Ads Divided and Targeted US Voters Before the 2016 Election,” Wired, April 16,
2018, https://www.wired.com/story/russian-facebook-ads-targeted-us-voters-before-2016-election/, accessed July 26, 2019.
22 Courtesy of Bloomberg Government, “Transcript of Mark Zuckerberg’s Senate hearing,” Washington Post, April 10, 2018,
https://www.washingtonpost.com/news/the-switch/wp/2018/04/10/transcript-of-mark-zuckerbergs-senate-
hearing/?utm_term=.c14742076dc7, accessed May 30, 2019.
23 Chris Hughes, “It’s Time to Break Up Facebook,” New York Times, May 9, 2019,
https://www.nytimes.com/2019/05/09/opinion/sunday/chris-hughes-facebook-zuckerberg.html, accessed July 16, 2019.
24 Mark Zuckerberg, “A Blueprint for Content Governance and Enforcement,” Facebook post, November 15, 2018,
https://www.facebook.com/notes/mark-zuckerberg/a-blueprint-for-content-governance-and-
enforcement/10156443129621634/, accessed May 9, 2019.
25 Kate Klonick, “The New Governors: The People, Rules, and Processes Governing Online Speech,” Harvard Law Review 131(6)
(April 2018): 1631.
26 Kate Klonick, “The New Governors: The People, Rules, and Processes Governing Online Speech,” Harvard Law Review 131(6)
(April 2018): 1620.
27 Max Fisher, “Inside Facebook’s Secret Rulebook for Global Political Speech,” December 27, 2018, New York Times,
https://www.nytimes.com/2018/12/27/world/facebook-moderators.html?searchResultPosition=20, accessed April 23, 2019;
Julia Angwin and Hannes Grassegger, “Facebook’s Secret Censorship Rules Protect White Men From Hate Speech But Not
Black Children,” ProPublica, June 28, 2017, https://www.propublica.org/article/facebook-hate-speech-censorship-internal-
documents-algorithms, accessed May 9, 2019.
28 Max Fisher, “Inside Facebook’s Secret Rulebook for Global Political Speech,” December 27, 2018, New York Times,
https://www.nytimes.com/2018/12/27/world/facebook-moderators.html?searchResultPosition=20, accessed April 23, 2019;
Julia Angwin and Hannes Grassegger, “Facebook’s Secret Censorship Rules Protect White Men From Hate Speech But Not
Black Children,” ProPublica, June 28, 2017, https://www.propublica.org/article/facebook-hate-speech-censorship-internal-
documents-algorithms, accessed May 9, 2019.
29 Mark Zuckerberg, “A Blueprint for Content Governance and Enforcement,” Facebook post, November 15, 2018,
https://www.facebook.com/notes/mark-zuckerberg/a-blueprint-for-content-governance-and-
enforcement/10156443129621634/, accessed May 14, 2019.
30 Casey Newton, “The Trauma Floor,” The Verge, February 25, 2019, https://www.theverge.com/2019/2/25/18229714/
cognizant-facebook-content-moderator-interviews-trauma-working-conditions-arizona, accessed July 18, 2019.
31 Kara Swisher, “Full transcript: Facebook CEO Mark Zuckerberg on Recode Decode,” Vox, July 18, 2018,
https://www.vox.com/
2018/7/18/17575158/mark-zuckerberg-facebook-interview-full-transcript-kara-swisher, accessed July 18, 2019.
32 Lisa Respers France, “Facebook urged to remove Holocaust-denial groups,” CNN, May 8, 2009,
http://www.cnn.com/2009/TECH/05/08/facebook.holocaust.denial/index.html, accessed May 7, 2019.
33 Ian Paul, “Facebook Boots Holocaust Denial Groups,” PC World, May 12, 2009,
https://www.pcworld.com/article/164765/Facebook_Boots_Holocaust_Denial_Groups.html, accessed September 17, 2019.
34 Timothy McLaughlin, “How Facebook’s Rise Fueled Chaos and Confusion in Myanmar,” Wired, July 6, 2018,
https://www.wired.com/story/how-facebooks-rise-fueled-chaos-and-confusion-in-myanmar/, accessed April 25, 2019;
Alberto Arce (Associated Press), “In frightened Mexico town, a mob kills 2 young pollsters,” San Diego Union-Tribune, October
22, 2015, https://www.sandiegouniontribune.com/sdut-in-frightened-mexico-town-a-mob-kills-2-young-2015oct22-
story.html, accessed April 25, 2019; Amanda Taub and Max Fisher, “Where Countries Are Tinderboxes and Facebook Is a
Match,” New York Times, April 21, 2019, https://www.nytimes.com/2018/04/21/world/asia/facebook-sri-lanka-
riots.html?module=inline, accessed April 24, 2019; Fajar Eko Nugroho, “Beredar Hoax Penculikan Anak, Geandangan Disiksa
Nyaris Tewas,” Liputan 6, March 7, 2017, https://www.liputan6.com/regional/read/2878821/beredar-hoax-penculikan-
anak-gelandangan-disiksa-nyaris-tewas, accessed April 24, 2019; B. Vijay Murty, “Jharkand lynching: When a WhatsApp
message turned tribals into killer mobs,” Hindustan Times, May 22, 2017, https://www.hindustantimes.com/india-news/a-
whatsapp-message-claimed-nine-lives-in-jharkhand-in-a-week/story-xZsIlwFawf82o5WTs8nhVL.html, accessed April 25,
2019; “Buddhist mobs burn Muslim homes, businesses in Sri Lanka,” CBS News, March 7, 2018, https://www.cbsnews.com/
news/sri-lanka-muslim-attacked-buddhist-mobs-curfews-social-media-cut-off/, accessed April 24, 2019.

35 Paul Mozur, “A Genocide Incited on Facebook, With Posts From Myanmar’s Military,” New York Times, October 15, 2018,
https://www.nytimes.com/2018/10/15/technology/myanmar-facebook-genocide.html, accessed May 7, 2019.
36 Tom Miles, “U.N. investigators cite Facebook role in Myanmar crisis,” Reuters, March 12, 2018,
https://uk.reuters.com/article/us-myanmar-rohingya-facebook/u-n-investigators-cite-facebook-role-in-myanmar-crisis-
idUKKCN1GO2PN, accessed May 2, 2019.
37 Craig Silverman, Lauren Strapagiel, Hamza Shaban, Ellie Hall, and Jeremy Singer-Vine, “Hyperpartisan Facebook Pages Are
Publishing False And Misleading Information At An Alarming Rate,” BuzzFeed News, October 20, 2016,
https://www.buzzfeednews.com/article/craigsilverman/partisan-fb-pages-analysis, accessed May 6, 2019.
38 Craig Silverman and Lawrence Alexander, “How Teens In The Balkans Are Duping Trump Supporters With Fake News,”
BuzzFeed News, November 3, 2016, https://www.buzzfeednews.com/article/craigsilverman/how-macedonia-became-a-
global-hub-for-pro-trump-misinfo, accessed May 7, 2019; Robert Mueller, Report On The Investigation Into Russian Interference In
The 2016 Presidential Election, p. 14.
39 Aarti Shahani, “Zuckerberg Denies Fake News On Facebook Had Impact On The Election,” NPR, November 11, 2016,
https://www.npr.org/sections/alltechconsidered/2016/11/11/501743684/zuckerberg-denies-fake-news-on-facebook-had-
impact-on-the-election, accessed June 27, 2019.
40 Aarti Shahani, “Zuckerberg Denies Fake News On Facebook Had Impact On The Election,” NPR, November 11, 2016,
https://www.npr.org/sections/alltechconsidered/2016/11/11/501743684/zuckerberg-denies-fake-news-on-facebook-had-
impact-on-the-election, accessed June 27, 2019.
41 Annie Gowen and Elizabeth Dwoskin, “WhatsApp launches new controls after widespread app-fueled mob violence in
India,” Washington Post, July 19, 2019, https://www.washingtonpost.com/world/whatsapp-launches-new-controls-after-
widespread-app-fueled-mob-violence-in-india/2018/07/19/64433ec9-c944-446f-8d82-
8498234ee8a9_story.html?utm_term=.5ea7e634fa9fm, accessed July 22, 2019; “Facebook’s WhatsApp limits users to five text
forwards to curb rumors,” Reuters, January 21, 2019, https://www.reuters.com/article/us-facebook-whatsapp/facebooks-
whatsapp-limits-text-forwards-to-five-recipients-to-curb-rumors-idUSKCN1PF0TP, accessed July 22, 2019.
42 “Taking Down More Coordinated Inauthentic Behavior,” Facebook Newsroom, August 21, 2018,
https://newsroom.fb.com/news/2018/08/more-coordinated-inauthentic-behavior/, accessed April 25, 2019; “Coordinated
Inauthentic Behavior Explained,” Facebook Newsroom (video), December 6, 2018,
https://newsroom.fb.com/news/2018/12/inside-feed-coordinated-inauthentic-behavior/, accessed April 25, 2019.
43 Makena Kelly, “Instagram will leave up deepfake video of Mark Zuckerberg,” The Verge, June 11, 2019, https://www.
theverge.com/2019/6/11/18662027/instagram-facebook-deepfake-nancy-pelosi-mark-zuckerberg, accessed June 27, 2019.
44 Alexandra S. Levine, “Zuckerberg says Facebook mulling policy changes to deal with ‘deepfakes,’” Politico, June 26, 2019,
https://www.politico.com/story/2019/06/26/zuckerberg-facebook-deepfakes-policy-1385121, accessed June 27, 2019.
45 Emily Stewart, “Facebook is refusing to take down a Trump ad making false claims about Joe Biden,” Vox, October 9, 2019,
https://www.vox.com/policy-and-politics/2019/10/9/20906612/trump-campaign-ad-joe-biden-ukraine-facebook, accessed
December 2, 2019; Scott Spencer, “An update on our political ads policy,” November 20, 2019,
https://blog.google/technology/ads/update-our-political-ads-policy/, accessed December 2, 2019.
46 CBS News, “Facebook CEO on political ads: People should ‘judge for themselves the character of politicians,” CBS,
December 2, 2019, https://www.cbsnews.com/news/facebook-ceo-mark-zuckerberg-political-ads-people-should-judge-for-
themselves-the-character-of-politicians/, accessed December 2, 2019.
47 Julia Love, Joseph Menn, and David Ingram, “In Mexico, fake news creators up their game ahead of election,” Reuters, June
28, 2018, https://www.reuters.com/article/us-mexico-facebook/in-mexico-fake-news-creators-up-their-game-ahead-of-
election-idUSKBN1JO2VG, accessed May 9, 2019; Fanny Potkin and Agustinus Beo Da Costa, “In Indonesia, Facebook and
Twitter are ‘buzzer’ battlegrounds as elections loom,” Reuters, March 12, 2019, https://www.reuters.com/article/us-
indonesia-election-socialmedia-insigh/in-indonesia-facebook-and-twitter-are-buzzer-battlegrounds-as-elections-loom-
idUSKBN1QU0AS, accessed May 9, 2019; Rajesh Roy and Newley Purnell, “India Wants Facebook to Curb Fake News Ahead
of Elections,” Wall Street Journal, March 6, 2019, https://www.wsj.com/articles/india-wants-facebook-to-curb-fake-news-
ahead-of-elections-11551898486, accessed May 9, 2019; “Factbox: ‘Fake News’ laws around the world,” Reuters, April 2, 2019,
https://www.reuters.com/article/us-singapore-politics-fakenews-factbox/factbox-fake-news-laws-around-the-world-
idUSKCN1RE0XN, accessed August 19, 2019.

48 “Factbox: ‘Fake News’ laws around the world,” Reuters, April 2, 2019, https://www.reuters.com/article/us-singapore-
politics-fakenews-factbox/factbox-fake-news-laws-around-the-world-idUSKCN1RE0XN, accessed August 19, 2019.
49 Jessica Guynn, “Ted Cruz threatens to regulate Facebook, Google and Twitter over charges of anti-conservative bias,” USA
Today, April 10, 2019, https://www.usatoday.com/story/news/2019/04/10/ted-cruz-threatens-regulate-facebook-twitter-
over-alleged-bias/3423095002/, accessed August 19, 2019.
50 Kate Klonick, “The New Governors: The People, Rules, and Processes Governing Online Speech,” Harvard Law Review 131(6)
(April 2018): 1603-1613.
51 Ron Wyden, “Floor Remarks: CDA 230 and SESTA,” Medium post, March 21, 2018,
https://medium.com/@RonWyden/floor-remarks-cda-230-and-sesta-32355d669a6e, accessed August 19, 2019.
52 “Senator Hawley Introduces Legislation to Amend Section 230 Immunity for Big Tech Companies,” June 19, 2019,
https://www.hawley.senate.gov/senator-hawley-introduces-legislation-amend-section-230-immunity-big-tech-companies,
accessed August 19, 2019.
53 Eric Johnson, “Nancy Pelosi says Trump’s tweets ‘cheapened the presidency’—and the media encourages him,” Recode,
April 12, 2019, https://www.vox.com/2019/4/12/18307957/nancy-pelosi-donald-trump-twitter-tweet-cheap-freak-
presidency-kara-swisher-decode-podcast-interview, accessed July 22, 2019.
54 Daisuke Wakabayashi, “Legal Shield for Websites Rattles Under onslaught of Hate Speech,” New York Times, August 6, 2019,
https://www.nytimes.com/2019/08/06/technology/section-230-hate-speech.html, accessed August 19, 2019.
55 Neima Jahromi, “The Fight For The Future of YouTube,” New Yorker, July 8, 2019,
https://www.newyorker.com/tech/annals-of-technology/the-fight-for-the-future-of-youtube, accessed July 18, 2019.
56 Mark Zuckerberg, “A Blueprint for Content Governance and Enforcement,” Facebook post, November 15, 2018,
https://www.facebook.com/notes/mark-zuckerberg/a-blueprint-for-content-governance-and-
enforcement/10156443129621634/, accessed May 9, 2019.
57 Peter Dizikes, “Study: On Twitter, false news travels faster than true stories,” MIT News, March 8, 2018,
http://news.mit.edu/2018/study-twitter-false-news-travels-faster-true-stories-0308, accessed May 14, 2019.
58 Craig Silverman, Lauren Strapagiel, Hamza Shaban, Ellie Hall, and Jeremy Singer-Vine, “Hyperpartisan Facebook Pages Are
Publishing False And Misleading Information At An Alarming Rate,” BuzzFeed News, October 20, 2016,
https://www.buzzfeednews.com/article/craigsilverman/partisan-fb-pages-analysis, accessed May 6, 2019.
59 Mike Wendling, “The (almost) complete history of ‘fake news,’” BBC News, January 22, 2018,
https://www.bbc.com/news/blogs-trending-42724320, accessed July 18, 2019.
60 Elizabeth Lopatto, “The mass shooting in New Zealand was designed to spread on social media,” The Verge, March 15,
2019, https://www.theverge.com/2019/3/15/18266859/new-zealand-shooting-video-social-media-manipulation, accessed
June 7, 2019.
61 “Welcome to Facebook, everyone,” Facebook post, September 26, 2006,
https://www.facebook.com/notes/facebook/welcome-to-facebook-everyone/2210227130/, accessed May 20, 2019; “Terms of
Use,” Facebook, via WayBack Machine (archived March 1, 2006),
http://web.archive.org/web/20060301120239/https://www.facebook.com/terms.php, accessed May 20, 2019.
62 “Privacy Policy,” Facebook, via WayBack Machine (archived February 6, 2006),
http://web.archive.org/web/20060206205518/http://www.facebook.com/policy.php, accessed May 21, 2019.
63 John Cassidy, “Me Media,” New Yorker, May 7, 2006, https://www.newyorker.com/magazine/2006/05/15/me-media,
accessed April 30, 2019.
64 Tracy Samantha Schmidt, “Inside the Backlash Against Facebook,” Time, September 6, 2006,
http://content.time.com/time/nation/article/0,8599,1532225,00.html, accessed April 26, 2019.
65 Mark Zuckerberg, “Calm down. Breathe. We hear you.” Facebook post, September 6, 2006,
https://www.facebook.com/notes/facebook/calm-down-breathe-we-hear-you/2208197130/, accessed April 26, 2019.
66 Fred Vogelstein, “How Mark Zuckerberg Turned Facebook Into the Web’s Hottest Platform,” Wired, September 6, 2007,
https://www.wired.com/2007/09/ff-facebook/, accessed April 30, 2019.

67 Marshall Kirkpatrick, “Facebook’s Zuckerberg Says The Age of Privacy is Over,” New York Times, January 10, 2010,
https://archive.nytimes.com/www.nytimes.com/external/readwriteweb/2010/01/10/10readwriteweb-facebooks-
zuckerberg-says-the-age-of-privac-82963.html, accessed September 25, 2019.
68 CBC News, “Facebook breaches Canadian privacy law: commissioner,” July 16, 2009,
https://www.cbc.ca/news/technology/facebook-breaches-canadian-privacy-law-commissioner-1.851486, accessed May 20,
2019; Jacqui Cheng, “FTC complaint says Facebook’s privacy changes are deceptive,” Ars Technica, December 21, 2009,
https:// arstechnica.com/tech-policy/2009/12/ftc-complaint-says-facebooks-privacy-changes-are-deceptive/, accessed May
21, 2019; Julia Fioretti, “Facebook wins privacy case against Belgian data protection authority,” Reuters, June 29, 2016,
https://www.reuters.com/article/us-facebook-belgium/facebook-wins-privacy-case-against-belgian-data-protection-
authority-idUSKCN0ZF1VV, accessed May 23, 2019; Julia Fioretti, “French data privacy regulator cracks down on Facebook,”
Reuters, February 8, 2016, https://www.reuters.com/article/us-facebook-france-privacy-idUSKCN0VH1U1, accessed May 23,
2019; Amar Toor, “Facebook is still violating user privacy, Dutch and French regulators say,” The Verge, May 17, 2017,
https://www.theverge.com/2017/5/17/15651740/facebook-privacy-violation-france-netherlands-fine, accessed May 28, 2019.
69 Jacqui Cheng, “FTC complaint says Facebook’s privacy changes are deceptive,” Ars Technica, December 21, 2009, https://
arstechnica.com/tech-policy/2009/12/ftc-complaint-says-facebooks-privacy-changes-are-deceptive/, accessed May 21, 2019.
70 Federal Trade Commission news release, “Facebook Settles FTC Charges That It Deceived Consumers By Failing to Keep
Privacy Promises,” November 29, 2011, https://www.ftc.gov/news-events/press-releases/2011/11/facebook-settles-ftc-
charges-it-deceived-consumers-failing-keep, accessed May 20, 2019.
71 Mark Zuckerberg, “Our Commitment to the Facebook Community,” Facebook Newsroom, November 29, 2011,
https://newsroom.fb.com/news/2011/11/our-commitment-to-the-facebook-community/, accessed May 16, 2019.
72 Issie Lapowsky, “Facebook Exposed 87 Million Users to Cambridge Analytica,” Wired, April 4, 2018,
https://www.wired.com/story/facebook-exposed-87-million-users-to-cambridge-analytica/, accessed September 3, 2019.
73 Harry Davies, “Ted Cruz using firm that harvested data on millions of unwitting Facebook users,” Guardian, December 11,
2015, https://www.theguardian.com/us-news/2015/dec/11/senator-ted-cruz-president-campaign-facebook-user-data,
accessed September 3, 2019.
74 Harry Davies, “Ted Cruz using firm that harvested data on millions of unwitting Facebook users,” Guardian, December 11,
2015, https://www.theguardian.com/us-news/2015/dec/11/senator-ted-cruz-president-campaign-facebook-user-data,
accessed September 3, 2019.
75 Carole Cadwalladr, “‘I made Steve Bannon’s psychological warfare tool’: meet the data war whistleblower,” Guardian,
March 18, 2018, https://www.theguardian.com/news/2018/mar/17/data-war-whistleblower-christopher-wylie-faceook-nix-
bannon-trump, accessed September 3, 2019.
76 Carole Cadwalladr, “‘I made Steve Bannon’s psychological warfare tool’: meet the data war whistleblower,” Guardian,
March 18, 2018, https://www.theguardian.com/news/2018/mar/17/data-war-whistleblower-christopher-wylie-faceook-nix-
bannon-trump, accessed September 3, 2019.
77 Anick Jesdanun, “Did Facebook data help Trump? ‘Great Hack’ explores scandal,” Associated Press, July 24, 2019,
https://www.apnews.com/5537b5caa9644cd1bb3b041affc44682, accessed September 3, 2019; Kendall Taggart, “The Truth
About The Trump Data Team That People Are Freaking Out About,” BuzzFeed News, February 16, 2017,
https://www.buzzfeednews.com/article/kendalltaggart/the-truth-about-the-trump-data-team-that-people-are-freaking,
accessed September 3, 2019.
78 Hannes Grassegger and Mikael Krogerus, “The Data That Turned the World Upside Down,” Vice, January 28, 2017,
https://www.vice.com/en_us/article/mg9vvn/how-our-likes-helped-trump-win, accessed September 3, 2019.
79 Carole Cadwalladr, “‘I made Steve Bannon’s psychological warfare tool’: meet the data war whistleblower,” Guardian,
March 18, 2018, https://www.theguardian.com/news/2018/mar/17/data-war-whistleblower-christopher-wylie-faceook-nix-
bannon-trump, accessed September 9, 2019.
80 Matthew Rosenberg, Nicholas Confessore, and Carole Cadwalladr, “How Trump Consultants Exploited the Facebook Data
of Millions,” New York Times, March 17, 2018, https://www.nytimes.com/2018/03/17/us/politics/cambridge-analytica-
trump-campaign.html, accessed May 21, 2019.
81 Mark Zuckerberg, Facebook post, March 21, 2018, https://www.facebook.com/zuck/posts/10104712037900071, accessed
August 26, 2019.

82 Taylor Hatmaker, “Facebook will cut off access to third party data for ad targeting,” TechCrunch, March 28, 2018, https://
techcrunch.com/2018/03/28/facebook-will-cut-off-access-to-third-party-data-for-ad-targeting/, accessed August 26, 2019.
83 Kurt Wagner, “Read the testimony Mark Zuckerberg will present to Congress this week,” Vox, April 9, 2018,
https://www.vox.com/2018/4/9/17215524/full-text-facebook-mark-zuckerberg-testimony-congress, accessed May 30, 2019.
84 Katie Notopoulos, “Facebook Will Now Show You How to Opt Out Of Targeted Ads,” BuzzFeed News, July 11, 2019,
https://www.buzzfeednews.com/article/katienotopoulos/facebook-data-broker-why-i-see-ad, accessed July 22, 2019.
85 Katie Notopoulos, “Facebook Will Now Show You How to Opt Out Of Targeted Ads,” BuzzFeed News, July 11, 2019,
https://www.buzzfeednews.com/article/katienotopoulos/facebook-data-broker-why-i-see-ad, accessed July 22, 2019.
86 Julie Beck, “People Are Changing the Way They Use Social Media,” The Atlantic, June 7, 2018,
https://www.theatlantic.com/technology/archive/2018/06/did-cambridge-analytica-actually-change-facebook-users-
behavior/562154/, accessed June 28, 2019.
87 Thomas Franck, “Facebook users ‘don’t seem to care’ about data scandal, fake news. Analyst says buy on the dip,” CNBC,
August 22, 2018, https://www.cnbc.com/2018/08/22/facebook-users-dont-seem-to-care-about-data-scandal-analyst-
says.html, accessed June 28, 2019.
88 Issie Lapowsky, “How Cambridge Analytica Sparked The Great Privacy Awakening,” Wired, March 17, 2019,
https://www.wired.com/story/cambridge-analytica-facebook-privacy-awakening/, accessed June 28, 2019; Tony Romm and
Craig Timberg, “FTC opens investigation into Facebook after Cambridge Analytica scrapes millions of users’ personal
information,” Washington Post, March 20, 2018, https://www.washingtonpost.com/news/the-switch/wp/2018/03/20/ftc-
opens-investigation-into-facebook-after-cambridge-analytica-scrapes-millions-of-users-personal-
information/?utm_term=.58539f5ef961, accessed July 24, 2019.
89 “FTC Imposes $5 Billion Penalty and Sweeping New Privacy Restrictions on Facebook,” Federal Trade Commission press
release, July 24, 2019, https://www.ftc.gov/news-events/press-releases/2019/07/ftc-imposes-5-billion-penalty-sweeping-
new-privacy-restrictions, accessed July 24, 2019.
90 Julia Fioretti, “Facebook wins privacy case against Belgian data protection authority,” Reuters, June 29, 2016,
https://www.reuters.com/article/us-facebook-belgium/facebook-wins-privacy-case-against-belgian-data-protection-
authority-idUSKCN0ZF1VV, accessed May 23, 2019; Julia Fioretti, “French data privacy regulator cracks down on Facebook,”
Reuters, February 8, 2016, https://www.reuters.com/article/us-facebook-france-privacy-idUSKCN0VH1U1, accessed May 23,
2019; Amar Toor, “Facebook is still violating user privacy, Dutch and French regulators say,” The Verge, May 17, 2017,
https://www.theverge.com/2017/5/17/15651740/facebook-privacy-violation-france-netherlands-fine, accessed May 28, 2019;
Russell Brandom, “Facebook and Google hit with $8.8 billion in lawsuits on day one of GDPR,” The Verge, May 25, 2018, https:
//www.theverge.com/2018/5/25/17393766/facebook-google-gdpr-lawsuit-max-schrems-europe, accessed July 22, 2019.
91 Sam Schechner, “EU Nears Decisions in Facebook Privacy Cases,” Wall Street Journal, August 12, 2019, via Factiva, accessed
August 12, 2019.
92 Jonathan Stempel, “Facebook loses facial recognition appeal, must face privacy class action,” Reuters, August 8, 2019,
https://www.reuters.com/article/us-facebook-privacy-lawsuit/facebook-loses-facial-recognition-appeal-must-face-privacy-
class-action-idUSKCN1UY2BZ, accessed August 12, 2019.
93 Aliza Rosen, “Tweeting Made Easier,” Twitter blog, November 7, 2017,
https://blog.twitter.com/official/en_us/topics/product/2017/tweetingmadeeasier.html, accessed July 25, 2019.
94 Aliza Rosen, “Tweeting Made Easier,” Twitter blog, November 7, 2017,
https://blog.twitter.com/en_us/topics/product/2017/tweetingmadeeasier.html, accessed November 5, 2019.
95 “Twitter: number of monthly active users 2010-2019,” https://www.statista.com/statistics/282087/number-of-monthly-
active-twitter-users/, accessed July 26, 2019.
96 Biz Stone, “The Zen of Twitter Support,” Twitter blog, January 15, 2009, https://blog.twitter.com/official/
en_us/a/2009/the-zen-of-twitter-support.html, accessed August 15, 2019; Sarah Jeong, “The History of Twitter’s Rules,” Vice,
January 14, 2016, https://www.vice.com/en_us/article/z43xw3/the-history-of-twitters-rules, accessed August 15, 2019.
97 “The Twitter Rules,” via Web Archive (January 18, 2009), https://web.archive.org/web/20090118211301/
http://twitter.zendesk.com/forums/26257/entries/18311, accessed August 15, 2019.
98 “The Twitter Rules,” Twitter, https://help.twitter.com/en/rules-and-policies/twitter-rules, accessed August 15, 2019.

99 Charlie Warzel, “’A Honeypot For Assholes’: Inside Twitter’s 10-Year Failure to Stop Harassment,” BuzzFeed News, August
11, 2016, https://www.buzzfeednews.com/article/charliewarzel/a-honeypot-for-assholes-inside-twitters-10-year-failure-to-
s#.erQ4n4R6Y, accessed July 25, 2019; Jacqui Cheng, “Twitter’s controversy over Terms of Service (Updated),” Ars Technica,
May 26, 2008, https://arstechnica.com/tech-policy/2008/05/twitters-controversy-over-terms-of-service/, accessed July 27,
2019.
100 Nitasha Tiku and Casey Newton, “Twitter CEO: ‘We suck at dealing with abuse,’” The Verge, February 4, 2015, https://
www.theverge.com/2015/2/4/7982099/twitter-ceo-sent-memo-taking-personal-responsibility-for-the, accessed July 25, 2019.
101 “Women abused on Twitter every 30 seconds – new study,” Amnesty International press release, December 18, 2018,
https://www.amnesty.org.uk/press-releases/women-abused-twitter-every-30-seconds-new-study, accessed July 25, 2019.
102 Issie Lapowsky, “Twitter Finally Axes Alex Jones—Over a Publicity Stunt,” Wired, September 6, 2018,
https://www.wired.com/story/twitter-bans-alex-jones-infowars/, accessed July 25, 2019; “David Duke Twitter profile,”
Twitter, https://twitter.com/DrDavidDuke?ref_src=twsrc%5Egoogle%7Ctwcamp%5Eserp%7Ctwgr%5Eauthor, accessed July
25, 2019.
103 Twitter Safety, “Defining public interest on Twitter,” June 27, 2019,
https://blog.twitter.com/en_us/topics/company/2019/publicinterest.html, accessed July 25, 2019.
104 Catherine Buni and Soraya Chemaly, “The Secret Rules of the Internet,” The Verge, April 13, 2016,
https://www.theverge.com/2016/4/13/11387934/internet-moderator-history-youtube-facebook-reddit-censorship-free-
speech, accessed August 15, 2019.
105 Susan Wojcicki, “Expanding our work against abuse of our platform,” YouTube Official Blog, December 4, 2017,
https://youtube.googleblog.com/2017/12/expanding-our-work-against-abuse-of-our.html, accessed June 11, 2019.
106 Catherine Buni and Soraya Chemaly, “The Secret Rules of the Internet,” The Verge, April 13, 2016,
https://www.theverge.com/2016/4/13/11387934/internet-moderator-history-youtube-facebook-reddit-censorship-free-
speech, accessed August 15, 2019.
107 “Policies and Safety,” YouTube, https://www.youtube.com/yt/about/policies/#community-guidelines, accessed August
15, 2019.
108 Kevin Roose, “The Making of a YouTube Radical,” New York Times, June 8, 2019,
https://www.nytimes.com/interactive/2019/06/08/technology/youtube-radical.html, accessed July 26, 2019.
109 Casey Newton, “YouTube’s misinformation crisis was years in the making,” The Verge, April 18, 2019,
https://www.theverge.com/interface/2019/4/18/18429735/youtube-misinformation-hate-speech-susan-wojcicki, accessed
July 26, 2019; Jillian D’Onfro, “YouTube will add Wikipedia links debunking conspiracy theory videos,” CNBC, March 13,
2018, https://www.cnbc.com/2018/03/13/youtube-wikipedia-links-debunk-conspiracy.html, accessed July 26, 2019.
110 Paris Martineau, “YouTube Has Kid Troubles Because Kids Are A Core Audience,” Wired, June 6, 2019,
https://www.wired.com/story/youtube-kid-troubles-kids-core-audience/, accessed August 26, 2019.
111 Russell Brandom, “Inside Elsagate, The Conspiracy-Fueled War on Creepy YouTube Kids Videos,” The Verge, December 8,
2017, https://www.theverge.com/2017/12/8/16751206/elsagate-youtube-kids-creepy-conspiracy-theory, accessed August 26,
2019.
112 Sapna Maheshwari, “On YouTube Kids, Startling Videos Slip Past Filters,” New York Times, November 4, 2017,
https://www.nytimes.com/2017/11/04/business/media/youtube-kids-paw-patrol.html, accessed August 26, 2019.
113 Dani Deahl, “Parents can now limit YouTube Kids to only show human-approved videos,” The Verge, April 25, 2018,
https://www.theverge.com/2018/4/25/17281654/youtube-kids-parental-controls-approved-content, accessed August 26,
2019.
114 PWC, “Global 100 Software Leaders by revenue,” https://www.pwc.com/gx/en/industries/technology/
publications/global-100-software-leaders/explore-the-data.html, accessed July 25, 2019; Steven Musil, “Microsoft joins $1
trillion market cap club,” CNET, https://www.cnet.com/news/microsoft-joins-1-trillion-market-cap-club/, accessed July 25,
2019.
115 Daria Solovieva, “How Microsoft has (so far) avoided tough scrutiny over privacy issues,” Fast Company, January 10, 2019,
https://www.fastcompany.com/90290137/how-microsoft-has-avoided-tough-scrutiny-over-privacy-issues, accessed July 25,
2019.

116 Daria Solovieva, “How Microsoft has (so far) avoided tough scrutiny over privacy issues,” Fast Company, January 10, 2019,
https://www.fastcompany.com/90290137/how-microsoft-has-avoided-tough-scrutiny-over-privacy-issues, accessed July 25,
2019.
117 John Herman, “Why Aren’t We Talking About LinkedIn?” New York Times, August 8, 2019,
https://www.nytimes.com/2019/08/08/style/linkedin-social-media.html, accessed August 19, 2019.
118 Rushi Bhatt, “Combining LinkedIn’s Content Filtering and Microsoft Cognitive Services to Keep Inappropriate Content Off
Our Sites,” LinkedIn Engineering blog, July 30, 2018, https://engineering.linkedin.com/blog/2018/07/combining-linkedins-
content-filtering-and-microsoft-cognitive-se, accessed August 19, 2019.
119 Rushi Bhatt, “Combining LinkedIn’s Content Filtering and Microsoft Cognitive Services to Keep Inappropriate Content Off
Our Sites,” LinkedIn Engineering blog, July 30, 2018, https://engineering.linkedin.com/blog/2018/07/combining-linkedins-
content-filtering-and-microsoft-cognitive-se, accessed August 19, 2019.
120 Cyrus Lee, “Daily active users for WeChat exceeds 1 billion,” ZDNet, https://www.zdnet.com/article/daily-active-user-
of-messaging-app-wechat-exceeds-1-billion/, accessed July 26, 2019; Jeffrey Knockel, Lotus Ruan, Masashi Crete-Nishihata,
and Ron Deibert, “(Can’t) Picture This,” Citizen Lab, August 14, 2018, https://citizenlab.ca/2018/08/cant-picture-this-an-
analysis-of-image-filtering-on-wechat-moments/, accessed July 25, 2019.
121 Li Yuan, “Mark Zuckerberg Wants Facebook to Emulate WeChat. Can It?” New York Times, March 7, 2019,
https://www.nytimes.com/2019/03/07/technology/facebook-zuckerberg-wechat.html, accessed July 25, 2019.
122 Jeffrey Knockel, Lotus Ruan, Masashi Crete-Nishihata, and Ron Deibert, “(Can’t) Picture This,” Citizen Lab, August 14,
2018, https://citizenlab.ca/2018/08/cant-picture-this-an-analysis-of-image-filtering-on-wechat-moments/, accessed July 25,
2019; Jeffrey Knockel and Ruohan Xiong, “(Can’t) Picture This 2,” Citizen Lab, July 15, 2019, https://citizenlab.ca/2019/07/
cant-picture-this-2-an-analysis-of-wechats-realtime-image-filtering-in-chats/, accessed July 25, 2019.
123 Jeffrey Knockel and Ruohan Xiong, “(Can’t) Picture This 2,” Citizen Lab, July 15, 2019, https://citizenlab.ca/2019/07/
cant-picture-this-2-an-analysis-of-wechats-realtime-image-filtering-in-chats/, accessed July 25, 2019.
124 Josh Constine, “Facebook stock tanks from mixed Q2 with slowest-ever growth,” TechCrunch, July 25, 2018,
https://techcrunch.com/2018/07/25/facebook-q2-2018-earnings/, accessed July 22, 2019.
125 Fred Imbert and Gina Francolla, “Facebook’s $100 billion-plus rout is the biggest loss in stock market history,” CNBC, July
26, 2018, https://www.cnbc.com/2018/07/26/facebook-on-pace-for-biggest-one-day-loss-in-value-for-any-company-sin.html,
accessed June 3, 2019.
126 “Read Mark Zuckerberg’s Blog Post on His ‘Privacy-Focused Vision’ for Facebook,” New York Times, March 6, 2019,
https://www.nytimes.com/2019/03/06/technology/facebook-privacy-blog.html, accessed June 3, 2019.
127 Casey Newton, “Facebook’s total focus on privacy could crowd out more important issues,” The Verge, May 1, 2019,
https://www.theverge.com/interface/2019/5/1/18524675/facebook-f8-2019-day-1-recap-privacy, accessed June 6, 2019.
128 Nicholas Thompson, “Mark Zuckerberg on Facebook’s Future and What Scares Him Most,” Wired, March 6, 2019,
https://www.wired.com/story/mark-zuckerberg-facebook-interview-privacy-pivot/, accessed July 22, 2019.
129 Nathaniel Popper, “Regulators Have Doubts About Facebook Cryptocurrency. So Do Its Partners.” New York Times, June
25, 2019, https://www.nytimes.com/2019/06/25/technology/facebook-libra-cryptocurrency.html, accessed July 26, 2019.
130 Nicholas Thompson, “Mark Zuckerberg on Facebook’s Future and What Scares Him Most,” Wired, March 6, 2019,
https://www.wired.com/story/mark-zuckerberg-facebook-interview-privacy-pivot/, accessed July 22, 2019.
131 “Limiting message forwarding on WhatsApp helped slow disinformation,” MIT Technology Review, September 26, 2019,
https://www.technologyreview.com/f/614435/whatsapp-disinformation-message-forwarding-politics-technology-brazil-
india-election/?utm_source=newsletters&utm_medium=email&utm_campaign=the_download.unpaid.engagement, accessed
September 27, 2019.
132 Kurt Wagner, “Instagram posts from Russian meddlers played a much bigger role in the 2016 election than we thought,”
Vox, December 17, 2018, https://www.vox.com/2018/12/17/18144924/instagram-russian-election-interference-social-media-
report-new-knowledge-oxford, accessed September 11, 2019.
133 “Instagram adds tool for users to flag false information,” Reuters, August 15, 2019, https://www.reuters.com/article/us-
usa-facebook-factcheck/instagram-adds-tool-for-users-to-flag-false-information-idUSKCN1V52AA, accessed September 11,
2019.
134 Ezra Klein, “Mark Zuckerberg on Facebook’s hardest year, and what comes next,” Vox, April 2, 2018, https://www.
vox.com/2018/4/2/17185052/mark-zuckerberg-facebook-interview-fake-news-bots-cambridge, accessed September 19, 2019.
135 Ezra Klein, “Mark Zuckerberg on Facebook’s hardest year, and what comes next,” Vox, April 2, 2018, https://www.
vox.com/2018/4/2/17185052/mark-zuckerberg-facebook-interview-fake-news-bots-cambridge, accessed September 19, 2019.
136 Brent Harris, “Global Feedback and Input on the Facebook Oversight Board for Content Decisions,” Facebook Newsroom,
https://newsroom.fb.com/news/2019/06/global-feedback-on-oversight-board/, accessed September 19, 2019.
137 Brent Harris, “Establishing Structure and Governance for an Independent Oversight Board,” Facebook Newsroom,
https://newsroom.fb.com/news/2019/09/oversight-board-structure/, accessed September 19, 2019.
138 “Oversight Board Charter,” Facebook, https://fbnewsroomus.files.wordpress.com/2019/09/oversight_board_charter.pdf,
accessed September 19, 2019.
139 Mark Zuckerberg, “Facebook’s commitment to the Oversight Board,” https://fbnewsroomus.files.wordpress.com/
2019/09/letter-from-mark-zuckerberg-on-oversight-board-charter.pdf, accessed September 19, 2019.
140 “An interview with free speech expert Kate Klonick,” https://galley.cjr.org/public/conversations/-LmeU-
3OaoBEKIrhjgdo, accessed September 19, 2019.
141 “WhatsApp Legal Info,” WhatsApp, https://www.whatsapp.com/legal/#key-updates, accessed August 19, 2019.

142 “How WhatsApp Helps Fight Child Exploitation,” WhatsApp, https://faq.whatsapp.com/en/165022051727702/, accessed
August 19, 2019; “Staying safe on WhatsApp,” WhatsApp, https://faq.whatsapp.com/en/general/21197244, accessed August
19, 2019.
143 Kate Klonick, “The New Governors: The People, Rules, and Processes Governing Online Speech,” Harvard Law Review
131(6) (April 2018): 1638.
144 The Justice Collaboratory, Yale Law School, “Report of The Facebook Data Transparency Advisory Group,” April 2019, p.
34, https://law.yale.edu/sites/default/files/area/center/justice/document/dtag_report_5.22.2019.pdf, accessed August 5,
2019.
145 Jonathan Rauch, “Twitter Needs a Pause Button,” The Atlantic, August 2019,
https://www.theatlantic.com/magazine/archive/2019/08/twitter-pause-button/592762/, accessed August 8, 2019; Justin
Kosslyn, “The Internet Needs More Friction,” Vice, November 16, 2018, https://www.vice.com/en_us/article/3k9q33/the-
internet-needs-more-friction, accessed August 8, 2019.
146 Jonathan Rauch, “Twitter Needs a Pause Button,” The Atlantic, August 2019,
https://www.theatlantic.com/magazine/archive/2019/08/twitter-pause-button/592762/, accessed August 8, 2019.
147 Chris Hughes, “It’s Time to Break Up Facebook,” New York Times, May 9, 2019, https://www.nytimes.com/2019/05/09/
opinion/sunday/chris-hughes-facebook-zuckerberg.html, accessed September 16, 2019.
148 Mark Zuckerberg, Facebook post, July 24, 2019, https://www.facebook.com/zuck/posts/10108280403736331, accessed
July 26, 2019.
