Facebook

Media Brief
Privacy & Disinformation
Concerns
Cambridge Analytica Situation

• In 2007, we launched the Facebook Platform so that applications could be more social

• This enabled people to log into apps and share their friends lists and basic profile information
• In 2013, Cambridge University researcher Aleksandr Kogan created a personality quiz app
• Around 300,000 people shared their own data along with their friends' data
• As a result, Kogan was able to access data on tens of millions of those users' friends
• In 2014, we changed the platform to limit the data accessible to apps
• In 2015, we learned that Kogan had shared data from the app without users' consent
• We asked Cambridge Analytica to delete this user data and to certify that it had done so
• In 2018, we learned from The Guardian and The New York Times that the data had not been deleted as certified
Misuse of platform to influence election campaign

• In spring 2016, Mr. Stamos's team discovered that Russian hackers appeared to be probing the Facebook accounts
of people connected to the presidential campaigns
• The team also found Facebook accounts linked to Russian hackers who were messaging journalists to share
information from the stolen emails
• In late 2016, we decided to expand on Mr. Stamos's work, creating a group called Project P to study false news
on the site
• We have shared our findings with US authorities investigating these issues
• Approximately $100,000 was spent on ads between June 2015 and May 2017 by accounts likely operated from Russia
Steps taken in the Cambridge Analytica case

• In 2014, we had already taken the most important steps to prevent bad actors from accessing people's information
• Apps can no longer access data about a person's friends unless those friends have also authorized the app
• Developers must get approval from Facebook before accessing any sensitive data
• In 2015, we banned Kogan's app from using the Facebook platform
• In 2018, we banned Cambridge Analytica from using our services
• We will investigate all apps that had access to large amounts of information before we changed our platform to dramatically
reduce data access in 2014
• We will conduct a full audit of any app with suspicious activity, and we will ban any developer from our platform who does not
agree to a thorough audit
• If we find developers that misused personally identifiable information, we will ban them and tell everyone affected by those
apps
• We will restrict developers' data access even further to prevent other kinds of abuse. For example, we will remove developers'
access to your data if you haven't used their app in 3 months. We will reduce the data you give an app when you sign in to
only your name, profile photo, and email address. We'll require developers to not only get approval but also sign a contract in
order to ask anyone for access to their posts or other private data. And we'll have more changes to share in the next few days.
• We want to make sure you understand which apps you've allowed to access your data. In the next month, we will show everyone
a tool at the top of your News Feed with the apps you've used and an easy way to revoke those apps' permissions to your data.
We already have a tool to do this in your privacy settings, and now we will put this tool at the top of your News Feed to make
sure everyone sees it.
Steps taken against the spread of misinformation

• Our most important effort is removing fake accounts, which are a major source of misinformation on our services
• Using machine learning, we have removed more than 1 billion fake Facebook accounts
• We have hired more than 10,000 people to work on safety and security and to identify the most sophisticated actors
• We identified that the Internet Research Agency (IRA) has been largely focused on manipulating people in
Russia and other Russian-speaking countries. We took down a network of more than 270 of their pages and
accounts, including the official pages of Russian state-sanctioned news organizations that we determined were
effectively controlled and operated by the IRA.
• We found a network based in Iran with links to Iranian state media that has been trying to spread propaganda in
the US, UK, and Middle East, and we took down hundreds of accounts, pages, and groups, including some
linked to Iran's state-sponsored media.
• We recently took down a network of accounts in Brazil that was hiding its identity and spreading misinformation
ahead of the country's Presidential elections in October.
• Although not directly related to elections, we identified and removed a coordinated campaign in Myanmar by the
military to spread propaganda
• Misinformation is spread in three main ways:

• By fake accounts, including for political motivation (fake-account takedowns are detailed above)
• By spammers, for economic motivation
• In these cases, spammers make up sensationalist claims and push them onto the internet in the hope that
people will click on them, making money off the ads next to the content
• If we make it harder for them to make money, they will typically move on to something else. This is
why we block anyone who has repeatedly spread misinformation from using our ads to make money
• By regular people, who often may not know they're spreading misinformation
• When a post is flagged as potentially false or is going viral, we pass it to independent fact-checkers for review
• Posts rated as false are demoted and lose, on average, 80% of their future views
• In places where viral misinformation may contribute to violence, we now take it down
• We now also require anyone running political or issue ads in the US to verify their identity and location.
• We are coordinating with governments and other companies on these issues