
Table of Contents

Technology companies’ use of personal data is more beneficial than harmful.
How To Use This Evidence Packet
Definitions
Pro
  Education
  Health Care
  Business Profit Increases
  Lowers Consumer Prices
  Customer Experience
  Marketing Strategy
  Increase Data Security
  Economy
  Business Confidence
  Financial Law Enforcement
  Government Efficiency
  Manufacturing
  Aerospace Industry
  Disease Spread
  Crime Prevention
  Counter terrorism
  Big Data Analytics
  Micro Targeting
  Smart Cities
  Data Science Good
  Artificial Intelligence
  IoT/AI Good (Econ)
  IoT/AI Good (Healthcare)
  Data Collection and the IoT
  IoT/AI Good (Privacy)
  AT: Privacy
  AT: Data stolen by hackers
  AT: Tech-giants Monopoly
Con
  Data Brokers Bad
  Technology Monopolies Bad
  Moral reasons for protecting privacy
  Violation of Privacy
  Kills Social Protests
  Political Manipulation
  Data = Surveillance
  Targeted Ads Bad
  Human Dignity
  Data collection is dangerous
  Discrimination
  Influence on Behaviour
  Brexit
  Tax Havens
  UK Link
  USA Link
  Impact
  AT: New Laws Solve
  AT: Health Care
  AT: Lower prices for the consumer
  AT: New systems can handle the data load
  AT: AI Good
  AT: Unconcerned About Privacy
  AT: Data is anonymous

How To Use This Evidence Packet
Looking at an evidence packet for the first time as a new debater can be a harrowing experience.
However, if you understand the purpose of an evidence packet and how to use it, it should not be a
scary one. Whether you have never used an evidence packet before or have struggled to get the best
use out of evidence packets in the past, this introduction will hopefully help you get the most out of
this resource.

What to Read
The first misconception new debaters have about evidence packets is the assumption that they are
meant to be read from front to back. Let me make this clear: YOU DO NOT NEED TO READ EVERY WORD
IN THIS EVIDENCE PACKET. The evidence packet is a resource for you to use, not a reading assignment.
Obviously, the more of this evidence packet you read, the more knowledgeable on the topic you will
become, but success at tournaments is not necessarily correlated with the number of pages read.

The parts of the packet I would recommend all students read are the Topic Overview, the Terms and
Definitions, and perhaps the Status Quo. If you feel you already understand certain terms and
definitions, those can be skipped. The topic overview tries to give historical context for the topic, so if
you feel you already have a good understanding of that history, the topic overview can be skipped as
well. If you already pay attention to the news and know what’s going on in the world regarding
technology companies and personal data, then you may be fine skipping the status quo. The point is
that the evidence packet is here to enhance your understanding and supply you with possible evidence
to use; not everything in its 180+ pages will be useful to you.

How to Read
For the evidence sections of the packet, be sure to make use of the table of contents. It’s best to have a
plan for what you are trying to find before you go looking. If you would just like to browse the evidence,
scan the table of contents first. If you see something interesting, you can flip to that section and
examine it further.

Let’s look at how to read a piece of evidence (also known as a “card”) once you’ve found something
interesting. A card has three main parts:

Tagline: Summarizes the argument the evidence supports.

Citation: Where the evidence came from. It will include the name of the author, the date of publication,
and a URL or other information to find the source online.

Quote: A selection from the original source of evidence.

On the next page is an example of a piece of evidence you might find in this packet with each main part
of the card labeled.

Tagline – Summarizes the argument the evidence supports

Data collection for business use harms new startups and gives large
corporations a huge advantage

Radinsky, Kira. 3-2-2015, "Data Monopolists Like Google Are Threatening the Economy" Harvard
Business Review, https://hbr.org/2015/03/data-monopolists-like-google-are-threatening-the-economy

Citation – Where the evidence came from. It will include the name of the author, the date of publication, and a URL or other
information to find the source online.
The White House recently released a report about the danger of big data in our lives. Its main focus was the same old topic of
how it can hurt customer privacy. The Federal Trade Commission and National Telecommunications and Information
Administration have also expressed concerns about consumer privacy, as have PwC and the Wall Street Journal.

However, big data holds many other risks. Chief among these, in my mind, is the threat to free market competition.

Today, we see companies building their IP not solely on technology, but rather on proprietary data and its derivatives. As ever-
increasing amounts of data are collected by businesses, new opportunities arise to build new markets and products based on
this data. This is all to the good. But what happens next? Data becomes the barrier-to-entry to the market and thus prevents
new competitors from entering. As a result of the established player’s access to vast amounts of proprietary data, overall
industry competitiveness suffers. This hurts the economy.

The search market is a perfect example of data as an unfair barrier-to-entry. Google revolutionized the search market in 1996
when it introduced a search-engine algorithm based on the concept of website importance — the famous PageRank algorithm.
But search algorithms have significantly evolved since then, and today, most of the modern search engines are based on
machine learning algorithms combining thousands of factors — only one of which is the PageRank of a website. Today, the
most prominent factors are historical search query logs and their corresponding search result clicks. Studies show that the
historical search improves search results up to 31%. In effect, today’s search engines cannot reach high-quality results without
this historical user behavior.

This creates a reality in which new players, even those with better algorithms, cannot enter the market and compete with the
established players, with their deep records of previous user behavior. The new entrants are almost certainly doomed to fail.
This is the exact challenge Microsoft faced when it decided to enter the search market years after Google – how could it build a
search technology with no past user behavior? (Disclosure: I previously worked as a researcher at Microsoft, but had nothing to
do with Bing.) The solution came one year later when they formed an alliance with Yahoo search, gaining access to their years
of user search behavior data. But Bing still lags far behind Google.

This dynamic isn’t limited only to internet search. Given the importance of data to every industry, data-based barriers to entry
can affect anything from agriculture, where equipment data is mined to help farms improve yields, to academia, where school
performance and census data is mined to improve education. Even in medicine, hospitals specializing in certain diseases
become the sole owners of the medical data that could be mined for a potential cure.

While data monopolies hurt both small start-ups and large, established companies, it’s the biggest corporate players who have
the biggest data advantage. McKinsey calculates that in 15 out of 17 sectors in the U.S. economy, companies with more than
1,000 employees store, on average, over 235 terabytes of data—more data than is contained in the entire US Library of
Congress. Data is a strategy

Quote – A selection from the original source of evidence.


How to use the card
When you find a card you like, you can do several things with it. You can use this evidence in a case or
prepare it as part of a block to answer an opponent’s argument. You can also choose to keep the card as
is or modify it to create a better card for your purposes. For example, the tagline was written by
someone else; maybe you think it isn’t clear or doesn’t represent the card well, or maybe you just want
to change it to be more consistent with the way you talk. You can change the tagline you use when
presenting the evidence.

The citation is included so that you can dive deeper into a piece of evidence. Maybe you like the card,
and you think the original source might have more great cards to make out of it. Maybe you want to
read the rest of the article to better understand the methodology of the source, or you are suspicious
that the rest of the source might have some contradictory quotes. Looking up the original source and
reading the whole document is a great way to truly understand evidence and be prepared to argue for
or against it.

Many cards in an evidence packet will have certain parts of the quote underlined or highlighted. This is
the portion of the quote that is most relevant to the tagline. But underlining or highlighting the most
important parts of a quote is inherently a subjective practice. If you really like a card and want to use
it, be sure to read around the underlined or highlighted parts. There may be a great sentence you are
leaving out! On the other hand, maybe there is an underlined or highlighted portion that you think is
unnecessary to read. If you are going to use a card, it’s always best to make it your own.

In conclusion
No evidence packet is ever complete or authoritative on its own. There may be good arguments this
evidence packet does not provide evidence for. There may be arguments in this evidence packet that
won’t be successful in a tournament. The most important thing is to make sure you use this evidence
packet as a resource to supplement your voice and your arguments. The most successful debaters will
always be the ones that run arguments that they have found they can be persuasive with. Those
arguments may or may not come from an idea generated in an evidence packet. I hope this evidence
packet will be helpful for you on its own, but if you feel confused by something in the packet don’t
forget to communicate with your partner, team and coaches. Always make use of the resources around
you and never be afraid to ask questions. Good luck this season!

Definitions
“On Balance”

With all things considered.

Merriam-Webster, 2016, Balance, http://www.merriam-webster.com/dictionary/on%20balance

“Technology Company”

Guzzetta, Marli. 2009. “Why Even a Salad Chain Wants to Call Itself a Tech Company.” Inc. Magazine.
Retrieved 2018-07-09.

A technology company (often tech company) is a type of business entity that focuses mainly on the
development and manufacturing of technology products or providing technology as a service.

“Personal Data”

Personal information or data is information or data that is linked or can be linked to individual persons.
Examples include explicitly stated characteristics such as a person’s date of birth, sexual preference,
whereabouts, religion, but also the IP address of your computer or metadata pertaining to these kinds of
information. In addition, personal data can also be more implicit in the form of behavioral data, for
example from social media, that can be linked to individuals. Personal data can be contrasted with data
that is considered sensitive, valuable or important for other reasons, such as secret recipes, financial
data, or military intelligence. Data used to secure other information, such as passwords, are not
considered here. Although such security measures (passwords) may contribute to privacy, their
protection is only instrumental to the protection of other (more private) information, and the quality of
such security measures is therefore out of the scope of our considerations here.

General Data Protection Regulation, Article 4(1) – https://gdpr-info.eu/issues/personal-data/

Personal data are any information which are related to an identified or identifiable natural person.

The data subjects are identifiable if they can be directly or indirectly identified, especially by reference
to an identifier such as a name, an identification number, location data, an online identifier or one of
several special characteristics, which expresses the physical, physiological, genetic, mental, commercial,
cultural or social identity of these natural persons.

“Benefits”

A good result or effect.

Merriam-Webster, 2016, Benefits, http://www.merriam-webster.com/dictionary/benefits

• a good or helpful result or effect
• money that is paid by a company (such as an insurance company) or by a government when someone
dies, becomes sick, stops working, etc.
• something extra (such as vacation time or health insurance) that is given by an employer to workers
in addition to their regular pay

“Big Data”

Gartner, Accessed 1/19/2020 https://www.gartner.com/en/information-technology/glossary/big-data

High-volume, high-velocity and/or high-variety information assets that demand cost-effective,
innovative forms of information processing that enable enhanced insight, decision making, and process
automation.

“AI”

Government Office for Science. Artificial intelligence: opportunities and implications for the future of
decision making. 9 November 2016.

The analysis of data to model some aspect of the world. Inferences from these models are then used to
predict and anticipate possible future events.

“Machine Learning”

Landau, Deb. Artificial Intelligence and Machine Learning: How Computers Learn. iQ, 17 August 2016.
https://iq.intel.com/artificial-intelligence-and-machine-learning/ Accessed 7 December 2016.

The set of techniques and tools that allow computers to “think” by creating mathematical algorithms
based on accumulated data.

“Big Data Analytics”

Information Commissioners Office. “Big Data, Artificial Intelligence, Machine Learning and ...,” April 9,
2017. https://ico.org.uk/media/for-organisations/documents/2013559/big-data-ai-ml-and-data-
protection.pdf.

The combination of all three concepts of Big Data, AI and Machine Learning can be called ‘big data
analytics’.

“Microtargeting”

https://en.wikipedia.org/wiki/Microtargeting

Microtargeting includes direct-marketing data-mining techniques that involve predictive market
segmentation (also known as cluster analysis).

“GDPR”

The General Data Protection Regulation (EU) 2016/679 (GDPR) is a regulation in EU law on data
protection and privacy for all individual citizens of the European Union (EU) and the European Economic
Area (EEA).

“IP addresses”

An Internet Protocol address (IP address) is a numerical label assigned to any device (such as a personal
computer) that uses the internet. Any time a user views a website, their IP address makes a request to
that website’s server for information. Websites find IP addresses useful because they allow the website
to track the movement of one computer across a website over time, by recording the series of requests
for websites that particular computer makes. IP addresses are either dynamic or static. Dynamic IP
addresses change every time an internet connection is reset. Static IP addresses are persistently
configured and stay the same over time. Static IP addresses allow even longer-term tracking, because
they allow websites more easily to recognize a returning customer.

“Web-bugs”

Web-bugs are 1x1-pixel pieces of code that allow advertisers to track customers remotely. These are
also sometimes referred to as “beacons”, “action tags”, “clear GIFs”, “Web tags”, or “pixel tags” (Gilbert,
2008). Web-bugs are different from cookies, because they are designed to be invisible to the user and
are not stored on a user’s computer. With web-bugs, a customer cannot know whether they are
being tracked without inspecting a webpage’s underlying HTML code. Web-bugs allow advertisers to
track customers as they move from one webpage to another. They also allow advertisers to document
how far a website visitor scrolls down a page. Combined, this means they are very helpful in determining
website visitor interests. Murray and Cowart (2001) found that 96 percent of websites that mentioned a
top 50 brand (as determined by the 2000 FT rankings) had a web-bug.
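The mechanism described above can be sketched in a few lines. The tracker domain and parameter names below (`tracker.example.com`, `uid`, `page`) are hypothetical, chosen only to illustrate how an invisible 1x1 image can carry identifying information in the URL the browser requests:

```python
from urllib.parse import urlencode, urlparse, parse_qs

# Hypothetical tracker endpoint -- a real advertiser would host this themselves.
TRACKER = "https://tracker.example.com/pixel.gif"

def web_bug_tag(user_id: str, page: str) -> str:
    """Build the HTML for an invisible 1x1 tracking pixel.

    The image itself carries no content; the information is in the
    query string of the request the browser makes to fetch it.
    """
    query = urlencode({"uid": user_id, "page": page})
    return f'<img src="{TRACKER}?{query}" width="1" height="1" alt="">'

tag = web_bug_tag("u-42", "/products/shoes")
# The tracker's server log would then record which user viewed which page:
params = parse_qs(urlparse(tag.split('"')[1]).query)
print(params["uid"][0], params["page"][0])  # prints: u-42 /products/shoes
```

Because the pixel is one invisible dot, nothing appears on the page; only the server receiving the image request learns anything.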

“Cookies”

A cookie is simply a string of text stored by a user’s web browser. Cookies allow firms to track
customers’ progress across browsing sessions. This can also be done using a user’s IP address, but
cookies are generally more precise, especially when IP addresses are dynamic, as is the case for many
residential internet services. Advertisers may also use a “flash cookie” as an alternative to a regular
cookie. A flash cookie differs from a regular cookie in that it is saved as a “Local Shared Object” on an
individual’s computer, making it harder for users to delete using the regular tools in their browser.
Advertisers tend to use cookies and web-bugs in conjunction because of the challenge of customer
deletion of cookies: 38.4 percent of survey respondents say that they delete cookies each month (Burst,
2003). Therefore, web-bugs (which a user cannot avoid) have been increasingly used in conjunction
with, or even in place of, cookies in targeted advertising. Web-bugs also have greater reach in terms of
tracking ability than cookies, because they can be used to track consumers’ scrolling within a webpage.

“Click-stream data”

A click-stream is a series of webpage requests. “Click-stream data” refers to the collection of data that
describes the browsing habits and actions of a particular customer. Typically, a customer is identified by
a cookie (if available) or by an IP address if not. Web-bugs are used in conjunction with these
webpage-level data to determine precisely where on a webpage a customer browsed.
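As a rough illustration of this definition, the sketch below (using made-up cookie IDs, IP addresses, and URLs) groups a request log into per-visitor click-streams, identifying each visitor by cookie when one is available and falling back to the IP address otherwise:

```python
from collections import defaultdict

# Hypothetical request log: (cookie_id or None, ip_address, url_requested)
requests = [
    ("c-1", "203.0.113.5",  "/home"),
    ("c-1", "203.0.113.5",  "/pricing"),
    (None,  "198.51.100.7", "/home"),
    (None,  "198.51.100.7", "/contact"),
]

def click_streams(log):
    """Group page requests into per-visitor click-streams.

    A visitor is identified by cookie if one is present, and by
    IP address otherwise, as the definition above describes.
    """
    streams = defaultdict(list)
    for cookie, ip, url in log:
        visitor = cookie if cookie is not None else ip
        streams[visitor].append(url)
    return dict(streams)

print(click_streams(requests))
# {'c-1': ['/home', '/pricing'], '198.51.100.7': ['/home', '/contact']}
```

Each resulting list is one customer’s click-stream: the ordered trail of pages that advertisers mine for interests.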

“Deep packet inspection”

There are also other, even more comprehensive, ways of obtaining user browsing behavior. One such
technique is “deep-packet inspection”. This occurs when an internet service provider inspects the
content of the data packets that are sent between one of its clients and websites. This technique was
used by Phorm, an advertising agency in the UK, in partnership with internet service providers, to target
ads. Since deep-packet inspection is typically done at the IP level, it captures a universal history of
browser behavior. This is different from the collection of click-stream data, where users are tracked
across a subset of websites only. Researchers such as Clayton (2008) have argued that this is akin to
“warrantless wiretapping”, because theoretically the firm can observe the content of private
communications.

“Data brokers”

A data broker, also called an information broker or information reseller, is a business that collects
personal information about consumers and sells that information to other organizations.

Data brokers can collect information about consumers from a variety of public and non-public sources
including courthouse records, website cookies and loyalty card programs. Typically, brokers create
profiles of individuals for marketing purposes and sell them to businesses who want to target their
advertisements and special offers.

"Anonymisation"

"Anonymisation" of data means processing it with the aim of irreversibly preventing the identification of
the individual to whom it relates. Data can be considered effectively and sufficiently anonymised if it
does not relate to an identified or identifiable natural person or where it has been rendered
anonymous in such a manner that the data subject is not or no longer identifiable.

“Pseudonymization”

"Pseudonymisation" of data means replacing any identifying characteristics of data with a pseudonym,
or, in other words, a value which does not allow the data subject to be directly identified.
The GDPR and the Data Protection Act 2018 define pseudonymisation as the processing of personal data in such a manner that
the personal data can no longer be attributed to a specific data subject without the use of additional information, provided that
(a) such additional information is kept separately, and (b) it is subject to technical and organisational
measures to ensure that the personal data are not attributed to an identified or identifiable individual.

Although pseudonymisation has many uses, it should be distinguished from anonymisation: in many
cases it provides only limited protection for the identity of data subjects, as it still allows identification
by indirect means. Where a pseudonym is used, it is often possible to identify the data subject by
analysing the underlying or related data.
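A minimal sketch of the idea, with invented field names, might look like this in Python. The distinction the GDPR definition draws is visible in the code: the pseudonymised records identify no one on their own, but anyone holding the separately stored lookup table can reverse the process:

```python
import secrets

def pseudonymise(records, key_field):
    """Replace an identifying field with a random pseudonym.

    Returns the pseudonymised records plus a separate lookup table.
    Re-identification is possible only with that separately held
    table -- which is why pseudonymised data is still personal data,
    unlike properly (irreversibly) anonymised data.
    """
    lookup = {}  # pseudonym -> real identity; must be stored separately
    out = []
    for rec in records:
        token = "p-" + secrets.token_hex(4)
        lookup[token] = rec[key_field]
        out.append({**rec, key_field: token})
    return out, lookup

records = [{"name": "Alice", "purchase": "book"},
           {"name": "Bob",   "purchase": "phone"}]
pseudo, lookup = pseudonymise(records, "name")
# Without `lookup`, the pseudonymised records no longer identify anyone;
# with it, each record can still be attributed to a specific person.
assert lookup[pseudo[0]["name"]] == "Alice"
```

True anonymisation, by contrast, would discard the lookup table entirely (and still needs care, since the remaining fields may identify someone indirectly).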

Pro

Education

Colleges are paying data collection companies to track students to maximize
efficiency and improve student experience and graduation rates

Young, Jeffrey R., 10-18-2019, "How Tech Companies Are Selling Colleges on Mass Data Collection,"
EdSurge, https://www.edsurge.com/news/2019-10-18-how-tech-companies-are-selling-colleges-on-
mass-data-collection

Big Data will save you. Versions of that sales pitch echoed through the cavernous exhibit hall this week
at one of the largest trade shows for tech companies selling to colleges.

Though each of the more than 275 companies exhibiting here at the annual meeting of Educause
claimed a unique spin, the typical refrain mixed inspiration and fear, and went something like this: “Our
tech system will help your students finish their degrees and save them (and you) money,” and “Oh by
the way, if you don’t use something like our product, you won’t retain enough current and/or recruit
enough new students to stay in business.”


Bold, half-foot tall letters on one company’s display claimed its software had helped a college bring in
more than $1 million in new student revenue and led another to a 53 percent increase in new students.
Another exhibit promised “no more silos” in the data that colleges routinely collect in digital form.

If colleges actually bought all the tools sold here, just about every move made by students and
professors in physical and virtual campuses would be tracked and analyzed in the name of efficiency.
And the vision expands beyond that: creating data profiles of students before they even arrive on
campus and continuing data tracking long after they’ve graduated.

One company, for instance, sells a chatbot for a college’s admissions website that can answer questions
from applicants and then carefully log each interaction for follow-up, while another promises to provide
colleges a high-tech geographic map of their alumni, overlayed with income levels and indications of
how likely they are to donate.

Educause threw its weight behind the large-scale use of data this summer, when it issued a joint
statement with the National Association of College and University Business Officers and the Association
for Institutional Research titled “Analytics Can Save Higher Education. Really.”

“We strongly believe that using data to better understand our students and our own operations paves
the way to developing new, innovative approaches for improved student recruiting, better student
outcomes, greater institutional efficiency and cost-containment, and much more,” the statement reads.

“With the change-making capacity of analytics, we should be moving aggressively forward to harness
the power of these new tools for the success of our institutions and our students.”

Even at this conference, however, plenty of college officials expressed concerns about making sure
student privacy is protected as Big Data comes to campus. (For more on that, see our related coverage
from Educause).

College officials arrived at this week’s conference feeling plenty of pressure to save money and improve
student success, of course. Demographic changes mean there will be far fewer high school graduates in
coming years for colleges to attract. Many states are cutting back their support for public universities
and asking more questions about how colleges operate. And tuition costs have soared to the point
where more families feel priced out, leading more to ask whether college is worth it.

“People are looking for us to be making moves to make the university more fiscally sound.”

—Keith Hill, director of technology operations and infrastructure at the University of Southern
Mississippi

Data collection can prevent drop-outs

Information Commissioner’s Office “Big data, artificial intelligence, machine learning and data
protection”

Learning analytics in higher education (HE) involves the combination of ‘static data’ such as traditional
student records with ‘fluid data’ such as swipe card data from entering campus buildings, using virtual
learning environments (VLEs) and downloading e-resources. The analysis of this information can reveal
trends that help to improve HE processes, benefiting both staff and students. Examples include the
following:

• Preventing drop-out via early intervention with students who are identified as disengaged from their studies by analyzing VLE login and campus attendance data.

• The ability for tutors to provide high-quality, specific feedback to students at regular intervals (as opposed to having to wait until it is ‘too late’ – after an exam for instance). The feedback is based on pictures of student performance gleaned from analysis of data from all the systems used by a student during their study.

• Increased self-reflection by students and a desire to improve their performance based on access to their own performance data and the class averages.

• Giving students shorter, more precise lecture recordings based on data analysis that revealed patterns regarding the parts of full lecture recordings that were repeatedly watched (assessment requirements, for example).
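The early-intervention idea above can be sketched as a simple rule: flag students whose recent VLE logins and campus swipes both fall below a threshold. This is a hypothetical illustration; the thresholds and the names (`StudentActivity`, `flag_disengaged`) are assumptions, and a real system would calibrate the cutoffs against historical outcomes rather than hard-code them.

```python
from dataclasses import dataclass

@dataclass
class StudentActivity:
    student_id: str
    vle_logins_last_14_days: int
    campus_swipes_last_14_days: int

# Illustrative thresholds only; a deployed system would tune these
# against historical retention data.
MIN_LOGINS = 3
MIN_SWIPES = 2

def flag_disengaged(records):
    """Return IDs of students whose recent VLE and campus activity
    both fall below the engagement thresholds."""
    return [
        r.student_id
        for r in records
        if r.vle_logins_last_14_days < MIN_LOGINS
        and r.campus_swipes_last_14_days < MIN_SWIPES
    ]
```

A tutor-facing dashboard would then surface the flagged IDs for early outreach, well before exam results arrive.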

Colleges need to maximize limited funds. Data collection allows them to achieve
the information points necessary for student improvement at a low cost

Young, Jeffrey R., 10-18-2019, "How Tech Companies Are Selling Colleges on Mass Data Collection,"
EdSurge, https://www.edsurge.com/news/2019-10-18-how-tech-companies-are-selling-colleges-on-
mass-data-collection

“We’ve gone through years of budget cuts,” said Keith Hill, director of technology operations and
infrastructure at the University of Southern Mississippi, which has been working to build a “data
warehouse” that pulls in information from various systems on campus so that officials have a clearer
picture of what is happening with their students. “People are looking for us to be making moves to make
the university more fiscally sound,” he added.

Southern Mississippi’s efforts are a good example of the reality on many campuses starting to get into
Big Data. They’re not looking to track every move a student makes, but they want to make better
decisions about things like how much financial aid to award a prospective student so that it is enough to
make attendance possible but not so much that there isn’t enough left for more-needy students.

Darren Catalano, CEO of HelioCampus, the system used by University of Southern Mississippi’s data
warehouse, said that one goal of creating a central home of all the data that colleges already produce is
to build a shared set of facts for leaders across campus to refer to when making decisions.

“You can skew your analytics if your department is producing them,” he said.

Leaders at Educause have embraced the term “digital transformation” to talk about the data-infused
campuses they envision in the near future.

Data collection can track student use of buildings to maximize building hours
and potentially spot students who need financial or educational help

Young, Jeffrey R., 10-18-2019, "How Tech Companies Are Selling Colleges on Mass Data Collection,"
EdSurge, https://www.edsurge.com/news/2019-10-18-how-tech-companies-are-selling-colleges-on-
mass-data-collection

We walked the exhibit hall here looking for new ways that companies are offering this Big-Data-driven
transformation. Among them...

Track student use of gyms, dining halls and other buildings

A company called Degree Analytics made its pitch this year in the Startup Alley of the exhibit hall,
offering a way to use student connections to campus WiFi networks to analyze how often students
attend class, go to the gym or enter the dining hall or other campus facilities.

Marc Speed, vice president for partner success at the company, said that its mission is to “increase
graduation and reduce financial debt.” But in practical terms, the company logs and analyzes every time
students connect their smartphones, laptops or other mobile devices with campus wireless networks to
spot patterns and notice when, say, a student stops coming to meals or attending class.

“On Thursday, you get Wednesday’s class attendance,” he said, “and an alert goes out to advisers so
they have more time to spend with students.”

And if colleges knew that a student suddenly stopped entering the dining hall, he said, they could look to
see if the student had run out of money for meals and might need some sort of emergency aid.

Speed said that colleges have long had the ability to use WiFi to collect data on where students go on
campus, though most haven’t done anything with it.

“What wireless allows you to do is go from a reactive approach of ‘oh no, this student just failed a quiz
or they have failing grades at midterms or finals,’ to, ‘I understand that this is something that a student
is struggling with right now, and I can reach out proactively to get ahead of the problem,’” he said.
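The alerting workflow Speed describes (log WiFi connections, then notify advisers when a student stops appearing somewhere) can be sketched roughly as follows. This is not Degree Analytics’ actual system; the log format, the "dining_hall" location label and the three-day threshold are illustrative assumptions.

```python
from datetime import date

def days_since_last_seen(connection_log, student_id, location, today):
    """Days since the student's device last joined WiFi at `location`;
    None if the student never appears in the log for that location."""
    dates = [
        d for (sid, loc, d) in connection_log
        if sid == student_id and loc == location
    ]
    if not dates:
        return None
    return (today - max(dates)).days

def dining_alerts(connection_log, student_ids, today, threshold_days=3):
    """Flag students whose devices haven't connected in the dining hall
    for `threshold_days` or more, so an adviser can follow up."""
    flagged = []
    for sid in student_ids:
        gap = days_since_last_seen(connection_log, sid, "dining_hall", today)
        if gap is None or gap >= threshold_days:
            flagged.append(sid)
    return flagged
```

The same gap calculation applied to classroom access points would produce the next-day class-attendance alerts Speed mentions.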

Students have the ability to opt out. No privacy invasion

Young, Jeffrey R., 10-18-2019, "How Tech Companies Are Selling Colleges on Mass Data Collection,"
EdSurge, https://www.edsurge.com/news/2019-10-18-how-tech-companies-are-selling-colleges-on-
mass-data-collection

Will students find it creepy or invasive?

Not if colleges make it clear to students what is happening and give them a chance to opt in or opt out,
argued Speed.

“Our opt-out rates are extremely low because we communicate it very effectively,” he added, noting
that the system is in use at 15 campuses so far. The colleges have been helpful in setting clear
“guardrails” for privacy, he added.

The company is moving “aggressively,” he said, to expand to more campuses next year.

Data collection can track student progress in class and on major projects.
Provides a helpful guide for academic success

Young, Jeffrey R., 10-18-2019, "How Tech Companies Are Selling Colleges on Mass Data Collection,"
EdSurge, https://www.edsurge.com/news/2019-10-18-how-tech-companies-are-selling-colleges-on-
mass-data-collection

Keep tabs on students writing research papers

Another company exhibiting in the Startup Alley here hopes to digitize the process of student research
papers. The goal is to identify when students are struggling so that professors or TAs can intervene
before it’s too late.

A former computer-science professor from Ireland, Keith W. Maycock, started a company called
NetSearch after being frustrated by his experience overseeing student research.

“Normally you ask the students how they’re getting along, and the answer is, ‘fine,’” he said.

The traditional way to spot problems is to notice that the student stopped showing up for check-ins or
when they turn in a poor paper at the end. With the NetSearch system, the students log their progress
in a space that professors can track. And it includes a customized search engine that can be synced to a
university library to help students home in on important research papers in their field (and professors
can also get reports on how often students are using the search engine to make sure they’re on track).

He says the system makes clear to students that they are being monitored, but that they are happy to
do it because they can get quicker feedback on their progress.

Colleges can use data collection to track the best-performing professors,
maximizing student achievement and minimizing pay loss

Young, Jeffrey R., 10-18-2019, "How Tech Companies Are Selling Colleges on Mass Data Collection,"
EdSurge, https://www.edsurge.com/news/2019-10-18-how-tech-companies-are-selling-colleges-on-
mass-data-collection

Monitor the teaching performance of adjuncts in online courses

Just a few booths away from NetSearch, a company called EdifyOnline presented its effort to create a
digital platform to help adjunct professors find teaching gigs and to help colleges find highly-qualified
adjuncts.

Vik Agarwal, the company’s co-founder, was quick to reel off the facts about how big the issue of adjunct teaching has become, noting that U.S. higher education employs between 750,000 and a million adjuncts each year, and that more than half of all teaching is now done by adjuncts.

“So what we’re trying to do is to consolidate this marketplace and create value on both sides,” he said,
referring to colleges and adjuncts.

The system is designed for online teaching gigs—which, to be clear, is a smaller subset of that big pool of
adjuncts. And the system is more than just a marketplace. It asks participating colleges to run its
software so that the performance of the adjuncts can be measured and used by colleges to make hiring
decisions.

As Agarwal put it, “that gives us the ability to accumulate some data on what is effective online
education pedagogy. The instructors are 1099 employees of EdifyOnline. So they access the courses
through technology we developed.” He added that his company provides training and technical support
to adjuncts it works with.

The company takes a cut of the adjunct pay (which is notoriously low to begin with). That led one
adjunct professor to criticize the idea during a pitch competition for the startup companies exhibiting at
the conference, saying it was taking away resources from those who can least afford it.

“Our goal is not to pay them less, our goal is to find high-performing adjuncts and get them more
opportunities,” Agarwal said. “The good thing here is they can access opportunities from multiple
institutions in one central place instead of them having to individually pursue all of them.” Which, he
added, could let them find “five to 10 courses they can teach at the same time.”

“The entire goal,” he added, “is to increase the availability and scalability of online education.”

Collecting student data is an important asset in schooling

Education World 09/02/2015, Nicole Gorman “Why Collecting Student Data is Important to Student
Achievement?” https://www.educationworld.com/a_news/why-collecting-student-data-important-
student-achievement-1284123462

According to Aimee Rogstad Guidera, CEO of Data Quality Campaign, collecting student data is an
important factor in increasing student achievement despite frequent controversy over privacy and
security concerns.

Educational data collected is noninvasive

Kennedy, Joseph, CED, “Big Data’s Economic Impact,” https://www.ced.org/blog/entry/big-datas-economic-impact

Such new use of data has the capacity to transform every industry in similar ways. A recent OECD report
listed some of the ways that more and better data will affect the economy:

• Producing new goods and services, such as the Nest home thermometer or mass customized shoes;

• Optimizing business processes;

• More-targeted marketing that injects customer feedback into product design;

• Better organizational management; and

• Faster innovation through a shorter research and development cycle.

Health Care

Data collection in healthcare allows medical providers to collect multiple points of information, store and transmit it to others in the field, and use that data to provide the best care to patients for their benefit

SAKOVICH, NATALLIA. 4-9-2019, "The Importance of Data Collection in Healthcare and Its Benefits," SaM
Solutions, https://www.sam-solutions.com/blog/the-importance-of-data-collection-in-healthcare/

Prevention Strategies

Accurate information is a powerful tool not only for commerce but also for industrial and social spheres.
The value of timely and precise information is that it can be used to create prevention strategies. Here
are some examples.

In a municipality: online traffic and car accident monitoring in a city in order to map out alternative
routes for drivers and prevent them from getting stuck in traffic

In manufacturing: continuous control of equipment and spare parts in order to repair or replace them
before they break down, thus avoiding downtime

In medicine: monitoring of patient health conditions in order to provide adequate treatment and
prevent the deterioration of health

What Is the Impact of Data Collection in Healthcare?

In the healthcare sector, we can find the best examples of how data tracking and analysis change the
world for the better. The use of Big Data in medicine is motivated by the necessity to solve both local
organizational issues, such as reducing workloads and increasing profits of a medical agency, and the
global problems of humanity, such as forecasting epidemics and combating existing diseases more
efficiently.

Data collection in healthcare allows health systems to create holistic views of patients, personalize
treatments, advance treatment methods, improve communication between doctors and patients, and
enhance health outcomes. Let’s take a closer look at some case studies.

Predictive Capabilities of EHR

A personal electronic health record (EHR) is a system that collects information about the patient’s health
from a number of sources. An EHR includes test results, clinical observations, diagnoses, current health
problems, medications taken by the patient, the procedures he/she underwent, etc.

This type of medical card is able to send notifications to patients about the need to undergo a new test
or to ensure compliance with drug prescriptions. This is a vivid example of predictive analytics in healthcare. By using the full scope of data from digital medical records, doctors can establish a link between fundamentally different symptoms, give an accurate diagnosis and provide adequate treatment.

The main benefits of an EHR are security and the comprehensiveness of patient information. How
popular are these medical documents? According to a 2018 Statista survey, 44% of US adult respondents have accessed their EHR, while another 18% have one but have not accessed it. Only 6% of respondents opted out of having an EHR.

The most effective way to implement data management in healthcare is to create a centralized system
of electronic medical records. In the European Union, this system is supposed to become reality by
2020.

Data collection in healthcare can result in better preventive care

SAKOVICH, NATALLIA. 4-9-2019, "The Importance of Data Collection in Healthcare and Its Benefits," SaM
Solutions, https://www.sam-solutions.com/blog/the-importance-of-data-collection-in-healthcare/

The spectrum of data applications in the field of medicine should systematically expand because data
analysis has every chance to change people’s lives for the better. Information technologies make it
possible both to identify diseases of an individual and to predict the state of health of entire social
groups. Therefore, implementing Big Data into healthcare is the key to developing preventive measures
and saving lives. As they say, prevention is even better than a cure.

Business Profit Increases

Data collection can be used to set prices, maximizing profits for businesses and
eliminating waste

Baker, Walter. "Using big data to make better pricing decisions," McKinsey & Company,
https://www.mckinsey.com/business-functions/marketing-and-sales/our-insights/using-big-data-to-
make-better-pricing-decisions

It’s hard to overstate the importance of getting pricing right. On average, a 1 percent price increase
translates into an 8.7 percent increase in operating profits (assuming no loss of volume, of course). Yet
we estimate that up to 30 percent of the thousands of pricing decisions companies make every year fail
to deliver the best price. That’s a lot of lost revenue. And it’s particularly troubling considering that the
flood of data now available provides companies with an opportunity to make significantly better pricing
decisions. For those able to bring order to big data’s complexity, the value is substantial.

We’re not suggesting it’s easy: the number of customer touchpoints keeps exploding as digitization fuels
growing multichannel complexity. Yet price points need to keep pace. Without uncovering and acting on
the opportunities big data presents, many companies are leaving millions of dollars of profit on the
table. The secret to increasing profit margins is to harness big data to find the best price at the
product—not category—level, rather than drown in the numbers flood.

Too big to succeed

For every product, companies should be able to find the optimal price that a customer is willing to pay.
Ideally, they’d factor in highly specific insights that would influence the price—the cost of the next-best
competitive product versus the value of the product to the customer, for example—and then arrive at
the best price. Indeed, for a company with a handful of products, this kind of pricing approach is
straightforward.

It’s more problematic when product numbers balloon. About 75 percent of a typical company’s revenue
comes from its standard products, which often number in the thousands. Time-consuming, manual
practices for setting prices make it virtually impossible to see the pricing patterns that can unlock value.
It’s simply too overwhelming for large companies to get granular and manage the complexity of these
pricing variables, which change constantly, for thousands of products. At its core, this is a big data issue
(exhibit).

Patterns in the analysis highlight opportunities for differentiated pricing at a customer-product level, based on willingness to pay.


Many marketers end up simply burying their heads in the sand. They develop prices based on simplistic
factors such as the cost to produce the product, standard margins, prices for similar products, volume
discounts, and so on. They fall back on old practices to manage the products as they always have or cite
“market prices” as an excuse for not attacking the issues. Perhaps worst of all, they rely on “tried and
tested” historical methods, such as a universal 10 percent price hike on everything.

“What happened in practice then was that every year we had price increases based on scale and
volume, but not based on science,” says the head of sales operations at a multinational energy
company. “Our people just didn’t think it was possible to do it any other way. And, quite frankly, our
people were not well prepared to convince our customers of the need to increase prices.”

Four steps to turn data into profits

The key to better pricing is understanding fully the data now at a company’s disposal. It requires not
zooming out but zooming in. As Tom O’Brien, group vice president and general manager for marketing
and sales at Sasol, said of this approach, “The [sales] teams knew their pricing, they may have known
their volumes, but this was something more: extremely granular data, literally from each and every
invoice, by product, by customer, by packaging.”

In fact, some of the most exciting examples of using big data in a B2B context actually transcend pricing
and touch on other aspects of a company’s commercial engine. For example, “dynamic deal scoring”
provides price guidance at the level of individual deals, decision-escalation points, incentives,
performance scoring, and more, based on a set of similar win/loss deals. Using smaller, relevant deal
samples is essential, as the factors tied to any one deal will vary, rendering an overarching set of deals
useless as a benchmark. We’ve seen this applied in the technology sector with great success—yielding
increases of four to eight percentage points in return on sales (versus same-company control groups).

To get sufficiently granular, companies need to do four things.

Listen to the data. Setting the best prices is not a data challenge (companies generally already sit on a
treasure trove of data); it’s an analysis challenge. The best B2C companies know how to interpret and
act on the wealth of data they have, but B2B companies tend to manage data rather than use it to drive
decisions. Good analytics can help companies identify how factors that are often overlooked—such as
the broader economic situation, product preferences, and sales-representative negotiations—reveal
what drives prices for each customer segment and product.

Automate. It’s too expensive and time-consuming to analyze thousands of products manually.
Automated systems can identify narrow segments, determine what drives value for each one, and
match that with historical transactional data. This allows companies to set prices for clusters of products
and segments based on data. Automation also makes it much easier to replicate and tweak analyses so
it’s not necessary to start from scratch every time.
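As a rough sketch of the "listen" and "automate" steps above, historical transactions can be grouped by product and customer segment, with a percentile of accepted prices serving as a crude stand-in for willingness to pay. The grouping key, the 75th-percentile rule and the function name are illustrative assumptions, not McKinsey's actual methodology.

```python
def recommend_prices(transactions):
    """Group accepted transactions by (product, customer_segment) and
    recommend the 75th-percentile accepted price for each cluster,
    a simple proxy for segment-level willingness to pay."""
    groups = {}
    for product, segment, price in transactions:
        groups.setdefault((product, segment), []).append(price)
    recommendations = {}
    for key, prices in groups.items():
        prices.sort()
        idx = int(0.75 * (len(prices) - 1))  # nearest-rank 75th percentile
        recommendations[key] = prices[idx]
    return recommendations
```

Running this over every invoice line yields one recommended price per product-segment cluster instead of one blanket increase across the catalog, which is the granularity the article argues for.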

Build skills and confidence. Implementing new prices is as much a communications challenge as an
operational one. Successful companies overinvest in thoughtful change programs to help their sales
forces understand and embrace new pricing approaches. Companies need to work closely with sales
reps to explain the reasons for the price recommendations and how the system works so that they trust
the prices enough to sell them to their customers. Equally important is developing a clear set of
communications to provide a rationale for the prices in order to highlight value, and then tailoring those
arguments to the customer. Intensive negotiation training is also critical for giving sales reps the
confidence and tools to make convincing arguments when speaking with clients. The best leaders
accompany sales reps to the most difficult clients and focus on getting quick wins so that sales reps
develop the confidence to adopt the new pricing approach. “It was critical to show that leadership was
behind this new approach,” says the managing director of a multinational energy company. “And we did
this by joining visits to difficult customers. We were able to not only help our sales reps but also show
how the argumentation worked.”

Actively manage performance. To improve performance management, companies need to support the
sales force with useful targets. The greatest impact comes from ensuring that the front line has a
transparent view of profitability by customer and that the sales and marketing organization has the right
analytical skills to recognize and take advantage of the opportunity. The sales force also needs to be
empowered to adjust prices itself rather than relying on a centralized team. This requires a degree of
creativity in devising a customer-specific price strategy, as well as an entrepreneurial mind-set.
Incentives may also need to be changed alongside pricing policies and performance measurements.

We’ve seen companies in industries as diverse as software, chemicals, construction materials, and
telecommunications achieve impressive results by using big data to inform better pricing decisions. All
had enormous numbers of SKUs and transactions, as well as a fragmented portfolio of customers; all
saw a profit-margin lift of between 3 and 8 percent from setting prices at much more granular product
levels. In one case, a European building-materials company set prices that increased margins by up to 20
percent for selected products. To get the price right, companies should take advantage of big data and
invest enough resources in supporting their sales reps—or they may find themselves paying the high
price of lost profits.

Personal data is the basis of Advertising Business Model

Raines, Stephanie, 4-18-2018, "What is Advertising Business Model," https://yourbusiness.azcentral.com/advertising-model-14446.html

An advertising model is the strategic use of an advertising medium, with the goal of reaching a
specific target audience. An advertising medium is the type of media or vehicle the advertising is placed on.
Understanding the target market helps to create an effective message and helps to determine the appropriate
advertising medium. In order for a model to be effective, you must clearly understand the advantages and
limitations of each medium.

YouTube business model relied on ads

eMarketer Oct 9, 2018

https://www.emarketer.com/content/video-swells-to-25-of-us-digital-ad-spending

This year, YouTube will generate $3.36 billion in net US video ad revenues, up 17.1% over last year.
YouTube now derives 73% of its ad revenues from video in the US. YouTube overall represents a steady
11% of Google’s net US ad revenues.

Lowers Consumer Prices

Worldwide consumers are willing to trade personal data collection for lower
prices

Tesfaye, Mekebeb. 3-18-2019, "Financial service consumers are willing to share their personal data for
benefits and discounts," Business Insider, https://www.businessinsider.com/financial-service-
consumers-share-personal-data-for-benefits-discounts-2019-3

Despite concerns around the privacy of their data, 60% of consumers would be willing to share personal
data, such as location data and lifestyle information, with financial service providers if it results in lower
pricing on products or benefits like gym membership discount, per an Accenture report based on a
survey of 47,000 banking and insurance customers across 28 markets.

[Chart: consumers' willingness to share data in select scenarios (Business Insider Intelligence)]

Here are some of the key takeaways from the report:

Consumers are willing to share their data with financial institutions (FIs) in exchange for improved
services. Among the surveyed consumers, 81% said they would be open to sharing more information
with banks for faster and easier loan approvals. And when it comes to insurance, 79% of consumers
would be willing to give more data access if it would reduce the chances of injury or loss.

Customers are also willing to share information with service providers to receive personalized
propositions, especially for money management. Seventy-six percent of consumers would share their
information to receive personalized offers based on location, like discounts from retailers. And
personalization that helps with money management was one of the most desired services among
respondents: 57% said they want saving tips based on their spending habits and 51% said they want
updates on how much money they have left until their next pay day.

Despite global consumers' broad desire for data sharing, there was significant divergence across
geographies. China, with 67% of customers, was the highest when it came to consumer willingness to
share more data in exchange for personalized services. The UK and Germany were the lowest, with only
40% of consumers in both countries willing to share more data with banks and insurers. This is
particularly surprising for the UK, which leads the global open banking movement, in which consumer-
permitted data is shared between two or more financial service providers. Likely, this skepticism in the
UK and Germany is tied to the introduction of GDPR in May 2018.

Unsurprisingly, age and tech-savviness also generate significant attitude divergence on data sharing.
Almost all of the respondents (95%) classified as "pioneers" by Accenture, defined as typically tech-savvy millennials or Gen Zers, are willing to share more data. This is compared with just over half (55%) of
"traditionalists," who are typically over the age of 55 and tend to be tech avoiders.

For banks and insurance firms that are able to leverage technology to deliver tailored services, these
findings are a boon. More and more incumbents have been looking to tap into their vast troves of
transactional data to gain a competitive advantage, in many instances driven by regulatory action —
most notably in the UK with Open Banking. But many incumbent FIs are built on legacy IT infrastructure,
with their data often generated and stored across different systems. As open banking gains more
traction globally and consumers continue to demand more tailored services that meet their needs, the
pressure is on for incumbents to transform their outdated and siloed systems, or otherwise run the risk
of obsolescence.

Customer Experience

Data collection allows businesses to find out what customers want and to tailor
experiences to the needs of the consumer

Adam C. Uzialko, 2018 "How and Why Businesses Collect Consumer Data," Business News Daily,
https://www.businessnewsdaily.com/10625-businesses-collecting-data.html

1. Improving customer experience

For many companies, consumer data offers a way to better understand and meet their customers'
demands. By analyzing customer behavior, as well as vast troves of reviews and feedback, companies
can nimbly modify their digital presence, goods or services to better suit the current marketplace.

Not only do companies use consumer data to improve consumer experiences as a whole, but they also
use data to make decisions on an individualized level, said Brandon Chopp, digital manager for
iHeartRaves.

"Our most important source of marketing intelligence comes from understanding customer data and
using it to improve our website functionality," Chopp said. "Our team has improved the customer
experience by creating customized promotions and special offers based on customer data. Since each
customer is going to have their own individual preferences, personalization is key."

Risk-averse privacy ideas often prevent organizations from creating great customer
experiences
Susan Moore https://www.gartner.com/smarterwithgartner/how-to-balance-personalization-with-data-
privacy/

Despite having less trust in brands to use their data ethically, millennials are more willing to provide
companies with information in exchange for convenience and personalized experiences, according to a
recent Gartner survey. This is the privacy paradox — the apparent inconsistency between customer
concerns about privacy and actual online behavior.

Customers expect to be recognized and want their experiences personalized

Companies often operate under the misconception that personalization and privacy are conflicting
efforts, not symbiotic opportunities. The privacy paradox sets up potential conflict between data and
analytics leaders, customer experience (CX) leaders, marketing leaders, security and risk leaders, and
other business and IT stakeholders. It undermines CX initiatives, frustrates customers and limits new
business value.

“Organizations are losing their best chances to create great customer experiences due to needlessly
risk-averse privacy ideas that limit the use of personal data,” says Penny Gillespie, VP Analyst, Gartner.
“The key is to bring value to customers and keep data use in context.”

Data collection can speed up the development of a new product for consumer
satisfaction

Food Quality and Preference, September 2019,"Understanding consumer data use in new product
development and the product life cycle in European food firms – An empirical study," No Publication,
https://www.sciencedirect.com/science/article/pii/S0950329318307225

New food products have a high chance of market failure. To improve the chances of new product
success, a consumer-oriented approach to product development has been recommended. The approach
emphasizes the importance of an optimal fit between consumers’ needs and the new product. To
achieve this goal, food professionals generate and use various consumer data types and methods.
However, very few studies address the extent to which the food industry uses consumer data in product
development. This study investigated to what extent European food firms use various consumer data in
different phases, i.e., new product development (NPD) and the product life cycle (PLC), and what data
collection methods they employ. The current study classified consumer data into three types: consumer
involvement, food trend, and environmental factor data. The results showed that more than 85% of the
respondents use all three data types in NPD, while they rarely use consumer data in the PLC.
Respondents most frequently use data collection methods such as focus groups, consumer surveys, and
indirect data collection (e.g., internet, magazines). These methods are less effective in assuring product
success and in developing new-to-the-world products. In fact, more than half of the respondents never
or rarely worked on new-to-the-world projects. Increasing the use of consumer data in the PLC and
adapting data collection methods to the type of the project and the phase of product development
present opportunities for food firms to improve chances of new product success.

Collecting personal data keeps the content free of charge
Communications Consumer Panel, May 2011, “Online personal data: the consumer perspective,” Communications Consumer Panel research report

One of the main benefits for consumers of companies collecting personal data is that it helps to keep
content free at the point of use. Free content has underpinned the internet’s success and enabled it to
grow quickly. Although it is not strictly true to call online content “free”, since so much of it is paid for
through advertising revenue, the absence of up-front costs allows people to access much online content
regardless of their individual circumstances. This is also quite often the case with smartphone
applications; developers offer free, ad-supported versions of an application alongside the paid-for
premium version without adverts (and usually with greater functionality).

Marketing Strategy

Targeted marketing based on data collection allows the best products to get to
the consumer

Adam C. Uzialko, 2018 "How and Why Businesses Collect Consumer Data," Business News Daily,
https://www.businessnewsdaily.com/10625-businesses-collecting-data.html

2. Refining marketing strategy

Contextualized data can help companies understand how consumers are engaging with and responding
to their marketing campaigns, and adjust accordingly. This highly predictive use case gives businesses an
idea of what consumers will want based on what they have already done. Like other aspects of
consumer data analysis, marketing is becoming more about personalization as a result, said Brett
Downes, SEO manager at Ghost Marketing.

"Mapping users' journeys and personalizing their journey, not just through your website but further
onto platforms like YouTube, LinkedIn, Facebook or on to any other website is now essential," Downes
said. "Segmenting data effectively allows you to market to only the people you know are most likely to
engage. These have opened up new opportunities in industries previously very hard to market to."
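The segmentation Downes describes can be sketched in a few lines of Python. This is a toy illustration only; the field names, weights, and cutoff below are hypothetical assumptions, not Ghost Marketing's actual method:

```python
# Toy engagement segmentation: score users on tracked behavior and
# market only to the segment most likely to engage. The fields,
# weights, and threshold below are illustrative assumptions.
users = [
    {"id": 1, "visits": 14, "opened_emails": 9},
    {"id": 2, "visits": 1,  "opened_emails": 0},
    {"id": 3, "visits": 7,  "opened_emails": 4},
]

def segment(user):
    # Weight email opens more heavily than raw site visits.
    score = user["visits"] + 2 * user["opened_emails"]
    return "high-engagement" if score >= 10 else "low-engagement"

targets = [u["id"] for u in users if segment(u) == "high-engagement"]
print(targets)  # [1, 3]
```

A real campaign would draw these signals from web and platform analytics rather than hand-typed records, but the principle is the same: spend only on the segment most likely to engage.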

Maximizing Marketing Campaign Results

Omer Minkara, Aberdeen, https://www.aberdeen.com/cmo-essentials/good-bad-ugly-using-customer-data-for-marketing/

Companies use data-driven marketing programs for a number of reasons. One primary goal is to drive
personalized buyer conversations that yield measurable outcomes in contributing to the sales
forecasted pipeline as well as closed business. Capturing customer data across different channels
provides marketers valuable insights on how buyers interact with specific content – facilitated by use of
technology tools such as web analytics and mobile analytics. By understanding how buyers interact with
specific content, marketers are better positioned to present the right customer with the right content –
thereby reducing the customer’s effort to find the relevant products and services that meet their needs.
In other words, this creates a win-win scenario for both the buyer and the seller.

Increase Data Security

Data collection can be used to create better security algorithms to find identity
theft faster and to secure personal databases

Adam C. Uzialko, 2018 "How and Why Businesses Collect Consumer Data," Business News Daily,
https://www.businessnewsdaily.com/10625-businesses-collecting-data.html

4. Using data to secure data

Some businesses even use consumer data as a means of securing more sensitive information. For
example, banking institutions will sometimes use voice recognition data to authorize a user to access
their financial information or protect them from fraudulent attempts to steal their information.

These systems work by marrying data from a customer's interaction with a call center and machine
learning algorithms that can identify and flag potentially fraudulent attempts to access a customer's
account. This takes some of the guesswork and human error out of catching a con.

As data capture and analytics technologies become more sophisticated, companies will find new and
more effective ways to collect and contextualize data on everything, including consumers. For
businesses, doing so is essential to remaining competitive well into the future; failing to do so, on the
other hand, is like running a race with your legs tied together. Insight is king, and insight in the modern
business environment is gleaned from contextualized data.

Using transaction data to protect consumers and merchants from fraud

Catherine Tucker MIT Sloan School of Management Joint WPISP-WPIE Roundtable “The Economics of
Personal Data and Privacy: 30 Years after the OECD Privacy Guidelines” 1 December 2010 Background
Paper #1, “The Economic Value of Online Customer Data”

Payment card authorization and transaction information can be used to create patterns of card use,
such as purchase size, frequency and type of transaction. Services like Advanced Authorization from Visa
can evaluate worldwide authorization data and alert payment card issuers to potential fraudulent
purchases in real time – both at checkout and at the ATM. Payment card issuers have the ability to
immediately notify the consumer of fraudulent or suspicious account activity, thereby blocking future
transactions and minimizing potential losses. The service detects domestic and international fraud
schemes that range from single incidents to large-scale assaults. Key to detecting fraud is the ability to identify
patterns of card use behavior based on past usage. For example, if a card is generally used for small,
everyday purchases and a large authorization is requested for jewelry and electronics, the risk score for
potential fraud is higher than usual. A centralized network is able to instantly recall and analyze millions
of pieces of information in its memory; Visa is able to identify emerging fraud trends as they happen,
not hours or days later. Issuers may decline the purchase authorization, ask to speak to the cardholder,
send a text to the cardholder asking for confirmation, or monitor the account for similar out-of-pattern
purchases. An analysis of past global transactions suggests the Advanced Authorization program could
help identify US $1.5 billion in fraud around the world. Thousands of issuers globally utilize risk scores at
the time of purchase to detect fraudulent activity.
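The out-of-pattern scoring described above can be illustrated with a minimal sketch. The thresholds, categories, and scoring rule here are hypothetical, not Visa's actual Advanced Authorization model:

```python
from statistics import mean, stdev

def risk_score(history, txn):
    """Score a transaction against a cardholder's past usage.

    history: list of past purchase amounts; txn: dict with 'amount'
    and 'category'. Returns a score in [0, 1]; higher = riskier.
    All thresholds are illustrative assumptions.
    """
    mu, sigma = mean(history), stdev(history)
    # How many standard deviations above the card's typical spend?
    z = max(0.0, (txn["amount"] - mu) / sigma)
    amount_risk = min(z / 5.0, 1.0)  # cap the contribution at 5 sigma
    # Categories unusual for everyday spending add extra risk.
    category_risk = 0.3 if txn["category"] in {"jewelry", "electronics"} else 0.0
    return min(amount_risk + category_risk, 1.0)

# A card normally used for small, everyday purchases...
history = [8, 12, 15, 9, 11, 14, 10, 13]
# ...suddenly requests a large electronics authorization.
suspicious = {"amount": 900, "category": "electronics"}
routine = {"amount": 12, "category": "groceries"}

print(risk_score(history, suspicious) > risk_score(history, routine))  # True
```

Production systems score against far richer behavioral profiles and do so in real time across the whole network, but the core idea is the same: deviation from a card's own history drives the risk score.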

Economy

Data collection can bring economic value to both companies and customers

Catherine Tucker MIT Sloan School of Management Joint WPISP-WPIE Roundtable “The Economics of
Personal Data and Privacy: 30 Years after the OECD Privacy Guidelines” 1 December 2010 Background
Paper #1, “The Economic Value of Online Customer Data”

Targeted Advertising has economic value for the firms:

Since the first banner ad was shown in October 1994, online advertising has grown quickly. By 2009,
online ads accounted for $22 billion in spending (IAB, 2010). Online advertising is also important for
what it enables. In the United States alone, websites supported by advertising represent 2.1% of the
total U.S. gross domestic product (GDP) and directly employ more than 1.2 million people (Deighton and
Quelch, 2009).

The obvious question is why advertisers have seen so much economic value in marketing themselves
online. The answer lies in two features that are unique to the kind of data that can be collected online:
Measurability and Targetability. Measurability is higher because the digital nature of online advertising
means that who sees what ad and whether they respond can be tracked relatively easily. Targetability is
higher because firms know about consumers’ browsing habits at the individual level and can then
choose whether to serve them an ad based on that profile. These two features mean that online
advertising can overcome the legendary critique of offline advertising: “I know half my advertising is
wasted, I just don’t know which half” - John Wanamaker, department store innovator, 1838-1922.

Measurability

The measurability of online advertising creates economic value because offline it is hard to observe the
link between a consumer seeing an ad and the same consumer subsequently buying the product. It
appears to work (i.e., people who see the ads might be more likely to buy than people who do not), but
the firm cannot see how. The firm does not know whether a consumer was motivated to buy because of
a particular newspaper ad, or because of a TV ad, a specific billboard, or their new radio jingle. For
long-term advertising campaigns that try to build
affection over time for a particular brand, this problem is especially acute. Macy’s can observe who uses
their 20% off coupon in the Sunday paper, but Budweiser cannot observe whether their Bud Light ad
shown during the Super Bowl is linked to higher sales in the long run. Further, even if firms can observe a
clear link between someone seeing an ad and then buying the product, it is not clear that there is a
causal link between the two. By contrast, online advertising is inherently measurable. The digital nature
of online advertising means that individual responses to ads can be easily recorded. For example, the
effectiveness of many forms of online advertising can be measured by whether or not someone clicks on
an ad. Often, through the use of cookies, IP addresses, and other tracking technologies, advertisers can
go beyond this simple click metric and observe directly whether users engage in a certain online action
(such as an online purchase, or subscribing to receive more information) after being exposed to an ad.
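As a rough illustration of this measurability, per-user exposure and conversion events from a hypothetical ad-server log can be joined directly. The log format, user IDs, and the held-out "control" group are invented for the sketch:

```python
# Hypothetical ad-server event log: (user_id, event) pairs, where an
# "exposed" user saw the ad, a "control" user was deliberately not
# shown it, and "converted" means a tracked purchase afterwards.
events = [
    ("u1", "exposed"), ("u1", "converted"),
    ("u2", "exposed"),
    ("u3", "exposed"), ("u3", "converted"),
    ("u4", "control"),
    ("u5", "control"), ("u5", "converted"),
]

exposed = {u for u, e in events if e == "exposed"}
control = {u for u, e in events if e == "control"}
converted = {u for u, e in events if e == "converted"}

# Conversion rate among users who saw the ad vs. the held-out control -
# the per-individual linkage that offline media cannot provide.
exposed_rate = len(exposed & converted) / len(exposed)
control_rate = len(control & converted) / len(control)
print(exposed_rate, control_rate)  # 0.6666666666666666 0.5
```

The point of the sketch is that the link Wanamaker could never observe offline is a simple set intersection once every impression and purchase is logged per user.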

Targetability

Targetability increases the value of advertising to firms because they no longer have to pay for wasted
“eyeballs”. In other words, firms do not have to pay money to serve ads to people who are unlikely to
buy a car or a vacation or suffer from that particular health complaint. Instead, firms can be reassured
they are allocating their money to serve ads to customers who are potentially likely to buy their product.
Behavioral targeting has empirically been shown to increase the economic value that firms place on
advertising. Beales (2010) finds that in 2009 the price of behaviorally targeted advertising was 2.68
times the price of untargeted advertising.

Targeted Advertising has economic value for the customers:

The economic value generated by such targeted advertising activity does not only reflect advertising
revenues for web-platforms. There are also benefits for consumers. McKinsey (2010) uses conjoint
techniques to estimate that in the US and Europe consumers received 100 billion euros in value in 2010
from advertising supported web services. This is three times greater than current revenue from
advertising, suggesting that the consumer value created is larger than advertising revenues would
indicate.

There is obvious economic value created by online advertising for firms in terms of advertising revenues.
However, there are also sources of economic value for customers. First, targeted internet advertising
may serve a useful informational role. Instead of being forced to view untargeted mass-media ads on TV,
consumers see ads that are related to their potentially unique interests and desires. For example,
people who are not in the market for booking a vacation are less likely to have to see ads for travel
companies, but instead may see ads related to their
actual hobbies. Supporting this contention that there is utility of targeted ads for consumers, is the
observation that conversion rates for behaviorally targeted ads are more than twice the rate of non-
targeted ads (Beales, 2010).

Second, there is growing evidence that targeted ads are optimally less obtrusive in design
than non-targeted ads (Goldfarb and Tucker, 2010a). Obtrusive ads are ads that are deliberately
designed to intrude on users’ web browsing experience, such as “take-over ads” or “floating ads”. Since
consumers have expressed dislike of these obtrusive ads, there may be benefits to targeted ads if users
do not have to experience ads that are also deliberately obtrusive.

Last, the benefit of profitable targeted advertising is that it funds and enables a wide variety of free-web
content and services. It is possible that the value of these services dwarfs the negative impact of privacy concerns.
McKinsey (2010) use conjoint analysis to suggest that for each euro an Internet user is willing to spend
to limit privacy and advertising disturbance, the user gets a value of six euros from using current ad-
funded Web application services.

Data collection is critical to the modern economy; data is already in widespread use

Manyika, July-August 2012. “Why Big Data is The New Competitive Advantage” Ivey Business Journal.
http://iveybusinessjournal.com/publication/why-big-data-is-the-new-competitive-advantage/

Data are now woven into every sector and function in the global economy, and, like other essential
factors of production such as hard assets and human capital, much of modern economic activity simply
could not take place without them. The use of Big Data — large pools of data that can be brought
together and analyzed to discern patterns and make better decisions — will become the basis of
competition and growth for individual firms, enhancing productivity and creating significant value for
the world economy by reducing waste and increasing the quality of products and services. Until now,
the torrent of data flooding our world has been a phenomenon that probably only excited a few data
geeks. But we are now at an inflection point. According to research from the McKinsey Global Institute
(MGI) and McKinsey & Company’s Business Technology Office, the sheer volume of data generated,
stored, and mined for insights has become economically relevant to businesses, government, and
consumers. The history of previous trends in IT investment and innovation and its impact on
competitiveness and productivity strongly suggest that Big Data can have a similar power, namely the
ability to transform our lives. The same preconditions that allowed previous waves of IT-enabled
innovation to power productivity, i.e., technology innovations followed by the adoption of
complementary management innovations, are in place for Big Data, and we expect suppliers of Big Data
technology and advanced analytic capabilities to have at least as much ongoing impact on productivity
as suppliers of other kinds of technology. All companies need to take Big Data and its potential to create
value seriously if they want to compete. For example, some retailers embracing big data see the
potential to increase their operating margins by 60 per cent.

Big Data: A new competitive advantage

The use of Big Data is becoming a crucial way for leading companies to outperform their peers. In most
industries, established competitors and new entrants alike will leverage data-driven strategies to
innovate, compete, and capture value. Indeed, we found early examples of such use of data in every
sector we examined. In healthcare, data pioneers are analyzing the health outcomes of pharmaceuticals
when they were widely prescribed, and discovering benefits and risks that were not evident during
necessarily more limited clinical trials. Other early adopters of Big Data are using data from sensors
embedded in products from children’s toys to industrial goods to determine how these products are
actually used in the real world. Such knowledge then informs the creation of new service offerings and
the design of future products. Big Data will help to create new growth opportunities and entirely new
categories of companies, such as those that aggregate and analyse industry data. Many of these will be
companies that sit in the middle of large information flows where data about products and services,
buyers and suppliers, consumer preferences and intent can be captured and analysed. Forward-thinking
leaders across sectors should begin aggressively to build their organisations’ Big Data capabilities. In
addition to the sheer scale of Big Data, the real-time and high-frequency nature of the data are also
important. For example, ‘now casting,’ the ability to estimate metrics such as consumer confidence,
immediately, something which previously could only be done retrospectively, is becoming more
extensively used, adding considerable power to prediction. Similarly, the high frequency of data allows
users to test theories in near real-time and to a level never before possible.

Data collection plays a critical role in business

Marr, Bernard. 2013. “The Awesome Ways Big Data is Used to Change Our World.” LinkedIn.
https://www.linkedin.com/pulse/20131113065157-64875646-the-awesome-ways-big-data-is-used-today-to-change-our-world

The term ‘Big Data’ is a massive buzzword at the moment and many say big data is all talk and no action.
This couldn’t be further from the truth. With this post, I want to show how big data is used today to add
real value. Eventually, every aspect of our lives will be affected by big data. However, there are some
areas where big data is already making a real difference today. I have categorized the application of big
data into 10 areas where I see the most widespread use as well as the highest benefits [For those of you
who would like to take a step back here and understand, in simple terms, what big data is, check out the
posts in my Big Data Guru column].

1. Understanding and Targeting Customers

This is one of the biggest and most publicized areas of big data use today. Here, big data is used to better understand customers
and their behaviors and preferences. Companies are keen to expand their traditional data sets with
social media data, browser logs as well as text analytics and sensor data to get a more complete picture
of their customers. The big objective, in many cases, is to create predictive models. You might
remember the example of U.S. retailer Target, who is now able to very accurately predict when one of
their customers will expect a baby. Using big data, Telecom companies can now better predict customer
churn; Wal-Mart can predict what products will sell, and car insurance companies understand how well
their customers actually drive. Even government election campaigns can be optimized using big data
analytics. Some believe Obama’s win in the 2012 presidential election campaign was due to his
team’s superior ability to use big data analytics.

Data collection maximizes efficiency in business

Marr, Bernard. 2013. “The Awesome Ways Big Data is Used to Change Our World.” LinkedIn.
https://www.linkedin.com/pulse/20131113065157-64875646-the-awesome-ways-big-data-is-used-today-to-change-our-world

2. Understanding and Optimizing Business Processes

Big data is also increasingly used to optimize
business processes. Retailers are able to optimize their stock based on predictions generated from social
media data, web search trends and weather forecasts. One particular business process that is seeing a
lot of big data analytics is supply chain or delivery route optimization. Here, geographic positioning and
radio frequency identification sensors are used to track goods or delivery vehicles and optimize routes
by integrating live traffic data, etc. HR business processes are also being improved using big data
analytics. This includes the optimization of talent acquisition – Moneyball style, as well as the
measurement of company culture and staff engagement using big data tools.

Data collection will strengthen the economy

Kennedy, Joseph. 2014 “Big Data’s Economic Impact” Committee for Economic Development.
https://www.ced.org/blog/entry/big-datas-economic-impact

Big Data is beginning to have a significant impact on our knowledge of the world. This is important
because increases in human knowledge have always played a large role in increasing economic activity
and living standards. Continued improvements in the price and capacity of tools for collecting,
transmitting, storing, analyzing and acting upon data will make it easier to gather more information and
to turn it into actionable knowledge of how systems work. Big Data is best understood as an untapped
resource that technology finally allows us to exploit. For instance, data on weather, insects, and crop
plantings has always existed. But it is now possible to cost-effectively collect those data and use them in
an informed manner. We can keep a record of every plant’s history, including sprayings and rainfall.
When we drive a combine over the field, equipment can identify every plant as either crop or weed and
selectively apply herbicide to just the weeds. Such new use of data has the capacity to transform every
industry in similar ways. A recent OECD report listed some of the ways that more and better data will
affect the economy:

• Producing new goods and services, such as the Nest home thermostat or mass-customized shoes;
• Optimizing business processes;
• More-targeted marketing that injects customer feedback into product design;
• Better organizational management; and
• Faster innovation through a shorter research and development cycle.

A report from McKinsey Global Institute estimates that Big Data could generate an additional $3 trillion in value every year in just seven industries. Of this, $1.3
trillion would benefit the United States. The report also estimated that over half of this value would go
to customers in forms such as fewer traffic jams, easier price comparisons, and better matching
between educational institutions and students. Note that some of these benefits do not affect GDP or
personal income as we measure them. They do, however, imply a better quality of life. The impact
affects more than consumers, however. Erik Brynjolfsson of MIT found that companies that adopt data-
driven decision making achieve 5 to 6 percent higher productivity and output growth than their peers,
even after controlling for other investments and the use of information technology. Similar differences
were found in asset utilization, return on equity, and market value. The Omidyar Network recently
released a study of the impact of Open Data policies on government. The report concluded that
implementation of these policies could boost annual income within the G20 by between $700 billion and
$950 billion. The benefits include reduced corruption, better workplace conditions, increased energy
efficiency, and improved foreign trade. Even the advertising industry, whose use of data is sometimes
viewed with suspicion, delivers large benefits. A study by the Direct Marketers Association found that
better use of data made marketing more efficient both by allowing companies to avoid sending
solicitations to individuals who are unlikely to buy their product and by matching customers with offers
that better meet their individual needs and interests. Big data also reduced barriers to entry by making
it easier for small companies to get useful market data. Finally, another McKinsey study concluded that
free Internet services underwritten by Internet advertising delivered significant benefits to Internet
users. It estimated the social surplus from these services at 120 billion euros, 80 percent of which went
to consumers. This trend in data also has an impact on workers. Data analysis has been called “the
sexiest job of the 21st century.” The United States already has an estimated 500,000 Big Data jobs. But
McKinsey estimates that there is a shortage of between 140,000 and 190,000 workers with advanced
degrees in statistics, computer engineering and other applied fields. Perhaps more important is the
shortage of 1.5 million managers and analysts who hold traditional jobs but are capable of integrating
Big Data into their decision making. The need to understand and act on improved data is likely to
increase worker productivity and pay. Thanks to continued technological improvements, data will
become even easier to collect, transmit, store, and analyze. Together with related advances in material
sciences, biotechnology, information technology, and nanotechnology, it will enable a vast range of new
products and services. As with any resource, the main constraint will be the ability to imagine new uses
for this resource and to build a viable business model around these uses that delivers valuable products
and services to consumers.

Data collection can become profits for businesses

Adam C. Uzialko, 2018 "How and Why Businesses Collect Consumer Data," Business News Daily,
https://www.businessnewsdaily.com/10625-businesses-collecting-data.html

3. Turning data into cash flow

Companies that capture data also stand to profit from it. Data brokers, or companies that buy and sell
information on customers, have risen as a new industry alongside big data. For businesses that are
capturing large amounts of data, this represents an opportunity for a new stream of revenue.

For advertisers, having this information available for purchase is immensely valuable, so the demand for
more and more data is ever increasing. That means the more disparate data sources data brokers can
pull from to package more thorough data profiles, the more money they can make by selling this
information to one another and advertisers.

Business Confidence

Data collection is critical to the stability of the future of business

Bhushan, Pritisesh. 2014. “Trade Surveillance with Big Data.” Cognizant.
http://www.cognizant.com/InsightsWhitepapers/Trade-Surveillance-with-Big-Data-codex1096.pdf

Electronic trading has come a long way since the NASDAQ’s debut in 1971. Today’s fragmented
electronic market venues (the result of non-traditional exchanges competing for trades with traditional
exchanges) have created so-called “dark pools of liquidity.” Simultaneously, automated and algorithmic
trading has become more sophisticated — now enabling individuals and institutions to engage in high-frequency
trading (HFT). As a result, the number of trades has increased tenfold in the last decade,
from 37 million trades in NYSE-listed issues in February 2004 to 358 million in February 2014. Traders
at capital market firms have been at the forefront of these advancements — pushing the envelope along
the way. How has this impacted trade surveillance and compliance teams? The rise of algorithmic
trading, where split-second execution decisions are made by high-performance computers, plus the
explosion of trading venues and the exponential growth of structured and unstructured data, are
challenging regulatory and compliance teams to rethink their surveillance techniques. Those that
depend on individual alerts can no longer meet most firms’ requirements. We believe that capital
markets firms require a radically new and holistic surveillance approach. This paper highlights some of
the key issues faced by regulators and compliance teams. We will also describe how new “big data”
solutions can help manage them.

The future of the stock market depends on data collection and analytics

Bhushan, Pritisesh. 2014. “Trade Surveillance with Big Data.” Cognizant.
http://www.cognizant.com/InsightsWhitepapers/Trade-Surveillance-with-Big-Data-codex1096.pdf

The explosive growth of data over the last few years is taxing the IT infrastructure of many capital
markets firms. Fortunately, there are emerging technologies that can help these companies better
manage and leverage ever-bigger data pools. These tools can enable trading firms to end data triage and
retain useful historical information. By building a big-data architecture, IT organizations can keep both
structured and unstructured data in the same repository, and process substantial bits and bytes within
acceptable timeframes. This can help them uncover previously inaccessible “pearls” in today’s ever-expanding
ocean of data. Big data analytics involves collecting, classifying and analyzing huge volumes of
data to derive useful information, which becomes the platform for making logical business decisions
(see figure below). Relational database techniques have proven to be inadequate for processing large
quantities of data, and hence cannot be applied to big data sets. For today’s capital markets firms, big
data sets can reach multiple petabytes (one petabyte is one quadrillion bytes of data).

[Figure: A Big Data Analytics Reference Architecture — front-office order, market, reference, client, and unstructured data feeding a petabyte-scale data platform and real-time analytic engine that serve compliance dashboards, alerts, and BI reports.]

To keep processing times tolerable, many organizations facing big-data challenges are
counting on new open-source technologies such as NoSQL (not only SQL) and data stores such as
Apache Hadoop, Cassandra and Accumulo. The figure above depicts a representative big-data
architecture appropriate for modern-day trade surveillance. A highly scalable in-memory data grid
(e.g., SAP’s HANA) can be used to store data feeds and events of interest. Real-time surveillance can
thus be enabled through exceptionally fast open-source analytic tools such as complex event
processing (CEP). CEP technologies like Apache Spark, Shark and Mesos put big data to good use by
analyzing it in real time, along with other incidents. Meaningful events can also be recognized and
flagged in real time.

Financial Law Enforcement

Data collection helps the government watch for suspicious trading activity

Kumar, Sunil. 2015. “The Changing Face of Trade Surveillance and the Role of Analytics.” Global Consulting Practice.
http://www.tcs.com/SiteCollectionDocuments/White%20Papers/Changing-face-trade-surveillance-role-analytics-0315-1.pdf

Big Data is playing a key role in improving the effectiveness of surveillance. Trade surveillance is
experiencing increased regulatory scrutiny and complexities due to the prevalence of multiple
communication platforms, making it difficult for regulators to perform market oversight functions. Big
Data technology will play a more important role in monitoring market participants’ trading activity both
at participants’ and regulators’ ends. This is done by ingesting enormous volumes of various types of
data originating from different channels (such as social media messages, blogs, emails, phone call logs,
bank statements) and consolidating this structured and unstructured data into a usable database that
will allow advanced pattern-matching analytics to spot any anomalous behavior. Capital market entities
are also increasingly using Big Data for enhanced business intelligence gathering. They employ
techniques such as Complex Event Processing (CEP), business rule-based text mining, machine learning,
and predictive analytics to perform market sentiment analysis, anomalous trading behavior detection,
and advanced trading analytics. However, there are still several challenges to the widespread adoption
of Big Data in capital markets surveillance. These include the lack of enhanced tools and techniques for
visualization and successful deployment by regulators and infrastructure entities, and gaps in the
skillsets (especially data scientists) needed to administer Big Data analytics solutions, etc. As capital
market-specific usage of Big Data becomes more widespread, firms will not only have a better business
case to adopt it, but will also become technically more equipped to leverage it.
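The pattern-matching analytics this card describes can be illustrated with a minimal sketch. The function, data, and threshold below are hypothetical stand-ins; production surveillance systems fuse many structured and unstructured feeds rather than a single series of trade volumes:

```python
# Minimal anomaly flagging over a series of trade volumes, using the
# median absolute deviation (MAD) as a robust measure of spread.
# All numbers are illustrative, not real market data.

def flag_anomalies(volumes, threshold=5.0):
    """Return indices of points lying more than `threshold` MADs
    from the median of the series."""
    s = sorted(volumes)
    n = len(s)
    median = (s[n // 2] + s[(n - 1) // 2]) / 2
    devs = sorted(abs(v - median) for v in volumes)
    mad = (devs[n // 2] + devs[(n - 1) // 2]) / 2
    if mad == 0:
        return []  # series is (nearly) constant: nothing to flag
    return [i for i, v in enumerate(volumes) if abs(v - median) / mad > threshold]

# Normal volumes with one spike that should be flagged.
series = [100, 98, 103, 101, 99, 102, 100, 5000, 97, 101]
print(flag_anomalies(series))  # [7] - only the spike
```

The median-based spread is used instead of the standard deviation because, in a short series, a single large outlier inflates the standard deviation enough to hide itself.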

Government Efficiency

Data collection by the business sector provides efficiency tools for the
government

Pentland, Alex. 2013. “The Data-Driven Society.”


https://connection.mit.edu/sites/default/files/publication-
pdfs/data%20driven%20society%20sci%20amer_0.pdf

USING BIG DATA to diagnose problems and predict successes is one thing. What is even more exciting is
that we can use big data to design organizations, cities and governments that work better than the ones
we have today. The potential is easiest to see within corporations. By measuring idea flow, it is usually
possible to find simple changes that improve productivity and creative output. For instance, the
advertising department of a German bank had experienced serious problems launching successful new
product campaigns, and they wanted to know what they were doing wrong. When we studied the
problem with sociometric ID badges, we found that while groups within the organization were
exchanging lots of e-mails, almost no one talked to the employees in customer service. The reason was
simple: customer service was on another floor. This configuration caused huge problems. Inevitably, the
advertising department would end up designing ad campaigns that customer service was unable to
support. When management saw the diagram we produced depicting this broken flow of information,
they immediately realized they should move customer service to the same floor as the rest of the
groups. Problem solved. Increasing engagement is not a magic bullet. In fact, increasing engagement
without increasing exploration can cause problems. For instance, when postdoctoral student Yaniv
Altshuler and I measured information flow within the eToro social network of financial traders, we
found that at a certain point people become so interconnected that the flow of ideas is dominated by
feedback loops. Sure, everyone is trading ideas -- but they are the same ideas over and over. As a result,
the traders work in an echo chamber. And when feedback loops dominate within a group of traders,
financial bubbles happen. This is exactly how otherwise intelligent people all became convinced that
Pets.com was the stock of the century. Fortunately, we have found that we can manage the flow of
ideas between people by providing small incentives, or nudges, to individuals. Some incentives can
nudge isolated people to engage more with others; still others can encourage people mired in
groupthink to explore outside their current contacts. In an experiment with 2.7 million small-time,
individual eToro investors, we "tuned" the network by giving traders discount coupons that encouraged
them to explore the ideas of a more diverse set of other traders. As a result, the entire network
remained in the healthy wisdom-of-the-crowd region. What was more remarkable is that although we
applied the nudges only to a small number of traders, we were able to increase the profitability of all
social traders by more than 6 percent. Designing idea flows can also help solve the tragedy of the
commons, in which a few people behave in such a way that everyone suffers, yet the cost to any one
person is so small there is little motivation to fix the problem. An excellent example can be found in the
health insurance industry. People who fail to take medicine they need, or exercise, or eat sensibly have
higher health care costs, driving up the price of health insurance for everyone. Another example is when
tax collection is too centralized: local authorities have little incentive to ensure that everyone pays taxes,
and as a result, tax cheating becomes common. The usual solution is to find the offenders and offer
incentives or levy penalties designed to get them to behave better. This approach is expensive and
rarely works. Yet graduate student Ankur Mani and I have shown that promoting increased engagement
between people can minimize these situations. The key is to provide small cash incentives to those who
have the most interaction with the offenders, rewarding them rather than the offender for improved
behavior. In real-world situations -- with initiatives to encourage healthy behavior, for example, or to
prompt people to save energy -- we have found that this social-pressure-based approach is up to four
times as efficient as traditional methods. This same approach can be used for social mobilization -- in
emergencies, say, or any time a special, coordinated effort is needed to achieve some common goal. In
2009, for example, the Defense Advanced Research Projects Agency designed an experiment to
celebrate the 40th anniversary of the Internet. The idea was to show how social media and the Internet
could enable emergency mobilization across the U.S. DARPA offered a $40,000 prize for the team that
could most quickly find 10 red balloons placed across the continental U.S. Some 4,000 teams signed up
for the contest, and almost all took the simplest approach -- offering a reward to anyone who reported
seeing a balloon. My research group took a different tack. We split the reward money among those who
used their social networks to recruit a person who later saw a balloon and those who saw a balloon
themselves. This scheme, which is conceptually the same as the social-pressure approach to solving
tragedies of the commons, encouraged people to use their social networks as much as possible. We won
the contest by locating all 10 balloons in only nine hours.
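The winning team’s reward-splitting scheme, as reported after the challenge, paid a fixed amount to each balloon’s finder and half as much to each successive recruiter up the chain. A toy sketch (the names and base amount are illustrative):

```python
# Toy recursive-incentive payout: the finder gets `base`, their
# recruiter base/2, the recruiter's recruiter base/4, and so on.
# Names and the base amount are illustrative, not the contest ledger.

def payouts(chain, base=2000.0):
    """chain: people ordered from the balloon finder up to the root
    recruiter. Returns a {person: payment} mapping, halving the
    payment at each step up the chain."""
    result = {}
    amount = base
    for person in chain:
        result[person] = amount
        amount /= 2
    return result

# Alice spotted a balloon; Bob recruited Alice; Carol recruited Bob.
print(payouts(["alice", "bob", "carol"]))
# {'alice': 2000.0, 'bob': 1000.0, 'carol': 500.0}
```

Because payments halve at each level, the total paid per balloon stays below twice the base amount no matter how long the chain, which kept the scheme affordable while still rewarding recruiters for using their social networks.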

Manufacturing

Empirics prove Big Data can help the manufacturing sector - only a question of utilizing new data

Manyika et al. 2011. “Big Data: The Next Frontier for Innovation, Competition, and Productivity”
McKinsey Global Institute.
http://www.mckinsey.com/insights/business_technology/big_data_the_next_frontier_for_innovation

The manufacturing sector has been the backbone of many developed economies and remains an
important driver of GDP and employment there. However, with the rise of production capacity and
capability in China and other low-cost nations, manufacturing has become an increasingly global activity,
featuring extended supply chains made possible by advances in information and communications
technology. While globalization is not a recent phenomenon, the explosion in information and
communication technology, along with reduced international freight costs and lower entry barriers to
markets worldwide, has hugely accelerated the industrial development path and created increasingly
complex webs of value chains spanning the world. Increasingly global and fragmented manufacturing
value chains create new challenges that manufacturers must overcome to sustain productivity growth.
In many cases, technological change and globalization have allowed countries to specialize in specific
stages of the production process. As a result, manufacturers have assembled global production and
supply chain networks to achieve cost advantages. For example, a typical global consumer electronics
manufacturer has production facilities on almost every continent, weighing logistics costs against
manufacturing costs to optimize the footprint of their facilities. Advanced manufacturers also often have
a large number of suppliers, specialized in producing specific types of components where they have
sustainable advantages both in cost and quality. It is typical for a large automobile original equipment
manufacturer (OEM) assembly plant to be supplied by up to 4,000 outside vendors. To continue
achieving high levels of productivity growth, manufacturers will need to leverage large datasets to drive
efficiency across the extended enterprise and to design and market higher-quality products. The “raw
material” is readily available; manufacturers already have a significant amount of digital data with which
to work. Manufacturing stores more data than any other sector—close to 2 exabytes of new data stored
in 2010. This sector generates data from a multitude of sources, from instrumented production
machinery (process control), to supply chain management systems, to systems that monitor the
performance of products that have already been sold (e.g., during a single cross-country flight, a Boeing
737 generates 240 terabytes of data). And the amount of data generated will continue to grow
exponentially. The number of RFID tags sold globally is projected to rise from 12 million in 2011 to 209
billion in 2021. IT systems installed along the value chain to monitor the extended enterprise are
creating additional stores of increasingly complex data, which currently tends to reside only in the IT
system where it is generated. Manufacturers will also begin to combine data from different systems
including, for example, computer-aided design, computer-aided engineering, computer-aided
manufacturing, collaborative product development management, and digital manufacturing, and across
organizational boundaries in, for instance, end-to-end supply chain data.

Aerospace Industry

Big data will be a pillar of the aerospace industry

Groh, Rainer. 2015. “Big Data in Aerospace.” Aerospaceengineeringblog.com


http://aerospaceengineeringblog.com/big-data-in-aerospace/

“Big data” is all abuzz in the media these days. As more and more people are connected to the internet
and sensors become ubiquitous parts of daily hardware an unprecedented amount of information is
being produced. Some analysts project 40% annual growth in data, which means that in a
decade roughly 30 times as much data will be produced as today. Given this trend, what are the
implications for the aerospace industry? Big data: According to Google a “buzzword to describe a
massive volume of both structured and unstructured data that is so large that it’s difficult to process
using traditional database and software techniques.” Fundamentally, big data is nothing new for the
aerospace industry. Sensors have been collecting data on aircraft for years ranging from binary data
such as speed, altitude and stability of the aircraft during flight, to damage and crack growth progression
at service intervals. The authorities and parties involved have done an incredible job at using routine
data and data gathered from failures to raise safety standards. What exactly does “big data” mean? Big
data is characterised by a data stream that is high in volume, high velocity and coming from multiple
sources and in a variety of forms. This combination of factors makes analysing and interpreting data via
a live stream incredibly difficult, but such a capability is exactly what is needed in the aerospace
environment. For example, structural health monitoring has received a lot of attention within research
institutes because an internal sensory system that provides information about the real stresses and
strains within a structure could improve prognostics about the “health” of a part and indicate when
service intervals and replacements are needed. Such a system could look at the usage data of an aircraft
and predict when a component needs replacing. For example, the likelihood that a part will fail could be
translated into an associated repair that is the best compromise in terms of safety and cost.
Furthermore, the information can be fed back to the structural engineers to improve the design for
future aircraft. Ideally you want to replicate the way the nervous system uses pain to signal damage
within the body and then trigger a remedy. Even though structural health monitoring systems are
feasible today, analysing the data stream in real time and providing diagnostics and prognostics remains
a challenge. Other areas within aerospace that will greatly benefit from insights gleaned from data
streams are cyber security, understanding automation and the human-machine interaction, aircraft
under different weather and traffic situations and supply chain management. Big data could also serve
as the underlying structure that establishes autonomous aircraft on a wide scale. Finally, big data opens
the door for a new type of adaptive design in which data from sensors are used to describe the
characteristics of a specific outcome, and a design is then iterated until the desired and actual data
match. This is very much an evolutionary, trial-and-error approach that will be invaluable for highly
complex systems where cause and effect are not easily correlated and deterministic approaches are not
possible. For example, a research team may define some general, not well defined hypothesis about a
future design or system they are trying to understand, and then use data analytics to explore the
available solutions and come up with initial insights into the governing factors of a system. In this case it
is imperative to fail quickly and find out what works and what does not. The algorithm can then be
refined iteratively by using the expertise of an engineer to point the computer in the right direction.
Thus, the main goal is to turn data into useful, actionable knowledge. For example, in the 1990s very
limited data existed in terms of understanding the airport taxi-way structure. Today we have the
opposite situation in that we have more data than we can actually use. Furthermore, not only the
quantity but also quality of data is increasing rapidly such that computer scientists are able to design
more detailed models to describe the underlying physics of complex systems. When converting data to
actionable information one challenge is how to account for as much of the data as possible before
reaching a conclusion. Thus, a high velocity, high volume and diverse data stream may not be the most
important characteristic for data analytics. Rather it is more important that the data be relevant,
complete and measurable. Therefore good insights can also be gleaned from smaller data if the data
analytics is powerful. While aerospace is neither search nor social media, big data is incredibly important
because the underlying stream from distributed data systems on aircraft or weather data systems can
be aggregated and analysed in consonance to create new insights for safety. Thus, in the aerospace
industry the major value drivers will be data analytics and data science, which will allow engineers and
scientists to combine datasets in new ways and gain insights from complex systems that are hard to
analyse deterministically. The major challenge is how to upscale the current systems into a new era
where the information system is the foundation of the entire aerospace environment. In this manner
data science will transform into a fundamental pillar of aerospace engineering, alongside the classical
foundations such as propulsion, structures, control and aerodynamics.
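The prognostic idea above - using usage data to predict when a component needs replacing - can be caricatured as extrapolating a degradation trend to a failure threshold. Everything here (the linear model, the readings, the threshold) is an illustrative assumption; real structural health monitoring relies on physics-based crack-growth and probabilistic models:

```python
# Illustrative prognostic: fit a least-squares line to degradation
# readings and extrapolate to a failure threshold. The data and the
# linear model are stand-ins for real crack-growth physics.

def hours_until_threshold(hours, wear, threshold):
    """Fit wear = slope * hours + intercept by least squares and
    return the projected hour at which wear reaches `threshold`."""
    n = len(hours)
    mx = sum(hours) / n
    my = sum(wear) / n
    slope = (sum((x - mx) * (y - my) for x, y in zip(hours, wear))
             / sum((x - mx) ** 2 for x in hours))
    intercept = my - slope * mx
    return (threshold - intercept) / slope

# Crack-length readings (mm) taken at inspection intervals (flight hours).
h = [0, 100, 200, 300, 400]
w = [0.0, 0.5, 1.0, 1.5, 2.0]
print(hours_until_threshold(h, w, threshold=5.0))  # about 1000 flight hours
```

A maintenance planner would schedule the replacement comfortably before the projected hour, trading off safety margin against the cost of early retirement of the part.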

Disease Spread

Data collection can stop the spread of disease and epidemics


Michael, Katina, et al. 2013. “Big Data: New Opportunities and New Challenges.”
http://ieeexplore.ieee.org/stamp/stamp.jsp?arnumber=6527259

Since the Internet’s introduction, we’ve been steadily moving from text-based communications
to richer data that include images, videos, and interactive maps as well as associated metadata
such as geolocation information and time and date stamps. Twenty years ago, ISDN lines
couldn’t handle much more than basic graphics, but today’s high-speed communication networks
enable the transmission of storage-intensive data types. For instance, smartphone users can take
high-quality photographs and videos and upload them directly to social networking sites via Wi-Fi
and 3G or 4G cellular networks. We’ve also been steadily increasing the amount of data captured in
bidirectional interactions, both people-to-machine and machine-to-machine, by using telematics and
telemetry devices in systems of systems. Of even greater importance are e-health networks that
allow for data merging and sharing of high-resolution images in the form of patient x-rays, CT
scans, and MRIs between stakeholders. Advances in data storage and mining technologies make it
possible to preserve increasing amounts of data generated directly or indirectly by users and analyze it to
yield valuable new insights. For example, companies can study consumer purchasing trends to
better target marketing. In addition, near-real-time data from mobile phones could provide
detailed characteristics about shoppers that help reveal their complex decision-making
processes as they walk through malls.1 Big data can expose people’s hidden behavioral patterns
and even shed light on their intentions.2 More precisely, it can bridge the gap between what people
want to do and what they actually do as well as how they interact with others and their environment.3
This information is useful to government agencies as well as private companies to support
decision making in areas ranging from law enforcement to social services to homeland security. It’s
particularly of interest to applied areas of situational awareness and the anticipatory approaches
required for near-real-time discovery. In the scientific domain, secondary uses of patient data
could lead to the discovery of cures for a wide range of devastating diseases and the prevention of
others.4 By revealing the genetic origin of illnesses, such as mutations related to cancer, the Human
Genome Project, completed in 2003, is one project that’s a testament to the promises of big data.
Consequently, researchers are now embarking on two major efforts, the Human Brain Project
(EU; www.humanbrainproject.eu/vision.html) and the US BRAIN Initiative
(www.whitehouse.gov/the-press-office/2013/04/02/fact-sheet-brain-initiative), in a quest
to construct a supercomputer simulation of the brain’s inner workings, in addition to mapping
the activity of about 100 billion neurons in the hope of unlocking answers to Alzheimer’s and
Parkinson’s. Other types of big data can be studied to help solve scientific problems in areas ranging from
climatology to geophysics to nanotechnology.

Crime Prevention

Data collection and analysis is driving law enforcement’s ability to predict and
stop crime before it happens. Can lower crime rates by 40%

Muggah, Robert. 6-15-2018. “How smart tech helps cities fight terrorism and crime.” World Economic
Forum, https://www.weforum.org/agenda/2018/06/cities-crime-data-agile-security-robert-muggah/

Today’s cities are on the frontline of crime and terrorism. While some of them are clearly more at risk
than others, all of them are vulnerable. Not surprisingly, cities are experimenting with innovative
approaches to preventing crime and countering extremism.

The most successful are improving intelligence gathering, strengthening policing and community
outreach, and investing in new technologies to improve urban safety. Such cities are said to deploy 'agile
security': data-driven and problem-oriented approaches that speed up decision-making and design in
environmental changes to limit insecurity.

Agile security measures start with the premise that many types of crime, radicalization and terrorism are
non-random and even predictable. With some exceptions, they tend to cluster in time, space and among
specific population groups. The massive increase in computing power and advances in machine learning
have made it possible to sift through huge quantities of data related to crime and terrorism, to identify
underlying correlations and causes. The harnessing and processing of these data flows is crucial to
enabling agile security in cities.

Detecting crime before it happens

A precondition of agile security is connected urban infrastructure. When city authorities, private firms
and civic groups have access to real-time data - whether generated by crime-mapping platforms,
gunshot-detection systems, CCTVs or smart lights - they can get better at detecting crime before it
occurs.

A growing array of crime prevention tools are not only connected to the cloud, they are also running off
deep neural networks. As a result, public authorities are more easily reading license plates, running
facial recognition software, mapping crime and terrorist networks and detecting suspicious anomalies.
Some of these technologies are even processing data within the devices themselves, to speed up
crime-fighting and terrorist prevention capabilities.

Another critical feature of agile security is leadership, especially in the law enforcement sector. A
growing number of metropolitan police are adopting problem-oriented policing practices and focusing
on hotspots to deter and control crime. Across North America and Western Europe, police,
counter-terrorism and emergency responders have set up fusion centres that stream multiple datasets
from across a wide range of sources, from city sensors to cybercrime units in private companies. Across the
Americas, the Middle East and Asia, police are also investing in machine learning tools to predict when
and where crime will occur, known in the business as real-time epidemic-type aftershock sequence
(ETAS) crime forecasting.

To be truly effective, agile security requires making pinprick changes to the built environment to deter
and design out threats of crime and terrorism. Deterrence may involve the use of defensive architecture
such as smart cameras, street lights, anti-vehicular systems, blast walls and strategically placed forest
canopy. The goal is to reduce the opportunities for perpetrators to target would-be victims or to do
damage.

Efforts to design out threats of crime and terrorism also involve making physical changes to the
environment, including building low-rise buildings, building green spaces and community centres,
promoting mixed communities and targeting renewal measures in neighbourhoods that exhibit
concentrated disadvantage. Investments in high-quality public goods and social cohesion can help
prevent crime and radicalization.

A final requirement of agile security is that it avoids curbing civil liberties, whether intentionally or
unintentionally. At a minimum, municipal governments need to find ways to consult with city residents
to discuss the motives and implications of new technologies. This means undertaking consultations,
especially in the most vulnerable communities.

Local authorities must also develop criteria related to personal data access, retention and redress, and
encourage algorithmic transparency where possible. New York City has recently established a task force
to examine the city’s automated decision-making systems. If citizens lose confidence in law enforcement
and social distance increases, this can undermine the latter’s ability to protect the public.

Ethical questions

Agile security measures can not only prevent crime, but also improve the efficiency of the criminal
justice system. For example, algorithms designed in the UK and US are being applied to determine
whether individuals charged with a crime represent a low, medium or high risk, and whether they are
eligible for pre-trial release or individual parole. AI-informed risk assessments can help judges - many of
whom have minutes to decide if someone is a flight risk, threat to society or could harm a witness - to
make more informed judgments.

Notwithstanding their promise, there are very serious ethical questions generated by the application of
these new technologies and public security. For example, without robust checks and balances,
technology-enabled security solutions can quickly corrode civil liberties, online and off. New facial
recognition and gait analysis technologies in China are seeking to detect 'suspicious behaviour' by
harvesting all manner of information on citizens. The company Cloud Walk Technology is mining
personal data to develop profiles on high-risk individuals. Not surprisingly, risk assessment and crime
prediction algorithms are coming under criticism for reproducing racial biases.

Of course, agile security is about more than deploying new technologies. While the revolution in policing
affairs certainly involved the digitization of data, it was powered by a transformation in police culture
and practices. Even so, proponents of agile security should engage with AI very cautiously, not least
because of its potential to reinforce biases. This is because AI is powered by real-world data that in turn
is produced by (biased) police officers. As a result, predictive tools have an inherent risk of producing
vicious feedback loops. Awareness of the unexpected impacts of AI is more important than ever. Agile
security measures should be informed by principles and procedures to ensure that the fairness and
transparency of these technologies are properly vetted.

If designed and deployed with diligence and care, the adoption of agile security measures can yield
economic savings. At a minimum, they should reduce unproductive expenditure on law enforcement
agencies, prosecutors, judges and penal authorities. By preventing crime and terrorism through
technology-enabled means, governments and businesses can also reduce medical costs generated by
victims, lower insurance premiums in high-risk cities, cut back on outlays on private security guards and
improve the overall investment climate.

While not a panacea, agile security can help prevent and reduce the risks of crime and terrorism.
Depending on the city and the types of security technologies deployed, a McKinsey report, published
this month, shows that the smart deployment of data-driven tools can help reduce fatalities by up to
10%, lower crime incidents by as much as 40% and dramatically reduce emergency response times.

While holding real promise, it is critical that agile security measures are introduced transparently and in
consultation with residents, with appropriate safeguards for data protection.

Counterterrorism

Counterterrorism relies on data collection through public and private sources

Parker, Scott. 6-22-2018. “3 Ways to Use Data to Fight Terrorism and Money Laundering.” Nextgov,
https://www.nextgov.com/ideas/2018/06/3-ways-use-data-fight-terrorism-and-money-
laundering/149043/

The increased severity of domestic security breaches due to terrorist threats and cyber crime poses a
strategic challenge for federal and state security services. The strengthening of human resources, now
widely deployed around the world, is not enough to meet the challenge alone. Increasing efficiency and
speed, controlling the means of communication used by terrorists, but also, and above all, anticipating
the lead-up to such actions, are all challenges that persist.

In this mass information age, the ability to handle big data—huge volumes of structured and
unstructured data—is absolutely crucial. Being able to analyze and extract key information in the fight
against cybercrime as quickly as possible will revolutionize the work of organizations mobilized in this
struggle. To increase efficiency, they must expand the data sources examined and optimize the
interoperability between their systems.

Cognitive search and analytics technologies are all about accessing the right information at the right
time—for people with the necessary authorization. These tools process big data in near real time to
surface patterns and relationships among disparate silos of information. Intelligent data processing
combined with machine learning enables computers to learn as they process information to deliver
increasingly relevant information. These tools can further the operational efficiency of intelligence
services and have the potential to exponentially increase their predictive analysis capacities.

Using these tools to become information-driven can help with the fight against terrorism, money
laundering and fraud. Here are a few examples:

Analyzing Text

Cognitive search and analytics tools enable data to be interpreted and similarities in topics and content
to be detected, even across disparate vocabularies. They automate and accelerate the creation of
networks mapping people, topics, locations, etc., while helping security services identify criminal
activity. Even in the case of "lone wolves," it is possible to draw upon the traces inevitably left on the
internet or the dark net to detect behavioral patterns, and thus prevent them from moving toward taking
action.

Cross Referencing Account, Card Numbers and Fund Transfers

Cognitive search and analytics technologies can also play a role in the fight against money laundering,
which is one of the main sources of funding for terrorism. Investigators must accurately identify
cybercriminals, drawing upon huge amounts of data in an extremely short period of time. Cognitive
technology allows data—in particular, financial data such as account and card numbers or fund
transfers—to be automatically cross-referenced in order to identify fraudulent activity.

Sparse information can be precisely detected and combined for "mapping" purposes, tracing the links
between suspects and movements of capital. Cognitive search and analytics draw upon this interaction
mapping to detect traces of illegal activity and track them back to the perpetrators.

Social Media Monitoring

Monitoring social networks to track organized crime is fundamental to the work of the intelligence
services. They use open source intelligence, which includes all the intelligence obtained from public
sources of information.

Recent terrorist attacks have shown that responsiveness is the key to effective surveillance. Monitoring
social networks, discussion forums, blogs and other digital communication tools is an essential way of
detecting radicalizing profiles and gaining real-time insights into potential threats. It allows for
identification of behavior posing a threat to domestic security and anticipation of future attacks.

Using advanced technology that is cognitive, proven, and complete is increasingly vital for a modern
intelligence service in its fight against terrorism, fraud and money laundering.

Big Data Analytics

Market Efficiency from Big Data Analytics has benefitted the UK economy by
£216 Billion

ICO 2017 (Information Commissioners Office. UK. “Big Data, Artificial Intelligence, Machine Learning
and ...,” April 9, 2017. https://ico.org.uk/media/for-organisations/documents/2013559/big-data-ai-ml-
and-data-protection.pdf.)

In 2012 the Centre for Economics and Business Research estimated that the cumulative benefit to the
UK economy of adopting big data technologies would amount to £216 billion over the period 2012-17,
and £149 billion of this would come from gains in business efficiency. There are obvious commercial
benefits to companies, for example in being able to understand their customers at a granular level and hence
making their marketing more targeted and effective. Consumers may benefit from seeing more relevant
advertisements and tailored offers and from receiving enhanced services and products. For example, the process
of applying for insurance can be made easier, with fewer questions to answer, if the insurer or the broker can get other data they need through big data analytics.
Big data analytics is also helping the public sector to deliver more effective and efficient services, and produce positive outcomes that improve the quality of
people's lives. This is shown by the following examples:

Big Data Analytics can spot deficiencies in the Health System

ICO 2017 (Information Commissioners Office. UK. “Big Data, Artificial Intelligence, Machine Learning
and ...,” April 9, 2017. https://ico.org.uk/media/for-organisations/documents/2013559/big-data-ai-ml-
and-data-protection.pdf.)

Health. In 2009, Public Health England (PHE) was aware that cancer survival rates in the UK were poor
compared to Europe, suspecting this might be due to later diagnosis. After requests from Cancer
Research UK to quantify how people came to be diagnosed with cancer, the Routes to Diagnosis project was conceived to
seek answers to this question. This was a big data project that involved using complex algorithms to analyse 118
million records on 2 million patients from several data sources. The analysis revealed the ways in which
patients were diagnosed with cancer from 2006 to 2013. A key discovery (from results published in 2011) was that in
2006 almost 25% of cancer cases were only diagnosed in an emergency when the patient came to A&E.
Patients diagnosed via this route have lower chances of survival compared to other routes. So PHE was
able to put in place initiatives to increase diagnosis through other routes. The latest results (published in 2015) show that
by 2013 just 20% of cancers were diagnosed as an emergency. The understanding gained from this
study continues to inform public health initiatives such as PHE's Be Clear on Cancer campaigns, which
raise awareness of the symptoms of lung cancer and help people to spot the symptoms early.

Big Data Analysis increases education quality

ICO 2017 (Information Commissioners Office. UK. “Big Data, Artificial Intelligence, Machine Learning
and ...,” April 9, 2017. https://ico.org.uk/media/for-organisations/documents/2013559/big-data-ai-ml-
and-data-protection.pdf.)

Education. Learning analytics in higher education (HE) involves the combination of 'static data' such as
traditional student records with 'fluid data' such as swipe card data from entering campus buildings, using virtual learning
environments (VLEs) and downloading e-resources. The analysis of this information can reveal trends that help to improve
HE processes, benefiting both staff and students. Examples include the following:

- Preventing drop-out via early intervention with students who are identified as disengaged from their studies by analysing VLE login and campus attendance data.
- The ability for tutors to provide high-quality, specific feedback to students at regular intervals (as opposed to having to wait until it is 'too late' – after an exam for instance). The feedback is based on pictures of student performance gleaned from analysis of data from all the systems used by a student during their study.
- Increased self-reflection by students and a desire to improve their performance based on access to their own performance data and the class averages.
- Giving students shorter, more precise lecture recordings based on data analysis that revealed patterns regarding the parts of full lecture recordings that were repeatedly watched (assessment requirements, for example).

Such benefits have been seen by HE institutions including Nottingham Trent University, Liverpool John
Moores University, the University of Salford and the Open University.

Big Data Analytics reveal deficiencies in the transport system to be improved

ICO 2017 (Information Commissioners Office. UK. “Big Data, Artificial Intelligence, Machine Learning
and ...,” April 9, 2017. https://ico.org.uk/media/for-organisations/documents/2013559/big-data-ai-ml-
and-data-protection.pdf.)

Transport. Transport for London (TfL) collects data on 31 million journeys every day including 20 million ticketing system
'taps', location and prediction information for 9,200 buses and traffic-flow information from 6,000 traffic signals and 1,400 cameras. Big
data analytics are applied to this data to reveal travel patterns across the rail and bus networks. By identifying these
patterns, TfL can tailor its products and services to create benefits to travellers in London such as:

- more informed planning of closures and diversions to ensure as few travellers as possible are affected
- restructuring bus routes to meet the needs of travellers in specific areas of London; for instance, a new service pattern for buses in the New Addington neighbourhood was introduced in October 2015
- building new entrances, exits and platforms to increase capacity at busy tube stations – as at Hammersmith tube station in February 2015.

Micro Targeting

Comparing traditional and microtargeted advertising

Borgesius 2018 (Zuiderveen Borgesius, F.J., Möller, J., Kruikemeier, S., Ó Fathaigh, R., Irion, K.,
Dobber, T., Bodo, B. and de Vreese, C., 2018. Online Political Microtargeting: Promises and Threats for
Democracy. Utrecht Law Review, 14(1), pp.82–96. DOI: http://doi.org/10.18352/ulr.420)

Traditional forms of advertising, such as television ads, reach a mass audience. But not the entire
audience might be interested in such ads. Through microtargeting, specific audiences can be connected
with specific agenda points of political parties. So microtargeting could lead to more relevant information
or ads for specific audiences. To illustrate: say Alice is a 20-year-old citizen and is not interested in
politics. Yet Alice regularly checks her friends’ Facebook updates. On Facebook, Alice receives a political
ad that informs her about the viewpoints of a political party that targets younger citizens (e.g., pro-university
funding). Because the political information concerns an issue that appeals to younger citizens, Alice decides
to find more information about the party and its viewpoints. Thus, targeted political advertising
encourages Alice to find more information, and perhaps to vote for this party. There is a second reason why targeted
political information can amplify the effects of campaigns. Online political microtargeting might reach
citizens who are difficult to reach through mass media such as television. A challenge within democratic
societies is to reach politically uninterested voters and mobilise them to participate in politics. Such
citizens often opt out of traditional media exposure, such as watching television news and reading newspapers. It has been argued that those who
tune out of news may not be informed about politics. However, many of these citizens may use the internet, for instance for entertainment or
social media. By targeting these uninterested citizens online, a political party could reach them, expose them

to political information, and influence or persuade them. Such exposure increases the likelihood that
citizens cast their vote or become more interested in politics. In this way, targeted political information
may help to reach those who are difficult to reach in an offline environment. In sum, online political
microtargeting has possible advantages for citizens: it can reach citizens who ignore traditional media,
and it can interest people in politics through tailored messages. Microtargeting might thus increase
information, interest in politics, and electoral turnout.

The marketplace of ideas is more tailored to the voter and their personal concerns

Borgesius 2018 (Zuiderveen Borgesius, F.J., Möller, J., Kruikemeier, S., Ó Fathaigh, R., Irion, K.,
Dobber, T., Bodo, B. and de Vreese, C., 2018. Online Political Microtargeting: Promises and Threats for
Democracy. Utrecht Law Review, 14(1), pp.82–96. DOI: http://doi.org/10.18352/ulr.420)

Regarding public opinion, microtargeting promises to increase the diversity of political campaigns, and voters'
knowledge about certain issues. First, microtargeting could make political campaigns more diverse. In
representative democracies, voters select political parties that they find suitable to form the
government. During the election campaign, parties explain their political programme to the electorate
to generate support. From a liberal perspective on democracy, election campaigns contribute to the
marketplace of ideas. All parties offer their political ideas and priorities to the public who can then choose the party that best fits their political ideas,
preferences, and priorities. However, a key problem for voters is that the number of parties, each with a political programme, is so large that voters are overloaded
with information. Hence, voters choose to, metaphorically speaking, visit only a small number of market stands in the marketplace of ideas. Voters thus make
their electoral decisions with limited information. Microtargeting
can expose voters to information that is most relevant
for their voting decision. Many voters have specific interests in particular policy fields, for example
immigration or education. With microtargeting, political parties can target voters with information
within these preferred policy fields. Hence, voters can base their voting decision on the programme that
convinces them the most about the issue they care about the most. This would not be possible in an
exclusively mass-communicated information environment. Mass-communicated campaigns are usually limited to a small number of
issues that are discussed extensively by all parties. Such niche topics are unlikely to be discussed during national mass-communication campaigns. Microtargeting
could thus diversify political campaigns. Even
though there is a smaller audience for each issue, more issues could be
discussed during political campaigns. With microtargeting, topics which are only relevant to small audiences may get a market stand in the
marketplace of ideas. A potential benefit of microtargeting on public opinion is that voters can use their limited
attention to process political information more efficiently, and therefore can make better-informed
decisions. Thus, voters can base their decision on which candidate made the best proposal to solve the
problem that is most important to them.

Smart Cities

New development and the construction of infrastructure rely on data collection

Chui, Michael. 2010, "The Internet of Things," McKinsey Quarterly,
http://www.mckinsey.com/industries/high-tech/our-insights/the-internet-of-things

Data from large numbers of sensors, deployed in infrastructure (such as roads and buildings) or to
report on environmental conditions (including soil moisture, ocean currents, or weather), can give
decision makers a heightened awareness of real-time events, particularly when the sensors are used
with advanced display or visualization technologies.

Data collection can help traffic management in cities

Chui, Michael. 2010, "The Internet of Things," McKinsey Quarterly,
http://www.mckinsey.com/industries/high-tech/our-insights/the-internet-of-things

Some advanced security systems already use elements of these technologies, but more far-reaching
applications are in the works as sensors become smaller and more powerful, and software systems more
adept at analyzing and displaying captured information. Logistics managers for airlines and trucking lines
already are tapping some early capabilities to get up-to-the-second knowledge of weather conditions,
traffic patterns, and vehicle locations. In this way, these managers are increasing their ability to make
constant routing adjustments that reduce congestion costs and increase a network’s effective capacity.

Data Science Good

Collection of data enables data science brings benefits


Mir, Saeid. 1-3-2020, "What are the benefits of data science?" Quora,
https://www.quora.com/What-are-the-benefits-of-data-science

Data Science is one of the fastest-growing industries today. According to IBM, the annual demand for
data scientists, data developers, and data engineers will reach nearly 700,000 by 2020.

In layman’s terms, data science is a pool of tools and techniques used to study and evaluate data. Its
main objective is to extract valuable information for making viable business decisions.

Data Science concepts and processes are mostly derived from data engineering, statistics, programming,
social engineering, data warehousing, machine learning, and natural language processing.

The key techniques in use are data mining, big data analysis, data extraction, and data retrieval.

Therefore, you can say Data Science is a blend of various tools, algorithms, and machine learning
principles with the goal to discover hidden patterns from raw data.

Data science is a multidisciplinary field that has myriad advantages.

Here are some of its main benefits:

Better business value: The principal advantage of incorporating Data Science in an organization is faster
and better decision-making. These data-driven decisions, in turn, lead to higher profitability, improved
operational efficiency, business performance, and workflows.

Identification and refining of target audiences: Data Science helps with the precise identification of the
principal customer groups via a thorough analysis of disparate sources of data. It helps increase profit
margins as organizations can tailor services and products for specific customer groups.

Better risk analysis: Predictive analytics fueled by Big Data and Data Science allows users to scan and
analyze news reports and social media feeds to stay updated on the latest industry trends. Moreover, it
also promotes detailed health-tests on your suppliers and customers. It is useful for assessing risks and
taking necessary action for mitigation well in advance.

Recruit better in lesser time: Data Science can help your recruitment team make speedier and more
accurate selections through data mining, in-house processing of CVs and applications, and even
sophisticated data-driven aptitude tests and games.

Artificial Intelligence

AI development is being pushed along by the need for more advanced data
collection methods. 4 warrants: Advanced data collection, data analysis,
advanced recognition of documents, and self-sufficiency. These can be used in
multiple fields

Smith, Andre. 1-31-2019, "Closing The Loop: How AI Is Changing Data Collection," The Digitalist,
https://www.digitalistmag.com/future-of-work/2019/01/31/closing-loop-how-ai-is-changing-data-
collection-06195955

As we move into 2019, it’s getting difficult to find an industry that’s not feeling the effects of recent
advances in artificial intelligence (AI) technology. We’re witnessing the beginnings of a revolution that’s
bound to change almost everything about the way companies do business and approach tasks at every
level.

All of the newest AI, however, can only be put into wide use if the companies that seek to deploy it have
access to high-quality data sources to feed it. Fortunately, AI is already moving into the realm of data
collection as well, creating the possibility of building self-feeding AI systems in the near future. For some
insight, here’s a look at the ways that AI is impacting data collection right now.

Data solicitation and classification

Generally speaking, today’s AI-powered solutions rely on existing data stores or active input from
human sources to form the foundation of their operations. However, the latest generation of chatbots
and related systems has started to become far more adept at soliciting information from the people
they interact with and can even proactively request needed data without any human intervention. The
same process can be found at work in AI-driven online surveys, which can adapt and react to factors like
sentiment and context to determine how to respond and when to ask further questions. After gathering
the data, another class of AI systems can sort through it (as well as any other unstructured data sources
that may be available) to classify the available information so it may be used further up the chain.

Automated data extraction

The latest AI systems are also proving quite adept at scanning troves of documents for identifying
information such as document numbers and other contextual clues without being preprogrammed to do
so. That’s a crucial development since it enables the AI to look at digitized versions of paper records with
more of a human eye, which was one of the last major hurdles for businesses trying to leverage
historical data that previously would have required a large, dedicated staff to prepare and clean. Today,
AI can recognize patterns in document formats to identify information that previous generations of the
technology would have misinterpreted or overlooked completely.

Data validation and cross-referencing

Another way that AI is impacting data collection is the fact that the technology is now sufficiently
advanced and can autonomously verify and cross-reference data inputs to maintain high data quality.
It’s the latest evolution in data anomaly detection, where the systems can not only spot an outlier within
collected data sets but can also cross-reference new data with existing data to look for conflicts or
concurrence. That process helps to prevent the collection of duplicate data and works to keep the
overall flow of information both accurate and valid. It’s also worth noting that data validation tasks had
been one of the most labor-intensive parts of any data collection and storage operation, and AI now
reduces it to a real-time process that requires almost no direct human intervention.
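The cross-referencing step described above can be sketched in a few lines. This is an illustrative example only, with a hypothetical record shape and invented IDs, not the API of any particular product: incoming records are checked against an existing store and triaged into duplicates, conflicts, and genuinely new data.

```python
# Hypothetical records keyed by ID; values are the record payloads.
existing = {"r1": {"amount": 100}, "r2": {"amount": 250}}
incoming = [
    ("r1", {"amount": 100}),  # duplicate: same ID, same data
    ("r2", {"amount": 300}),  # conflict: same ID, different data
    ("r3", {"amount": 50}),   # genuinely new record
]

duplicates, conflicts, new = [], [], []
for rec_id, payload in incoming:
    if rec_id not in existing:
        new.append(rec_id)          # safe to ingest
    elif existing[rec_id] == payload:
        duplicates.append(rec_id)   # skip to avoid double-counting
    else:
        conflicts.append(rec_id)    # flag for review

print(duplicates, conflicts, new)  # ['r1'] ['r2'] ['r3']
```

A production system would add fuzzy matching and provenance tracking, but this duplicate/conflict/new triage is the core of the validation step the card describes.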

A self-contained ecosystem

As things stand today, businesses are getting closer to a time when their AI systems will become far
more self-contained than they are at present. As AI gets better and better at seeking out, classifying, and
validating needed data on its own without being prompted to do so, we may soon see AI systems that
can evolve on their own, free from the shackles of limited or inadequate data input. Once that happens,
business AI systems will be able to grow alongside the companies they serve, offering both insight and
innovation, and delivering value on a scale that even the most optimistic technologist may have
dismissed as fantasy just a few short years ago.

Data collection and AI go hand in hand. The analysis of security camera data
proves this point

Rahul Asthana, 7-30-2019, "How Big Data and Artificial Intelligence Work Together," Colocation America,
https://www.colocationamerica.com/blog/big-data-and-artificial-intelligence

The trouble with Big Data is that there is too much of it. An example is the data collected from the video
surveillance cameras for a small community. The community is safer because of the 100 or so video
cameras covering the streets, parking lots, and intersections.

These cameras operate 24 hours per day, 365 days per year. They collect a total of 2,400 hours of video
footage every day, which equals 876,000 hours of video footage each year. If human beings had to
review this data for suspicious activity, at real-time speed, this would require a staff of 60 people. That is
not economically feasible.
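The footage volumes quoted above are simple arithmetic and can be checked directly. A quick sketch (the article's staffing figure of 60 depends on unstated assumptions about shift length and feeds per reviewer, so only the footage totals are computed here):

```python
# Back-of-the-envelope check of the surveillance footage volumes cited above.
CAMERAS = 100          # cameras covering the community
HOURS_PER_DAY = 24     # each camera records continuously
DAYS_PER_YEAR = 365

footage_per_day = CAMERAS * HOURS_PER_DAY           # hours of video per day
footage_per_year = footage_per_day * DAYS_PER_YEAR  # hours of video per year

print(footage_per_day)   # 2400
print(footage_per_year)  # 876000
```

Both figures match the ones quoted in the card.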

The only way to deal with this amount of collected information is to manage it with data-scanning and
analytics using artificial intelligence (AI) software algorithms.

Big Data Offerings

For companies that want to process Big Data, they need robust IT computer systems, AI programming,
consultants for analytics, and support services. Here are some considerations:

On-Premises Mainframes vs. Off-Site Servers: Major international conglomerates can afford the large
mainframe systems to process Big Data on-premises. However, most companies choose dedicated
servers or co-location servers, which provide the processing power needed for the analysis of Big Data.
This IT solution is managed off-site. The advantages of having servers off-site include outsourcing the
maintenance of them, lowering capital investment in facilities, and having the flexibility to increase IT
services as needed.

AI Programming: Data mining of Big Data is achieved by using AI programming that works with
algorithms to find patterns in the Big Data that are noteworthy. This provides insights that help
management make better-informed decisions.

Consultants for Analytics: It is cost-effective for most companies to outsource the IT consultant work to
develop their analytics of Big Data. These specialists can be engaged on an “as-needed” basis to
construct an analytics program that is suitable for a company’s requirements.

Support Services: Support services include managed hosting, IP transit services, and connections to the
cloud.

Big Data is managed well by using large data centers that are strategically placed. The main benefit of
using redundant processing systems that are physically located in separate geographic areas is that any
localized failure, such as one caused by a natural disaster, does not take the entire system down.

Business-critical hardware needs to have at least triple redundancy to achieve 99.9% uptime
performance. Foundational IT structures consider the risk of calamities and plan for those that can
impact an IT network. Load-balancing, in real-time, manages any partial systemic failure of some
network servers to re-route the processing to the servers that remain operating in the network.

Big Data Storage Management

Data storage requirements for Big Data are substantial. One approach is to capture and process the
localized data and then forward the storage to a more extensive storage system that is maintained in
the cloud.

Another approach is to use a “virtualized” data system that creates a virtual layer of the data. This
virtual layer knows where the data is stored on the network. When calculations are being made using
the AI algorithm in a virtual system, only the data needed for that specific calculation is accessed. The
original data storage remains intact and in place, without the need for copying data files.

This approach utilizes a network-wide data management protocol. It reduces the need for data storage
memory as well as improves the computational processing speeds.

Artificial Intelligence

Firstmark produced an infographic chart that shows the 2019 Data & AI Landscape. The trend of moving
Big Data computational processing to the cloud is indisputable. Forbes reports that distributed data
storage is being rapidly replaced by storing Big Data on the cloud and then data mining using SaaS AI
programs.

A major digital transformation allows companies to improve management decisions and discover
insights that lead to innovation. The change comes from deep learning. Deep learning is a technique of
using AI programming to enhance its functions through machine learning.

No human intervention or detailed programming by human beings, for each conceivable instance, is
necessary. Instead, a set of algorithms are designed that the programming uses to learn by the
application of the algorithms to a large data set.

Artificial Intelligence and Big Data

AI and Big Data are being used increasingly by companies of modest size. They access the IT hardware
resources available from data centers. Then, they apply the AI tools available as cloud services to the Big
Data that they collect.

DZone notes that some ways that AI is applied to Big Data Analytics include:

Detecting Anomalies: AI can analyze Big Data to detect anomalies (unusual occurrences) in the data set.
This can be applied to networks of sensors and parameters that have a predefined appropriate range.
Any node of the network that is outside of the range is identified as a potential problem that needs
attention.
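The range-based anomaly detection described above amounts to flagging any node whose reading falls outside its predefined range. A minimal sketch, with invented sensor names and an assumed acceptable range:

```python
# Flag any sensor node whose reading falls outside a predefined range.
LOW, HIGH = 10.0, 50.0  # assumed acceptable operating range

readings = {"node-1": 23.4, "node-2": 71.2, "node-3": 48.9, "node-4": 3.1}

anomalies = {node: value for node, value in readings.items()
             if not LOW <= value <= HIGH}

print(anomalies)  # {'node-2': 71.2, 'node-4': 3.1}
```

Real deployments layer statistical or learned thresholds on top, but the principle of comparing each node against an expected range is the same.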

Probabilities of Future Outcomes: AI can analyze Big Data using Bayes theorem. The likelihood of an
event occurring can be determined using known conditions that have a certain probability of influencing
the future outcome.
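The Bayes theorem application mentioned above reduces to one line of arithmetic. With hypothetical numbers (a monitoring system where a real event has a 1% prior probability, alerts fire for 90% of real events, and 5% of all observations trigger an alert):

```python
# Bayes' theorem: P(event | alert) = P(alert | event) * P(event) / P(alert)
p_event = 0.01             # prior probability of a real event
p_alert_given_event = 0.9  # alert rate when an event is actually occurring
p_alert = 0.05             # overall alert rate

p_event_given_alert = p_alert_given_event * p_event / p_alert
print(round(p_event_given_alert, 2))  # 0.18
```

Even with these generous assumed rates, only 18% of alerts correspond to real events, which is why probabilistic analysis of this kind matters for prioritising follow-up.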

Recognizing Patterns: AI can analyze Big Data to look for patterns that might otherwise remain
undetected by human supervision.

Data Bars and Graphs: AI can analyze Big Data to look for patterns in bars and graphs that are made
from the underlying data set.

Another key driver of this trend is that Big Data is increasing through the explosion of connected devices
being deployed with the expansion of the Internet of Things (IoT).

Data collection will enable the development of the Internet of Things

Rahul Asthana, 7-30-2019, "How Big Data and Artificial Intelligence Work Together," Colocation America,
https://www.colocationamerica.com/blog/big-data-and-artificial-intelligence

Internet of Things (IoT)

Estimates by Techjury are that by 2025, there will be over 64 billion devices worldwide connected to
the Internet of Things (IoT). There are already about 24 billion IoT devices.

Each device collects data. This trend is responsible for an exponential increase in Big Data. Collecting
such massive data from numerous “smart” devices is only useful if it can be processed, and data mined
in meaningful ways.

Improved Trust

AI is useful for identification of people using biometric data such as facial recognition, fingerprints, and
retinal scans of eyes.

Conclusion

AI and Big Data are now permanently connected. Their combined usage will expand significantly over
the years. This megatrend is driven by the strong value proposition of the AI analytics applied to Big
Data and the rapid expansion of the IoT, increasing the amount of Big Data.

IoT/AI Good (Econ)

IoT development is great for the consumer as it provides financial incentives

Bridgwater, Adrian. 2015, "Will Internet Of Things Robots Take Over Earth By 2020?" Forbes,
http://www.forbes.com/sites/adrianbridgwater/2015/01/21/will-internet-of-things-robots-take-over-
earth-by-2020/2/#51ea42b11aa3

The German software maker also points to the less obvious area of financial services and banking saying
that IoT is just beginning to emerge in the financial services sector as retail banks are still grappling with
the cost/benefit ratio of implementing IoT technology. “However, as banks seek new ways to engage
customers and increase their line of business opportunities, they will begin to use IoT-enabled incentives
to grow their customer base and make banking easier and more rewarding. Three emerging trends in
this space include mobile banking, spending tracking and wearables,” stated SAP’s Lynch.

IoT/AI Good (Healthcare)

Data tracking allows the IoT to manage healthcare better than people

Atzori, Luigi (University "Mediterranea" of Reggio Calabria, Italy) and Giacomo Morabito (University of
Catania, Italy). October 2010, "The Internet of Things: A Survey," p. 9

Many are the benefits provided by the IoT technologies to the healthcare domain and the resulting
applications can be grouped mostly into: tracking of objects and people (staff and patients);
identification and authentication of people; automatic data collection and sensing.

Tracking is the function aimed at the identification of a person or object in motion. This includes both
real-time position tracking, such as the case of patient-flow monitoring to improve workflow in
hospitals, and tracking of motion through choke points, such as access to designated areas. In relation to
assets, tracking is most frequently applied to continuous inventory location tracking (for example for
maintenance, availability when needed and monitoring of use), and materials tracking to prevent left-ins
during surgery, such as specimen and blood products.

Data Collection and the IoT

Data collection has led to the development of the IoT. This has revolutionized
the way data is used and analyzed and will improve consumers' lives

Shore, Joel. 2015, "How IIoT and Consumer IoT Handle Data Differently," Tech Target,
http://searchcloudapplications.techtarget.com/feature/How-IIoT-and-consumer-IoT-handle-data-
differently

The challenge, said Christian Renaud, research director for IoT at 451 Research, is creating cloud
applications that are robust enough to handle the torrent of data without slowing down. "You're dealing
with zillions of miniscule packets containing telemetry and diagnostic data," he said. "You can't have an
application start skipping packets because it can't handle the pace." General Electric (GE) has put that
torrent to good use, developing Predix, an IIoT analytics and big data platform that examines sensor
telemetry from industrial machinery to minimize downtime. As the world's largest maker of jet engines
for commercial airliners, GE's aviation division used Predix to analyze 340 terabytes of data from 3.4
million flights to improve asset performance and minimize disruptions.

Data collection has allowed for better consumer tech developments and
innovations

Rose, Karen. October 2015, “THE INTERNET OF THINGS: AN OVERVIEW Understanding the Issues and
Challenges of a More Connected World,” Internet Society,
https://www.internetsociety.org/sites/default/files/ISOC-IoT-Overview-20151221-en.pdf

New algorithms and rapid increases in computing power, data storage, and cloud services enable the
aggregation, correlation, and analysis of vast quantities of data; these large and dynamic datasets
provide new opportunities for extracting information and knowledge. Cloud computing, which leverages
remote, networked computing resources to process, manage, and store data, allows small and
distributed devices to interact with powerful back-end analytic and control capabilities.

Rebuttal to IoT causes Privacy Invasions

IoT is a growing industry, and growing security protections come with it. As the technology advances, so
will the security systems surrounding IoT. Additionally, expectations of privacy are fading as individuals
put more and more of their lives on the web.

IoT/AI Good (Privacy)

Privacy concerns come from the misunderstanding of the technology

Wasik, Bill. 2013, "In the Programmable World, All Our Objects Will Act as One," Wired News,
https://www.wired.com/2013/05/internet-of-things-2

Certainly the gradual acceptance of smart toll tags for cars (e.g., E-ZPass) shows that such qualms can be
overcome, so long as there’s a demonstrated benefit and a fair assurance of security. In that regard,
personalized billboards are arguably a step in the wrong direction, but wireless payments will make
users happy; so too will the coffee shop that knows your order and lets you skip the line, or the rental-
car seat that adjusts to your preferences before you sit in it. Just as with social networking, the privacy
concerns of a sensor-connected world will be fast outweighed by the strange pleasures of residing in it.

AT: Privacy

Companies have heard the concerns of the public and are fixing the issue. Tech
makers like Samsung are improving their products, creating new software to
protect privacy, and adding features that inform consumers of data collection
and let them switch it off

Fallon, Patrick T. 1-18-2020, "Needing protection from hackers, Samsung's smart TVs get an app that
controls how viewer data is shared," Fortune, https://fortune.com/2020/01/18/samsung-smart-tv-data-
privacy-hacker/

Earlier this month at CES 2020, Samsung unveiled a new app for its smart TVs called Privacy Choices. The
app will arm Samsung TV owners with the abilities to see how their television is tracking them and to
turn that tracking off. With televisions among the most ubiquitous devices in homes, Samsung's timing
to announce this app is right—or maybe overdue, some experts argue, as smart TVs are increasingly
becoming a security and privacy concern. And some experts believe the trouble could get worse.

The new app is the latest in a string of acknowledgements that televisions are collecting information on
viewers. There's also mounting evidence that malicious hackers are targeting internet-connected
televisions that have video cameras and microphones to spy and steal data.

Last month, the FBI warned consumers that internet-connected smart televisions are vulnerable to
hacks ranging from the annoying to the downright creepy. "At the low end of the risk spectrum,
(hackers) can change channels, play with the volume, and show your kids inappropriate videos," the FBI
said. "In a worst-case scenario, they can turn on your bedroom TV's camera and microphone and silently
cyberstalk you."

It's not often consumers think about the security implications of their televisions, as typical hacking
stories center on smartphones, computers, networks, and websites. But consumers should be ready for
more television hacks, says Rishi Kaul, a television and security expert at Ovum.

"As our televisions become home to increasingly sensitive information (e.g. financial info, health data,
etc.), the devices become more attractive targets for hacking," Kaul says.

Moreover, hackers have become emboldened by television manufacturers seeing security as an
afterthought—if they think about it at all. "[TVs] have not been designed with security considerations in
mind," IHS Markit analyst Paul Gray says.

Ken Munro, a security expert at Pen Test Partners, says for years there's been evidence that hackers are
increasingly targeting televisions and finding new methods to attack TVs. And in large part, he blames
the TV-makers themselves.

"Security research, over the last 5 years, has shone a light on poor behavior by TV manufacturers,"
Munro says. He adds that TV manufacturers are only starting to come around to the idea of
safeguarding against "audio listening and improved privacy controls."

To date, there have been precious few ways for TV owners to protect themselves from hacks. Samsung
sells televisions with McAfee Security for TV software built-in, which lets users scan their TVs for
malware—but it's only available in a handful of models. Most other manufacturers don't bundle anti-
malware software with their televisions and fail to provide an easy method for getting a malware
scanner on the device.

Instead, TV users need to be informed and actually take action, Munro says. From turning off cameras to
adjusting network settings, the only way to come close to safeguarding a television is to spend time
tweaking.

"I spent about 30 minutes working through the various settings on my latest Samsung TV, switching off
functionality and deselecting various options," Munro says of his own efforts to protect his television.
But even after all that time, he acknowledges that his television still isn't perfectly secure.

An eye on privacy

Although Samsung's Privacy Choices app won't necessarily harden security against hacks—because it
doesn't provide tools to stop hackers; it gives people control over how their data is shared—all three
analysts believe it's a step in the right direction.

"The company is hamstringing its own data collection capabilities in pursuit of stronger transparency
and privacy controls," Kaul says. It'll remain unclear, however, what other kind of data Samsung might
be able to collect until the Privacy Choices app is actually released and its final slate of user controls is
made public.

Munro agrees that Samsung's app is a welcome addition. But he notes that users still need to get the
app on their televisions, review their settings, and turn off what they don't like.

"I would really like to see data privacy options switched on by default, so the consumer has to make a
conscious decision to share their data," he says.

Looking ahead, analysts are concerned about the prospect of television security and privacy. While
Samsung has taken some steps in addressing the problem, it has 31% market share of global television
sales, according to IHS Markit. The rest of the market needs to follow Samsung's lead to create a
broader security net for consumers.

But whether the competition actually follows remains to be seen. Munro fears that TV makers have a
"financial incentive" to limit privacy controls and increase their per-unit margins by selling consumer
data. For its part, Samsung has said on several occasions that it doesn't collect or sell user data from its
televisions.

Ultimately, the only way to safeguard consumer privacy might be through lawmakers regulating the
industry and requiring TV makers to think about privacy. The problem, however, is that such regulation
has been slow going.

In 2018, Senators Edward J. Markey (D-MA.) and Richard Blumenthal (D-CT.) called on federal regulators
to investigate smart TV privacy and protect American users. However, the request didn't compel federal
regulators to actually investigate, and given the FBI's warning last month, little has changed.

That said, the U.S. Federal Trade Commission (FTC) has, at times, targeted TV makers for violating user
privacy. In 2017, for instance, the FTC fined Vizio $2.2 million after discovering that the company's smart
TVs were collecting "as many as 100 billion data points each day from millions of TVs." Vizio was then
selling that information, which included what people were watching and when, to third-party
advertisers. It was an important indicator to TV makers that the government stepped in, but little has
happened since.

As Munro suggests, the onus is still on TV makers to use the tools at their disposal and protect user
privacy. The question centers on whether they will. "It is perfectly possible to create a much more
secure TV," Munro says, "if the manufacturer is so motivated."

AT: Privacy risks part 1

http://www.theregister.co.uk/2010/10/04/iab_cookie_advice/

In general, the uses of customer data described in this section are designed to reduce firms' costs and
increase the attractiveness of their product offerings by reducing information asymmetries. As such, they
represent a real gain to society. However, in some sense the storage of this data represents a larger
potential privacy risk to individuals than advertising data. Data like this tends both to be stored for
longer than data used for advertising purposes and to be more easily tied back to an individual. We
discuss these risks in turn.

First, most data stored for online advertising is attached to an anonymous profile attached to a
particular IP address. It is far harder for an external party to tie such data back to a specific individual
user than the kind of data used for product personalization discussed in this section, which has the
explicit purpose of linking online data to a real person and their actions.

Second, the majority of online advertising data is stored for a short time. Indeed, the IAB suggested in
2010 that such data collection could be limited to a 48-hour window. Though this met with some
controversy, it is indicative of the extent to which data for advertising is short-lived. Purchase decisions
occur relatively quickly, so prior browsing behavior quickly becomes irrelevant to predicting whether a
customer will buy.

AT: Privacy risks part 2

Kennedy, Joe. April 17, 2017, "Should Antitrust Regulators Stop Companies from Collecting So Much
Data?" Harvard Business Review, https://hbr.org/2017/04/should-antitrust-regulators-stop-companies-from-
collecting-so-much-data

With regard to free services, while companies such as Facebook, Google, and Twitter may have a very
large share of the consumer markets for their narrow service offerings, the markets themselves are two-
sided — and the side where they earn most of their revenue is advertising, which is characterized by
fierce competition, powerful counterparties, and constant evaluation of the relative performance of
different advertising outlets. So in this case traditional concerns of abuse, such as pricing below marginal
cost and product tying, don’t really apply, and can actually benefit competition and consumers.

When it comes to privacy, those who don’t believe that merely possessing lots of data is anticompetitive
suggest that antitrust regulators should leave that to privacy and consumer protection regulators. In the
United States, that principally means the Federal Trade Commission, which to date has largely acted on
a case-by-case basis to deal with bad conduct stemming from the use of data. There is no evidence that
the mere possession of more data provides any greater risk to privacy. But data does drive many of our
most important emerging technologies, including autonomous cars, language translation, and other
artificial intelligence–based innovations. Nor is there evidence that consumers are demanding more
privacy protection in the products they use. Most consumers are willing to share large amounts of
personal data in return for free services they value. Consumers tend to object only when their data is
actually misused, something regulators already take action to address.

Users' data is already under growing legal protection

Accenture, "Guarding and Growing Personal Data Value," Accenture-Guarding-and-Growing-Personal-Data-Value-POV-Low-Res.pdf

Against a backdrop of growing consumer concern, many governments are reviewing regulation to
protect data. In 1993, four countries had data-privacy regulations. By late 2013, this number had
reached 101. This means that 66 percent of the world’s population was covered by data-protection
regulation in 2013, up from 42 percent in 1993. In some cases, these regulations are placing more
restrictive provisions on businesses than before. For example, some countries are moving toward data
localization, requiring personal data about any of their citizens to be held domestically rather than
stored or managed abroad. Russia has set a timetable to introduce data localization in late 2015. As a
result of the new law, it is reported that large technology companies might have to pay as much as
US$200 million to build a data center in Russia, compared with US$43 million in the United States.

Stanford Encyclopedia of Philosophy, "Privacy and Information Technology"

First published Thu Nov 20, 2014; substantive revision Wed Oct 30, 2019

Acknowledging that there are moral reasons for protecting personal data, data protection laws are in
force in almost all countries. The basic moral principle underlying these laws is the requirement of
informed consent for processing by the data subject, providing the subject (at least in principle) with
control over potential negative effects as discussed above. Furthermore, processing of personal
information requires that its purpose be specified, its use be limited, individuals be notified and allowed
to correct inaccuracies, and the holder of the data be accountable to oversight authorities (OECD 1980).
Because it is impossible to guarantee compliance of all types of data processing in all these areas and
applications with these rules and laws in traditional ways, so-called “privacy-enhancing technologies”
(PETs) and identity management systems are expected to replace human oversight in many cases. The
challenge with respect to privacy in the twenty-first century is to assure that technology is designed in
such a way that it incorporates privacy requirements in the software, architecture, infrastructure, and
work processes in a way that makes privacy violations unlikely to occur. New generations of privacy
regulations (e.g. GDPR) now require standardly a “privacy by design” approach. The data ecosystems
and socio-technical systems, supply chains, organisations, including incentive structures, business
processes, and technical hardware and software, training of personnel, should all be designed in such a
way that the likelihood of privacy violations is as low as possible.

AT: Data stolen by hackers

Data that is collected can be encrypted using blockchain technology. This keeps
the data safe in the servers and in transmission to others

Petersson, David. Oct 31, 2018, "What Companies Do With Your Personal Data And How Blockchain
Protects It," Forbes, https://www.forbes.com/sites/davidpetersson/2018/10/31/what-companies-do-
with-your-personal-data-and-how-blockchain-protects-it/

The technical response

The “problem” with computer data is that it is easily replicated - contrary to paper documents. When it
comes to paper money, blockchain has done a decent job in preventing this feature; by
cryptographically signing the transactions, it ensures there is just one true “owner,” and by
decentralizing and spreading the data into several nodes, it effectively combats the single-point-of-
failure syndrome. Even if hackers manipulate and overwrite the data, they still have to convince at least
51% of the network to accept their forgery as a valid transaction.

While this works well for monetary transactions, it becomes catastrophic when applied to personal
information. Blockchain could effectively protect the ownership rights of personal data, but it does not
do good on protecting it from being seen – especially as everyone would receive a copy of that data. For
this reason, we have the concept of Self-Sovereign Identities, or SSI for short.

SSI primer

SSI is based on the principle of encryption, where public and private cryptographic keys are used to
“sign” documents. Normally, these keys are generated by an app on your device and are unique to you.
To simplify how this works, this cryptographic concept is based on mathematical tricks. For every
document, we can generate a “hash number” that is (almost) unique to every document in the world.
This hash number is obtained by reading all (or parts) of a document and, considering the values and
sequence of bytes, creating a unique number that represents that document.

Next, the private key is used to “sign” that document, which means a new number is generated based
on the combination of the two. The good part is that this operation is unidirectional. It’s like guessing
prime numbers; there is no formula for that – we just need to divide the number by half of the
preceding numbers to see if it is a prime or not.

But, there is a way to verify the number and that is via the public key. By comparing the final hash with
the public key we can be sure that the person is the true owner of that document, as no one else in the
world has access to that private key (this is why it is so disastrous to lose your private keys – millions
were lost in Bitcoin due to this error).
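The "hash number" concept above can be sketched with Python's standard hashlib (the sample document text is invented for illustration; production SSI systems would also sign such digests with a private key, which this sketch omits since the standard library provides no asymmetric signing):

```python
import hashlib

def document_hash(data: bytes) -> str:
    """Return the SHA-256 'hash number' of a document: a fixed-size
    fingerprint that is, for practical purposes, unique to its contents."""
    return hashlib.sha256(data).hexdigest()

doc = b"Birth certificate: Jane Doe, born 2000-01-01"
tampered = b"Birth certificate: Jane Doe, born 1990-01-01"

h1 = document_hash(doc)
h2 = document_hash(doc)       # hashing the same bytes is deterministic...
h3 = document_hash(tampered)  # ...but any change yields a different digest

assert h1 == h2 and h1 != h3
# The function is one-way: the digest alone reveals nothing practical
# about the document's contents, which is what makes it safe to publish.
```

Signing then means transforming this digest with the private key, so that anyone holding the matching public key can confirm who produced the document without ever learning the private key.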

SSI takes this cryptographic concept and applies it to personal data: all data is stored on the user’s
device, and only parts that are necessary will be shared with the outside world. This means to attest if
the user is above 18 years of age, the birth date does not need to be shared; the requesting party
merely receives a yes/no answer.

Blockchain’s role

While the Personally Identifiable Information is not shared on the ledger, the coordination between the
different parties needs orchestration, and that’s where blockchain comes in. In the previous example, an
entity needs to verify a user’s age. For this reason, they turn to validators or attestators. These entities
have been in contact with the individual and issued proofs, such as a driver's license, a university
degree, or a birth certificate. When users present their proofs, the validators are queried and asked to
validate these claims and offer the yes/no answer mentioned above.
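The validator's yes/no flow can be illustrated with a small sketch (the AgeValidator class and dates are hypothetical; a real SSI deployment would back the answer with a cryptographic proof rather than a trusted object):

```python
from datetime import date

class AgeValidator:
    """Hypothetical attestator: holds a birth date it has verified and
    answers only yes/no predicate queries about it."""

    def __init__(self, attested_birth_date: date):
        self._birth_date = attested_birth_date  # never leaves the validator

    def is_over(self, years: int, today: date) -> bool:
        """Answer 'is the holder at least `years` old?' without revealing
        the underlying birth date."""
        bd = self._birth_date
        age = today.year - bd.year - ((today.month, today.day) < (bd.month, bd.day))
        return age >= years

validator = AgeValidator(date(2000, 6, 15))
# A relying party such as a lender receives only the boolean answer:
print(validator.is_over(18, today=date(2020, 1, 1)))  # True
print(validator.is_over(21, today=date(2020, 1, 1)))  # False
```

The requesting party learns that the user is over 18 but never sees the birth date itself, which is the data minimization the passage describes.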

This format of sharing data is much more secure. “When releasing raw information to a lender or
financial service, you normally need to provide the full raw info (like SSN, full name, or address)”
according to Leimgruber. “With Bloom, you can share proof of verification without sharing raw info.”
The companies are receiving a minimum amount of data and even the storage is decentralized, which
lifts a heavy burden when it comes to GDPR compliance.

AT: Tech-giants Monopoly
Antitrust and Google

Robert H. Bork, former solicitor general of the United States, former judge of the U.S. Court of Appeals, adviser to Google, April 6,
2012 https://www.chicagotribune.com/opinion/ct-xpm-2012-04-06-ct-perspec-0405-bork-20120406-
story.html

Expansion in the field is easier still. Google's competitors, Microsoft and Yahoo, are or once were giants in other
realms and could take over the market instantly if their search engines were more effective. Consumers can
switch search engines without cost instantaneously. This is why an argument that a search engine will bias
results in favor of its own or sponsored sites makes no economic sense. A search engine that promotes its own
inferior products over products people prefer will immediately lose its consumer base.
There is no coherent case for monopolization because a search engine, like Google, is free to consumers and
they can switch to an alternative search engine with a click.

Too much privacy makes data impossible to use

Data Disclosure under Perfect Sample Privacy


Borzoo Rassouli, Fernando E. Rosas, and Deniz Gündüz,
University of Essex, Colchester, UK, and Imperial College London, London, UK
https://www.researchgate.net/publication/332186457_Data_Disclosure_under_Perfect_Sample_Privacy

The highest privacy standard that a data disclosure strategy can guarantee, called perfect privacy, corresponds
to when nothing can be learned about an individual that could not have been learned without the disclosed
data anyway. Although it has been studied, perfect privacy is often disregarded for being too restrictive, as it
corresponds to an extreme choice within the trade-off that exists between privacy and utility. The most popular
approach that takes advantage of this trade-off is differential privacy, which is equipped with free parameters
that can be flexibly tuned in order to adapt to the requirements of diverse scenarios. However, while these
degrees of freedom provide significant flexibility, determining the range of values that can guarantee that the
system is "secure enough" is usually not straightforward.
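The "free parameters" the authors mention can be made concrete with the Laplace mechanism, the canonical differential-privacy primitive: a single parameter epsilon tunes the privacy/utility trade-off. The dataset and epsilon value below are invented for illustration:

```python
import random

def laplace_noise(scale: float, rng: random.Random) -> float:
    # Laplace(0, b) equals the difference of two independent Exp(1) draws, scaled by b.
    return scale * (rng.expovariate(1.0) - rng.expovariate(1.0))

def dp_mean(values, lo, hi, epsilon, rng):
    """Release the mean of bounded values with epsilon-differential privacy.
    The sensitivity of the mean over n values in [lo, hi] is (hi - lo) / n."""
    clipped = [min(max(v, lo), hi) for v in values]
    true_mean = sum(clipped) / len(clipped)
    sensitivity = (hi - lo) / len(clipped)
    return true_mean + laplace_noise(sensitivity / epsilon, rng)

rng = random.Random(0)
ages = [23, 35, 41, 29, 52, 38, 47, 31, 26, 44]           # true mean: 36.6
noisy = dp_mean(ages, lo=18, hi=90, epsilon=1.0, rng=rng)
# Smaller epsilon -> more noise -> stronger privacy but lower utility.
```

Choosing an epsilon that makes the release "secure enough" for a given scenario is exactly the non-straightforward step the excerpt highlights.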

Microsoft’s commitment to GDPR, privacy and putting customers in control of
their own data

Julie Brill - Corporate Vice President for Global Privacy and Regulatory Affairs and Chief Privacy Officer of
Microsoft https://blogs.microsoft.com/on-the-issues/2018/05/21/microsofts-commitment-to-gdpr-privacy-
and-putting-customers-in-control-of-their-own-data/

“We are committed to making sure that our products and services comply with GDPR. That’s why we’ve had
more than 1,600 engineers across the company working on GDPR projects. Since its enactment in 2016, we’ve
made significant investments to redesign our tools, systems and processes to meet the requirements of GDPR.
Today, GDPR compliance is deeply ingrained in the culture at Microsoft and embedded in the processes and
practices that are at the heart of how we build and deliver products and services.”

Con

Data Brokers Bad

Popular dating and social media apps have been selling personal data to
hundreds of third party sites and advertisers behind the backs of the users

Hussain, Suhauna. 1-14-2020, "Grindr, Tinder and OkCupid apps share personal data, group finds,"
Los Angeles Times, https://www.latimes.com/world-nation/story/2020-01-14/dating-apps-leak-
personal-data-norwegian-group-says

Grindr is sharing detailed personal data with thousands of advertising partners, allowing them to receive
information about users’ location, age, gender and sexual orientation, a Norwegian consumer group
said.

Other apps, including popular dating apps Tinder and OkCupid, share similar user information, the group
said. Its findings show how data can spread among companies, and they raise questions about how
exactly the companies behind the apps are engaging with Europe’s data protections and tackling
California’s new privacy law, which went into effect Jan. 1.

Grindr — which describes itself as the world’s largest social networking app for gay, bi, trans and queer
people — gave user data to third parties involved in advertising and profiling, according to a report by
the Norwegian Consumer Council that was released Tuesday. Twitter Inc. ad subsidiary MoPub was used
as a mediator for the data sharing and passed personal data to third parties, the report said.

“Every time you open an app like Grindr, advertisement networks get your GPS location, device
identifiers and even the fact that you use a gay dating app,” Austrian privacy activist Max Schrems said.
“This is an insane violation of users’ [European Union] privacy rights.”

The consumer group and Schrems’ privacy organization have filed three complaints against Grindr and
five ad-tech companies to the Norwegian Data Protection Authority for breaching European data
protection regulations.

Match Group Inc.’s popular dating apps OkCupid and Tinder share data with each other and other
brands owned by the company, the research found. OkCupid gave information pertaining to customers’
sexuality, drug use and political views to the analytics company Braze Inc., the organization said.

A Match Group spokeswoman said that OkCupid uses Braze to manage communications to its users, but
that it only shared “the specific information deemed necessary” and “in line with the applicable laws,”
including the European privacy law known as GDPR as well as the new California Consumer Privacy Act,
or CCPA.

Data collection companies have huge power. For as little as US$60, you can buy
thousands of people's data. One company holds over 3,000 data points on
almost every American. This is a huge violation of privacy spanning multiple
websites and apps

Duxfield, Flint and Mitchell, Scott. 5-31-2019, "Personal data of thousands of Australians sold for just
$US60," ABC News, https://www.abc.net.au/news/2019-05-31/online-privacy-personal-data-purchased-
for-$us60-warning-experts/11157092

For just $US60, a company registered in New York state is selling the data of over 2,000 Australian
women who have signed up for online dating services.

For one woman, 'Rosie' — who wished to remain anonymous — her file included her age, contact
details, place of employment and photographs.

The file also noted that while she did not have children, she would like some in the future.

Rosie's mother told the ABC her daughter was "quite shocked" to learn how intimate details of her life
and her future hopes were being sold online for a profit.

"I feel like it's more than one website that this information has come from," she said.

The company that sold the information obtained it from dating apps and websites, but would not
respond to questions about exactly how it got the data.

Sarah, a 27-year-old woman whose data was also included with the purchase, said she was
concerned about safety after learning her data was available for sale.

"It would be pretty easy to track me down even from just my name and profession," she said.

Sarah had previously been doxed online, with contact details and photos of her posted maliciously to
the website 4chan.

"It's pretty gross to learn that your identity is getting treated like a commodity that's for sale," she said.

"It makes you feel a bit small and powerless."

Dating sites often include the right to share or on-sell client data as part of the terms and conditions of
starting an account.

Gathering data points

This case is a classic example of how our data is being sold around the world without our knowledge,
according to Katina Michael, a professor in computing and information technology at the University of
Wollongong.

"There are companies that are scraping people's data of all types — dating is quite obtrusive — and
consumers do not understand what is possible with sophisticated data-scraping algorithms," Professor
Michael said.

The companies that accumulate and combine this information are known as data brokers. The US
Federal Trade Commission found that one data broker alone had 3,000 pieces of data on nearly every
person in the United States.

It is difficult to know exactly how many companies are selling and trading data in this way, but credible
estimates put the number of data brokers in the United States alone at between 2,500 and 4,000
companies.


University of Maryland law professor Frank Pasquale said brokers would use data to classify people into
certain categories that could be discriminatory.

He gave the example of grouping consumers as "elderly and gullible" and then selling their information
to gambling marketers.

"People have no idea, nobody has any idea of the vulnerabilities it entails," Professor Pasquale said.

"There's all kinds of data in there that can be used against us — by insurers, by employers — and we just
sort of have to hope that the laws keep those bad uses at bay."

For Australians, relying on the law to protect our data is difficult as much of it is stored outside
Australia's jurisdiction.

"The mere fact that we're using international platforms to begin with means our data was already
residing in America," explained Professor Michael.

"For instance, you may have a Gmail account — it may look like you're in Australia but your information
is being stored in America."

Services like PayPal, which is now used by more than 7 million Australians, share user data with over
600 different third parties.

While data brokers are well-established in the US, they are becoming increasingly involved in new
international markets like Australia.

"There's global data brokers that are saying, 'We can use the same algorithms as in the United States
and we can apply them to other countries'," Professor Michael said.

Data gets linked to social media

Siva Vaidhyanathan, a professor in Media Studies at the University of Virginia, said when it comes to
building these multiple data points into a complete picture of us, no one does it better than tech giants
like Facebook and Google.

"Facebook for years purchased commercial databases and government databases, so they could cross-
list all that data with the data that they had gathered from you," he said.

For example, if you use one of Facebook's apps on your phone, such as Facebook Messenger, Whatsapp
or Instagram, then the tech giant can record your location.

"If you walk through a shopping centre, Facebook keeps track of the shops that you enter and cross-
references that with any commercial activity that it has followed," he said, adding that other tech giants
like Google collect similar information.

"You've told Facebook who your closest friends are; who your closest family are.

"You have also told Facebook what your political interests are, what music you like and what books you
read.

"I couldn't imagine a richer picture of each of us. Facebook essentially has a doppelganger of us in its
servers — our expressions and desires."

Professor Michael said her greatest concern was the judgements that would be made about consumers
by algorithms based on all of this data.

"It is basically creating classes of people and it's creating segregation," she said.

"When we leave things to algorithms we get things wrong and I'm worried that in the next 10 years we'll
see algorithms go out of control.

"Judgements are being made about me that I couldn't even conceive of."

Technology Monopolies Bad

Data collection for business use harms new startups and gives large
corporations a huge advantage

Radinsky, Kira. 3-2-2015, "Data Monopolists Like Google Are Threatening the Economy," Harvard
Business Review, https://hbr.org/2015/03/data-monopolists-like-google-are-threatening-the-economy

The White House recently released a report about the danger of big data in our lives. Its main focus was
the same old topic of how it can hurt customer privacy. The Federal Trade Commission and National
Telecommunications and Information Administration have also expressed concerns about consumer
privacy, as have PwC and the Wall Street Journal.

However, big data holds many other risks. Chief among these, in my mind, is the threat to free market
competition.

Today, we see companies building their IP not solely on technology, but rather on proprietary data and
its derivatives. As ever-increasing amounts of data are collected by businesses, new opportunities arise
to build new markets and products based on this data. This is all to the good. But what happens next?
Data becomes the barrier-to-entry to the market and thus prevents new competitors from entering. As a
result of the established player’s access to vast amounts of proprietary data, overall industry
competitiveness suffers. This hurts the economy.

Federal government regulators must ask themselves: Should data that only one company owns, to the
extent that it prevents others from entering the market, be considered a form of monopoly?

The search market is a perfect example of data as an unfair barrier-to-entry. Google revolutionized the
search market in 1996 when it introduced a search-engine algorithm based on the concept of website
importance — the famous PageRank algorithm. But search algorithms have significantly evolved since
then, and today, most of the modern search engines are based on machine learning algorithms
combining thousands of factors — only one of which is the PageRank of a website. Today, the most
prominent factors are historical search query logs and their corresponding search result clicks. Studies
show that this historical search data improves search results by up to 31%. In effect, today’s search engines
cannot reach high-quality results without this historical user behavior.
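The barrier Radinsky describes can be made concrete with a toy ranker. This is a hypothetical sketch, not any real engine's scoring function — the pages, query, weights, and click counts are all invented. It blends a static authority score (PageRank-like) with historical click-through rate, so an entrant running the identical algorithm with an empty click log ranks on static signals alone.

```python
# Toy illustration: why historical click logs matter more than the algorithm.
# A modern ranker scores each page by blending many signals; here just two:
# a static PageRank-style score and the historical click-through rate (CTR)
# observed for this page on this query.

def rank(pages, query, click_log, w_pagerank=0.3, w_ctr=0.7):
    """Order pages for a query by a weighted blend of signals."""
    def score(page):
        shown, clicked = click_log.get((query, page["url"]), (0, 0))
        ctr = clicked / shown if shown else 0.0  # no history -> signal is mute
        return w_pagerank * page["pagerank"] + w_ctr * ctr
    return sorted(pages, key=score, reverse=True)

pages = [
    {"url": "a.com", "pagerank": 0.9},   # high authority, but users rarely click it
    {"url": "b.com", "pagerank": 0.4},   # lower authority, but users prefer it
]
# Incumbent has years of logs: (query, url) -> (times shown, times clicked)
incumbent_log = {("jaguar", "a.com"): (1000, 50), ("jaguar", "b.com"): (1000, 700)}
new_entrant_log = {}  # a new search engine has no behavioral data at all

print([p["url"] for p in rank(pages, "jaguar", incumbent_log)])    # b.com first
print([p["url"] for p in rank(pages, "jaguar", new_entrant_log)])  # a.com first
```

With years of logs, the incumbent learns that users prefer b.com for this query; the entrant, lacking any behavioral data, falls back to raw PageRank and ranks a.com first, even though its algorithm is the same.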

This creates a reality in which new players, even those with better algorithms, cannot enter the market
and compete with the established players, with their deep records of previous user behavior. The new
entrants are almost certainly doomed to fail. This is the exact challenge Microsoft faced when it decided
to enter the search market years after Google – how could it build a search technology with no past user
behavior? (Disclosure: I previously worked as a researcher at Microsoft, but had nothing to do with
Bing.) The solution came one year later when they formed an alliance with Yahoo search, gaining access
to their years of user search behavior data. But Bing still lags far behind Google.

This dynamic isn’t limited to internet search. Given the importance of data to every industry, data-based barriers to entry can affect anything from agriculture, where equipment data is mined to help
farms improve yields, to academia, where school performance and census data is mined to improve
education. Even in medicine, hospitals specializing in certain diseases become the sole owners of the
medical data that could be mined for a potential cure.

While data monopolies hurt both small start-ups and large, established companies, it’s the biggest
corporate players who have the biggest data advantage. McKinsey calculates that in 15 out of 17 sectors
in the U.S. economy, companies with more than 1,000 employees store, on average, over 235 terabytes
of data—more data than is contained in the entire US Library of Congress.

Data is a strategy – and we need to start thinking about it as one. It should adhere to the same
competitive standards as other business strategies. Data monopolists’ ability to block competitors from
entering the market is not markedly different from that of the oil monopolist Standard Oil or the
railroad monopolist Northern Securities Company.

Perhaps the time has come for a Sherman Antitrust Act – but for data. Unsure where you come down on
this issue? Consider this: studies have shown that around 70% of organizations still aren’t doing much
with big data. If that’s your company, you’ve probably already lost to the data monopolists.

Data collection harms people in every country as companies monopolize the
world

Kharpal, Arjun. 4-16-2018, “Regulators should review ‘monopolization’ of data for A.I. by US tech firms, British
lawmakers say,” CNBC, https://www.cnbc.com/2018/04/16/uk-lawmakers-warn-of-data-
monopolization-for-ai-by-big-us-tech-firms.html

Regulators should review the “potential monopolization of data” by U.S. technology giants in the U.K.
that could hamper homegrown development of artificial intelligence (AI), an influential body has
recommended.

A committee made up of lawmakers from the House of Lords, the upper house of Britain’s parliament,
released a report Monday on the need for the ethical development of AI.

They took written evidence from 223 witnesses and interviewed 57 people during their investigation.

One witness, Professor Richard Susskind, spoke about the “unprecedented concentration of wealth and
power in a small number of corporations” such as Alibaba, Alphabet, Amazon, Apple, Facebook,
Microsoft and Tencent. The lawmakers said in their report that this was a “view widely held” among a
number of witnesses.

The House of Lords committee said that the dominance of large technology firms could hamper
development of AI in Britain.

“While we welcome the investments made by large overseas technology companies in the U.K.
economy, and the benefits they bring, the increasing consolidation of power and influence by a select
few risks damaging the continuation, and development, of the U.K.’s thriving home-grown AI start-up
sector,” the report said.

“The monopolization of data demonstrates the need for strong ethical, data protection and competition
frameworks in the U.K., and for continued vigilance from the regulators.”

The British lawmakers said that the government and the U.K.’s competition watchdog should “review
proactively the use and potential monopolization of data by the big technology companies.”

A.I. code

The 181-page report explored the development of AI, the impact on jobs and society and areas such as
healthcare and the military.

One recommendation was creating an “AI code” that can be adopted nationally and internationally. The
code urges AI to be developed for the common good and says the technology should not be used to
“diminish the data rights or privacy of individuals, families or communities.”

“The autonomous power to hurt, destroy or deceive human beings should never be vested in artificial
intelligence,” the AI code says.

Lawmakers also recognized that many jobs will disappear with new ones created and said “significant”
government investment in skills and training will be necessary to mitigate the negative effects of AI.

Data collection violates monopoly laws and harms the consumer by killing
competition internationally

Bajak, Frank. 11-8-2019, "Top antitrust enforcer warns Big Tech over data collection," AP NEWS,
https://apnews.com/a31ee585d23143769823791942e736ab

CAMBRIDGE, Massachusetts (AP) — The Justice Department’s top antitrust official warned Big Tech
companies Friday that the government could pursue them for anticompetitive behavior related to their
troves of user data, including for cutting off data access to competitors.

“Antitrust enforcers cannot turn a blind eye to the serious competition questions that digital markets
have raised,” Assistant Attorney General Makan Delrahim told an antitrust conference at Harvard Law
School.

Delrahim did not name any specific companies, but his office is investigating companies including
Google while the Federal Trade Commission probes Facebook. The House Judiciary Committee is also
conducting an inquiry that looks at those two companies plus Amazon and Apple.

All but Apple are members of the Computer and Communications Industry Association, a tech lobbying
group sponsoring Friday’s conference.

Delrahim said some of the most interesting and alarming legal issues raised by the rise of the digital
economy are in the “collection, aggregation and commercial use of consumer data,” which he called
“analogous to a new currency.”

He said his office is studying “the ways market power can manifest in industries where data plays a key
role,” particularly when large amounts of data are amassed that are “quite personal and unique in
nature” and offer insight into “the most intimate aspects of human choice and behavior, including personal health, emotional well-being, civic engagement and financial fitness.”

That, said Delrahim, can create “avenues for abuse.”

The acquisition of such data is especially valuable for companies in the business of selling predictions
about human behavior, he said. That’s how Google and Facebook — which dominate global search and
social media — attract targeted advertising.

He cited Harvard Business School professor emerita Shoshana Zuboff’s theory of “surveillance
capitalism,” which holds that the “behavioral data” those companies acquire through their nominally
free services is a wholly new kind of product. Zuboff considers it massively invasive and exploitative.

Delrahim said that “although privacy fits primarily within the realm of consumer protection law, it would
be a grave mistake to believe that privacy concerns can never play a role in antitrust analysis.”

He cited several studies indicating people’s willingness to “relinquish data for a fairly small incentive”
including a study in which 1,500 students at the Massachusetts Institute of Technology “were willing to
share the contact information of their closest friends in exchange for only a pizza.”

Robust competition can spur companies to offer more and better privacy protections, Delrahim said.

“Without competition, a dominant firm can more easily reduce quality - such as by decreasing privacy
protections - without losing a significant number of users,” he said.

That has been a major criticism of both Facebook and Google.

Delrahim also said his office is being “especially vigilant about the potential for anticompetitive effects
when a company cuts off a profitable relationship supplying business partners with key data, code, or
other technological inputs in ways that are contrary to the company’s economic interests.”

A lawsuit filed in California against Facebook by a small startup called Six4Three claims it forced
thousands of partners out of business by cutting off their access to valuable user data in 2015 while
continuing to provide it to preferred partners that generated big advertising revenues. Facebook says it
restricted access out of concern for user privacy.

One company not cut off until later, the political consultancy Cambridge Analytica, obtained the
personal data on 87 million people without their knowledge or consent. That revelation triggered
intense scrutiny of Facebook and other Big Tech giants, including investigations by most state attorneys
general of both Google and Facebook.

Moral reasons for protecting privacy
"Privacy and Information Technology," Stanford Encyclopedia of Philosophy

First published Thu Nov 20, 2014; substantive revision Wed Oct 30, 2019

The following types of moral reasons for the protection of personal data and for providing direct or
indirect control over access to those data by others can be distinguished (van den Hoven 2008):

1. Prevention of harm: Unrestricted access by others to one’s bank account, profile, social media account, cloud repositories, characteristics, and whereabouts can be used to harm the data subject in a variety of ways.
2. Informational inequality: Personal data have become commodities. Individuals are usually not in
a good position to negotiate contracts about the use of their data and do not have the means to
check whether partners live up to the terms of the contract. Data protection laws, regulation and
governance aim at establishing fair conditions for drafting contracts about personal data
transmission and exchange and providing data subjects with checks and balances, guarantees for
redress and means to monitor compliance with the terms of the contract. Flexible pricing, price
targeting, price gouging, and dynamic negotiations are typically undertaken on the basis of
asymmetrical information and great disparities in access to information. Also choice modelling in
marketing, micro-targeting in political campaigns, and nudging in policy implementation exploit a
basic informational inequality of principal and agent.
3. Informational injustice and discrimination: Personal information provided in one sphere or
context (for example, health care) may change its meaning when used in another sphere or
context (such as commercial transactions) and may lead to discrimination and disadvantages for
the individual. This is related to the discussion on contextual integrity by Nissenbaum (2004) and
Walzerian spheres of justice (Van den Hoven 2008).
4. Encroachment on moral autonomy and human dignity: Lack of privacy may expose individuals to
outside forces that influence their choices and bring them to make decisions they would not have
otherwise made. Mass surveillance leads to a situation where routinely, systematically, and
continuously individuals make choices and decisions because they know others are watching
them. This affects their status as autonomous beings and has what sometimes is described as a
“chilling effect” on them and on society. Closely related are considerations of violations of respect
for persons and human dignity. The massive accumulation of data relevant to a person’s identity (e.g. brain-computer interfaces, identity graphs, digital doubles or digital twins, analysis of the topology of one’s social networks) may give rise to the idea that we know a particular person
since there is so much information about her. It can be argued that being able to figure people
out on the basis of their big data constitutes an epistemic and moral immodesty (Bruynseels &
Van den Hoven 2015), which fails to respect the fact that human beings are subjects with private
mental states that have a certain quality that is inaccessible from an external perspective (third
or second person perspective) – however detailed and accurate that may be. Respecting privacy
would then imply a recognition of this moral phenomenology of human persons, i.e. recognizing
that a human being is always more than advanced digital technologies can deliver.
These considerations all provide good moral reasons for limiting and constraining access to personal
data and providing individuals with control over their data.

Violation of Privacy

Privacy laws do little to nothing to protect the consumer as data companies will
find a way around the laws

Hussain, Suhauna. 1-14-2020, "Grindr, Tinder and OkCupid apps share personal data, group finds,"
Los Angeles Times, https://www.latimes.com/world-nation/story/2020-01-14/dating-apps-leak-
personal-data-norwegian-group-says

Braze also said it didn’t sell personal data, nor share that data between customers. “We disclose how we
use data and provide our customers with tools native to our services that enable full compliance with
GDPR and CCPA rights of individuals,” a Braze spokesman said.

The California law requires companies that sell personal data to third parties to provide a prominent
opt-out button; Grindr does not seem to do this. In its privacy policy, Grindr says that its California users
are “directing” it to disclose their personal information, and that therefore it’s allowed to share data
with third-party advertising companies. “Grindr does not sell your personal data,” the policy says.

The law does not clearly lay out what counts as selling data, “and that has produced anarchy among
businesses in California, with each one possibly interpreting it differently,” said Eric Goldman, a Santa
Clara University School of Law professor who co-directs the school’s High Tech Law Institute.

How California’s attorney general interprets and enforces the new law will be crucial, experts say. State
Atty. Gen. Xavier Becerra’s office, which is tasked with interpreting and enforcing the law, published its
first round of draft regulations in October. A final set is still in the works, and the law won’t be enforced
until July.

But given the sensitivity of the information they have, dating apps in particular should take privacy and
security extremely seriously, Goldman said. Exposing a person’s sexual orientation, for example, could
change that person’s life.

Grindr has faced criticism in the past for sharing users’ HIV status with two mobile app service
companies. (In 2018 the company announced it would stop sharing this information.)

Representatives for Grindr didn’t immediately respond to requests for comment.

Twitter is investigating the issue to “understand the sufficiency of Grindr’s consent mechanism” and has
disabled the company’s MoPub account, a Twitter representative said.

European consumer group BEUC urged national regulators to “immediately” investigate online
advertising companies over possible violations of the bloc’s data protection rules, following the
Norwegian report. It also has written to Margrethe Vestager, the European Commission executive vice
president, urging her to take action.

“The report provides compelling evidence about how these so-called ad-tech companies collect vast
amounts of personal data from people using mobile devices, which advertising companies and
marketeers then use to target consumers,” the consumer group said in an emailed statement. This
happens “without a valid legal base and without consumers knowing it.”

The European Union’s data protection law, GDPR, came into force in 2018 setting rules for what
websites can do with user data. It mandates that companies must get unambiguous consent to collect
information from visitors. The most serious violations can lead to fines of as much as 4% of a company’s
global annual sales.

It’s part of a broader push across Europe to crack down on companies that fail to protect customer data.
In January last year, Alphabet Inc.’s Google was hit with a $56-million fine by France’s privacy regulator
after Austrian privacy activist Max Schrems made a complaint about Google’s privacy policies. Before the EU law took effect, the
French watchdog levied maximum fines of about $170,000.

The U.K. threatened Marriott International Inc. with a $128-million fine in July following a hack of its
reservation database, just days after the U.K.’s Information Commissioner’s Office proposed handing an
approximately $240-million penalty to British Airways in the wake of a data breach.

Schrems has for years taken on large tech companies’ use of personal information, including filing
lawsuits challenging the legal mechanisms Facebook Inc. and thousands of other companies use to move
that data across borders.

He’s become even more active since GDPR kicked in, filing privacy complaints against companies
including Amazon.com Inc. and Netflix Inc., accusing them of breaching the bloc’s strict data protection
rules. The complaints are also a test for national data protection authorities, who are obliged to examine
them.

In addition to the European complaints, a coalition of nine U.S. consumer groups urged the U.S. Federal
Trade Commission and the attorneys general of California, Texas and Oregon to open investigations.

“All of these apps are available to users in the U.S. and many of the companies involved are
headquartered in the U.S.,” groups including the Center for Digital Democracy and the Electronic Privacy
Information Center said in a letter to the FTC. They asked the agency to look into whether the apps have
upheld their privacy commitments.

Many advertisers use data collection tools without realizing the privacy issue at
the heart of collection

Eddy, Max. 10-10-2018, "How Companies Turn Your Data Into Money ," PCMAG,
https://www.pcmag.com/news/how-companies-turn-your-data-into-money

Budington said that in some cases, app developers may be including tracking SDKs without fully
understanding the privacy implications for users and perhaps without ever receiving the data
themselves. Developers sometimes get paid for including the SDKs and may include them as tools for
debugging or gathering analytics. The SDK operators, however, can then potentially receive information
about people's behaviors and app usage.

As for devices with built-in digital assistants, such as the Google Home and Amazon Echo, it is true that
these services send recordings of your queries back to the respective companies for processing. With
the Google Assistant and Alexa voice assistants, you can even listen to recordings of every question
you've ever asked. Budington said that while companies have been clear on what kind of data they're
gathering with these devices and services, what they're using the data for is much more opaque.

Budington doesn't expect this data economy to change, at least without external pressure. Most efforts
by companies to improve user privacy typically don't solve what he sees as the real problem.
"[Companies] are willing to set up privacy filters with regard to other users, because that doesn't affect
their bottom line; but they're still getting that data themselves."

Budington also doesn't see fixes coming from Congress. "I don't see much hope for that in the US," he
told me. "Often, I think, when regulation comes into play, it's ill-worded and misapplied. And because of
that, you don't have the necessary protection, and [it] can often do more damage than it does good."

The argument against Budington's position on privacy is that targeted advertising and the data
collection behind it are fair compensation for companies that provide free online services. Google,
Facebook, and Twitter would likely not exist if they couldn't turn user data into cash. Not everyone has
the money to pay for subscriptions or is willing to—but most people have value to advertisers as
potential consumers.

That argument rings hollow to Budington. "People don't have a lot of options if they're going to interact
with the world. Most people like to take pictures and upload them to Instagram," he said. The EFF
created Privacy Badger—a browser extension that blocks ads and trackers—to address this lack of
choice. It lets users toggle which trackers are allowed to interact with their web experience, and it
replaces social widgets and embedded YouTube videos with badger icons that viewers have to click in
order to activate (and then, in turn, information about the viewer is transmitted).

Google’s ventures into medical data collection raise big flags about the security
of our health care privacy and data

Barber, Gregory. 11.11.2019"Google Is Slurping Up Health Data—and It Looks Totally Legal," Wired,
https://www.wired.com/story/google-is-slurping-up-health-dataand-it-looks-totally-legal/

Last week, when Google gobbled up Fitbit in a $2.1 billion acquisition, the talk was mostly about what
the company would do with all that wrist-jingling and power-walking data. It’s no secret that Google’s
parent, Alphabet—along with fellow giants Apple and Facebook—is on an aggressive hunt for health
data. But it turns out there’s a cheaper way to get access to it: Teaming up with health care providers.

On Monday, The Wall Street Journal reported details on Project Nightingale, Google’s under-the-radar
partnership with Ascension, the nation’s second-largest health system. The project, which reportedly
began last year, includes sharing the personal health data of tens of millions of unsuspecting patients.
The bulk of the work is being done under Google’s Cloud division, which has been developing AI-based
services for medical providers.

Google says it is operating as a business associate of Ascension, an arrangement that can grant it
identifiable health information, but with legal limitations. Under the Health Insurance Portability and
Accountability Act, better known as HIPAA, patient records and other medical details can be used “only
to help the covered entity carry out its healthcare functions.” A major aspect of the work involves
designing a health platform for Ascension that can suggest individualized treatment plans, tests, and
procedures.

The Journal says Google is doing the work for free with the idea of testing a platform that can be sold to
other health care providers, and ostensibly trained on their respective datasets. In addition to the Cloud
team, Google employees with access include members of Google Brain, which focuses on AI applications.

Dianne Bourque, an attorney at the legal firm Mintz who specializes in health law, says HIPAA, while
generally strict, is also written to encourage improvements to health care quality. “If you're shocked
that your entire medical record just went to a giant company like Google, it doesn’t make you feel better
that it's reasonable under HIPAA,” she says. “But it is.”

The federal health care privacy law allows hospitals and other health care providers to share information
with its business associates without asking patients first. That’s why your clinic doesn’t get permission
from you to share your information with its cloud-based electronic medical record vendor.

HIPAA defines the functions of a business associate quite broadly, says Mark Rothstein, a bioethicist and
public health law scholar at the University of Louisville. That allows health care systems to divulge all
sorts of sensitive information to companies patients might not expect, without ever having to tell them.
In this case, Rothstein says, Google’s services could be seen as “quality improvement,” one of HIPAA’s
permitted uses for business associates. But he says it’s unclear why the company would need to know the names and birthdates of patients to pull that off. Each patient could instead have been assigned a
unique number by Ascension so that they remained anonymous to Google.

“The fact that this data is individually identifiable suggests there’s an ultimate use where a person’s
identity is going to be important,” says Rothstein. “If the goal was just to develop a model that would be
valuable for making better-informed decisions, then you can do that with deidentified data. This
suggests that’s not exactly what they’re after.”

In fact, according to Bourque, Google would have to anonymize the information before it could be used
to develop machine learning models it can sell in other contexts. Given the potential breadth of the
data, one of the biggest remaining questions is whether Ascension has given the tech giant permission
to do so.
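The alternative Rothstein describes — the provider assigning each patient an opaque number and keeping the lookup table in-house — is straightforward to sketch. The record fields and ID format below are invented for illustration, not Ascension's or Google's actual scheme:

```python
# Minimal pseudonymization sketch: identities are swapped for opaque numbers
# before sharing; the key table that maps numbers back to identities stays
# with the hospital and is never sent to the analytics partner.
import itertools

def pseudonymize(records, fields=("name", "birthdate")):
    counter = itertools.count(1)
    key_table = {}   # stays with the hospital, never shared
    shared = []
    for rec in records:
        pid = f"P{next(counter):06d}"
        key_table[pid] = {f: rec[f] for f in fields}
        shared.append({**{k: v for k, v in rec.items() if k not in fields},
                       "patient_id": pid})
    return shared, key_table

records = [{"name": "A. Smith", "birthdate": "1950-03-02", "diagnosis": "J45"}]
shared, key_table = pseudonymize(records)
print(shared)  # [{'diagnosis': 'J45', 'patient_id': 'P000001'}]
```

Only `shared` would leave the hospital; re-identifying anyone requires `key_table`, which does not.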

Loss of privacy when anonymization doesn’t work

Redden, Joanna and Brand, Jessica. "Data Harm Record," Data Justice Lab, https://datajusticelab.org/data-harm-record/

This can happen unintentionally when attempts to release data anonymously do not work. Big data
makes anonymity difficult because it is possible to re-identify data that has been anonymized by
combining multiple data points.

AOL Example

As detailed by Paul Ohm, in 2006 America Online (AOL) launched ‘AOL Research’ to ‘embrace the vision
of an open research community’. The initiative involved publicly releasing twenty million search queries
from 650,000 users of AOL’s search engine. The data, which represented three months of activity, was
posted to a public website. Although the data was anonymized, once it was posted some users demonstrated that it was possible to identify individuals using the data, which included name, age and address.

Two New York Times reporters, Michael Barbaro and Tom Zeller Jr., cross-linked data to identify Thelma Arnold, a sixty-two-year-old widow from Lilburn, Georgia. Her case demonstrates the problems with
‘anonymisation’ in an age of big data, but also the danger in reading too much into search queries. As
Barbaro and Zeller note, Ms Arnold’s search queries ‘hand tremors’, ‘nicotine effects on the body’, ‘dry
mouth’ and ‘bipolar’, could lead someone to think she suffered from a range of health issues. Such a
conclusion could have negative effects if the organization making that conclusion was her insurance
provider. In fact, when they interviewed Arnold, Barbaro and Zeller found that Arnold often does
searches for her friends because she wants to help them.

Netflix Example

In 2006 Netflix publicly released one hundred million records detailing the film ratings of 500,000 of its
users between Dec. 1999 and Dec. 2005. As Ohm reports, the objective was to launch a competition and
for those competing to use this data to improve Netflix’s recommendation algorithm. Netflix
anonymized the data by assigning users a unique identifier. Researchers from the University of Texas
demonstrated not long after this release how relatively easy it was for people to be re-identified with
the data. This led to a court case in which a woman identified as Jane Doe argued that the data could be used to out her sexuality: it revealed her interest in gay- and lesbian-themed films and so outed her, a lesbian mother, against her wishes, in a way that could damage her and her family. The court case was covered by Wired in 2009.

Kills Social Protests

Data companies can isolate GPS and location data tied to mobile devices of
protestors. This data, once in the hands of the opposition, kills the ability of
anonymous protest worldwide

Warzel, Charlie and Thompson, Stuart A. 12-21-2019, "Opinion," New York Times,
https://www.nytimes.com/interactive/2019/12/21/opinion/location-data-democracy-protests.html

IN FOOTAGE FROM DRONES hovering above, the nighttime streets of Hong Kong look almost
incandescent, a constellation of tens of thousands of cellphone flashlights, swaying in unison. Each
twinkle is a marker of attendance and a plea for freedom. The demonstrators, some clad in masks to
thwart the government’s network of facial recognition cameras, find safety in numbers.

But in addition to the bright lights, each phone is also emitting another beacon in the darkness — one
that’s invisible to the human eye. This signal is captured and collected, sometimes many times per
minute, not by a drone but by smartphone apps. The signal keeps broadcasting, long after the protesters
turn off their camera lights, head to their homes and take off their masks.

In the United States, and across the world, any protester who brings a phone to a public demonstration
is tracked and that person’s presence at the event is duly recorded in commercial datasets. At the same
time, political parties are beginning to collect and purchase phone location data for voter persuasion.

“Without question it’s sinister,” said Todd Gitlin, professor of journalism at Columbia University and
former president of Students for a Democratic Society, a prominent activist group in the 1960s. “It will
chill certain constitutionally permitted expressions. If people know they’ll be tracked, it will certainly
make them think twice before linking themselves to a movement.”

A trove of location data with more than 50 billion location pings from the phones of more than 12
million Americans obtained by Times Opinion helps to illustrate the risks that such comprehensive
monitoring poses to the right of Americans to assemble and participate in a healthy democracy.

Within minutes, with no special training and a little bit of Google searching, Times Opinion was able to
single out and identify individuals at public demonstrations large and small from coast to coast.

By tracking specific devices, we followed demonstrators from the 2017 Women’s March back to their
homes. We were able to identify individuals at the 2017 Inauguration Day Black Bloc protests. It was
easy to follow them to their workplaces. In some instances — for example, a February clash between
antifascists and far-right supporters of Milo Yiannopoulos in Berkeley, Calif. — it took little effort to
identify the homes of protesters and then their family members.

The anonymity of demonstrators has long been a contentious issue. Governments generally don’t like
the idea for fear that masked protesters might be more likely to incite riots. Several states, including
New York and Georgia, have laws that prohibit wearing masks at public demonstrations. Countries
including Canada and Spain have rules to limit or prohibit masks at riots or unlawful gatherings. But in
the smartphone era — masked or not — no one can get lost in a sea of faces anymore.

Imagine the following nightmare scenarios: Governments using location data to identify political
enemies at major protests. Prosecutors or the police using location information to intimidate criminal
defendants into taking plea deals. A rogue employee at an ad-tech location company sharing raw data
with a politically motivated group. A megadonor purchasing a location company to help bolster political
targeting abilities for his party and using the information to dox protesters. A white supremacist group
breaching the insecure servers of a small location startup and learning the home addresses of potential
targets.

Lokman Tsui, an activist, researcher and professor at the Chinese University of Hong Kong, told us that
third parties that sell this data are a problem because “the standards to buy this information aren’t that
rigorous — it’s not like the companies have ethical review boards. The university I’m at is able to buy
data, and it’s fairly easy to get it. And the kind of data they can buy makes me raise my eyebrows, ‘Oh,
wow, you can buy that?’ Creepy data.”

The data doesn’t even need to leak or transfer hands — its mere existence can have a chilling effect on
democratic participation. Word has already spread through the more professional protester circles to
leave cellphones at home, toggle them to airplane mode or simply power them off. Many antifascist
protesters show up to rallies covering their faces to protect their identities from hate groups, the police
and the press. “But that means you’re only getting the diehards to show.... We tell people don’t bring
your phone to protests or if you do, keep GPS off at the very least… The more secure you are the less
able you are to organize,” an antifascist researcher told us. He agreed to be quoted only if we did not
reveal his name.
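The re-identification technique Times Opinion describes needs no special tooling. A toy sketch (all device IDs and coordinates below are hypothetical) shows the core idea: given anonymous location pings, a device's likely home is simply its most frequent overnight location, and any device seen at a rally site can then be linked back to that home.

```python
from collections import Counter

# Hypothetical anonymized pings: (device_id, hour_of_day, lat, lon)
pings = [
    ("device_a", 2, 40.71, -74.00),   # overnight -> likely home
    ("device_a", 3, 40.71, -74.00),
    ("device_a", 14, 40.73, -73.99),  # daytime -> e.g. a rally site
    ("device_b", 1, 40.65, -73.95),
    ("device_b", 23, 40.65, -73.95),
    ("device_b", 12, 40.73, -73.99),
]

def likely_home(pings, device_id):
    """Most frequent location observed between 10pm and 6am."""
    night = [(lat, lon) for d, hour, lat, lon in pings
             if d == device_id and (hour >= 22 or hour < 6)]
    return Counter(night).most_common(1)[0][0] if night else None

def devices_at(pings, lat, lon):
    """All devices ever seen at a given spot, e.g. a demonstration."""
    return {d for d, _, la, lo in pings if (la, lo) == (lat, lon)}
```

For every device that `devices_at` places at the rally coordinates, `likely_home` yields a probable street address; that is the whole "single out and identify" pipeline the card describes, minus only a reverse-geocoding lookup.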

Political Manipulation

Political parties can use data to track voters for leverage and intimidation. This
erodes voter confidence and risks empowering authoritarian governments
obsessed with keeping power

Warzel, Charlie and Thompson, Stuart A. 12-21-2019, "Opinion," New York Times,
https://www.nytimes.com/interactive/2019/12/21/opinion/location-data-democracy-protests.html

Location data is already part of the 2020 race for the White House. Political action committees for
Republicans and Democrats have invested in location data to target voters based on their interest. For
example, companies are enlisting data brokers to help monitor the movements of churchgoers to find
conservative-leaning voters and sway their votes.

In company documents from 2017, Phunware, a Texas-based technology company, describes the race to
collect location data to target voters as a “gold rush,” suggesting that “as soon as the first few political
campaigns realize the value of mobile ad targeting for voter engagement, the floodgates will open.
Which campaigns will get there first and strike it rich?”

The company reportedly signed a deal with American Made Media Consultants, a company set up by the
Trump campaign manager, Brad Parscale, to offer location collection services. Phunware touts voters’
smartphones as “the ultimate voter file.” Its marketing claims that mobile data can tell campaigns
“everything from the device operating system (iOS or Android) to what other apps are on the device,
what Wi-Fi networks the device joins and much more. And that doesn’t even cover the information it’s
possible to infer, such as gender, age, lifestyle preferences and so on.”

These are, of course, just the early days. Much of the political manipulation happening now looks no
different from serving up a standard political ad at the right moment. The future, however, could get
dark quickly. Political candidates rich in location data could combine it with financial information and
other personally identifiable details to build deep psychographic profiles designed to manipulate and
push voters in unseen directions. Would-be autocrats or despots could leverage this information to
misinform or divide voters and keep political enemies from showing up to the polls on election day.

Then, once in power, they could leverage their troves of data to intimidate activists and squash protests.
Those brave enough to rebel might be tracked and followed to their homes. At the very least, their
names could be put into registries.

Public dissent could quickly become too risky a proposition, given that the record of one’s attendance at
a rally could be held against them at a later date. Big Data, once the domain of marketers, could become
a means to elevate dictators to power and then frustrate attempts to remove them.

It is not difficult, in other words, to imagine a system of social control arising from infrastructure built for
advertising. That’s why regulation is critical. “It is very clear from the examples of the intersection of
authoritarianism and surveillance that we’ve seen around the world that a privacy bill of rights is
absolutely necessary,” said Edward Markey, the Massachusetts senator who wrote the Children’s Online
Privacy Protection Act of 1998. “Privacy needs to start being seen as a human right.”

Carlo Ratti, a professor at M.I.T. and director of its Senseable City Lab, echoed the senator’s concerns.
“The present path is untenable,” he told us. “If you have asymmetrical control of information, it is very
dangerous. Whether it’s companies or states, they can crush political opponents before they can band
together. If we go this route, it is very dangerous and very volatile.”

Data = Surveillance

Surveillance and data collection are one and the same

Lyon, David. 2014, “Surveillance, Snowden, and Big Data: Capacities, consequences, and critique”,
http://bds.sagepub.com/content/spbds/1/2/2053951714541861.full.pdf

The Snowden revelations about National Security Agency surveillance, starting in 2013, along with the
ambiguous complicity of internet companies and the international controversies that followed provide a
perfect segue into contemporary conundrums of surveillance and Big Data. Attention has shifted from
late C20th information technologies and networks to a C21st focus on data, currently crystallized in ‘‘Big
Data.’’ Big Data intensifies certain surveillance trends associated with information technology and
networks, and is thus implicated in fresh but fluid configurations. This is considered in three main ways:
One, the capacities of Big Data (including metadata) intensify surveillance by expanding interconnected
datasets and analytical tools. Existing dynamics of influence, risk-management, and control increase
their speed and scope through new techniques, especially predictive analytics. Two, while Big Data
appears to be about size, qualitative change in surveillance practices is also perceptible, accenting
consequences. Important trends persist – the control motif, faith in technology, public-private synergies,
and user-involvement – but the future-orientation increasingly severs surveillance from history and
memory and the quest for pattern-discovery is used to justify unprecedented access to data. Three, the
ethical turn becomes more urgent as a mode of critique. Modernity’s predilection for certain definitions
of privacy betrays the subjects of surveillance who, so far from conforming to the abstract, disembodied
image of both computing and legal practices, are engaged and embodied users-in-relation whose
activities both fuel and foreclose surveillance.

Surveillance leads to the death of privacy and discrimination

Keen, Andrew. 2015, "Is the Internet Hurting More Than Helping?" WBUR - Boston's NPR Station,
http://www.wbur.org/hereandnow/2015/03/16/internet-economics-keen

Its cultural ramifications are equally chilling. Rather than creating transparency and openness, the
Internet is creating a panopticon of information-gathering and surveillance services in which we, the
users of big data networks like Facebook, have been packaged as their all-too-transparent product.
Rather than creating more democracy, it is empowering the rule of the mob. Rather than encouraging
tolerance, it has unleashed such a distasteful war on women that many no longer feel welcome on the
network. Rather than fostering a renaissance, it has created a selfie-centered culture of voyeurism and
narcissism. Rather than establishing more diversity, it is massively enriching a tiny group of young white
men in black limousines. Rather than making us happy, it’s compounding our rage.

Private sector data collection translates to government programs and
monitoring

Jesse, Jay. 2015, “Three Ways DOD Technology May Light the Way to Actionable Big Data in the Private
Sector” Jobber Tech Talk. http://www.jobbertechtalk.com/three-ways-dod-technology-may-light-the-
way-to-actionable-big-data-in-the-private-sector-jay-jesse/

Defense sector programs and research—from the Internet itself to applications like Apple’s Siri—often
manifest in paradigm-changing innovation. In the Big Data arena, military applications are high stakes:
for example, the urgency of harnessing massive amounts of data—in different formats and from wildly
different sources—to model and pinpoint terrorist and criminal activity across the globe. From this
arena, new applications and best practices are emerging that will result in gains far beyond their military
and intelligence community origins. Here are three ways that military initiatives will show the private
sector how to get more out of Big Data programs.

Threat Analysis Becomes Opportunity Analysis

Public safety and antiterrorism agencies need clear and succinct pictures of the crime and security
environment: What is happening, where is it happening, and why? To gain this view, they leverage
massive amounts of high-quality, synthesized, actionable information for applications such as proactive
policing of urban trouble spots (civilian) or using collection and analysis to find and neutralize makers of
IEDs (military). Defense sector vendors have led the way in enabling analysts to rapidly perform complex
searches across any data source—structured and unstructured databases, spreadsheets, mobile data,
RSS feeds and documents—and quickly make visual sense of them in both space and time with
geospatial displays, hotspot maps and timelines, just to name a few. “Actionable intelligence” is a high-
stakes deliverable in the police and military arenas. But it is not that difficult to make the leap from
“suspect” to “customer” to see how understanding future behavior in multiple dimensions will help
product makers and marketers see and spot opportunities, rather than threats. Being able to spot
trends, draw links and make connections between demographic groups, behavior patterns and actual
geographic markets from what was previously a pile of disconnected and disorganized data sources has
huge potential. This potential is already being leveraged in consumer contexts. Especially when we
consider the importance of visualization and location in determining how, where and why consumer
enterprises must marshal their production, distribution, marketing and sales resources against
sophisticated competitors. While the stakes aren’t as high as they are in comparison to the global
counterterrorism theater, they’re high enough to justify pinpointing where resources are most needed,
enabling decision makers to deliver the greatest operational impact, reducing inefficiency and waste and
optimizing limited resources. This is where DoD leads the way.

Mobile Command Makes the 21st Century’s Ultimate “Street Team”

An incident such as a bombing throws an urban area into
pandemonium as public safety commanders, analysts and field operators scramble to assess damage,
help survivors and search for clues about the perpetrators. Today’s field command technology must
provide these vital personnel with relevant data while en route and at the scene, viewing the
information they need on the move and uploading scene information back to command control—all
securely shared via Wi-Fi, cellular data or satellite—using a wide variety of devices and media. The
expanding real-time operational picture they create together drives faster, better decision making,
speeding the time from a state of chaos to a state of control and setting the stage for investigations that
lead to justice as pictures, videos and other evidence from the scene flood into the hands of analysts.
Critical incident response management systems developed for the DoD will set the global baseline for
private sector applications where anybody from large-scale event producers to experiential marketers
find they can gain a competitive edge from the ability to seamlessly and securely report, collect, store
and retrieve operational intel. Team members can capture, relay and organize event data with
sophistication never before seen, quickly associating all incoming media in a master analysis engine. The
public safety crisis solution of today sets the stage for the sophisticated, real-time event logistics and
marketing mobile apps of tomorrow.

Enterprise Search Finds Better Needles in Bigger Haystacks

From finding opportunities in sales data that helps craft better strategy to loss prevention initiatives, Big Data
is undergoing rapid evolution and delivering more exciting results in both the private and defense
sectors. The defense sector can speed gains in the area of data acquisition and enterprise search—the
gateway enablers to the fruits of big data. By accounting for volume, variety and velocity, we equip
business analysts and leaders to “shrink the haystack,” establishing a data processing ecosystem that
can process, enable search and allow users to interact with the data in fruitful ways, rather than being
overwhelmed and in the dark. The end result is better and precise decision-making through superior
insight by revealing threats and opportunities that had previously been invisible in a mass of data. The
first stage of enabling these gains is to pull all information into a common environment so that it can be
pushed through an analysis pipeline. DoD vendors contending with massive amounts of data have led
the way in fashioning connector architecture, normalizing and staging data, and compartmentalizing it
into usable subsets. Defense sector solution providers then developed customized search and discovery
systems that empowered analysts to thin the haystack in search of the valuable data needles they
sought. NLP (natural language processing)-driven semantic enrichment represents a further refining and
enhancement of the search experience, setting the stage for deeper analytics. Search and NLP are the
one-two punch that fuses what the analyst knows with what he or she doesn’t know, allowing users to
constantly tune and refine smaller subsets of data for key factors. The system “learns” as the user
refines their searches to better target their data domain, constantly improving search effectiveness. It
began with counterterrorism experts looking for a particular piece of equipment involved in bomb-
making, but has equal power for financial analysts trying to isolate a particular kind of transaction and
yield profitable insight for their companies. The data integration and enterprise search achievements of
defense sector vendors are paving the way for more fruitful Big Data results across the world. These are
just three areas where defense sector technology gains translated into benefits for the private sector. I’ll
explore more of this landscape in the future.
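The "shrink the haystack" workflow the author describes is, at its core, iterative filtering: each refinement pass narrows a large mixed corpus to a smaller, more relevant subset. A minimal sketch (corpus contents and function names are hypothetical):

```python
# Toy mixed corpus: the "haystack" of disconnected text records.
corpus = [
    "wire transfer flagged in quarterly audit",
    "wire routing diagram for detonator assembly",
    "customer survey results for spring campaign",
    "unusual wire transfer to offshore account",
]

def refine(docs, term):
    """Keep only documents mentioning the term (case-insensitive)."""
    return [d for d in docs if term.lower() in d.lower()]

# First pass shrinks the haystack; the second isolates the needles.
subset = refine(corpus, "wire")        # 3 of 4 records survive
needles = refine(subset, "transfer")   # 2 records remain
```

Real enterprise search layers NLP-driven semantic enrichment on top of this loop, but the analyst's experience is the same: each refined query operates on a smaller, better-targeted data domain than the last.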

Targeted Ads Bad

Data collection allows for targeted ads. These can target the most vulnerable
individuals and communities with harmful products

Eddy, Max. 10-10-2018, "How Companies Turn Your Data Into Money ," PCMAG,
https://www.pcmag.com/news/how-companies-turn-your-data-into-money

Targeting and Retargeting

Bill Budington, a senior staff technologist with the Electronic Frontier Foundation, sees the avenues for
data gathering everywhere: advertising identifiers in the headers of mobile web traffic, fingerprinting
browsers, customer tracking in stores using Wi-Fi probe data, SDKs inside mobile apps, and ultrasonic
tones from TV that are outside the range of hearing but can be detected by apps on smart devices to
track viewing habits.

Some data isn't being used yet—he said, for example, that the genetic information gathered by
23andMe could one day be used for advertising or for discrimination. Genetics being used for
advertising is something from a hyper-capitalist cyberpunk fever dream; and yet, it's plausible.

"There is no legal regime for the protection of that data, so consumers need to be on watch for it in the
US and make those choices," said Budington. "The US is at the forefront of deploying those
technologies, and the companies that are starting are going to target US customers first. In a lot of ways,
the US serves as a playground for the big-data economy, which means that US citizens have to be more
aware of the dangers."

The collected data has value because of how it's used in online advertising, specifically targeted
advertising: when a company sends an ad your way based on information about you, such as your
location, age, and race. Targeted ads, the thinking goes, are not only more likely to result in a sale (or at
least a click), they're also supposed to be more relevant to consumers.

Budington pointed out that there's a dark side to this kind of advertising. "I have targeted ads that are
more attuned to my desires and my wants... But if you have someone who has an alcohol abuse
problem getting a liquor store ad…" He trailed off, letting the implication hang.

Your local liquor store probably isn't advertising in this way, but vulnerable communities are being
targeted for specific ads. For-profit universities, for example, target low-income people, Budington said.
"You pay thousands and thousands of dollars, and they give you a diploma that isn't worth the paper it's
printed on. Targeted advertising has a really pernicious side."

A subset of targeted ads is ad retargeting. Retargeted ads take into account your previous online activity
in order to push an ad your way. For example, tracking pixels can be added to a webpage. When the site
loads, the owner of a tracking pixel will see that a computer requested said pixel and that it loaded at a
particular time. It can even capture identifying information about the computer that visited the site.

This is what creates the unnerving experience of seeing an ad on one website, and then seeing it again
on another site. The ad "follows" you across the web, hoping for a click.

This has given rise to a popular conspiracy theory: that phones and smart devices are listening in and
then targeting ads based on what you're saying. One study debunked this claim, demonstrating that
mobile phones didn't seem to be sending audio data—but some apps were found to be transmitting
screenshots of device activity. Apps using the Silverpush software development kit (SDK) were listening
for ultrasonic beacons (as mentioned above), but Google has worked to suppress the use of this
technology on its Android platform.
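The tracking-pixel mechanism described above can be sketched in a few lines (all endpoint names and addresses below are hypothetical): the request for the invisible image is itself the data delivery, because it carries the visitor's IP, browser fingerprint inputs, and the page being read.

```python
import datetime

# What a hypothetical tracking-pixel endpoint logs each time a page
# embedding <img src="https://tracker.example/pixel.gif"> is loaded.
pixel_log = []

def serve_pixel(client_ip, user_agent, referer):
    """Record the visit, then return the tiny image's bytes."""
    pixel_log.append({
        "time": datetime.datetime.now(datetime.timezone.utc).isoformat(),
        "ip": client_ip,            # approximate location
        "user_agent": user_agent,   # one input to a browser fingerprint
        "page": referer,            # which page was being read
    })
    # Header of a minimal GIF; a real server streams a full 1x1 image.
    return b"GIF89a"

# The same visitor loading two different sites that embed the pixel:
serve_pixel("203.0.113.7", "Mozilla/5.0", "https://news.example/article")
serve_pixel("203.0.113.7", "Mozilla/5.0", "https://shop.example/cart")
```

The same IP and user agent appearing across both logged pages is what lets a retargeted ad "follow" a reader from one site to the next.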

Microtargeting refers to the capacity of digital advertising to target individuals

Kim 2018 (Kim, M. K. Y. M. (2018, April 17). The Stealth Media? Groups and Targets behind Divisive
Issue Campaigns on Facebook. Political Communication. University of Wisconsin-Madison. Retrieved
from https://tandfonline.com/doi/abs/10.1080/10584609.2018.1476425)

Publicly inaccessible digital ads, namely dark posts, illuminate the way digital advertising operates in general: its microtargeting capacity. Microtargeting refers to a narrowly defined, individual-level audience targeting, media placement, and message customization strategy (Kim, 2016). Microtargeting can go as narrow as targeting each and every individual in the nation, but the term encompasses a general trend: the shift in targeting, placement, and customization from the aggregate (such as a media market) to the individual, as narrowly as possible.

By gathering a vast amount of data, including digital trace data, and by utilizing predictive modeling techniques, campaigns create enhanced profiles that identify and target specific types of individuals, and then customize their messages. Different individuals therefore are targeted with different messages. For instance, in the 2016 U.S. election campaign, the firm Cambridge Analytica created psychographic classifications of voters by harvesting Facebook users’ posts, likes, and social networks and matching them with their comprehensive voter profile data. Cambridge Analytica then customized ad messages in accordance with the audience’s psychographics, geographics, and demographics (Guardian, November 2015). For example, while issue campaigns concerning guns would be concentrated in rural areas in Wisconsin, campaigns promoting racial conflict would be concentrated in Milwaukee, Wisconsin. Among Wisconsin individuals interested in guns, those who have a high level of insecurity would be targeted with fear appeals (e.g., “Hillary will take away your guns”) while those who are family-oriented would receive messages like “guns protect your loved ones.” Data-driven, digitally enabled targeting strategies have been increasingly adopted by political campaigns (Hersh, 2015; Kreiss, 2016).
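The profile-based customization Kim describes amounts to branching on voter attributes: same issue, different appeal per psychographic type. A toy sketch (rules and profiles are hypothetical, paraphrasing the card's Wisconsin example):

```python
# Hypothetical targeting rules modeled on the card's example:
# gun-interested voters get different appeals by psychographic trait.
def pick_message(profile):
    if "guns" not in profile["interests"]:
        return None  # not in the target audience at all
    if profile["trait"] == "high_insecurity":
        return "fear_appeal"          # e.g. "they'll take your guns"
    if profile["trait"] == "family_oriented":
        return "protection_appeal"    # e.g. "guns protect loved ones"
    return "generic_ad"

voters = [
    {"id": 1, "interests": ["guns"], "trait": "high_insecurity"},
    {"id": 2, "interests": ["guns"], "trait": "family_oriented"},
    {"id": 3, "interests": ["gardening"], "trait": "family_oriented"},
]
served = {v["id"]: pick_message(v) for v in voters}
```

Real campaigns replace these hand-written rules with predictive models scored over enhanced voter profiles, but the output is the same: different individuals see different messages about the same issue.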

Facebook’s Social Networking made Cambridge Analytica Possible
Cadwalladr 2017 (Carole Cadwalladr, 5-7-2017, "The great British Brexit robbery: how our democracy
was hijacked," Guardian, https://www.theguardian.com/technology/2017/may/07/the-great-british-
brexit-robbery-hijacked-democracy)

And it was Facebook that made it possible. It was from Facebook that Cambridge Analytica obtained its vast dataset in the first place. Earlier, psychologists at Cambridge University harvested Facebook data (legally) for research purposes and published pioneering peer-reviewed work about determining personality traits, political partisanship, sexuality and much more from people’s Facebook “likes”. And SCL/Cambridge Analytica contracted a scientist at the university, Dr Aleksandr Kogan, to harvest new Facebook data. And he did so by paying people to take a personality quiz which also allowed not just their own Facebook profiles to be harvested, but also those of their friends – a process then allowed by the social network.

Facebook was the source of the psychological insights that enabled Cambridge Analytica to target individuals. It was also the mechanism that enabled them to be delivered on a large scale.

The company also (perfectly legally) bought consumer datasets – on everything from magazine subscriptions to airline travel – and uniquely it appended these with the psych data to voter files. It matched all this information to people’s addresses, their phone numbers and often their email addresses. “The goal is to capture every single aspect of every voter’s information environment,” said David. “And the personality data enabled Cambridge Analytica to craft individual messages.”

Cambridge Analytica harvested information from over 50 million users


Rosenberg 2018 (Matthew Rosenberg, Nicholas Confessore and Carole Cadwalladr, 3-17-2018, "How
Trump Consultants Exploited the Facebook Data of Millions," New York Times,
https://www.nytimes.com/2018/03/17/us/politics/cambridge-analytica-trump-campaign.html)

So the firm harvested private information from the Facebook profiles of more than 50 million users
without their permission, according to former Cambridge employees, associates and documents, making it one of the largest data leaks in the social
network’s history. The breach allowed the company to exploit the private social media activity of a huge swath of the American electorate, developing techniques that underpinned its work on President Trump’s
campaign in 2016.

Ad Targeting was ‘Highly Effective’ in the 2016 US Elections and Brexit


Warwick 2018 (University of Warwick, 2018, October 25, "Targeted Facebook ads shown to be highly
effective in the 2016 US Presidential election," ScienceDaily. Retrieved January 17, 2020 from
www.sciencedaily.com/releases/2018/10/181025103303.htm)
Dr Federica Liberini from ETH Zurich, said: "Our research allowed us to build a simple measure for tracking the intensity of political campaigns conducted on social media. In the context of the 2016 US Presidential elections, we find that political micro-targeting was particularly effective when based on ideology and gender or educational level, and much less so when based on race or age. Our results show that social media effectively empowered politicians to influence key groups of voters in electoral races, and it is further evidence that recent political outcomes, such as Brexit and the election of President Trump, might be largely due to the use of data analytics."

Dr Antonio Russo also from ETH Zurich added: "Our finding that Facebook had a strong effect on turnout suggests that social media has great potential for stimulating the political participation of people who would otherwise have lost interest in politics. In a world where confidence in democracy is dwindling, I believe this is good news. However, we still have much to learn about whether the information that voters are exposed to on social media really helps them make informed choices."

Russia used microtargeting during the 2016 US elections
Kim 2018 (Kim, M. K. Y. M. (2018, April 17). The Stealth Media? Groups and Targets behind Divisive
Issue Campaigns on Facebook. Political Communication. University of Wisconsin-Madison. Retrieved
from https://tandfonline.com/doi/abs/10.1080/10584609.2018.1476425)

After a long silence, Facebook finally admitted that 3,000 ads linked to 470 Facebook accounts or Pages were
purchased by groups linked to the Russian state during the 2016 U.S. Elections (Stamos, Facebook Newsroom,
September 6, 2017). Facebook also noted that the ads primarily focused on divisive social and political issues such as guns, LGBT rights, immigration, and race, and targeted specific categories of individuals. Along
with Facebook, Google and Twitter testified at public hearings conducted by the congressional
Intelligence Committee that their ads were also purchased by the same Kremlin-linked Russian
operations. Foreign interference with US elections, of course, raised public indignation and dismay. The Founding Fathers held a firm belief that American
democracy must be free from foreign interference: “The jealousy of a free people ought to be constantly awake, since history and experience prove that foreign
influence is one of the most baneful foes of republican government” (George Washington, September 17, 1796; from Whitney, the Republic, January 1852). When
digital media, where ordinary citizens routinely share information through social networks, were found to be used by foreign entities to spread false information
and sow discord in the nation, the public was deeply alarmed, and rightly so. The foreign digital operations present a profound challenge to those who believe in the
democratic potential of digital media, which includes the development of public passion on the issues of personal concern (e.g., Kim, 2009); the mobilization of
decentralized, alternative voices (e.g., Karpf, 2011); and the organization of collective action (e.g., Bennett & Sergerberg, 2013)

Trump’s Microtargeting produced more than 50,000 ads per day


Timberg 2019 (Craig Timberg, Washington Post, 12-9-2019, "Critics say Facebook’s powerful ad tools
may imperil democracy. But politicians love them.,"
https://www.washingtonpost.com/technology/2019/12/09/critics-say-facebooks-powerful-ad-tools-
may-imperil-democracy-politicians-love-them/)

The Russians at the Internet Research Agency used Custom Audiences in 2016 for some of its Facebook ad purchases, targeting people who had visited websites and Facebook pages the Russians had created on such hot-button issues as illegal immigration, African American political activism and the rising prominence of Muslims in the United States. But Custom Audiences and microtargeting in general go far beyond such nefarious actors. Their use is routine in political online advertising by campaigns on the right, left and middle.

Trump’s campaign in 2016, for example, produced more than 50,000 ads a day ― many with just slight variations in graphics, wording or colors ― according to Brad Parscale, who was the campaign’s digital adviser and now is the campaign manager of Trump’s reelection effort, while describing the operation in a “60 Minutes” interview in 2017.

“Facebook now lets you get to places and places possibly that you would never go with TV ads,” Parscale said in the interview. “Now, I can find, you know, 15 people in the Florida Panhandle that I would never buy a TV commercial for. And, we took opportunities that I think the other side didn’t.”

The result was that an Army veteran in Texas concerned about immigration probably saw ads different from those seen by a Pennsylvania nurse concerned about health care. And, more important, they each likely saw ads that were different from those shown to their neighbors, their friends, even their siblings or spouses, because Facebook has such enormous data stores it can distinguish among people with generally similar views.

10,000 different ads were used during the Trump Campaign


Lewis 2018 (Paul Lewis, 3-23-2018, "Leaked: Cambridge Analytica's blueprint for Trump victory,"
Guardian, https://www.theguardian.com/uk-news/2018/mar/23/leaked-cambridge-analyticas-
blueprint-for-trump-victory)

A former employee explained to the Guardian how it details the techniques used by the Trump campaign to micro-target US voters with carefully tailored messages about the Republican nominee across digital channels. Intensive survey research, data modelling and performance-optimising algorithms were used to target 10,000 different ads to different audiences in the months leading up to the election. The ads were viewed billions of times, according to the presentation.

The document was presented to Cambridge Analytica employees in London, New York and Washington DC weeks after Trump’s victory, providing an insight into how the controversial firm helped pull off one of the most dramatic political upsets in modern history.

“This is the debrief of the data-driven digital campaign that was employed for Mr Trump,” said Brittany Kaiser, 30, who was Cambridge Analytica’s business development director until two weeks ago, when she left over a contractual dispute.

She is the second former employee to come forward in less than a week, talking exclusively to the Guardian about the inner workings of the firm, including the work she said it conducted on the UK’s EU membership referendum.

She said she had access to a copy of the same document now obtained by the Guardian, and had used it to showcase the campaign’s secret methods to potential clients of Cambridge Analytica.

Trump Campaign ads had a feedback loop to check ad effectiveness


Lewis 2018 (Paul Lewis, 3-23-2018, "Leaked: Cambridge Analytica's blueprint for Trump victory,"
Guardian, https://www.theguardian.com/uk-news/2018/mar/23/leaked-cambridge-analyticas-
blueprint-for-trump-victory)

The document contains very little information about how the campaign used Facebook data. One page, however, suggests Cambridge
Analytica was
able to constantly monitor the effectiveness of its messaging on different types of voters, giving the
company and the campaign constant feedback about levels of engagement on platforms such as Twitter,
Facebook and Snapchat. The feedback loop meant the algorithms could be constantly updated and
improved to deliver thousands of different messages to voters depending on their profile.¶ The level of
information the company could glean about voters – and the apparent appetite among Silicon Valley companies to cash in on the advertising bonanza – is illustrated
on another page which shows how the Trump campaign used a prime piece of marketing real estate on election day: YouTube’s entire masthead.
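The feedback loop this card describes (serve an ad, measure engagement, re-weight, serve again) can be sketched in a few lines of Python. This is a hypothetical illustration only: the ad names, the engagement counts, and the epsilon-greedy selection rule are invented, not Cambridge Analytica's actual system.

```python
import random

class AdFeedbackLoop:
    """Pick ads to show, then update selection weights from observed engagement."""

    def __init__(self, ads):
        # Start every ad with zero impressions and zero engagements.
        self.stats = {ad: {"shown": 0, "engaged": 0} for ad in ads}

    def rate(self, ad):
        s = self.stats[ad]
        return s["engaged"] / s["shown"] if s["shown"] else 0.0

    def choose(self, explore=0.1):
        # Mostly exploit the best-performing ad; occasionally explore others.
        if random.random() < explore:
            return random.choice(list(self.stats))
        return max(self.stats, key=self.rate)

    def record(self, ad, engaged):
        # The "feedback loop": observed engagement changes future selection.
        self.stats[ad]["shown"] += 1
        self.stats[ad]["engaged"] += int(engaged)

loop = AdFeedbackLoop(["ad_a", "ad_b"])
# Simulated feedback: ad_b engages, ad_a does not, so ad_b comes to dominate.
loop.record("ad_a", False)
loop.record("ad_b", True)
print(loop.choose(explore=0.0))  # → ad_b
```

With thousands of ad variants instead of two, the same update rule is what lets a campaign "constantly update and improve" which message each profile sees.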

Card detailing the most effective ad strategies


Lewis 2018 (Paul Lewis, 3-23-2018, "Leaked: Cambridge Analytica's blueprint for Trump victory,"
Guardian, https://www.theguardian.com/uk-news/2018/mar/23/leaked-cambridge-analyticas-
blueprint-for-trump-victory)

One of the most effective ads, according to Kaiser, was a piece of native advertising on the political news website Politico, which was
also profiled in the presentation. The interactive graphic, which looked like a piece of journalism and purported to
list “10 inconvenient truths about the Clinton Foundation”, appeared for several weeks to people from a
list of key swing states when they visited the site. It was produced by the in-house Politico team that creates sponsored content.¶ The
Cambridge Analytica presentation dedicates an entire slide to the ad, which is described as having “[had] an average engagement
time of four minutes”. Kaiser described the ad as “the most successful thing we pushed out”. ¶ Politico said editorial journalists were not involved in the
campaign, and similar ads were purchased by the Bernie Sanders and Clinton campaigns. ¶ Advertisements on Facebook, Twitter, Google
and the music-sharing app Pandora were used to help convince 35,000 supporters to install an app used
by the most active supporters.¶ According to the presentation, Cambridge Analytica and the Trump campaign also
used a new advertising technique offered by Twitter, launched at the start of the election year, which
enabled clients to kickstart viral tweets.¶ The “conversational ads” feature was used to encourage Trump’s followers to tweet using a set of
pre-determined hashtags.¶ The campaign also took advantage of an ad opportunity provided by Snapchat,
enabling users to swipe up and immediately see a preloaded web page. While not useful for securing
donors, Cambridge Analytica deemed the tool useful for engaging potential voter “contacts”, according to the
presentation.¶ One of the final slides explains how the company used paid-for Google ads to implement “persuasion
search advertising”, to push pro-Trump and anti-Clinton search results through the company’s main
search facility.

Human Dignity

Data collection and sorting evaluates people as numbers, not on the human
value of individuals. This treats people unfairly and discriminates against
deserving individuals

Christl, Wolfie. October 2017, "How Companies Use Personal Data Against People," Cracked Labs,
https://crackedlabs.org/en/data-against-people

Today, companies aggregate, trade, and utilize personal information at unprecedented levels. Their
unilateral and extensive access to data about the characteristics, behaviors, and lives of billions allows
them to constantly monitor, follow, judge, sort, rate, and rank people as they see fit. Our previous
report documented the massive scale and scope of today’s networks of digital tracking and profiling. It
investigated relevant industries, business models, platforms, services, devices, technologies, and data
flows, focusing on their implications for people – whether as individuals, consumers, or citizens – and
society at large.

This working paper examines how the corporate use of personal information can affect individuals,
groups of people, and society at large, particularly in the context of automated decisions,
personalization and data-driven persuasion. After briefly reviewing our previous research’s findings and
key developments in recent years, this paper explores their potential to be used against people in detail.

Systems that make decisions about people based on their data produce substantial adverse effects that
can massively limit their choices, opportunities, and life-chances. These systems are largely opaque,
nontransparent, arbitrary, biased, unfair, and unaccountable – even in areas such as credit rating that
have long been regulated in some way. Through data-driven personalization, companies and other
institutions can easily utilize information asymmetries in order to exploit personal weaknesses with
calculated efficiency. Personalized persuasion strategies provide the means to effectively influence
behavior at scale. As companies increasingly and unilaterally shape the networked environments and
experiences that underlie and determine everyday life, manipulative, misleading, deceptive, or even
coercive strategies can be automated and customized down to the individual level.

Based on the examination of business practices and their implications we conclude that, in their current
state, today’s commercial networks of digital tracking and profiling show a massive potential to limit
personal agency, autonomy, and human dignity. This not only deeply affects individuals, but also society
at large. By improving the ability to exclude or precisely target already disadvantaged groups, current
corporate practices utilizing personal information tend toward disproportionally affecting these groups
and therefore increase social and economic inequality. Especially when combined with influencing
strategies derived from neuroeconomics and behavioral economics, data-driven persuasion undermines
the concept of rational choice and thus the basic foundation of market economy. When used in political
campaigns or in other efforts to shape public policy, it may undermine democracy at large.

While this working paper does not directly offer solutions, it examines, documents, structures, and
contextualizes today’s commercial personal data industries and their implications; further research will
build on this basis. Hopefully, it will also encourage and contribute to further work by others.

Data collection can create more social problems

Specht 2019 (Doug Specht, Senior Lecturer in Media and Communications, University of Westminster, 6-6-2019,
"Tech companies collect our data every day – but even the biggest datasets can't solve social issues," The Conversation,
https://theconversation.com/tech-companies-collect-our-data-every-day-but-even-the-biggest-datasets-cant-solve-social-issues-118133)

Despite huge databases of personal information, tech companies rarely have enough to make properly
informed decisions, and this leads to products and technologies that can enhance social biases and
inequality, rather than address them.

Microsoft apologised after its chatbot started spewing hate speech. “Racist” soap dispensers failed to
work for people of colour. Algorithm errors caused Flickr to mislabel concentration camps as “jungle
gyms”. CV sorting tools rejected applications from women and there are deep concerns over police use
of facial recognition tools.

These issues aren’t going unnoticed. A recent report found that 28% of UK tech workers were worried
that the tech they worked on had negative consequences for society. And UK independent research
organization NESTA has suggested that as the darker sides of digital technology become clearer, “public
demand for more accountable, democratic, more human alternatives is growing”.

The problem is that these are social, not digital, problems. Attempting to solve those problems through
more data and better algorithms only serves to hide the underlying causes of inequality. Collecting more
data doesn’t actually make people better represented, instead it serves to increase how much they are
being surveilled by poorly regulated tech companies. The companies become instruments of
classification, categorizing people into different groups by gender, ethnicity and economic class, until
their database looks balanced and complete.

These processes have a limiting effect on personal freedom by eroding privacy and forcing people to
self-censor – hiding details of their lives that, for example, potential employers may find and disapprove
of. Increasing data collection has disproportionately negative effects on the very groups that the process
is supposed to help. Additional data collection leads to the over-monitoring of poorer communities by
crime prediction software, or other issues such as minority neighborhoods paying more for car
insurance than white neighborhoods with the same risk levels.

Data collection is dangerous
Data collection is a dangerous tool that endangers the public. People don't
know the risks

EFF 2014 (“Big Data in Private Sector and Public Sector Surveillance,” Electronic Frontier Foundation,
https://www.eff.org/files/2014/04/08/eff-big-data-comments.pdf)

The collection and analysis of big data, which was a niche field within computer science just two decades ago, has exploded into a $100 billion industry.[4] Big data is now used in sectors as diverse as energy, medicine, advertising, and telecommunications. Because of the explosive growth of this field, companies ranging from startups in Silicon Valley to established multi-national corporations are adopting the mantra of “collect it all,” in the belief that running a variety of analytics on big data will increase the value of their products or the companies themselves. In many cases companies outsource the use of big data to intermediary entities known as data brokers, which collect, analyze, and sell consumer information that can include highly personal details like marital status, religion, political affiliation, tax status, and others. A website may have an agreement with a data broker to better identify who their customers are so they can place more effective ads — often in exchange for their customers’ browsing habits and demographic information. Data brokers receive and aggregate consumer data from a variety of sources: transactional data from retailers and stores, loyalty cards, direct responses and surveys, social media and website interactions, public records, and more.[5] They then aggregate this information across sources and use it to create highly detailed profiles about individuals — one particular data broker is said to have 1,500 data points on over 700 million individuals.[6] It’s been revealed that these highly detailed profiles include names like “Ethnic Second-City Strugglers,” “Rural and Barely Making It,” and “Credit Crunched: City Families,” as well as sensitive lists such as police officers and their home addresses; lists of rape victims; genetic disease sufferers; and Hispanic payday loan responders.[7] The vast majority of information data brokers use to create these lists is data which consumers unintentionally expose, in large part because they simply do not know how or when they are being tracked, or what information is being collected. As a result the information is almost perfectly asymmetric: brokers know a great deal about consumers, but most consumers have no idea these parties actually even exist.

This asymmetry is related to the first harm consumers are exposed to as a result of private-sector big data usage, namely the significant power imbalance between consumers and the companies wielding the data and analysis tools. For example, if a company uses big data analysis to inform its hiring decisions (say by analyzing a database on the web browsing habits of potential employees acquired from a data broker), would a rejected prospective employee learn why she was not offered a job, be able to see the data that led to the decision or the algorithm that processed the data, or dispute the correctness of either?[8] In general, the fact that people may be treated differently based on data and algorithms that they know little about and have no recourse for correcting creates elementary fairness and transparency problems.[9]

A related problem results from the fact that even if consumers are aware of what data they are providing about themselves and who they are providing it to, they frequently believe wrongly that the law or a company’s privacy policies block certain uses of that data or its dissemination. As explained by Chris Hoofnagle and Jennifer King in their study of Californians’ perceptions of online privacy: Californians who shop online believe that privacy policies prohibit third-party information sharing. A majority of Californians believes that privacy policies create the right to require a website to delete personal information upon request, a general right to sue for damages, a right to be informed of security breaches, a right to assistance if identity theft occurs, and a right to access and correct data.[10] Additionally, users may not know to what extent data is shared with unknown third parties: an online project called “theDataMap” reflects this data-sharing landscape.[11]

But even a good understanding of the legal and policy protections for data is insufficient to protect a consumer from harm, due in large part to the next danger: loss of privacy due to individualized analysis and tracking by private-sector use of big data. By “connecting the dots” between different, disparate datasets, or even by analyzing data from the same dataset that on its face does not seem to have any connection, companies can infer characteristics about people that they might not otherwise wish to be made public, or at least not wish to share with certain third parties (for example, the well-known Target pregnancy example). Very few consumers realize the power of statistical analysis and other big data algorithms. Even if consumers are aware of what specific data they are sharing, they may not understand what inferences could be made based on that data.

The risk of abuse of the underlying datasets remains. As the recent hack on Target’s credit card systems demonstrates, even large, well-financed companies can suffer from massive data breaches that put consumers’ data in the hands of a malicious third party.[12] This danger is especially grave when companies collect and save all data possible, regardless of its current value, with the idea that a profitable use might later emerge. Unfortunately, the collection of data into more concentrated repositories creates a tempting target for malicious agents. Additionally, EFF has long been concerned that private-sector mass data accumulation strongly facilitates government data accumulation, given the many ways that companies can be induced or compelled to provide data to the government. Finally, even if the above dangers are avoided, we emphasize that many “common sense” approaches to preserving privacy and anonymity in big data do not actually accomplish their goals. Malicious actors could use a variety of sophisticated statistical and information-theoretic algorithms to extract identifiable data from what appears to be an anonymized dataset.[13] This is especially true if the malicious agent has access to individual datasets that might not pose a privacy risk on their own, but when combined together can be used to infer private information.

Footnotes: [4] “Data, data everywhere.” The Economist, Feb. 25, 2010. https://web.archive.org/web/20131207192955/http://www.economist.com/node/15557443. Last accessed March 28, 2014. [5], [7] Dixon, Pam. “What Information Do Data Brokers Have on Consumers?” World Privacy Forum, December 18, 2013. Last accessed March 30, 2014. [6] Brill, Julie. “Demanding transparency from data brokers.” The Washington Post, August 15, 2013. http://www.washingtonpost.com/opinions/demanding-transparency-from-data-brokers/2013/08/15/00609680-0382-11e3-9259-e2aafe5a5f84_story.html. Last accessed March 30, 2014. [8] One could argue that it would be in a company’s best interests to use data that is as accurate as possible. However, a company’s ultimate goal is to be as profitable as possible, and big data analysis is only carried out to further that goal. No rational company would acquire better quality data when the cost of doing so would be greater than the estimated returns. This exposes the fundamental mismatch in incentives between companies (whose big data will only be as accurate as profitability dictates) and individuals (who primarily care about whether the data about they themselves is accurate). Even a competitive market might not be able to completely resolve this issue, since making sure all the data is accurate 100% of the time will likely require human-intensive, and therefore costly, dispute/redress processes. [9] Dwork and Mulligan, “It’s Not Privacy, and It’s Not Fair,” 66 STAN. L. REV. ONLINE 35 (2013). [10] Hoofnagle, Chris Jay and King, Jennifer, “What Californians Understand about Privacy Online.” (September 3, 2008). Available at SSRN: http://ssrn.com/abstract=1262130 or http://dx.doi.org/10.2139/ssrn.1262130. [11] See http://thedatamap.org/. [12] Elgin, Ben; Lawrence, Dune; Matlack, Carol; Riley, Michael. “Missed Alarms and 40 Million Stolen Credit Card Numbers: How Target Blew It.” Bloomberg BusinessWeek, March 13, 2014. https://web.archive.org/web/20140313132757/http://www.businessweek.com/articles/2014-03-13/target-missed-alarms-in-epic-hack-of-credit-card-data. Last accessed March 29, 2014.
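The aggregation step this card describes, merging records from many sources into one profile per person, can be sketched in a few lines of Python. Everything here is invented for illustration: the source names, the field names, and the choice of an email address as the linking key.

```python
from collections import defaultdict

def aggregate_profiles(sources):
    """Merge per-source records into one profile per individual, keyed on a shared identifier."""
    profiles = defaultdict(dict)
    for source_name, records in sources.items():
        for rec in records:
            key = rec["email"]  # a shared identifier links otherwise disparate datasets
            for field, value in rec.items():
                if field != "email":
                    # Tag each data point with the source it came from.
                    profiles[key][f"{source_name}:{field}"] = value
    return dict(profiles)

# Two unrelated datasets combine into one richer profile:
sources = {
    "retailer": [{"email": "a@example.com", "last_purchase": "baby clothes"}],
    "survey":   [{"email": "a@example.com", "marital_status": "married"}],
}
profile = aggregate_profiles(sources)["a@example.com"]
print(profile)
```

The point is how little machinery is needed: any stable identifier shared across datasets (email, phone number, device ID) is enough to fuse them, which is why the profiles brokers build can run to hundreds or thousands of data points per person.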

Harms of big data to society
Redden (Joanna Redden, "Six ways (and counting) that big data systems are harming society," The Conversation,
https://theconversation.com/six-ways-and-counting-that-big-data-systems-are-harming-society-88660)

1. Targeting based on vulnerability

With big data comes new ways to socially sort with increasing precision. By combining multiple forms of
data sets, a lot can be learned. This has been called “algorithmic profiling” and raises concerns about
how little people know about how their data is collected as they search, communicate, buy, visit sites,
travel, and so on.

Much of this sorting goes under the radar, although the practices of data brokers have been getting
attention. In her testimony to the US Congress, World Privacy Forum’s Pam Dixon reported finding data
brokers selling lists of rape victims, addresses of domestic violence shelters, sufferers of genetic
diseases, sufferers of addiction and more.

2. Misuse of personal information

Concerns have been raised about how credit card companies are using personal details like where
someone shops or whether or not they have paid for marriage counselling to set rates and limits. One
study details the case of a man who found his credit rating reduced because American Express
determined that others who shopped where he shopped had a poor repayment history.

This event, in 2008, was an early big data example of “creditworthiness by association” and is linked to
ongoing practices of determining value or trustworthiness by drawing on big data to make predictions
about people.
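The "creditworthiness by association" logic in this card can be illustrated with a toy scoring rule. The base score, store names, default rates, and penalty factor below are all invented; real credit models are far more complex and far more opaque, which is exactly the concern.

```python
def score_by_association(base_score, stores_visited, store_default_rates):
    """Lower a person's score when other shoppers at the same stores default more often."""
    rates = [store_default_rates[s] for s in stores_visited if s in store_default_rates]
    if not rates:
        return base_score
    avg_default = sum(rates) / len(rates)
    # The penalty depends on peers' repayment history, not the person's own record.
    return round(base_score - 200 * avg_default)

store_default_rates = {"store_x": 0.30, "store_y": 0.05}
# Shopping at a store whose customers often default costs 60 points:
print(score_by_association(700, ["store_x"], store_default_rates))  # → 640
```

Nothing about the individual changed between the two cases; only where they shopped did, which is why this kind of inference raises fairness concerns.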

3. Discrimination

As corporations, government bodies and others make use of big data, it is key to know that
discrimination can and is happening – both unintentionally and intentionally. This can happen as
algorithmically driven systems offer, deny or mediate access to services or opportunities to people
differently.

Some are raising concerns about how new uses of big data may negatively influence people's ability to
get housing or insurance – or to access education or get a job. A 2017 investigation by ProPublica and
Consumer Reports showed that minority neighborhoods pay more for car insurance than white
neighborhoods with the same risk levels. ProPublica also shows how new prediction tools used in
courtrooms for sentencing and bonds “are biased against blacks”. Others raise concerns about how big
data processes make it easier to target particular groups and discriminate against them.

And there are numerous reports of facial recognition systems that have problems identifying people
who are not white. As argued here, this issue becomes increasingly important as facial recognition tools
are adopted by government agencies, police and security systems.

This kind of discrimination is not limited to skin color. One study of Google ads found that men and
women are being shown different job adverts, with men receiving ads for higher paying jobs more often.
And data scientist Cathy O’Neil has raised concerns about how the personality tests and automated
systems used by companies to sort through job applications may be using health information to
disqualify certain applicants based on their history.

There are also concerns that the use of crime prediction software can lead to the over-monitoring of
poor communities, as O’Neil also found. The inclusion of nuisance crimes such as vagrancy in crime
prediction models distorts the analysis and “creates a pernicious feedback loop” by drawing more police
into the areas where there is likely to be vagrancy. This leads to more punishment and recorded crimes
in these areas.
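The "pernicious feedback loop" O'Neil describes can be shown with a toy simulation. The areas, patrol counts, and detection rate below are invented; the only point is the dynamic: patrols follow past records, and police presence generates new records in the same place.

```python
def simulate_feedback(recorded, patrols=20, detection=0.5, rounds=3):
    """Send patrols where recorded crime is highest; presence adds new recorded offenses there."""
    history = [dict(recorded)]
    for _ in range(rounds):
        hotspot = max(recorded, key=recorded.get)   # police follow past records
        recorded = dict(recorded)
        recorded[hotspot] += patrols * detection    # presence produces new records
        history.append(dict(recorded))
    return history

history = simulate_feedback({"area_a": 60, "area_b": 40})
print(history[-1])  # → {'area_a': 90.0, 'area_b': 40}
```

Area A starts with only somewhat more recorded crime, yet after a few rounds the gap widens purely because of where the patrols went. Nothing in the model says Area A actually has more crime.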

4. Data breaches

There are numerous examples of data breaches in recent years. These can lead to identity theft,
blackmail, reputation damage and distress. They can also create a lot of anxiety about future effects.
One study discusses these issues and points to several examples:

• The Office of Personnel Management breach in Washington in 2015 leaked people’s fingerprints,
background check information, and analysis of security risks.

• In 2015 Ashley Madison, a commercial website billed as enabling extramarital affairs, was
breached and more than 25 gigabytes of company data including user details were leaked.

• The 2013 Target breach in the US resulted in leaked credit card information, bank account
numbers and other financial data.

5. Political manipulation and social harm

Fake news, bots and filter bubbles have been in the news a lot lately. They can lead to social and
political harm as the information that informs citizens is manipulated, potentially leading to
misinformation and undermining democratic and political processes as well as social well-being.

One recent study by researchers at the Oxford Internet Institute details the diverse ways that people are
trying to use social media to manipulate public opinion across nine countries.

6. Data and system errors

Big data blacklisting and watch-lists in the US have wrongfully identified individuals. It has been found
that being wrongfully identified in this case can negatively affect employment, ability to travel – and in
some cases lead to wrongful detention and deportation.

In Australia, for example, there have been investigations into the government’s automated debt
recovery system after numerous complaints of errors and unfair targeting of vulnerable people. And
American academic Virginia Eubanks has detailed the system failures that devastated the lives of many
in Indiana, Florida and Texas at great cost to taxpayers. The automated system errors led to people
losing access to their Medicaid, food stamps and benefits.

Misuse of personal data

Communications Consumer Panel research report, “Online personal data: the consumer perspective,” May 2011

There are potential risks attached to the sharing and collecting of personal data. In the last few years,
the media have reported a number of cases of personal data being mislaid or collected inadvertently.
There is a risk that online companies may misuse or mishandle the personal data they hold and that
individual details, e.g. email addresses, may be disclosed inadvertently or that data is not stored
securely. The disclosure of location information is a particular concern in the case of children.

There is also a risk that consumers are not fully aware of the potential implications of their decisions
when they benefit from “free” services or applications online. For example, many consumers use
services that are free at the point of use, such as clip-art and screensaver sites, or reference sites such as
dictionaries, and generally do not realize that in the process they may be downloading tracking cookies
or software that enable companies to collect their personal data.

Even when consumers are aware that they are providing personal information to an online business,
they are often not fully aware of what happens to it afterwards. While people may choose to provide
data to a particular website they trust, they may not realize that their data may be passed on to third
parties.

Companies generally explain how they protect consumers’ privacy in their terms and conditions and
privacy statements. Consumers often sign up to these terms and conditions when they sign up for
services. However, not all consumers read, and some may not fully understand, these terms and
conditions or privacy statements because of the length of the documents and the technical language
they tend to use. The result is that while the companies are technically being transparent, in practice
their policies might not be understood by the people who are affected by them. Facebook has
responded to such criticism by developing a new policy that explains more simply how it handles
consumers’ data and how this is shared with third parties.

It is also important to examine how the gathering and processing of personal data can affect people’s
privacy. When people use online services they often consent, explicitly or implicitly, to allow the services
access to what they would normally consider private information. Things become more problematic if
people feel that they have no choice but to relinquish elements of their privacy in order to make full use
of the internet.

Personal data can be collected from nearly all online activity. That makes it very difficult for people to
opt out completely. The fact that small bits of data from different sources can be aggregated to form a
more complete picture of an individual makes reaching an informed decision even more difficult.

A potential risk is that if there were a greater level of public concern about how personal data are
collected online it could deter the development of innovative services and applications, and the benefits
for consumers that they bring.

Another privacy issue stems from the way people tend to use search engines. People rely on these
services to navigate the internet, so they occupy a central role online. In general, many items that
people search for will, in themselves, be relatively innocuous, for example the names of online clothes
retailers or concert listings. Others will be more sensitive, for example if they relate to a person’s
medical record. While there are intrinsic privacy issues with someone else knowing such sensitive
information, they are less severe if they cannot be directly linked back to the person concerned.
However, it is also common for people to search for terms like their name, or to include their home
postcode to identify local businesses. Where people do this, it becomes possible for sensitive but
apparently anonymous information to be linked back to an individual.
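A minimal sketch of how an "anonymous" search log can be linked back to a person, using only the self-referential queries this card mentions. The queries and the matching heuristics below are invented for illustration; real re-identification techniques are more sophisticated, which makes the risk larger, not smaller.

```python
def link_identity(search_log):
    """Guess who an 'anonymous' search log belongs to from self-referential queries."""
    name = postcode = None
    for query in search_log:
        tokens = query.split()
        # Heuristic: a two-word capitalised query looks like a personal name.
        if len(tokens) == 2 and all(t[0].isupper() for t in tokens):
            name = query
        # Heuristic: a token that starts with a digit and ends with a letter
        # resembles the tail of a UK postcode typed alongside a local search.
        if any(t[0].isdigit() and t[-1].isalpha() for t in tokens):
            postcode = next(t for t in tokens if t[0].isdigit() and t[-1].isalpha())
    return name, postcode

# A sensitive query sits in the same log as identifying ones:
log = ["symptoms of diabetes", "Jane Smith", "plumbers near 2AB high street"]
print(link_identity(log))  # → ('Jane Smith', '2AB')
```

Once the name and location queries are linked to the log, the sensitive medical query in the same log is no longer anonymous.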

Personal data can be sold to the government

Chatterjee 2013 (Pratap Chatterjee, 10-8-2013, AlterNet,
https://www.alternet.org/2013/10/how-private-tech-companies-are-collecting-data-you-and-selling-them-feds-huge/)

We willingly hand over all of this information to the big data companies and in return they facilitate our
communications and provide us with diversions. Take Google, which offers free email, data storage, and
phone calls to many of us, or Verizon, which charges for smartphones and home phones. We can
withdraw from them anytime, just as we believe that we can delete our day-to-day social activities from
Facebook or Twitter.

But there is a second kind of data company of which most people are unaware: high-tech outfits that
simply help themselves to our information in order to allow U.S. government agencies to dig into our
past and present. Some of this is legal, since most of us have signed away the rights to our own
information on digital forms that few ever bother to read, but much of it is, to put the matter politely,
questionable.

This second category is made up of professional surveillance companies. They generally work for or sell
their products to the government — in other words, they are paid with our tax dollars — but we have no
control over them. Harris Corporation provides technology to the FBI to track, via our mobile phones,
where we go; Glimmerglass builds tools that the U.S. intelligence community can use to intercept our
overseas calls; and companies like James Bimen Associates design software to hack into our computers.

Government use of personal data could go wrong (data error)
Redden and Brand (Joanna Redden and Jessica Brand, "Data Harm Record," Data Justice Lab, https://datajusticelab.org/data-harm-record/)

Big data blacklisting and watch-lists in the U.S. have wrongfully identified individuals. As detailed by
Margaret Hu, being wrongfully identified in this case can negatively affect employment, ability to travel,
and in some cases lead to wrongful detention and deportation.

Hu details the problems with the American E-Verify programme, which ‘attempts to “verify” the identity
or citizenship of a worker based upon complex statistical algorithms and multiple databases’. Employers
across states use the programme to determine if a person is legally able to work in the U.S. Hu writes
that it appears that employers have wrongfully denied employment for thousands. Hu argues that e-
verify is problematic due to the unreliability of the data that informs the database screening protocol.
The problems with the e-verify programme have also been detailed by Upturn. A study by the American
Civil Liberties Union demonstrates that errors are far more likely to affect foreign-born employees and
citizens with foreign names. People with multiple surnames and women who change their names after
marriage are also more likely to face errors. Harm is further exacerbated by the difficulty in challenging
or correcting e-verify errors. As discussed by Alex Rosenblat and others: ‘[L]ow-wage, hourly workers,
whether they are flagged for a spelling error or for other reasons, often lack the time, resources, or legal
literacy required to navigate complex bureaucracies to correct misinformation about them in a national
database’.
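The name-matching failures described above can be illustrated with a deliberately naive matcher. This is not E-Verify's actual screening logic, which is not public; the names and database records are invented. The sketch only shows why exact-string comparison flags exactly the workers the ACLU study identifies.

```python
def naive_match(submitted_name, database_names):
    """Exact string comparison: the kind of brittle check that flags valid workers."""
    return submitted_name in database_names

database = {"MARIA GARCIA LOPEZ", "JANE DOE"}

# Reordered multiple surnames fail, though it is the same person:
print(naive_match("MARIA LOPEZ GARCIA", database))  # → False
# A post-marriage name change not reflected in the database also fails:
print(naive_match("JANE SMITH", database))          # → False
# Only a character-for-character match passes:
print(naive_match("JANE DOE", database))            # → True
```

Any worker whose record differs from the submitted string by a reordering, a marriage, or a spelling variant is wrongly flagged, and the burden of correcting the database falls on them.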

Hu also raises concerns about The Prioritised Enforcement Programme (PEP), formerly the Secure
Communities Programme (S-COMM). This is a data-sharing programme between the Federal Bureau of
Investigation (FBI), DHS and local law enforcement agencies that requires local agencies to run
fingerprints taken from suspects against federal fingerprint databases (ibid: 1770). The programme has
made errors. For example, inaccurate database screening results wrongfully targeted 5,880 US citizens
for potential detention and deportation, leading critics to question the reliability of PEP/S-COMM’s
algorithms and data. Furthermore, by using the biometric data of arrestees contained in the S-COMM
databases the Immigration and Customs Enforcement (ICE) reportedly may have wrongly apprehended
approximately 3,600 US citizens, due to faulty information feeding database screening protocols. As Hu
points out, ‘error-prone’ databases and screening protocols ‘appear to facilitate the unlawful detention
and deportation of US citizens’.

Hu argues that the big data systems underlying both E-Verify and S-COMM/PEP are causing harm by
mistakenly targeting and assigning inferential guilt to individuals. Legally speaking, this kind of digitally
generated suspicion is at odds with constitutional rights and there is a growing consensus, at least in the
U.S, on the need for substantive and binding due process when it comes to big data governance.

Numerous accounts of errors were published in the press and calls for investigation were taken up by
opposition politicians. One case involved a man who was repeatedly sent letters saying he owed the
government repayment of $4,000. This turned out to be an error. The man, who suffers from depression
and became suicidal, said he successfully convinced the government this was an error only to receive a
similar letter a few months later. He again successfully proved this was an error. One of the
ombudsman’s conclusions was that better project planning and risk management should have been
done from the outset.

Other examples of failure include attempts to automate welfare services in the U.S. Virginia Eubanks
details the system failures that devastated the lives of many in Indiana, Florida and Texas at great cost to
taxpayers. The automated system errors led to people losing access to their Medicaid, food stamps and
benefits. The changes made to the system led to crisis, hospitalization and, as Eubanks reports, death.
These states cancelled their contracts and were then sued.

Data Errors – small data

Big data applications used by governments rely on combining multiple data sets. As noted by Logan and
Ferguson, ‘small data (i.e. individual level discrete data points) … provides the building blocks for all
data-driven systems’. The accuracy of big data applications will be affected by the accuracy of small
data. We already know there are issues with government data, just two examples: 1) in the United
States, in 2011 the Los Angeles Times reported that nearly 1,500 people were unlawfully arrested in the
previous five years due to invalid warrants and 2) in New York, a Legal Action Center study of rap sheet
records ‘found that sixty-two percent contained at least one significant error and that thirty-two percent
contained multiple errors’.

Harms due to algorithm / machine bias
Joanna Redden and Jessica Brand https://datajusticelab.org/data-harm-record/

Research into predictive policing and predictive sentencing shows the potential to over-monitor and
criminalize marginalized communities and the poor.1

Journalists working with ProPublica are investigating algorithmic injustice. Their article titled ‘Machine
Bias’ in particular, has received a great deal of attention. Julia Angwin, Jeff Larson, Surya Mattu and
Lauren Kirchner’s investigation was a response to concerns being raised by various communities about
judicial processes of risk assessment. These processes of risk assessment involved computer programs
that produce scores predicting the likelihood that people charged with crimes would commit future
crimes. These scores are being integrated throughout the US criminal justice system and influencing
decisions about bond amounts and sentencing. The ProPublica journalists looked at the risk scores
assigned to 7,000 people and checked to see how many were charged with new crimes. They found that
the scores were ‘remarkably unreliable in forecasting violent crime’. They found that only 61%, just over
half, of those predicted to commit future crimes did. But the big issue is bias. They found that the
system was much more likely to flag black defendants as future criminals, wrongly labelling them as
future criminals at twice the rate as white defendants. White people were also wrongly labelled as low
risk more often than black defendants. The challenge is that these risk scores and the algorithm that
determines them are produced by a for-profit company, so researchers were able to interrogate only the
outcomes, not the algorithm itself. ProPublica reports that the software is one of the most widely used tools
in the country.
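ProPublica’s findings reflect a general statistical tension: a risk score can be equally ‘accurate’ for two groups, in the sense that the same share of flagged people in each group go on to reoffend, and still wrongly flag one group far more often if the groups’ underlying rates differ. The toy calculation below (all numbers are hypothetical, not ProPublica’s data) makes this concrete:

```python
# Hypothetical aggregates for two groups of 1,000 defendants each.
# The tool is equally "accurate" for both (60% of flagged people in
# each group reoffend), yet the false positive rates diverge widely.
def rates(n, reoffend, flagged, flagged_correct):
    """Return (precision, false positive rate) from aggregate counts."""
    false_positives = flagged - flagged_correct
    precision = flagged_correct / flagged        # share of flags that were right
    fpr = false_positives / (n - reoffend)       # non-reoffenders wrongly flagged
    return precision, fpr

prec_x, fpr_x = rates(n=1000, reoffend=500, flagged=600, flagged_correct=360)
prec_y, fpr_y = rates(n=1000, reoffend=300, flagged=300, flagged_correct=180)

print(f"group X: precision {prec_x:.0%}, false positive rate {fpr_x:.0%}")
print(f"group Y: precision {prec_y:.0%}, false positive rate {fpr_y:.0%}")
```

Here both groups see 60% precision, but non-reoffenders in group X are wrongly flagged at 48% versus roughly 17% in group Y, simply because base rates and flagging volumes differ. This is the same shape of disparity ProPublica measured.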

Kristian Lum and William Isaac, of the Human Rights Data Analysis Group, recently published an article
detailing bias in predictive policing. They note that because predictive policing tools rely on historical
data, predictive policing should be understood as predicting where police are likely to make arrests and
not necessarily where crime is happening. As noted by Lum and Isaac, as well as by O’Neil, adding
nuisance crimes like vagrancy to these models further complicates matters: it produces over-policing of
poor communities and more arrests, creating a feedback loop of injustice. Lum and Isaac
produced an estimate of illicit drug use from non-criminal-justice, population-based data
sources, which they then compared to police records. They found that while drug
arrests tend to happen in non-white low income communities, drug crimes are more evenly distributed
across the community. Using one of the most popular predictive policing tools, they find that the tool
targets black people twice as much as whites even though their data on drug use shows that drug use is

1 See: Sullivan, E and Greene, R (2015) States predict inmates’ future crimes with secretive surveys. AP, Feb. 24, available at:
http://bigstory.ap.org/article/; Barocas, S and Selbst, A D (2016) Big data’s disparate impact. California Law Review 104: 671-
732; Starr, S (2016) The odds of justice: actuarial risk prediction and the criminal justice system. Chance 29(1): 49-51.

roughly equivalent across racial classifications. Similarly, they find that low-income households are
targeted by police at much higher rates than higher-income households.
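The mechanism Lum and Isaac describe can be sketched as a toy simulation (illustrative only; the neighborhoods, rates, and allocation rule below are hypothetical, not taken from their study). Two neighborhoods have identical true crime rates, but patrols follow past arrest records, so the neighborhood with more recorded arrests keeps generating more of them:

```python
import random

random.seed(0)

TRUE_CRIME_RATE = 0.05   # identical in both neighborhoods by construction
PATROLS_PER_DAY = 100

# Neighborhood B starts with more *recorded* arrests only because it was
# patrolled more heavily in the past, not because more crime occurs there.
arrests = {"A": 50, "B": 60}

for day in range(365):
    # "Hot spot" allocation: most patrols go wherever the arrest data
    # says crime is, i.e. wherever arrests were previously made.
    hot = max(arrests, key=arrests.get)
    patrols = {hood: 80 if hood == hot else 20 for hood in arrests}
    for hood in arrests:
        # Every patrol has the same chance of producing an arrest, so
        # more patrols mean more recorded arrests, not more crime.
        arrests[hood] += sum(random.random() < TRUE_CRIME_RATE
                             for _ in range(patrols[hood]))

print(arrests)
```

After a simulated year, the model’s “high crime” neighborhood has several times the recorded arrests of the other, even though the underlying crime rate was identical: the data confirms the allocation that produced it.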

O’Neil describes how crime prediction software, as used by police in Pennsylvania, leads to a biased
feedback loop. In this case the police include nuisance crimes, such as vagrancy, in their prediction
model. The inclusion of nuisance crimes, or so-called antisocial behavior, in a model that predicts where
future crimes will occur distorts the analysis and ‘creates a pernicious feedback loop’ by drawing more
police into the areas where there is likely to be vagrancy. This leads to more punishment and more
recorded crimes in these poor areas. O’Neil draws attention to specific examples: Pennsylvania police
use of PredPol, the NYPD use of CompStat and the Philadelphia police use of HunchLab.2

2 O’Neil, C (2016) Weapons of Math Destruction, London: Allen Lane, p. 84-87.

People who claim data collection is safe are only looking at old-school, meaningless
data, not the personal data of today
Phillips, John. 2014 “Why analyzing Big Data can be bad for business”, June 4, 2014,
http://www.cnbc.com/id/101644059

Big data – where every aspect of your life is being quantified in every way imaginable – may be a term
you are only just beginning to recognize. But get ready for another one: apophenia. In the movie "Silver
Linings Playbook," Robert DeNiro's character – a diehard Philadelphia Eagles fan – believes various
random and unrelated factors such as the position of the TV remote controls and whether or not his son
watches the game with him could factor into whether his team wins or loses. While most people would
refer to this as superstition, others might call it apophenia – the experience of seeing meaningful
patterns or connections in random or meaningless data. The phenomenon arises from a subconscious
tendency to seek out patterns – such as faces in clouds and hidden messages in music – because our
brains are wired this way. And, it can be bad for business, researchers say. "Big data tempts some
researchers to believe that they can see everything at a 30,000-foot view," Danah Boyd, principal
researcher at Microsoft Research, and Kate Crawford, associate professor at the University of New South
Wales, wrote in a paper. "It is the kind of data that encourages the practice of apophenia: seeing
patterns where none actually exist, simply because massive quantities of data can offer connections that
radiate in all directions," the paper noted. Drawing inaccurate conclusions from big data analysis could
prove costly for companies in how it influences decision making from advertising to management. One
example of big data analysis gone awry was Google, which developed Flu Trends in 2008 – a tool that
geographically tracks searches for flu-related words over time. The idea was that people showing flu
symptoms would search specific terms on Google to help self-diagnose and that these web searches
could be used to create a real-time map of flu outbreaks. While Google Flu Trends performed well for
some time there was an anomaly in December 2012. According to an article in Nature magazine,
Google's flu-case estimates were twice as high as those from the Centers for Disease Control and
Prevention. The cause? Researchers suggested that widespread media coverage of the U.S. flu season
may have boosted flu-related searches, inflating the number of cases that Google's algorithm identified.
A pharmacy using this data to better decide on the appropriate inventory level of flu-related drugs could
have easily overstocked on such drugs. "Brands are becoming increasingly dependent upon data to
manage their relationship with customers and to drive their businesses. Given this reliance, it's frankly
pretty scary how data-driven decisions often seem to be arrived at and acted upon in a relatively
unquestioning way," Colin Strong, managing director at GfK NOP business and technology told CNBC.
"There will be very real commercial implications for companies that don't stop and question how these
decisions are being arrived at," he added. "Working with big data is still subjective, and what it
quantifies does not necessarily have a closer claim on objective truth – particularly when considering
messages from social media sites," Boyd and Crawford added in their paper.
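Boyd and Crawford’s point about apophenia can be demonstrated directly (an illustrative sketch, not from the article): screen enough random “predictor” series against a target that is pure noise, and some will correlate strongly by chance alone.

```python
import random

random.seed(1)

N_SERIES = 2000   # candidate "predictors" mined from a big dataset
LENGTH = 12       # e.g. twelve monthly observations

def correlation(xs, ys):
    """Plain Pearson correlation coefficient."""
    n = len(xs)
    mean_x, mean_y = sum(xs) / n, sum(ys) / n
    cov = sum((x - mean_x) * (y - mean_y) for x, y in zip(xs, ys))
    var_x = sum((x - mean_x) ** 2 for x in xs)
    var_y = sum((y - mean_y) ** 2 for y in ys)
    return cov / (var_x * var_y) ** 0.5

target = [random.gauss(0, 1) for _ in range(LENGTH)]   # pure noise
best = max(
    correlation([random.gauss(0, 1) for _ in range(LENGTH)], target)
    for _ in range(N_SERIES)
)
print(f"best correlation among {N_SERIES} random series: {best:.2f}")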

142
Discrimination

Data collection will be used to eliminate people from consideration from some
products and price people out of the market as prices rise beyond what they can
pay

Gonzalez-Miranda, Maria. 5-29-2018, "How Big Data and online markets will lead to higher — not lower
— prices," MarketWatch, https://www.marketwatch.com/story/how-big-data-and-online-markets-can-
lead-to-higher-prices-2018-05-19

Part of the segmentation of online markets involves web companies testing price points to estimate
precisely the demand curve and its links to household characteristics. For example, a May 2017 article in
The Atlantic notes that, “As Christmas approached in 2015, the price of pumpkin-pie spice went wild. …
Amazon’s price for a one-ounce jar was either $4.49 or $8.99, depending on when you looked.”

This form of price discrimination is legal as long as it does not occur on the basis of race, ethnicity,
gender, or religion. Taken to the extreme, it means that data about our preferences, incomes, and
spending patterns could soon be used to determine an individually calibrated price for all transactions.
In that scenario, 100% of consumer surplus could potentially be extracted 100% of the time.

To be sure, price discrimination will not happen for every good and service, and the trend could be
tempered by competition from offline retailers or new entrants vying for market share by offering lower
prices to everyone.

Alternatively, the data collected in some industries could become so widely shared across competing
firms that they will all converge on a single price for each individual. In fact, companies today are
probably already facing this kind of price segmentation, especially those that have amassed a lot of
public data.

This suggests that markets could potentially become extremely fragmented, such that consumers’
choices will be strictly limited to the offerings that have been selected according to their data profiles.
As any student of economics understands, this kind of situation decreases overall welfare, because
every consumer will be forced to pay the maximum of what they are willing to spend for each good or
service they purchase, keeping nothing “extra” for themselves.

Making matters worse, rapidly rising capital and skill requirements for production, among other factors,
is sustaining a trend toward less competition among companies across a wide range of sectors in
advanced economies. This, together with the systematic “extraction” of consumer surplus, will have far-
reaching macroeconomic implications, particularly through changes in private consumption patterns.

143
For consumers, the slice of the economic pie made available by their disposable incomes will shrink in
real terms, leading to a fall in aggregate demand. Thus, at the end of the day, there will be less for
everyone.

Amid the ongoing debate about how the dominant tech firms should and should not be allowed to use
personal data collected from users online, many of these firms have continued to decide these
questions for themselves — and, by extension, for the rest of us, too.

For the sake of social welfare in the years and decades ahead, we must ensure that these decisions are
compatible with the creation and maintenance of healthy, competitive markets. After all, a system that
benefits consumers benefits everyone.

144
Ad Targeting Discriminates in the Coding
Biddle 2019 (Sam Biddle, 4-4-2019, "Facebook’s Ad Algorithm Is a Race and Gender Stereotyping
Machine, New Study Suggests," Intercept, https://theintercept.com/2019/04/03/facebook-ad-
algorithm-race-gender/)

researchers
The new research focuses on the second step of advertising on Facebook, the process of ad delivery, rather than on ad targeting. Essentially, the

created ads without any demographic target at all and watched where Facebook placed them. The
results, said the researchers, were disturbing:¶ Critically, we observe significant skew in delivery along gender and racial
lines for “real” ads for employment and housing opportunities despite neutral targeting parameters. Our
results demonstrate previously unknown mechanisms that can lead to potentially discriminatory ad delivery, even when advertisers set their targeting parameters
the researchers requested only that their ads reach Facebook
to be highly inclusive.¶ Rather than targeting a demographic niche,

users in the United States, leaving matters of ethnicity and gender entirely up to Facebook’s black box. As
Facebook itself tells potential advertisers, “We try to show people the ads that are most pertinent to them.” What exactly does the company’s ad-targeting black
box, left to its own devices, consider pertinent? Are Facebook’s ad-serving algorithms as prone to bias like so many others? The answer will not surprise you. ¶ For
one portion of the study, researchers ran ads for a wide variety of job listings in North Carolina, from janitors to nurses to lawyers, without any further demographic
targeting options. With all other things being equal, the
study found that “Facebook delivered our ads for jobs in the
lumber industry to an audience that was 72% white and 90% men, supermarket cashier positions to an
audience of 85% women, and jobs with taxi companies to a 75% black audience even though the target
audience we specified was identical for all ads.” Ad displays for “artificial intelligence developer” listings
also skewed white, while listings for secretarial work overwhelmingly found their way to female
Facebook users.

Biddle 2019 (Sam Biddle, 4-4-2019, "Facebook’s Ad Algorithm Is a Race and Gender Stereotyping
Machine, New Study Suggests," Intercept, https://theintercept.com/2019/04/03/facebook-ad-
algorithm-race-gender/)

In the case of housing ads — an area Facebook has already shown in the past has potential for discriminatory abuse — the results were
also heavily skewed along racial lines. “In our experiments,” the researchers wrote, “Facebook delivered our broadly
targeted ads for houses for sale to audiences of 75% white users, when ads for rentals were shown to a
more demographically balanced audience.” In other cases, the study found that “Facebook delivered
some of our housing ads to audiences of over 85% white users while they delivered other ads to over
65% Black users (depending on the content of the ad) even though the ads were targeted identically.Ӧ

145
Behavioral Price Discrimination

Catherine Tucker MIT Sloan School of Management Joint WPISP-WPIE Roundtable “The Economics of
Personal Data and Privacy: 30 Years after the OECD Privacy Guidelines” 1 December 2010 Background
Paper#1 “The Economics Value of Online Customer Data ”

There also may be costs to consumers in the form of behavioral price discrimination. Behavioral price
discrimination means that firms use past consumer actions to distinguish between customers who have
low and high willingness to pay for their product and offer them low and high prices as a consequence.
One example may be that firms may offer ads that offer discounted coupons to consumers whom they
observed browsing their products but not purchasing the product, in order to provide a final incentive
for that customer to buy the product. Therefore, consumers could be offered very different effective
prices based on their click-stream data without their knowledge. This may be harmful, especially if it
distorts consumer decisions - that is, consumers might strategically waste time exhibiting behavior (such
as browsing a website and not purchasing a product) in order to attract a discounted ad.

146
Influence on Behaviour

Companies use harvested data to induce customers into profitable behavior

Hildebrandt 2007 (Mireille Hildebrandt,Erasmus Universiteit Rotterdam, Workpackage Leader of


Profiling in FIDIS,”Profiling into the future: An assessment of profiling technologies in the context of
Ambient Intelligence” http://journal.fidis.net/fileadmin/journal/issues/1-
2007/Profiling_into_the_future.pdf)

To come to terms with potential threats we need to look deeper into the asymmetries between citizens on the one
hand and large organisations who have access to their profiles on the other hand. We are not referring to the
asymmetry of effective access to personal data but the asymmetry of access to knowledge. Especially insofar as this knowledge is protected
as part of a trade secret or intellectual property, the citizens to whom this knowledge may be applied have no access at all. Zarsky (2002-2003) has demonstrated –
by analysing a set of examples – how this lack of access can
lead to what he calls the 'autonomy trap'. Precisely because a person
is unaware of the profiles that are applied to her, she may be induced to act in ways she would not have
chosen otherwise. Imagine that my online behaviour is profiled and matched with a group profile that
predicts that the chance that I am a smoker on the verge of quitting is 67%. A second profile predicts
that if I am offered free cigarettes together with my online groceries and receive news items about the
reduction of dementia in the case of smoking I have an 80% chance of not quitting. This knowledge may
have been generated by tobacco companies, who may use it to influence my behaviour. In a way, this kind of
impact resembles Pavlov's stimulus-response training: it does not appeal to reason but aims to discipline or induce me into profitable
behaviour. My autonomy is circumvented as long as I am unaware of the knowledge that is used. Zarsky
(2002- 2003) also warns about unfair discrimination, based on refined profiling technologies that allow sophisticated market segmentation. Price discrimination may
be a good thing in a free market economy, but the fairness again depends on consumers’ awareness of the way they are categorised. In order to have a fair and free
marketeconomy some rules of the game must be established to prevent unequal bargaining positions, or else we have another market failure. In short the threats
can be summarised as follows: ƒ privacy (which must not be reduced to hiding one's personal data) ƒ security (which cannot be traded for privacy as a loss of the one
may cause the loss of the other) ƒ unfair discrimination (power relations must be balanced to provide equal bargaining positions) ƒ autonomy (our negative and
positive freedom to act must be established and maintained, manipulation on the basis of knowledge that one is not aware of violates one's autonomy)

147
Brexit
Post Brexit Uk’s economy will suffer, crime will rise and tensions will grow at
the Ireland border
Menon 2019 (Anand Menon, 9-3-2019, "Don’t buy the bluff. Here’s the truth about no-deal Brexit,"
Guardian, https://www.theguardian.com/commentisfree/2019/sep/03/no-deal-brexit-crashing-out-uk-
europe)

Sotrade with the EU will become more difficult and more costly, with those costs being potentially
catastrophic for smaller companies that do not have the margins to absorb them.¶ But beyond these direct impacts,
much is uncertain. How will households and businesses react? Will there be a broader collapse in business and consumer confidence, hitting demand and
investment, or will consumers, as they have in the past, shrug off short-term shocks? And more broadly, what will the political dynamics of no deal look like?¶ Many
of the worse possible consequences – such as severe disruption to road and air transport links – are not on the table in the short term because the EU has
unilaterally put into place temporary workarounds. Would these – some of which expire as soon as the end of December, just two months into no deal – survive a
political confrontation over the UK’s “divorce bill”? ¶ Similarly, while there is no prospect of EU citizens in the UK becoming irregular migrants overnight, the
government’s recent incoherence on what no deal means for freedom of movement has made many feel, understandably, insecure – and it is still unclear how
employers, landlords and public services will be expected to apply any new rules. The position for Britons in Europe is even more complex and uncertain.¶ One little
discussed consequence of no deal is that the
UK will immediately lose access to EU databases and other forms of
cooperation including the European arrest warrant, the Schengen information system and Europol. This
will hinder policing and security operations in a world where data is key to solving crime. Nor is it inconceivable,
say, that we will witness a rise in organised criminal activity, as gangs seek to profit from this disruption.¶ But

perhaps the biggest and most dangerous unknown is what happens on the island of Ireland. The UK government has
said it will not apply checks and tariffs at the Irish border – a stance which is at odds with its commitments under, inter alia, WTO rules. The EU, however, has made
it clear it intends to apply the rules, though whether all checks will be imposed from day one is less obvious. Both sides are likely to blame the other, with

unforeseeable political and economic consequences.¶ Over the longer term, the economy will adjust. But there will be a
significant cost. Our earlier research, which analysed the effects of trading with the EU on WTO terms, found that after 10 years this would reduce

the UK’s per-capita income by between 3.5% and 8.7%; other credible analyses come to much the same conclusion.

Hedge Funds used Gathered Information to Manipulate and Short the British
Exit Polls
Simpson 2018 (Cam Simpson,Gavin Finch,Kit Chellel, 6-25-2018, "The Brexit Short: How Hedge Funds
Used Private Polls to Make Millions," Bloomberg, https://www.bloomberg.com/news/features/2018-06-
25/brexit-big-short-how-pollsters-helped-hedge-funds-beat-the-crash)
One person with questions still to answer is Farage, a former commodities broker who also went to work for a London currency trading
company after he moved into politics. He twice told the world on election night that Leave had likely lost, when he had information suggesting
his side had actually won. He also has changed his story about who told him what regarding that very valuable piece of information.¶
Bloomberg’s account is based in part on interviews over seven months with more than 30 knowledgeable current and former polling-company
executives, consultants and traders, nearly all of whom spoke only on the condition they not be named because of confidentiality agreements.
Pollsters said they believed Brexit yielded one of the most profitable single days in the history of their
industry. Some hedge funds that hired them cleared in the hundreds of millions of dollars, while their industry
on the whole was battered by the chaos Brexit wrought in global financial markets. Although confidentiality agreements have made it difficult
to discover the identities of many of the hedge funds that bought exclusive or syndicated exit polls, at least a dozen were involved, and
potentially many more, Bloomberg found.¶ The
private exit poll that appears to have had the most clients was
conducted by Farage’s favorite pollster and friend, Damian Lyons-Lowe, whose company is called Survation. It was
sold to multiple clients and correctly predicted Leave, according to Farage and other sources familiar with the results. In
an interview with Bloomberg, Farage said he learned of Survation’s results before making at least one of
two public concessions that night, meaning there was a good chance he was feeding specious sentiment

148
into markets.¶ Survation wasn’t alone. As YouGov’s Twyman predicted a Remain victory on Sky, three of his
colleagues were watching from inside the London office of a hedge fund. In addition to the public exit
poll for Sky, YouGov earlier sold a private exit poll to this fund, which provided data to traders that
matched the results Twyman presented on television, effectively giving them an edge for betting on the
rise in the pound sparked by his comments, according to sources familiar with the events. YouGov staff code-named it
“Operation Pomegranate.” It charged the hedge fund roughly $1 million, according to knowledgeable sources. Separately, YouGov gave Sky its
poll for free. The hedge fund did extremely well, according to three sources familiar with the situation.

Elections are Gambling Grounds for the Rich, with Big Data as the ace up their
sleeve
Simpson 2018 (Cam Simpson,Gavin Finch,Kit Chellel, 6-25-2018, "The Brexit Short: How Hedge Funds
Used Private Polls to Make Millions," Bloomberg, https://www.bloomberg.com/news/features/2018-06-
25/brexit-big-short-how-pollsters-helped-hedge-funds-beat-the-crash)

how might private polls have helped traders? In at least two ways, according
If public polling up to and on the final day inflated a bubble,

to pollsters involved, hedge fund traders and consultants. First, commission a private poll that closely tracks what will be released

to the public, as in Operation Pomegranate, tipping traders in advance to how the market may move.
Second, get better data than the public has, allowing traders to see if the market’s faith in the pound is
misplaced, or the currency is overvalued. Both strategies come with some risk, but because the trader is betting against the prevailing market sentiment, the bet is cheap and the
potential payout is high—just the sort of situation hedge funds love. For traders, it doesn’t matter if the pollster’s ultimate exit-poll

prediction is wrong (as some were on Brexit night). Hedge funds’ internal models, some far more advanced than
anything in the polling industry, fed on raw data, such as turnout in specific regions, that allowed them
to make smarter bets. “They are looking for a slight edge—they don't expect you to be 100 percent accurate,” said one pollster.¶ Rokos, which had
worked with ICM and Curtice, ended up making more than $100 million, or 3 percent of its entire value,
in a single day, according to the results Bloomberg first reported in the wake of the vote. Brevan Howard, which at a minimum bought

exit-polling data from ComRes, made $160 million on June 24 alone. Brevan Howard declined to comment.¶ While the
identity of YouGov’s Operation Pomegranate hedge fund client remains unclear, knowledgeable sources
identified two clients for its pre-election polling. They are Capstone Investment Advisors and Odey Asset
Management. Capstone, then managing more than $5.2 billion, made about 1.7 percent of the value of its biggest fund off
its Brexit trades, Bloomberg reported after the vote, citing a knowledgeable source. Some of that was specifically attributed to bets placed on price swings leading up to the
referendum. Capstone declined to comment for this article. Odey’s eponymous founder is Crispin Odey, who was both a top fundraiser for Farage and a leading contributor of campaign

cash to the pro-Brexit side. His firm made about $300 million from Brexit. “There’s that Italian expression,” Odey boasted to the BBC of his Brexit bounty: “‘Al
mattino ha l'oro in bocca’—the morning has gold in its mouth.”¶ In an interview with Bloomberg, Odey said the private polling purchased from YouGov ahead of the vote was valuable, though
not definitive, because there was still a high level of uncertainty about the outcome. He said his firm didn't buy an exit poll on the day. “Everyone is going to try to improve the information

The idea of public


they have,” he said of hedge fund surveys. “That’s the arms race.” But, he said, it shouldn't be possible for some traders to pay more for better information. “

markets is that you have equality. If you don't, then one has to be worried about that.Ӧ At least six other hedge funds
were among those negotiating or shopping for polls, according to interviews with polling executives, including one who accessed his email archives for Bloomberg during an interview. These
included Arrowgrass Capital Partners, Element Capital, Maven, PointState and TSE Capital Management. The same polling executive said that at least three more—North Asset Management,
SPX Capital and Vigilant—were trying to obtain information regarding the timing of media-published polls. It's not clear which, if any, bought polling. All of these firms declined to comment or
did not respond to requests for comment. ¶ Dawn Hands, the managing director of pollster BMG, said her firm “does not comment on the detail of any research conducted privately, nor name
any of its private clients.” Gregor Jackson, research director at ICM, confirmed that the company had private clients in the Scottish and EU referendums but declined to comment further. A

Capitalizing on a wave of market-moving political volatility stemming from


ComRes spokesman also declined to comment. ¶

voter discontent across the world, some of the pollsters involved in Brexit have tried to replicate their
success beyond the U.K. Survation worked for financial services firms in the Italian election in March, when
two populist Euroskeptic parties won, according to a knowledgeable source. There could be more to come for the U.K., too, with George Soros, among others, pushing for a new EU

Prime Minister May’s government remains seized by internal divides over Brexit,
referendum.¶ Even if that doesn’t happen,

leading to predictions of a new snap election. A pollster who profited off the EU referendum said, “That would be something that would have the

149
potential to move the markets around” again, because a snap election would really be about implementing Brexit.¶ Asked for his prediction, the pollster demurred. He said he will keep his
opinions to himself until hedge funds come calling again.

150
Tax Havens

151
UK Link
UK could become a tax haven post-Brexit
Bergin 2016 (Tom Bergin, 7-3-2016, "Tax haven route won't work for post-Brexit UK, OECD says," DE,
https://de.reuters.com/article/us-britaineurope-tax/tax-haven-route-wont-work-for-post-brexit-uk-
oecd-says-idUKKCN0ZJ0MG)

The UK is already in the process of cutting its corporate tax rate to 17 percent, compared to an average
among other OECD members of around 25 percent.¶ As part of its stated aim to be the most competitive Group of 20 major
economies on tax, the UK has also introduced tax breaks that allow companies pay lower tax rates on some
income and no tax on earnings from tax haven subsidiaries.¶ To significantly improve its appeal to businesses, the UK
would need to significantly cut its tax rate or introduce a system of “generous” tax rulings, the OECD said.¶ Outside the EU, the UK
could selectively offer foreign investors one-off tax deals – something prohibited by EU law.

The Tory party is run by advocates for tax havens and ‘for the rich’ policymakers
Cato 2018 (Molly Scott Cato, 7-24-2018, "Will a no-deal Brexit make most of us poorer – and Jacob
Rees-Mogg richer?," Guardian, https://www.theguardian.com/commentisfree/2018/jul/24/no-deal-
brexit-poorer-jacob-rees-mogg-dividend)

Rees-Mogg’s hedge fund, Somerset Capital Management, is managed via subsidiaries in the tax havens
of the Cayman Islands and Singapore. He has defended the use of such tax havens, saying “I do not believe people have any obligation to pay
more tax than the law requires.” And the laws that look set to stand in the way are planned new EU regulations aimed

at governing the behaviour of companies such as his own. He has also enthused about the potential to
slash environmental and safety laws after Britain leaves the EU. Regulations that were “good enough for India” could be good
enough for Britain, he has argued.¶ To back up this worldview is a network of self-styled thinktanks, with opaque

funding sources and a passionate commitment to classical liberal economics. Leading the charge has
been the Institute of Economic Affairs (IEA), with which Rees-Mogg is associated. Its roots lie in attempts to oppose the
Attlee-Bevan move to the left after the second world war that brought us the NHS and the welfare state. The IEA’s mission is to promote small

government and freedom for so-called “wealth creators”. Or to put it more accurately, ultra-rich individuals and corporations. The
IEA saw Brexit as an enormous opportunity: a once-in-a-generation chance to create a more flexible, open and vibrant economy with less bureaucracy and
protectionism, for which read signing up for trade deals that lower environmental and consumer standards. ¶ And Brexit
seems to be offering
enormous opportunities for Rees-Mogg personally. His European Research Group (ERG) has been
accused by fellow Tory Anna Soubry of running the country after Theresa May caved in to Brexit hardliners over her
Chequers plan.¶ Jacob Rees-Mogg is often seen as the “respectable” face of the Conservative hardline right wing. His style recalls an era when the British empire
was flourishing and merchants traded grain, but children starved, and – most importantly – there were no pesky democratic constraints to prevent the wealthy
enjoying their freedom. Far
from offering us our liberty, the Brexit he offers threatens to turn us back into
corporate chattels, stripped of our hard-won civil and democratic rights.

USA Link
The US is becoming the world’s most secretive tax haven
Bloomberg 2017 (Editorial Board, 12-28-2017, "The U.S. Is Becoming the World's New Tax Haven,"
Bloomberg, https://www.bloomberg.com/opinion/articles/2017-12-28/the-u-s-is-becoming-the-world-
s-new-tax-haven)

Now, however, the U.S. is becoming one of the world’s best places to hide money from the tax collector. It’s a
distinction that the country would do well to shed. ¶ In 2009, amid growing budget deficits and a tax-fraud scandal at Swiss bank UBS AG, the

Group of 20 developed and developing nations came to an agreement: They would no longer tolerate
the network of havens, shell companies and secret accounts that had long abetted tax evasion. A year later,
the U.S. passed the Foreign Account Tax Compliance Act, which required foreign financial institutions to report the identities and assets of potential U.S. taxpayers
to the Internal Revenue Service.¶ Under
threat of losing access to the U.S. financial system, more than 100 countries
-- including such traditional havens as Bermuda and the Cayman Islands -- are complying or have agreed
to comply.¶ The U.S. was expected to reciprocate, by sharing data on the accounts of foreign taxpayers
with their respective governments. Yet Congress rejected the Obama administration’s repeated requests
to make the necessary changes to the tax code. As a result, the Treasury cannot compel U.S. banks to
reveal information such as account balances and names of beneficial owners. The U.S. has also failed to adopt the so-
called Common Reporting Standard, a global agreement under which more than 100 countries will automatically provide each other with even more data than
FATCA requires.¶ While the rest of the world provides the transparency that the U.S. demanded, the U.S. is
rapidly becoming the new Switzerland. Financial institutions catering to the global elite, such as
Rothschild & Co. and Trident Trust Co., have moved accounts from offshore havens to Nevada, Wyoming
and South Dakota. New York lawyers are actively marketing the country as a place to park assets. A Russian billionaire, for example, can
put real-estate assets in a U.S. trust and rest assured that neither the U.S. tax authorities nor his home-
country government will know anything about it. That’s a level of secrecy that not even Vanuatu can
offer.¶ From a certain perspective, all this might look pretty smart: Shut down foreign tax havens and then steal their business. That would be the kind of
thinking that’s undermining America’s standing in so many areas, from trade to climate change. Instead of using its power to establish an equitable system of global
governance, it’s demanding a standard from the rest of the world that it refuses to apply to itself. That isn’t leadership.

US has the laxest countermeasures to shell companies


Washington Post 2016 (Washington Post, 4-5-2016, "How the U.S. became one of the world’s
biggest tax havens," https://www.washingtonpost.com/news/wonk/wp/2016/04/05/how-the-u-s-
became-one-of-the-worlds-biggest-tax-havens/)

A 2012 study in which researchers sent more than 7,400 email solicitations to more than 3,700
corporate service providers -- the kind of companies that typically register shell companies, such as the Corporation Trust Company at 1209 North
Orange St. -- found that the U.S. had the laxest regulations for setting up a shell company anywhere in the

world outside of Kenya. The researchers impersonated both low- and high-risk customers, including
potential money launderers, terrorist financiers and corrupt officials.

Trump is advertising the US as the world’s most enticing tax haven


Schubert 2018 (Axel Von Schubert, 1-13-2018, "Trump’s tax reforms won’t bring back offshore money
– he is just creating another tax haven in the US," Independent,

https://www.independent.co.uk/voices/donald-trump-tax-dodging-offshore-havens-bermuda-bahamas-
cayman-islands-paradise-papers-a8157386.html)
The ‘Trump card’ being offered is the non-disclosure of a US company’s beneficial owner, something which has been impossible in the Bahamas and other
traditional tax havens for years. Under
the pretext of combating money laundering and terrorism, a new law known
as the Foreign Account Tax Compliance Act has been devised. But it is revealing that the USA is the only
country who has not signed the agreement and is not bound by its rules.¶ Trump has promised a
dramatic reduction of corporate taxes for US corporations which have promised to repatriate funds to the US. Whether
those funds can be forced to invest in jobs remains to be seen. So far most of the funds repatriated have
served internal company share buybacks, which do not create employment.¶ But by conveniently reducing the taxes
which will benefit the bottom line of onshore ventures, the billions stashed away offshore by many of his friends and large corporations are unlikely to be
repatriated soon.¶ The interests of Fortune 500 companies and their directors and shareholders are closely tied to offshore centres like the Bahamas and these
deep-rooted interests will make an unraveling of these tax havens highly unlikely – especially under President Trump.

The South Dakota Tax Haven offers security, low scrutiny and no taxes – that’s
why the super rich choose it.
Bullough 2019 (Oliver Bullough, 11-14-2019, "The great American tax haven: why the super-rich love
South Dakota," Guardian, https://www.theguardian.com/world/2019/nov/14/the-great-american-tax-
haven-why-the-super-rich-love-south-dakota-trust-laws)
Super-rich people choose between jurisdictions in the same way that middle-class people choose between ISAs: they want the best security, the best income and the lowest costs. That is why so many super-rich people are choosing South Dakota, which has created the most potent force-field money can buy – a South Dakotan trust.¶ If an ordinary person puts money in the bank, the government taxes what little interest it earns. Even if that money is protected
from taxes by an ISA, you can still lose it through divorce or legal proceedings. A South Dakotan trust
changes all that: it protects assets from claims from ex-spouses, disgruntled business partners, creditors,
litigious clients and pretty much anyone else. It won’t protect you from criminal prosecution, but it does prevent information on
your assets from leaking out in a way that might spark interest from the police. And it shields your
wealth from the government, since South Dakota has no income tax, no inheritance tax and no capital
gains tax.¶ A decade ago, South Dakotan trust companies held $57.3bn in assets. By the end of 2020,
that total will have risen to $355.2bn. Those hundreds of billions of dollars are being regulated by a state with a population smaller than Norfolk, a part-time
legislature heavily lobbied by trust lawyers, and an administration committed to welcoming as much of the world’s money as it can. US politicians like to boast that their country is the best
place in the world to get rich, but South Dakota has become something else: the best place in the world to stay rich.¶ At the heart of South Dakota’s business success is a crucial but overlooked fact: globalisation is incomplete. In our modern financial system, money travels where its owners like, but laws are still made at a local level. So money inevitably flows to the places where governments offer the lowest taxes and the highest security. Anyone who can
afford the legal fees to profit from this mismatch is able to keep wealth that the rest of us would lose,
which helps to explain why – all over the world – the rich have become so much richer and the rest of us
have not.

The US is the world’s second most secretive tax haven behind Switzerland
Bullough 2019 (Oliver Bullough, 11-14-2019, "The great American tax haven: why the super-rich love
South Dakota," Guardian, https://www.theguardian.com/world/2019/nov/14/the-great-american-tax-
haven-why-the-super-rich-love-south-dakota-trust-laws)

The rest of the world, inspired by this example, created a global agreement called the Common Reporting Standard
(CRS). Under CRS, countries agreed to exchange information on the assets of each other’s citizens kept in each

other’s banks. The tax-evading appeal of places like Jersey, the Bahamas and Liechtenstein evaporated almost immediately, since you could no longer hide
your wealth there. How was a rich person to protect his wealth from the government in this scary new transparent world? Fortunately, there was a loophole. CRS

had been created by lots of countries together, and they all committed to telling each other their
financial secrets. But the US was not part of CRS, and its own system – Fatca – only gathers information
from foreign countries; it does not send information back to them. This loophole was unintentional, but vast: keep your
money in Switzerland, and the world knows about it; put it in the US and, if you were clever about it, no one need ever find out. The US was on its way to becoming
a truly world-class tax haven. The Tax Justice Network (TJN) still ranks Switzerland as the most pernicious tax haven in
the world in its Financial Secrecy Index, but the US is now in second place and climbing fast, having
overtaken the Cayman Islands, Hong Kong and Luxembourg since Fatca was introduced. “While the
United States has pioneered powerful ways to defend itself against foreign tax havens, it has not
seriously addressed its own role in attracting illicit financial flows and supporting tax evasion,” said the TJN in
the report accompanying the 2018 index. In just three years, the amount of money held via secretive structures in the

US had increased by 14%, the TJN said. That is the money pouring into Sioux Falls, and into the South Dakota Trust Company.

The rich opt out of taxes at the expense of taxpayers
Bullough 2019 (Oliver Bullough, 11-14-2019, "The great American tax haven: why the super-rich love
South Dakota," Guardian, https://www.theguardian.com/world/2019/nov/14/the-great-american-tax-
haven-why-the-super-rich-love-south-dakota-trust-laws)

If the richest members of society are able to pass on their wealth tax-free to their heirs, in perpetuity,
then they will keep getting richer than those of us who can’t. In fact, the tax rate for everyone else will
probably have to rise, to make up for the shortfall caused by the wealthiest members of societies opting
out, which will just make the problem worse. Eric Kades, the law professor at William & Mary Law School, thinks that South Dakota’s decision to abolish the
rule against perpetuities for the short term benefit of its economy will prove to have been a long-term
catastrophe. “In 50 or 100 years, it will turn out to have been an absolute disaster,” said Kades. “Now we’re going to have a bunch of wealthy families, and no one will be able to piss
away that wealth, it will stay in the family for ever. [wealthy families will be able to hold onto wealth] This just locks in

advantage.Ӧ So far, most of the discussion of this development in wealth management has been confined to specialist publications, where academic authors have found themselves
making arguments you do not usually find in discussions of legal constructs as abstruse as trusts. South Dakota, they argue, has struck at the very

foundation of liberal democracy. “It does seem unfair for some people to have access to ‘property plus’, usable wealth with extra protection built in beyond that
which regular property owners have,” noted the Harvard Law Review back in 2003, in an understated summation of the academic consensus that South Dakota has unleashed something
disastrous.¶ And if some people have access to privileged property, where does that leave the equality before the law that is central to how society is supposed to function? Another academic, writing in the trade publication Tax Notes two decades ago, put that unfairness in context: “Perpetual trusts can (and will) facilitate enormous wealth and power for dynastic families. In the process, we leave to future generations some serious issues about the nature of our country’s democracy.”

South Dakota has no political check on its tax avoidance laws


Bullough 2019 (Oliver Bullough, 11-14-2019, "The great American tax haven: why the super-rich love
South Dakota," Guardian, https://www.theguardian.com/world/2019/nov/14/the-great-american-tax-
haven-why-the-super-rich-love-south-dakota-trust-laws)

Susan Wismer, one of just 10 Democrats among the House’s 70 members, attempted to prolong the discussion by
raising concerns about how South Dakota was facilitating tax avoidance, driving inequality and damaging democracy. Her view was dismissed as
“completely jaded and biased” by a trust lawyer sitting for the Republicans. It was a brief exchange, but it went to the heart of how tax havens
work. There is no political traction in South Dakota for efforts to change its approach, since the state does so well out of it. The victims of its
policies, who are all in places like California, New York, China or Russia, where the tax take is evaporating, have no vote.¶ Wismer is the only

person I met in South Dakota who seemed to understand this. [said] “Ever since I’ve been in the legislature, the trust
taskforce has come to us with an updating bill, every year or every other year, and we just let it pass
because none of us know what it is. They’re monster bills. As Democrats, we’re such a small caucus,
we’re the ones who ought to be the natural opponents of this, but we don’t have the technical expertise
and don’t really even understand what we’re doing,” she confessed, while we ate pancakes and drank coffee in a truck stop
outside Sioux Falls. “We don’t have a clue what the consequences are to just regular people from what we’re doing.”¶ That means
legislators are nodding through bills that they do not understand, at the behest of an industry that is
sucking in ever-greater volumes of money from all over the world. If this was happening on a Caribbean island, or a
European micro-principality, it would not be surprising, but this is the US. Aren’t ordinary South Dakotans concerned about what their state is
enabling?¶ “The voters
don’t have a clue what this means. They’ve never seen a feudal society, they don’t
have a clue what they’re enabling,” Wismer said. “I don’t think there are 100 people in this state who
understand the ramifications of what we’ve done.”

US Tax Haven Impacts already being felt


Reich 2019 (Robert Reich, 12-22-2019, "How Trump has betrayed the working class," Guardian,
https://www.theguardian.com/commentisfree/2019/dec/22/trump-wants-to-be-champion-of-the-
working-class-but-with-tax-cuts-for-the-rich-it-doesnt-add-up)

Armed with deductions and loopholes, America’s largest companies paid an average federal tax rate of
only 11.3% on their profits last year, roughly half the official rate under the new tax law – the lowest effective corporate tax rate
in more than eighty years.¶ Yet almost nothing has trickled down to ordinary workers. Corporations have
used most of their tax savings to buy back their shares, giving the stock market a sugar high. The typical
American household remains poorer today than it was before the financial crisis began in 2007.¶ Trump’s
tax cut has also caused the federal budget deficit to balloon. Even as pre-tax corporate profits have reached record highs, corporate tax
revenues have dropped about a third under projected levels. This requires more federal dollars for interest on the debt, leaving

fewer dollars for public services workers need.¶ The Trump administration has already announced a
$45bn cut in food stamp benefits that would affect an estimated 10,000 families, many at the lower end
of the working class. The administration is also proposing to reduce Social Security disability benefits, a potential blow to hundreds of thousands of workers.¶ The tax
cut has also shifted more of the total tax burden to workers. Payroll taxes made up 7.8% of national
income last year while corporate taxes made up just 0.9%, the biggest gap in nearly two decades. All told,
taxes on workers were 35% of federal tax revenue in 2018; taxes on corporations, only 9%.¶ Trump probably
figures he can cover up this massive redistribution from the working class to the corporate elite by pushing the same economic nationalism, tinged with xenophobia and racism, he used in
2016. As Steve Bannon has noted, the formula seems to have worked for Britain’s Conservative party.¶ But it will be difficult this time around because Trump’s economic nationalism has hurt

American workers, particularly in states that were critical to Trump’s 2016 win.¶ Manufacturing has suffered as tariffs raised prices for imported parts and materials. Hiring has slowed sharply in Pennsylvania, Michigan and other states
Trump won, and in states like Minnesota that he narrowly lost.¶ The trade wars have also harmed rural America, which also
went for Trump, by reducing demand for American farm produce. Last year China bought around $8.6bn
of farm goods, down from $20bn in 2016. (A new tentative trade deal calls for substantially more Chinese purchases.)¶ Meanwhile,
healthcare costs continue to soar, college is even less affordable, and average life expectancy is
dropping due to a rise in deaths from suicide and opioid drugs like fentanyl. Polls show most Americans remain dissatisfied with
the country’s direction.¶ The consequences of Trump’s and the Republicans’ excessive corporate giveaways and their failure to improve the lives of ordinary working Americans are becoming
clearer by the day.

Impact
With the development of automation, tax havens enable the rich to make more
money from mobile capital, i.e. investments, while middle and lower classes
lose jobs and income.
Batros 2016 (Ben Batros, 5-12-2016, "To Tackle Inequality, We Need New Thinking on Tax Havens,"
Open Society Justice Initiative, https://www.justiceinitiative.org/voices/tackle-inequality-we-need-new-
thinking-tax-havens)

Most people sense that allowing big corporations and the ultra-wealthy to avoid taxes must play some
part in inequality. But when it’s seen in light of the convergence of three trends, the impact is even deeper—and more troubling—than it
first appears.¶ The first trend is the precipitous decline in corporate tax globally, both nominal and effective rates. This is

partly due to the profit shifting and tax avoidance which tax havens enable. It is also caused by tax competition, in which
jurisdictions feel pressure to reduce their corporate income tax rate or offer incentives such as tax holidays. But many argue that countries are lowering

their corporate tax rates in part to compete with those tax havens.¶ The second trend is that this
reduction in taxation of corporate income has taken place against a shift in economic returns from labor
to capital—a trend which is only likely to be amplified by accelerating technological developments. The share of national income going to
labor has hit a record low, as the benefits of economic growth in recent decades have gone
disproportionately to a small fraction of the wealthiest citizens, while the post-2008 recovery has been
characterized by a marked stagnation of wages for the vast majority of workers.¶ This transfer of income from labor to
capital while the tax burden shifts away from capital—and therefore onto labor and consumption—is concerning in its own right. However, the need to

address barriers to effective taxation of capital becomes more urgent in light of the role technology is
playing in displacing labor. As technology displaces more workers in more sectors of the economy, it
seems inevitable that the balance of return will shift further from increasingly-replaceable labor to
capital (i.e., those who can invest in, and own, the technologies in question).¶ Remember that it is the income generated from mobile capital (whether profits of
multinational corporate groups or investments of high-net-worth individuals), rather than immobile labor or consumption, that tax havens help avoid taxation in
home countries. Uber could set up a subsidiary in the Bahamas and attempt to shift its income there through intellectual property rights and licensing fees, avoiding
paying taxes in other countries. The people driving for Uber, however, cannot. And if its self-driving car initiative succeeds, then there will not even be that tax base of local labor for domestic governments.¶ Tax havens will thus exacerbate the social dislocation that increased automation will inevitably bring, and they may put at risk the social safety nets required to mitigate that dislocation.
If we should, as the economist Richard Freeman argues, “worry less about the potential displacement of human labor by robots than about how to share fairly
across society the prosperity that the robots produce,” then we must recognize that that is impossible in a world where tax havens run amok. ¶ In short, the
effective taxation of capital, in the face of increased automation, will increasingly become necessary for a sustainable social contract. But unless
tax
havens are addressed, effective taxation will remain a major challenge. You might say that tax havens
are emblematic of a “rigged” economy, where the wealthy and powerful play by an entirely different set
of rules, gathering disproportionate benefits, while it is increasingly hard for ordinary people to get
ahead.

Tax havens worsen economic crises, depriving recovering economies of capital
Radu 2012 (Daniela Iuliana Radu, 2012, "Tax Havens Impact on the World Economy," Procedia - Social
and Behavioral Sciences, Volume 62, Pages 398-402, ISSN 1877-0428, peer reviewed,
https://doi.org/10.1016/j.sbspro.2012.09.064,
http://www.sciencedirect.com/science/article/pii/S1877042812035057)

Invisible, [tax] havens (referred to as "offshore") play an important role in international finance in the context of the current crisis. [With] 50% of international trade transiting through them, these offshore centers are owner number 2 of State
obligations. For example, the Cayman Islands occupy fifth place in world finance, holding 80% of
investment funds from around the world, which manages assets of more than 1,000 billion dollars. Known for dirty money
laundry, these areas also have a close connection with the current financial crisis, fiscal regimes and
their lax by the possibility of banks and investment funds to invest in any kind of asset, including "toxic assets" –
real estate loans with a high degree of risk, which is the only basis and the most visible of these assets. These offshore sites also deprive the real economy of capital and allow multinationals to evade payment of taxes unduly high in some countries
well-known or situation, of structure optimization schemes for corporate tax purposes. In the context of the
current crisis, Governments can no longer close your eyes in front of the drainers capital organised by the "cooperative" areas where banking secrecy and impunity
are tax law. But it should not be forgotten that the Governments who now vehemently attacking these tax havens are the same Governments that in the past had
acted carelessly and have treated many aspects of regulation of the financial sector with an eye, knowing eyes and closing funds and in front of the practices of
investment banks.

We are on the Brink of the Aforementioned Financial Crisis


The Guardian 2019 (Larry Elliott, 4-28-2019, "G20 must heed global debt warnings to stave off another
crisis," Guardian, https://www.theguardian.com/business/2019/apr/28/g20-must-heed-global-debt-
warnings-to-stave-off-another-crisis)
Back in July 2005, the G8 summit at the Gleneagles hotel in Scotland announced a package of aid and debt relief for the world’s poorest
countries. The event marked the high point of international development cooperation and was supposed to put the finances of low-income
nations on a permanent sustainable footing.¶ For a while, optimism seemed well founded. Public debt for those countries that qualified for help
dropped from an average of 100% of their annual income in the early 2000s to just over 30% by 2013 – freeing up resources to spend on health,
education and infrastructure projects.¶ Now warning signs are flashing that another debt crisis is approaching, with concerns being raised not
only by development campaign groups but by the International Monetary Fund and the World Bank. The IMF says 40% of low-
income countries are either in debt distress or at high risk of being so. The Bank says debt in poor countries is a “rising
vulnerability”. Explaining how the world came to be on the brink of debt crisis 2.0 is relatively simple. It all
began in the depths of the financial crisis just over a decade ago, when the response to the threat of a second Great
Depression led to interest rates being slashed, to central banks boosting the supply of money through
quantitative easing (QE), and to countries supporting growth through packages of tax cuts and public
spending.¶ The biggest such fiscal package by far was announced by Beijing and it was instrumental not only in
turning round the Chinese economy but also in hastening recovery elsewhere. China’s exceptionally high growth rates meant
it needed oil, industrial metals and raw materials, and this was a boon to those developing countries rich
in commodities.¶ Commodity prices rose just as all the money created by QE was looking for a home.
Investors had a choice. They could plump for developed economies where growth and interest rates
were low. Or they could be more daring and invest in emerging and developing countries where the
risks and rewards were higher. Many opted for the latter course and as a result lending to low-income countries increased sharply.
Much of the lending was from the private sector rather than the multilateral organisations, such as the
World Bank, and so tended to be made at higher interest rates. Poor countries, assuming that the
commodity boom would go on forever, borrowed in foreign currencies. Sometimes the money was spent on projects
designed to improve the growth capacity of their economies; too often, according to the World Bank, it [money] was spent on
current consumption.¶ However, the commodity boom was actually a bubble and, like all bubbles, it burst. Poor countries found
themselves hit by a quadruple whammy: falling demand for their exports, lower commodity prices, higher global interest rates and depreciating
exchange rates, which made their foreign-currency denominated debt more expensive to repay.¶ Not all low-income countries are in trouble
but the IMF has warned that emerging
market debt has returned to levels last seen in the early 1980s, when
overborrowing brought a crisis to Latin America.¶ Nor are high levels of indebtedness confined to the poorer parts of the
world because public debt as a share of national output in advanced countries is at its highest since the second world war. It took time for the

debt crisis of the 1980s to migrate from the periphery of the global economy to its core. Today there are already worries about
debt sustainability in Italy. Little wonder, then, that the IMF is concerned about the possibility that the current slowdown in the global
economy turns into something more serious.¶ If another debt crisis does erupt, the international community is not
well placed to deal with it. There is much less of a willingness to cooperate than there was in the early
2000s and a complete absence of leadership. In 2005 rich countries had solid growth and felt able to devote time to sorting
out problems beyond their shores. Now they are more concerned about domestic issues.¶ The other big problem is the lack of a
structure to deal with another debt crisis if and when it arrives. Ideally, there would be a bankruptcy procedure for
countries to match those that operate for companies and individuals, and such a scheme – a sovereign debt restructuring mechanism – was
floated in the early 2000s by the IMF’s then deputy managing director, Anne Krueger, in the wake of Argentina’s default. Intense opposition by
the US killed off Krueger’s blueprint and, despite its current concerns, the IMF has no plans to revive it. There are, though, things that could be
done to prevent and mitigate a future debt crisis. The IMF has proposed a three-step process in which countries would take greater care to
ensure any borrowing could be repaid, that there be comprehensive and transparent recording of public debts and that there should be greater
collaboration between creditors to take into account the fact that much of the recent lending has been by China. ¶ The Jubilee Debt Campaign
has gone further. It is calling for the G20, which represents the leading developed and emerging economies, to set up a public registry of loan
and debt data. All governments and multilateral institutions would commit to disclosing their loans to the registry. The UK and the US – and
other relevant jurisdictions – would insist that for a loan to a government to be enforceable in the courts it would have to be publicly disclosed
on the register within 30 days of the contract being signed.¶ This is a sensible suggestion. It would not deal with the stock of debts already
accumulated – but it would help prevent a bad situation from getting worse.

The United Nations has called for abandoning the dollar as the global reserve
currency. The dollar is no longer stable; a new reserve system is needed.
Charbonneau 2010 (Louis Charbonneau, 6-29-2010, "Scrap dollar as sole reserve currency: U.N. report," U.S.,
https://www.reuters.com/article/us-dollar-reserves-un/scrap-dollar-as-sole-reserve-currency-u-n-
report-idUSTRE65S40620100629)

UNITED NATIONS (Reuters) - A new United Nations report released on Tuesday calls for abandoning the U.S. dollar as the
main global reserve currency, saying it has been unable to safeguard value.¶ But several European officials
attending a high-level meeting of the U.N. Economic and Social Council countered by saying that the market, not
politicians, would determine what currencies countries would keep on hand for reserves.¶ “The dollar has
proved not to be a stable store of value, which is a requisite for a stable reserve currency,” the U.N.
World Economic and Social Survey 2010 said.¶ The report says that developing countries have been hit
by the U.S. dollar’s loss of value in recent years.¶ “Motivated in part by needs for self-insurance against volatility in commodity markets
and capital flows, many developing countries accumulated vast amounts of such (U.S. dollar) reserves during the 2000s,” it said.¶ The report supports replacing the
dollar with the International Monetary Fund’s special drawing rights (SDRs), an international reserve asset that is used as a unit of payment on IMF loans and is
made up of a basket of currencies.¶ “A
new global reserve system could be created, one that no longer relies on the
United States dollar as the single major reserve currency,” the U.N. report said.¶ The report said a new reserve
system “must not be based on a single currency or even multiple national currencies but instead, should
permit the emission of international liquidity — such as SDRs — to create a more stable global financial
system.”¶ “Such emissions of international liquidity could also underpin the financing of investment in long-term sustainable development,” it said.

Globalization makes the dollar extremely unstable, and the current political
climate may cause countries to lose trust in the dollar.
Acheson 2019 (Noelle Acheson, 8-3-2019, "Bitcoin Won't Be a Global Reserve Currency. But It's
Opening the Box," CoinDesk, https://www.coindesk.com/bitcoin-wont-be-a-global-reserve-currency-
but-its-opening-the-box)

Celebrating the 75th anniversary of the Bretton Woods conference is probably not high on the list of priorities for cryptocurrency enthusiasts
this month. This is an understandable oversight – the price swings, confusing product launches and whereabouts of Justin Sun are perhaps
more compelling.¶ But the birth of international economic cooperation and interoperability should be
recognized as the beginning of a process of economic reconstruction that has contributed to the global
imbalances worrying the markets today. It could also have set the scene for the solution.¶ The bulk of the U.S. stock market may
be overvalued, and yields look set to go even lower – but a large part of the current strain lurks under the surface of [comes
from] the currency market. A combination of monetary easing, trade tensions and the threat of military
action in the Middle East is a noxious cocktail for currency holders and hedgers as international
conversions get risky and costly.¶ Perhaps because of this, as well as the disquieting brandishing of financial
muscle by the U.S. administration, the chorus of questions about the role of the U.S. dollar as a global
reserve currency is [are] getting louder.¶ What’s more, it has held its leadership role for almost 100 years; the average global
reserve currency lifespan over the past five centuries is 95 years. Shifting balances are hinting that the dollar’s reign may
soon be up: its share of foreign exchange reserves is over 60%, while the weight of the U.S. economy in
global output has fallen to less than 25 percent and is likely to continue trending lower.¶ Encroaching currency
competition could well gather momentum as politics starts to trump economics.¶ Some have posited that there is a “non-zero chance” that
bitcoin would make a great reserve currency. I disagree; I believe that there is exactly zero chance that could happen. I do think, however, that
the global reserve system will radically change over the next couple of decades. Bitcoin could be a part of what emerges.¶ What gives?¶ First,
some background on the significance of Bretton Woods and why we should be paying attention.¶ In 1944, an agreement was drawn up
between delegates from 44 nations that established the U.S. dollar as the world’s reserve currency, which would be pegged to gold. The other
member nations would peg their currencies to the U.S. dollar, and the resulting relative stability between denominations would smooth world
trade and boost the post-war recovery.¶ The agreement also created the institutions of the International Monetary Fund (IMF) and the World
Bank to coordinate global currency movements and channel loans to developing nations.¶ The U.S. dollar stopped “officially”
being the global reserve currency when President Richard Nixon took the country off the gold standard
in 1971. It remained the de-facto global reserve currency, however, by dint of being the world’s largest economy and trading nation.
Countries tended to hold more dollar reserves than all other foreign currencies combined, for the ease of transacting and for their relative
stability.¶ Being the global reserve currency is a mixed blessing. While it makes it easy to borrow in international markets, it takes away power
to influence the domestic economy.¶ If
foreign debt holders start to believe that President Trump might
encourage a devaluation of the U.S. dollar (as he has often hinted he wants to do), they would start to unload, as a
devaluation would make their bonds worth less. Foreign holdings of U.S. debt currently amount to over
$6 trillion, almost 30% of the outstanding total, so even a small unloading would be a shock to the
market and would weaken the dollar’s credibility for some time to come.¶ As well as not being able to devalue when
convenient, the additional global demand for U.S. dollars stemming from its reserve currency status is keeping the dollar’s value high relative to
other currencies, exacerbating the current account deficit, now the largest in the world. And, whatever your views on Modern Monetary Theory
(which posits, among other things, that debt levels don’t matter), the vulnerability of the U.S. markets to foreign investment strategies is
disquieting.¶ So much for “America First.”¶

The head of the FDIC during the 2008 financial crisis believes we are heading
toward another financial crisis.
Bair 2019 (Investopedia, 7-1-2019. Sheila Bair headed the FDIC during the 2008 financial crisis. Since
leaving the FDIC, Bair was president of Washington College in Maryland until 2017 and has been an
advisor to various institutions, such as the China Banking Regulatory Commission. She is head of the Pew
Charitable Trusts' Systemic Risk Council, a group promoting financial stability. "4 Early Warning Signs of
the Next Financial Crisis," https://www.investopedia.com/investing/early-warning-signs-next-financial-
crisis/)
Bair does not have an issue with some bank deregulation, "such as easing unnecessary supervisory infrastructure on regional and community
banks."¶ However, especially regarding the "large, complex financial institutions that drove the crisis," she asserted

to Barron's: "To loosen capital now is just crazy. When we get to a downturn, banks won't have the cushion
to absorb the losses. Without a cushion, we will have 2008 and 2009 again."¶ An independent research
arm of the U.S. Treasury Department has found that the financial system still would be in great peril if
one or more big banks fail, despite reforms enacted after the 2008 crisis. Similarly, economics professor
Kenneth Rogoff of Harvard University believes that leading central banks around the world are
unprepared to deal with a new banking crisis.¶ 2. Soaring Private Debt¶ When asked for her opinion on what
might trigger the next financial crisis, Bair pointed to soaring private debt. She mentioned credit card debt,
subprime auto loans, loans that finance corporate leveraged buyouts, and general corporate debt. "Any type of secured lending
backed by an asset that is overvalued should be a concern," she indicated, adding, "That is what happened with
housing."¶ 3. Ballooning Federal Deficit¶ "If we keep throwing gas on flames with deficit spending, I worry about how severe the next
[economic] downturn is going to be—and whether we have enough bullets left [to fight it]," Bair opined. "I also worry when the safe-haven
status of Treasuries is questioned," she added.¶ Bair continued: "I don't think Congress has a clue that the reason they have been able to get
away with this profligacy is that we are the best-looking horse in the glue factory. But we are in the glue factory. Our fiscal situation is not a
good one."¶ 4. Student Debt¶ Bair also is alarmed about student debt, which is a staggering $1.3 trillion, Barron's says. "There
are parallels to 2008: There are massive amounts of unaffordable loans being made to people who can't
pay them, and the easy availability of those loans is leading to asset inflation," she observed.¶ A big part of the
student loan problem, Bair said, is that educational institutions raise tuition with impunity because "they have no
skin in the game, like [many lenders] in the mortgage crisis." That is, the federal government, not the colleges
themselves, bears the risk of default.¶ See how Investopedia's millions of readers worldwide feel about the securities markets, as measured by
the Investopedia Anxiety Index (IAI).¶ Reforming for Student Loans¶ Bair supports a system in which the colleges and the government split the
cost of student loans 50/50, and repayment is on a sliding scale, as a percentage of future income. She believes that philanthropies also should
get "into the mix." The reason, she said: "We need high school math teachers just like we need hedge fund managers, but we have a one-size
payment system, whether you are making $36,000 or $360,000."¶ Another matter of concern to Bair: "Student
debt also suppresses
small-business formation. Kids who would have started a business in their parents' garage can't do that
now because they owe $50,000."¶ "Prudence" vs. "Short-Termism"¶ Banks and regulators alike in China are increasingly concerned
about risk management, credit quality, and nonperforming loans, Bair says, noting that "prudence" and "sustainable growth" are becoming
watchwords.¶ She adds: "I'm struck by the difference in the tone of the political leadership—with [China's President] Xi talking about
deleveraging, constraining asset bubbles, and accepting short-term trade-offs to growth for long-term stability. Contrast that to the U.S., where
we have a move to deregulation and borrowing more. It saddens me that we are falling prey to short-termism."¶ Bitcoin and Cyber Risk¶ Bair
told Barron's that bitcoin has no intrinsic value, but neither does government-issued paper money. The market should determine its value, in
her opinion, while government should focus on disclosure, education, fraud prevention, and curbing its use to support criminal activities. She
advises people not to invest in it unless they can afford a complete loss.¶ Given all their other post-crisis concerns, Bair told Barron's that
regulators fell behind on dealing with systemic cyber risk. Right now, however, she is happy to see that they have become very focused on it.¶ A
Contrasting Viewpoint¶ In contrast to Sheila Bair's concerns, a bullish view on the banking sector has been offered by widely followed bank
analyst Dick Bove. He believes that U.S. banks are entering a new, decades-long golden age of growing profitability. Meanwhile, the KBW
Nasdaq Bank Index (BKX) is up 530% from its intra-day low on March 6, 2009, through the close on March 2, 2018, outdistancing the 304% gain
for the S&P 500 Index (SPX).

There is little that current economic mechanisms can do to stop a global financial crisis.
Schneider 2019 (Howard Schneider, 8-25-2019, "Central bankers face political shocks, and hope to
avoid the worst," U.S., https://www.reuters.com/article/us-usa-fed-jacksonhole/central-bankers-face-
political-shocks-and-hope-to-avoid-the-worst-idUSKCN1VF0FV)
What became clear at the U.S. Federal Reserve’s central banking conference in Jackson Hole, Wyoming, over the past couple of days is that not
only do other people hold the wheel, some seem intent on steering toward trouble.¶ “We are experiencing a series of major political shocks;
we saw another example of that yesterday,” Reserve Bank of Australia Governor Philip Lowe said on Saturday, a day after China and the United
States slapped more tariffs on each other’s goods and U.S. President Donald Trump called on American companies to shut down their
operations in the Asian nation.¶ As those political shocks slow growth, Lowe said in a panel discussion, “there
is a strongly-held view
that the central bank should just fix the problem ... The reality is much more complicated,” and not something
monetary policy can likely repair.¶ His comments spoke to an uncomfortable truth that hovered over an annual symposium where the

mountain backdrop and two days of technical debate often seem distant from the world of realpolitik. Even
as central bankers and
economists referred to the deep connections that now tie the world’s economies together, a U.S.-driven
trade war seemed to be driving them apart and raising the specter of a broad global downturn.¶ Worse, it’s
a downturn none of the central bankers seemed confident about how to fight - coming not from a business- or financial-cycle meltdown that
they have a playbook to combat, but from political choices that threaten to crater business confidence.¶ If that’s the problem, Lowe and others
said, lower interest rates - something demanded by Trump to get an upper hand in the trade war with China - will do little to help.¶ “The
problem is in the president of the United States,” former Fed Vice Chair Stanley Fischer said at a lunch event on Friday. “How the system is
going to get around some of the sorts of things that have been done lately, including trying to destroy the global trading system, is very unclear.
I have no idea how to deal with this.Ӧ It was a rare calling out of Trump, though his presence infused other remarks. Fed Chair Jerome Powell,
handpicked by Trump to run the central bank but now an object of the president’s ire, noted in his opening speech that the Fed had no
chartbook for building a new global trading system.¶ ‘LAST MOMENT’¶ Central banks have asked politicians for years to use
fiscal policy more constructively and address structural problems plaguing economies.¶ What they’ve
gotten instead is a fast multiplying set of risks, with the U.S.-China trade war at the epicenter but also
including the possibility of a disruptive British exit from the European Union, an economic slowdown in Germany, a political collapse in Italy,
rising political tensions in Hong Kong, and longstanding international institutions and agreements under pressure.¶ European Council
President Donald Tusk described this weekend’s G7 leaders summit in the French seaside resort of Biarritz as a “last moment” for
its members - the United States, Britain, Germany, Japan, France, Italy and Canada - to restore unity.¶ Amidst all the tumult, and
with interest rates across the globe already lower than they’ve been historically, monetary policy may
be no match.¶ “There is not that much policy space and there are material risks at the moment that we all are trying to manage,” Bank of
England Governor Mark Carney said here on Friday.¶ Small countries like Sweden and Turkey, buffeted by volatile capital flows as central banks
worldwide cut rates, are now struggling to deal with the possibility that the global trading order may be changing for good. ¶ Meanwhile, large
nations worry they will slip into a rut that may be hard to escape.¶ For the U.S. central bank, if trade uncertainty drives
down business investment and starts to hurt consumer spending, it may find itself cutting rates back to
zero with the economy still muddling along, forcing Powell and his fellow policymakers to weigh whether to restart crisis-era
tools even outside a crisis or recession.¶ “There’s only so much a monetary policy action can do,” Cleveland Fed President Loretta Mester told
Reuters on the sidelines of the conference on Saturday. “You have to recognize that the U.S. economy is affected by what’s going on in the rest
of the world ... I do worry about this whole undermining of institutions globally.Ӧ In a development that has cheered some policymakers,
Germany has signaled it may deliver some fiscal stimulus to offset a manufacturing slump. But with the European Central Bank signaling it too is
ready to battle slowing growth by easing policy further, Powell’s Fed may be forced to act despite its desire to stay above the day-to-day fray of
changing trade policy.¶ “You need to respect that we are part of the global economy; the global economy is slowing, other central banks are
easing, and they are responding to a common global slowdown,” Fed Vice Chair Richard Clarida said on Friday.¶ “What monetary policy can do
is to use its tools to do the best it can to keep the economy close to full employment and stable inflation; depending upon the shock hitting the
economy and depending upon the response to that shock, the insulation may not be perfect,” Clarida said.

Tax havens distort markets, giving lawbreakers an unfair advantage


Radu 2012 (Daniela Iuliana Radu, 2012, "Tax Havens Impact on the World Economy," Procedia - Social
and Behavioral Sciences, Volume 62, Pages 398-402, ISSN 1877-0428, peer reviewed,
https://doi.org/10.1016/j.sbspro.2012.09.064,
http://www.sciencedirect.com/science/article/pii/S1877042812035057)

In general, tax havens have a double monetary control system which distinguishes between residents
and nonresidents and between foreign currencies and local currencies. Residents are usually subject to monetary
controls, while non-residents are not. In addition, tax havens have currencies easily convertible into dollars, euros or pounds. While large corporations benefit from the imposition of
offshore centres, individuals obtain benefits through offshore banks, usually banks in jurisdictions with lower taxation than the country of residence of the depositor.
The advantages enjoyed by these savers are: banking secrecy, reduced or absent taxation, easy access to deposits, and protection against political and
financial instability. Tax fraud has serious consequences for Member States' budgets and for the system of the EU's own
resources. It leads to violations of the principle of fair and transparent taxation and to distortions of competition, and thus
significantly affects the functioning of the internal market. Honest companies experience competitive
disadvantages due to tax fraud, and the resulting tax revenue losses are ultimately covered by the European
taxpayer through other forms of taxation. The fact that a uniform system of data collection does not currently apply in all Member States,
with major differences between national reporting standards, makes it impossible to assess the full extent of this phenomenon. Under these conditions,
estimates of the Community budget losses due to uncharged fees, taxes and levies oscillate around EUR 250 billion (2-2.25% of GNP). The figure of 40 billion from fraud in the field of VAT is, however, believed
to be realistic. The losses correspond to at least 10% of VAT receipts, 8 percent of revenue from the excise duty on alcoholic beverages, and 9
percent of revenues from the excise duty on tobacco products.

Tax havens are a "finance curse" for their inhabitants, benefitting only wealthy
tax evaders.
Harrington 2016 (Brooke Harrington, 7-28-2016, "Why Tax Havens Are Political and Economic
Disasters," Atlantic, https://www.theatlantic.com/business/archive/2016/07/tax-haven-curse/491411/)

But as many are finding, becoming a tax haven has unexpected costs. Precipitous economic, political, and social declines
have occurred so often in such states that observers have coined a new term for it: “the finance curse.”
When the "finance curse" strikes a country, there is a recurrent pattern: While its democracy, economy, and culture remain formally
intact, they are increasingly oriented to and co-opted by international elites. In other words, such countries

gradually become organized around the interests of people who don't even live there, to the detriment
of those who do. The services produced by these countries protect cosmopolitans’ wealth, but the riches never flow to the local producers,
undermining their capacity for self-governance and social cohesion, as well as the development of
infrastructure and institutions.¶ This has led to increasing economic fragility for offshore financial
centers, along with political corruption and social decline, as evidenced by a rise in crime and violence. I
experienced the latter in my own research on the global wealth-management industry: In the course of visiting 18 tax havens in every major region of the world, I encountered this social decay
directly through a number of experiences, including being robbed at Pae Moana in the Cook Islands. A local fisherman I met afterwards said the rise in burglary and violent crime in the islands
began with the growth of the offshore industry. Not only the wealth it brought in, but also the new value system focused on exploitation and greed, meant that “everyone calls us the ‘Crook
Islands’ now.” The finance industry had begun to eat away at the nation’s democratic institutions: Referring to a recent political-corruption scandal, the fisherman said, “They’ve got our
government in their pockets. I hate what they've done to my country.Ӧ But as I learned, the workings of the finance curse have shaped not only the development of small post-colonial
nations like the Cook Islands, but also that of seemingly wealthy and well-established ones. For example, recent reporting on the Channel Island of Jersey has documented the crippling of the
country's economy, government, and society in one of the world's leading financial centers—a place that was once considered a "miracle of plenty" and a role model for other would-be tax

havens.¶ The corrosion described by the finance curse has affected even some of the wealthiest financial
centers, such as Luxembourg, which is the domicile of choice for $3.5 trillion worth of mutual-fund
shares and over 150 banks. As a result of a robust financial-services sector that contributes 27 percent of the country’s economic production, the Grand Duchy boasts
the highest per capita GDP in Europe, far outstripping its nearest rivals, Norway and Switzerland. At first blush, Luxembourg would appear to be in terrific shape: a wealthy democracy,

thriving in the center of Western Europe.¶ However, as the economist Gabriel Zucman has shown, Luxembourg's role as a leading tax haven has
benefitted foreigners at the expense of locals, across the board. Over 60 percent of the country’s
workforce is comprised of foreigners, who reap virtually all the benefits of the wealth generated by the
Duchy. The society, as a result, is fracturing along expat-versus-local lines, both in economic and political terms.¶ As Zucman documents, inequality in the Grand
Duchy has skyrocketed, with poverty doubling since 1980, and real wages for ordinary Luxembourgers
stagnating for the past 20 years. Meanwhile, salaries for expat wealth managers have exploded, tripling
housing prices in Luxembourg City. However, even this new wealth has not benefitted the local economy: Due to Luxembourg’s tax policies, public institutions
such as the educational system are in "accelerated decline," mainly to the detriment of locals. The result, Zucman observes, is that Luxembourg has become more of a free-trade zone than a
state.¶ This represents a threat to European democracy. As Zucman points out, Luxembourg has full membership in the European Union based on the premise that the government represents
the citizens of the Duchy. However, having “sold its sovereignty” to multinational corporations, Luxembourg has also made itself the political arm of international finance, effectively giving

those multinationals voting and veto privileges over European public policy.¶ A similar situation has played out in Panama. One of the less-explored
angles on the Panama Papers scandal—the leak of nearly 40 years' worth of confidential client data from the wealth-management firm Mossack Fonseca—is how Panama's role as an
international tax haven has affected the country internally. Financial services, which are mostly directed at foreigners moving wealth offshore, contribute an estimated 7 percent of Panama's
GDP, and have led to a steadily growing economy for more than a decade. Coupled with dominance over the shipping trade via the Panama Canal, which brings in billions of dollars to the

country’s economy, Panama would seem well-positioned for prosperity.¶ Yet in reality, Panama remains one of the most economically unequal
states in the world: a so-called "Fourth World country," signifying an extreme degree of material
deprivation. More than one-third of its population lives in poverty. According to World Bank estimates,
25 percent of Panamanians lack basic sanitation and 11 percent suffer from malnutrition. The country’s
indigenous people of color, representing almost 13 percent of the population, have been almost

completely shut out from any benefits Panama has reaped from the offshore business; most still lack access to clean
water or health care.¶ This is because all the benefits of Panama's economy, including the offshore finance industry, have been tightly constrained to elites. This is particularly true of those

tied to “little Manhattan”—the district of the capital city where financial firms are based. But across the board, in all areas of the economy, Panama’s growth primarily
benefits foreigners.¶ With this unequal sharing of Panama's wealth has come increased violence and a
decline in democratic institutions. Murder rates doubled between 2006 and 2012, and 22 percent of the
population reported being a victim of crime just within the previous four months. In 2012, a week of riots, arson, and looting
followed the government’s manipulation of the legislature to force sale of public lands. The state, which has no army, has deployed Panama’s border-defense officers to put down public
protests demanding democratic representation. The previous president has been accused by his own vice-president of taking a $30 million bribe from a foreign corporation. (The previous
president denies the allegations.) Thus, despite the country's apparent economic health and its central role in the offshore finance network, it remains a classic case of the "finance curse":

stunted by inequality, violence, and a weak democracy with recurrent tendencies toward authoritarian rule.¶ Countries affected by the "finance curse"
are essentially captured states. But captured by whom? As with Luxembourg, the case of Antigua and Barbuda—a single country composed of several islands at the
intersection of the Atlantic Ocean and the Caribbean Sea—suggests that the answer is usually foreign elites. This means high-net-worth individuals,

multinational firms, and finance professionals from abroad not only wring disproportionate benefits
from the country's economy, but come to control its political system as well.¶ The problem of state
capture was exemplified by the American financier R. Allen Stanford, who essentially bought Antigua—in
some cases through, an indictment alleged, outright bribes, but more often with a series of quid-pro-quo moves. As the country’s former prime minister said of

Stanford, "This man has a lien on our whole country." For example, The Guardian reported in 2009, Stanford gave the Antiguan

government $30 million to build a new hospital, and let the regime take credit publicly for the move,
ensuring the goodwill of voters. In return, according to The Guardian, the regime gave Stanford and his
firm enormous legal and financial concessions, enabling Stanford's personal fortune to reach $2.2
billion—nearly double the GDP of the island.¶ To complete his capture of the island’s economy and
government, Stanford became the second-largest employer in the country, and the owner of its primary
newspaper. As a result, he exerted a controlling influence over both the livelihoods and political
discourse of most Antiguans. Throughout this process, Antigua's own political leaders enriched themselves through association with Stanford, while maintaining their
positions of public authority by seeming to invest in public goods that were covertly being funded by Stanford.¶ The perils of this strategy became apparent

when Stanford was convicted of fraud and sentenced to a prison term of 110 years. When his $7 billion
investment scheme collapsed, it wasn't just a personal failure: He took an entire country down with him.
Overnight, Antigua lost 10 percent of its GDP, and—perhaps surprisingly—25 percent of its tourism revenues. It wasn't
just ruined as an offshore financial center, but tainted in such a way that its main alternative source of economic survival was damaged as well.

AT: New Laws Solve

Companies can get around data privacy laws that regulate the sale of data by
simply not "selling" it. This legal loophole allows companies to use the data
themselves and give it away to others.

Morrison 2019 (Sara Morrison, 12-17-2019, "Facebook is gearing up for a battle with California’s new data privacy law,"
Vox, https://www.vox.com/recode/2019/12/17/21024366/facebook-ccpa-pixel-web-tracker)

The California Consumer Privacy Act, which regulates data collection, doesn’t go into effect until next
month, but it may already be heading for a showdown with one of the biggest data collectors of them
all: Facebook.

When the CCPA (full text here, if you really want to dig in) goes into effect on January 1, it will be the
strictest digital privacy law in the United States to date, and the first law in the country that gives adults
some rights over the collection of their data. Companies will be required to tell California residents what
data about them is being collected, if it’s being sold, and to whom. It will give California residents the
ability to opt out of having their data sold, and in some cases let them access and delete data a company
has about them.

Though this is a state law, it will likely affect all Americans. It’s both easier and safer for companies to
apply it nationally, and it’s expected that most of them will follow Microsoft’s lead and do exactly that.
You may have noticed several websites sending you notifications about updated privacy policies
recently; this is likely why.

Facebook is taking a different tack for its web tracker, Pixel. Pixel’s name comes from its physical
appearance on a website that installs it: literally, one square pixel. But behind that pixel is code
that installs cookies on your browser, allowing it to track your activity across the internet. Facebook is
able to link your browser (and its activity) to your Facebook account, which gives it valuable data about
you as an individual as well any categories it has placed you in — things like your location, age, gender,
and interests.

Facebook provides this code to businesses free of charge, and those businesses can then purchase ads
based off the information that Pixel collects. But that information only goes one way; Facebook knows
who you are, but the business doesn’t. It can then purchase ads from Facebook that target, say, women
ages 30-44 who live in Los Angeles, or advertise a certain product to site visitors who interacted with
that product in some way.

That’s why you might see an ad for a shirt you placed in an online retailer’s shopping cart (but didn’t
buy) on your Facebook timeline.

According to the Wall Street Journal, Facebook will claim that it doesn’t sell the data that its web
trackers collect; it simply provides a service to businesses and websites that install Pixel on their sites.
Because of this, it believes its web trackers are exempt from CCPA’s regulations, which have exceptions
for data exchanged with a “service provider” that is “necessary to perform a business purpose.”

Legal experts who spoke to Recode disagreed with Facebook’s interpretation.

“CCPA allows data transfers to service providers so they can provide services and says those transfers
don’t count as selling user data,” Roger Allan Ford, a law professor at the University of New Hampshire
who specializes in technology law, said. “But Facebook also seems to use the data for its own purposes,
separate from providing ad services, and can’t rely on the service provider exception for those uses. So if
Facebook does use tracking data for its own business purposes, then its argument is wrong.”

Basically, Ford is saying that if Facebook uses the data it collects from Pixels in any way other than
providing ads to the businesses it collected that data from, it can’t claim a business purpose exemption.

Ari Waldman, director of the Innovation Center for Law and Technology at New York Law School, said
that Facebook’s attempt to get around part of the CCPA was “par for the course for this company.”

“Just because Facebook doesn’t ‘sell’ data to others (the company sells ads based on its vast data
collection), that doesn’t mean this rule doesn’t apply,” Waldman said, adding, “By not changing its
practices and arguing, with all likelihood, that the company falls under [the] ‘business purpose’
exception, Facebook is taking advantage of some ambiguity in the law to reframe the law’s
requirements to suit its own purposes.”

Jacob Snow, a technology and civil liberties attorney for the ACLU of Northern California, also doubted
that Facebook’s exemption argument would hold up.

“When a website delivers massive volumes of personal information to Facebook, that’s a sale under the
CCPA,” he said. “Facebook’s plans to disregard the law is but another example demonstrating that
industry will do anything to protect their bottom line at the expense of Californians’ rights.”

Facebook addressed this in a blog post last week that seemingly put the onus on the sites that install its
tracker to make sure their use of it complies with the CCPA: “We encourage advertisers and publishers
that use our services to reach their own decisions on how to best comply with the law. ... We will only
use our partners’ data for the business purposes described in our contracts with them.”

As for the rest of its services, Facebook also said in the post that it believes it already gives users the
ability to “easily manage their privacy and understand their choices with respect to their data,” and that
it will be posting a “supplemental notice” to further explain its data policy as the CCPA goes into effect.

Assuming Facebook sticks to its guns, the final say will most likely rest on the California attorney
general’s office, which is in charge of enforcing the CCPA and declined to comment on the record for this
story.

AT: Health Care

Insurance companies can use data collection to set rates and target specific plans to
people on the marketplace. This is all legal under the ACA and could unfairly raise
rates for millions, who have no way to contest the data and no negotiating power

Allen, Marshall. 7-18-2018, "Health insurers are gathering personal data — that could raise your rates,"
STAT, https://www.statnews.com/2018/07/18/health-insurers-personal-details-raise-rates/

To an outsider, the fancy booths at last month’s health insurance industry gathering in San Diego aren’t
very compelling. A handful of companies pitching “lifestyle” data and salespeople touting jargony
phrases like “social determinants of health.”

But dig deeper and the implications of what they’re selling might give many patients pause: A future in
which everything you do — the things you buy, the food you eat, the time you spend watching TV —
may help determine how much you pay for health insurance.

With little public scrutiny, the health insurance industry has joined forces with data brokers to vacuum
up personal details about hundreds of millions of Americans, including, odds are, many readers of this
story. The companies are tracking your race, education level, TV habits, marital status, net worth.
They’re collecting what you post on social media, whether you’re behind on your bills, what you order
online. Then they feed this information into complicated computer algorithms that spit out predictions
about how much your health care could cost them.

Are you a woman who recently changed your name? You could be newly married and have a pricey
pregnancy pending. Or maybe you’re stressed and anxious from a recent divorce. That, too, the
computer models predict, may run up your medical bills.

Are you a woman who’s purchased plus-size clothing? You’re considered at risk of depression. Mental
health care can be expensive.

Low-income and a minority? That means, the data brokers say, you are more likely to live in a
dilapidated and dangerous neighborhood, increasing your health risks.

“We sit on oceans of data,” said Eric McCulley, director of strategic solutions for LexisNexis Risk
Solutions, during a conversation at the data firm’s booth. And he isn’t apologetic about using it. “The
fact is, our data is in the public domain,” he said. “We didn’t put it out there.”

Insurers contend they use the information to spot health issues in their clients — and flag them so they
get services they need. And companies like LexisNexis say the data shouldn’t be used to set prices. But
as a research scientist from one company told me: “I can’t say it hasn’t happened.”

At a time when every week brings a new privacy scandal and worries abound about the misuse of
personal information, patient advocates and privacy scholars say the insurance industry’s data gathering
runs counter to its touted, and federally required, allegiance to patients’ medical privacy. The Health
Insurance Portability and Accountability Act, or HIPAA, only protects medical information.

“We have a health privacy machine that’s in crisis,” said Frank Pasquale, a professor at the University of
Maryland Carey School of Law who specializes in issues related to machine learning and algorithms. “We
have a law that only covers one source of health information. They are rapidly developing another
source.”

Patient advocates warn that using unverified, error-prone “lifestyle” data to make medical assumptions
could lead insurers to improperly price plans — for instance raising rates based on false information —
or discriminate against anyone tagged as high cost. And, they say, the use of the data raises thorny
questions that should be debated publicly, such as: Should a person’s rates be raised because algorithms
say they are more likely to run up medical bills? Such questions would be moot in Europe, where a strict
law took effect in May that bans trading in personal data.

This year, ProPublica and NPR are investigating the various tactics the health insurance industry uses to
maximize its profits. Understanding these strategies is important because patients — through taxes,
cash payments and insurance premiums — are the ones funding the entire health care system. Yet the
industry’s bewildering web of strategies and inside deals often have little to do with patients’ needs. As
the series’ first story showed, contrary to popular belief, lower bills aren’t health insurers’ top priority.

Inside the San Diego Convention Center last month, there were few qualms about the way insurance
companies were mining Americans’ lives for information — or what they planned to do with the data.

The sprawling convention center was a balmy draw for one of America’s Health Insurance Plans’
marquee gatherings. Insurance executives and managers wandered through the exhibit hall, sampling
chocolate-covered strawberries, champagne and other delectables designed to encourage deal-making.

Up front, the prime real estate belonged to the big guns in health data: The booths of Optum, IBM
Watson Health and LexisNexis stretched toward the ceiling, with flat screen monitors and some comfy
seating. (NPR collaborates with IBM Watson Health on national polls about consumer health topics.)

To understand the scope of what they were offering, consider Optum. The company, owned by the
massive UnitedHealth Group, has collected the medical diagnoses, tests, prescriptions, costs and
socioeconomic data of 150 million Americans going back to 1993, according to its marketing materials.
(UnitedHealth Group provides financial support to NPR.) The company says it uses the information to
link patients’ medical outcomes and costs to details like their level of education, net worth, family
structure and race. An Optum spokesman said the socioeconomic data is de-identified and is not used
for pricing health plans.

Optum’s marketing materials also boast that it now has access to even more. In 2016, the company filed
a patent application to gather what people share on platforms like Facebook and Twitter, and link this
material to the person’s clinical and payment information. A company spokesman said in an email that
the patent application never went anywhere. But the company’s current marketing materials say it
combines claims and clinical information with social media interactions.

I had a lot of questions about this and first reached out to Optum in May, but the company didn’t
connect me with any of its experts as promised. At the conference, Optum salespeople said they
weren’t allowed to talk to me about how the company uses this information.

It isn’t hard to understand the appeal of all this data to insurers. Merging information from data brokers
with people’s clinical and payment records is a no-brainer if you overlook potential patient concerns.
Electronic medical records now make it easy for insurers to analyze massive amounts of information and
combine it with the personal details scooped up by data brokers.

It also makes sense given the shifts in how providers are getting paid. Doctors and hospitals have
typically been paid based on the quantity of care they provide. But the industry is moving toward paying
them in lump sums for caring for a patient, or for an event, like a knee surgery. In those cases, the
medical providers can profit more when patients stay healthy. More money at stake means more
interest in the social factors that might affect a patient’s health.

Some insurance companies are already using socioeconomic data to help patients get appropriate care,
such as programs to help patients with chronic diseases stay healthy. Studies show social and economic
aspects of people’s lives play an important role in their health. Knowing these personal details can help
them identify those who may need help paying for medication or help getting to the doctor.

But patient advocates are skeptical health insurers have altruistic designs on people’s personal
information.

The industry has a history of boosting profits by signing up healthy people and finding ways to avoid sick
people — called “cherry-picking” and “lemon-dropping,” experts say. Among the classic examples: A
company was accused of putting its enrollment office on the third floor of a building without an
elevator, so only healthy patients could make the trek to sign up. Another tried to appeal to spry seniors
by holding square dances.

The Affordable Care Act prohibits insurers from denying people coverage based on pre-existing health
conditions or charging sick people more for individual or small group plans. But experts said patients’
personal information could still be used for marketing, and to assess risks and determine the prices of
certain plans. And the Trump administration is promoting short-term health plans, which do allow
insurers to deny coverage to sick patients.

Robert Greenwald, faculty director of Harvard Law School’s Center for Health Law and Policy Innovation,
said insurance companies still cherry-pick, but now they’re subtler. The center analyzes health insurance
plans to see if they discriminate. He said insurers will do things like failing to include enough information
about which drugs a plan covers — which pushes sick people who need specific medications elsewhere.
Or they may change the things a plan covers, or how much a patient has to pay for a type of care, after a
patient has enrolled. Or, Greenwald added, they might exclude or limit certain types of providers from
their networks — like those who have skill caring for patients with HIV or hepatitis C.

If there were concerns that personal data might be used to cherry-pick or lemon-drop, they weren’t
raised at the conference.

At the IBM Watson Health booth, Kevin Ruane, a senior consulting scientist, told me that the company
surveys 80,000 Americans a year to assess lifestyle, attitudes and behaviors that could relate to health
care. Participants are asked whether they trust their doctor, have financial problems, go online, or own
a Fitbit and similar questions. The responses of hundreds of adjacent households are analyzed together
to identify social and economic factors for an area.

Ruane said he has used IBM Watson Health’s socioeconomic analysis to help insurance companies
assess a potential market. The ACA increased the value of such assessments, experts say, because
companies often don’t know the medical history of people seeking coverage. A region with too many
sick people, or with patients who don’t take care of themselves, might not be worth the risk.

Ruane acknowledged that the information his company gathers may not be accurate for every person.
“We talk to our clients and tell them to be careful about this,” he said. “Use it as a data insight. But it’s
not necessarily a fact.”

In a separate conversation, a salesman from a different company joked about the potential for error.
“God forbid you live on the wrong street these days,” he said. “You’re going to get lumped in with a lot
of bad things.”

The LexisNexis booth was emblazoned with the slogan “Data. Insight. Action.” The company said it uses
442 non-medical personal attributes to predict a person’s medical costs. Its cache includes more than 78
billion records from more than 10,000 public and proprietary sources, including people’s cellphone
numbers, criminal records, bankruptcies, property records, neighborhood safety and more. The
information is used to predict patients’ health risks and costs in eight areas, including how often they
are likely to visit emergency rooms, their total cost, their pharmacy costs, their motivation to stay
healthy and their stress levels.

People who downsize their homes tend to have higher health care costs, the company says. As do those
whose parents didn’t finish high school. Patients who own more valuable homes are less likely to land
back in the hospital within 30 days of their discharge. The company says it has validated its scores
against insurance claims and clinical data. But it won’t share its methods and hasn’t published the work
in peer-reviewed journals.

McCulley, LexisNexis’ director of strategic solutions, said predictions made by the algorithms about
patients are based on the combination of the personal attributes. He gave a hypothetical example: A
high school dropout who had a recent income loss and doesn’t have a relative nearby might have higher
than expected health costs.

But couldn’t that same type of person be healthy? I asked.

“Sure,” McCulley said, with no apparent dismay at the possibility that the predictions could be wrong.

McCulley and others at LexisNexis insist the scores are only used to help patients get the care they need
and not to determine how much someone would pay for their health insurance. The company cited
three different federal laws that restricted them and their clients from using the scores in that way. But
privacy experts said none of the laws cited by the company bar the practice. The company backed off
the assertions when I pointed out that the laws did not seem to apply.

LexisNexis officials also said the company’s contracts expressly prohibit using the analysis to help price
insurance plans. They would not provide a contract. But I knew that in at least one instance a company
was already testing whether the scores could be used as a pricing tool.

Before the conference, I’d seen a press release announcing that the largest health actuarial firm in the
world, Milliman, was now using the LexisNexis scores. I tracked down Marcos Dachary, who works in
business development for Milliman. Actuaries calculate health care risks and help set the price of
premiums for insurers. I asked Dachary if Milliman was using the LexisNexis scores to price health plans
and he said: “There could be an opportunity.”

The scores could allow an insurance company to assess the risks posed by individual patients and make
adjustments to protect themselves from losses, he said. For example, he said, the company could raise
premiums, or revise contracts with providers.

It’s too early to tell whether the LexisNexis scores will actually be useful for pricing, he said. But he was
excited about the possibilities. “One thing about social determinants data — it piques your mind,” he
said.

Dachary acknowledged the scores could also be used to discriminate. Others, he said, have raised that
concern. As much as there could be positive potential, he said, “there could also be negative potential.”

It’s that negative potential that still bothers data analyst Erin Kaufman, who left the health insurance
industry in January. The 35-year-old from Atlanta had earned her doctorate in public health because she
wanted to help people, but one day at Aetna, her boss told her to work with a new data set.

To her surprise, the company had obtained personal information from a data broker on millions of
Americans. The data contained each person’s habits and hobbies, like whether they owned a gun, and if
so, what type, she said. It included whether they had magazine subscriptions, liked to ride bikes or run
marathons. It had hundreds of personal details about each person.

The Aetna data team merged the data with the information it had on patients it insured. The goal was to
see how people’s personal interests and hobbies might relate to their health care costs. But Kaufman
said it felt wrong: The information about the people who knitted or crocheted made her think of her
grandmother. And the details about individuals who liked camping made her think of herself. What
business did the insurance company have looking at this information? “It was a dataset that really dug
into our clients’ lives,” she said. “No one gave anyone permission to do this.”

In a statement, Aetna said it uses consumer marketing information to supplement its claims and clinical
information. The combined data helps predict the risk of repeat emergency room visits or hospital
admissions. The information is used to reach out to members and help them and plays no role in pricing
plans or underwriting, the statement said.

Health care data gathered by tech companies is prone to mishandling

Barber, Gregory. 11-11-2019, “Google Is Slurping Up Health Data—and It Looks Totally Legal,” Wired,
https://www.wired.com/story/google-is-slurping-up-health-dataand-it-looks-totally-legal/

Tariq Shaukat, president of industry products for Google Cloud, wrote in a blog post that health data
would not be combined with consumer data or used outside of the scope of its contract with Ascension.
However, that scope remains somewhat unclear. Shaukat wrote that the project includes moving
Ascension’s computing infrastructure to the cloud, as well as unspecified “tools” for “doctors and nurses
to improve care.”

“All work related to Ascension’s engagement with Google is HIPAA compliant and underpinned by a
robust data security and protection effort,” Ascension said in a statement. The nonprofit health system
has 2,600 hospitals primarily in the Midwest and Southern US.

Health care providers see promise in mining troves of data to develop more personalized care. The idea
is to establish patterns to better detect medical conditions before a patient’s symptoms get dire, or
match patients with the treatment most likely to help. (Hospitals win here too; more personalized care
means more efficient care—fewer unnecessary tests and treatments.)

In past efforts, Google has used anonymized data, which doesn’t require patient authorization to be
released. Earlier this fall, the company announced a 10-year research partnership with the Mayo Clinic.
As part of the deal—the details of which were not disclosed—Mayo moved its vast collection of patient
records onto the Google Cloud. From that secure location, Google is being granted limited access to
anonymized patient information with which to train its algorithms.

But even when it’s used anonymized data, the company has gotten into trouble for potential privacy
violations related to health care research. In 2017, regulators in the UK determined that a partnership
between Google DeepMind and that country’s National Health Service broke the law for overly broad
sharing of data. This past June, Google and the University of Chicago Medical Center were sued for
allegedly failing to scrub timestamps from anonymized medical records. The lawsuit claims those
timestamps could provide breadcrumbs that could reveal the identities of individual patients, a potential
HIPAA violation. Both missteps underscore how easy it is to mishandle—even accidentally—highly
regulated health information when you’re a company, like Google, that mostly works with nonmedical
data.

Google’s newest venture appears unprecedented in its scale, and also in the scope of information. It was
also foreseeable. “This fusion of tech companies that have deep AI talent with big health systems was
inevitable,” says Eric Topol, a professor at Scripps Research who focuses on individualized medicine.

AT: Lower prices for the consumer

The pro has this point reversed. Prices will go up as retailers use data to find the
highest price people are willing to pay, not the lowest

Gonzalez-Miranda, Maria. 5-29-2018, "How Big Data and online markets will lead to higher — not lower
— prices," MarketWatch,
https://www.marketwatch.com/story/how-big-data-and-online-markets-can-lead-to-higher-prices-2018-05-19

WASHINGTON (Project Syndicate) — Information technology is not just transforming markets; it is also
making them ubiquitous, particularly for household consumers. From pretty much anywhere in the
world, one can now search out goods and services, compare prices from multiple sellers, and give
detailed shipping and delivery instructions, all with a mouse click or a screen tap.

No doubt, this is a dream come true for anyone who grew up shopping in real, hands-on markets, with
sellers displaying their wares on store shelves, on public squares, or along dusty roads. In many cases,
routine purchases required long waits or extensive bargaining. But with online markets, savings are
generated in many dimensions, and transaction costs are sharply reduced at all stages of the process.

As any student of economics understands, this kind of situation decreases overall welfare, because
every consumer will be forced to pay the maximum of what they are willing to spend for each good or
service they purchase, keeping nothing “extra” for themselves.

Online markets have the potential to improve consumer welfare substantially, by fueling competition on
price, efficiency, and customer experience, whether through search engines or single platforms such as
Amazon AMZN, -0.70% . And if consumers spend smaller shares of their disposable income on each
purchase they make, they will have room to consume more, thus boosting overall economic activity.

But are online markets meeting this potential?

If anything, the description above is already dated. Nowadays, online retailers use consumers’ internet
activities and other personal data to deliver “targeted pricing.” To take one particularly controversial
example, airlines now use travelers’ data to customize ticket prices in ways that essentially cancel out
the savings once offered by online markets.

Indeed, if you search online for a more expensive car or a more expensive vacation, that fact will be
documented by tracking cookies or other means of online surveillance. And with these data, digital
advertisers and retailers will offer you more expensive watches, home furnishings, or airline tickets than
they would to a lower-income user searching within the same categories.

And in some cases, they might even offer different prices to different people for the same good or
service.

AT: New systems can handle the data load

Terabytes of data are collected. No system exists that can deal with it all
effectively

Peppet, Scott R. 2014. Regulating the Internet of Things: First Steps Toward Managing Discrimination,
Privacy, Security, and Consent, p. 134

These examples illustrate the larger technical problem: Internet of Things devices may be inherently
vulnerable for several reasons. First, these products are often manufactured by traditional consumer-
goods makers rather than computer hardware or software firms. The engineers involved may therefore
be relatively inexperienced with data-security issues, and the firms involved may place insufficient
priority on security concerns.

AT: AI Good

AI will destroy human life

AT: Unconcerned About Privacy
The general public does not want to give up their privacy; they are resigned to
paying it as the cost of the internet
ICO 2017 (Information Commissioner’s Office. UK. “Big Data, Artificial Intelligence, Machine Learning
and ...,” April 9, 2017.
https://ico.org.uk/media/for-organisations/documents/2013559/big-data-ai-ml-and-data-protection.pdf.)

If it were true that people are simply unconcerned about how their personal data is used, this would
mean their expectations about potential data use are open-ended, leaving a very wide margin of
discretion for big data organisations. However, research suggests that this view is too simplistic; the
reality is more nuanced: The International Institute of Communications (IIC). Research commissioned by
the IIC43 showed that people’s willingness to give personal data, and their attitude to how that data will
be used, is context-specific. The context depends on a number of variables, eg how far an individual
trusts the organisation and what information is being asked for. The Boston Consulting Group (BCG). The
BCG44 found that for 75% of consumers in most countries, the privacy of personal data remains a top
issue, and that young people aged 18-24 are only slightly less cautious about the use of personal online
data than older age groups. KPMG. A global survey by KPMG45 found that, while attitudes to privacy
varied (based on factors such as types of data, data usage and consumer location), on average 56% of
respondents reported being “concerned” or “extremely concerned” about how companies were using
their personal data. Some studies have pointed to a ‘privacy paradox’: people may express concerns
about the impact on their privacy of ‘creepy’ uses of their data, but in practice they contribute their data
anyway via the online systems they use. In other words they provide the data because it is the price of
using internet services. For instance, findings from Pybus, Coté and Blanke’s study of mobile phone
usage by young people in the UK46 and two separate studies by Shklovski et al.47, looking at smartphone
usage in Western Europe, supported the idea of the privacy paradox. It has also been argued that the
prevalence of web tracking means that, in practice, web users have no choice but to enter into an
‘unconscionable contract’ to allow their data to be used48. This suggests that people may be resigned to
the use of their data because they feel there is no alternative, rather than being indifferent or positively
welcoming it. This was the finding of a study of US consumers by the Annenberg School for
Communication49. The study criticised the view that consumers continued to provide data to marketers
because they are consciously engaging in trading personal data for benefits such as discounts; instead, it
concluded that most Americans believe it is futile to try to control what companies can learn about
them. They did not want to lose control over their personal data but they were simply resigned to the
situation.

General Populace does not trust Companies with their Data

ICO 2017 (Information Commissioner’s Office. UK. “Big Data, Artificial Intelligence, Machine Learning
and ...,” April 9, 2017.
https://ico.org.uk/media/for-organisations/documents/2013559/big-data-ai-ml-and-data-protection.pdf.)

In the UK, a survey for Digital Catapult50 showed a generally low level of trust. The public sector was the
most trusted to use personal data responsibly, by 44% of respondents; financial services was the next
most trusted sector, but only by 29% of respondents. Other sectors had a much lower rating. On the
other hand, the survey found that a significant proportion of people were happy for their data to be
shared for purposes such as education and health. These themes – a feeling of resignation despite a
general lack of trust, combined with a willingness for data to be used for socially useful purposes – were
reflected in a report from Sciencewise51 which summarised several recent surveys on public attitudes to
data use.

People care about data privacy

Morey, Timothy; Forbath, Theodore “Theo”; and Schoop, Allison. May 2015, “Customer Data: Designing
for Transparency and Trust,” Harvard Business Review,
https://s3.amazonaws.com/academia.edu.documents/49352349/CUSTOMER_DATA-DESIGNING_FOR_TRANSPARENCY_AND_TRUST-R1505H-PDF-ENG.desbloqueado.pdf

Customers have low awareness of the trails of personal data they leave behind

Though consumers worry about how their personal data is gathered and used, they’re surprisingly
ignorant of what data they reveal when they’re online, and most companies opt not to enlighten them.
This dynamic erodes trust in firms and customers’ willingness to share information.

To help companies understand consumers’ attitudes about data, in 2014 we surveyed 900 people in five
countries—the United States, the United Kingdom, Germany, China, and India—whose demographic mix
represented the general online population. We looked at their awareness of how their data was collected and used, how they
valued different types of data, their feelings about privacy, and what they expected in return for their data. To find out whether
consumers grasped what data they shared, we asked, “To the best of your knowledge, what personal
information have you put online yourself, either directly or indirectly, by your use of online services?”
While awareness varied by country—Indians are the most cognizant of their data trail and Germans the
least—overall the survey revealed an astonishingly low recognition of the specific types of information
tracked online. On average, only 25% of people knew that their data footprints included information on
their location, and just 14% understood that they were sharing their web-surfing history too.

It’s not as if consumers don’t realize that data about them is being captured, however; 97% of the
people surveyed expressed concern that businesses and the government might misuse their data.
Identity theft was a top concern (cited by 84% of Chinese respondents at one end of the spectrum and
49% of Indians at the other). Privacy issues also ranked high; 80% of Germans and 72% of Americans
are reluctant to share information with businesses because they “just want to maintain [their] privacy.”

AT: Data is anonymous

You’re very easy to track down, even when your data has been anonymized

Jeremy Owens, Data Scientist, Towards Data Science, Jul 30, 2019

https://towardsdatascience.com/the-debate-around-data-privacy-is-missing-the-point-1fcdc4effa40

Unfortunately many organizations have shown that anonymous data isn’t as anonymous as we like to think.
While the aims these corporations and government interests have for our data may not care about our
identifying information, anyone who can get their hands on that data set, with enough skill, can pretty
easily identify the true individuals who made up that data set. That makes us all targets for identity theft,
blackmail, and myriad other exploitative practices that no one should have to suffer through.

Anonymous data doesn’t protect us

Charlotte Jee, Jul 23, 2019, MIT Technology Review, https://www.technologyreview.com/s/613996/youre-very-easy-to-track-down-even-when-your-data-has-been-anonymized/

A new study shows you can be easily re-identified from almost any database, even when your personal
details have been stripped out.
Researchers from Imperial College London and the University of Louvain have created a machine-learning model
that estimates exactly how easy individuals are to reidentify from an anonymized data set. You can check your own
score here, by entering your zip code, gender, and date of birth.

On average, in the US, using those three records, you could be correctly located in an “anonymized” database
81% of the time. Given 15 demographic attributes of someone living in Massachusetts, there’s a 99.98% chance
you could find that person in any anonymized database.

“As the information piles up, the chances it isn’t you decrease very quickly,” says Yves-Alexandre de Montjoye, a
researcher at Imperial College London and one of the study’s authors.

This isn’t the first study to show how easy it is to track down individuals from anonymized databases. A paper back
in 2007 showed that just a few movie ratings on Netflix can identify a person as easily as a Social Security
number, for example. However, it shows just how far current anonymization practices have fallen behind our
ability to break them. The fact that the data set is incomplete does not protect people’s privacy, says de Montjoye.

“The issue is that we think when data has been anonymized it’s safe. Organizations and companies tell us it’s safe,
and this proves it is not,” says de Montjoye.
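The arithmetic behind these figures is worth making concrete. The following is a toy back-of-the-envelope calculation, using assumed round numbers rather than the study's actual statistical model, showing why just three attributes (zip code, gender, date of birth) are nearly enough to single a person out of the entire US population:

```python
# Toy illustration (assumed round numbers, not the study's model):
# when the number of possible attribute combinations exceeds the
# population, most combinations identify at most one person.
US_POPULATION = 330_000_000
ZIP_CODES = 33_000        # approximate count of US ZIP codes
GENDERS = 2
BIRTH_DATES = 365 * 80    # approximate distinct dates of birth in use

# Distinct (zip, gender, date-of-birth) combinations:
combinations = ZIP_CODES * GENDERS * BIRTH_DATES

# Expected number of people sharing any one combination:
expected_matches = US_POPULATION / combinations

print(f"{combinations:,} combinations")
print(f"{expected_matches:.2f} expected people per combination")
```

Because the expected number of people per combination is well below one, a typical (zip, gender, birth date) triple points to a single individual, which is why "anonymized" rows carrying those fields are so often re-identifiable.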

Anonymous data is not considered personal data, but pseudonymised data still is

Regulation (EU) 2016/679 of the European Parliament and of the Council of 27 April 2016 on the
protection of natural persons with regard to the processing of personal data and on the free movement
of such data, and repealing Directive 95/46/EC (General Data Protection Regulation) (Text with EEA
relevance) (GDPR) https://eur-lex.europa.eu/eli/reg/2016/679/oj

Recital (26)
The principles of data protection should apply to any information concerning an identified or identifiable natural person.
Personal data which have undergone pseudonymisation, which could be attributed to a natural person
by the use of additional information should be considered to be information on an identifiable natural
person. To determine whether a natural person is identifiable, account should be taken of all the means reasonably likely to be used, such as
singling out, either by the controller or by another person to identify the natural person directly or indirectly. To ascertain whether
means are reasonably likely to be used to identify the natural person, account should be taken of all
objective factors, such as the costs of and the amount of time required for identification, taking into
consideration the available technology at the time of the processing and technological developments.
The principles of data protection should therefore not apply to anonymous information, namely
information which does not relate to an identified or identifiable natural person or to personal data rendered anonymous in such a manner that the data subject is not or no longer identifiable. This Regulation does not
therefore concern the processing of such anonymous information, including for statistical or research
purposes.

Pseudonymization not good enough

Guidance on Anonymisation and Pseudonymisation, Data Protection Commission, June 2019


https://www.dataprotection.ie/sites/default/files/uploads/2019-06/190614%20Anonymisation%20and%20Pseudonymisation.pdf

Although pseudonymisation has many uses, it should be distinguished from anonymisation: in many cases it provides only limited protection for the identity of data subjects, since it still allows
identification by indirect means. Where a pseudonym is used, it is often possible to identify the data
subject by analysing the underlying or related data.
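A minimal sketch of the indirect identification the guidance describes: a pseudonymised table still carries quasi-identifiers (here zip code and date of birth), so joining it against a public register on those columns re-identifies the rows. All records, names, and field names below are hypothetical, invented purely for illustration:

```python
# Hypothetical linkage attack: pseudonymisation replaces names
# with tokens but leaves quasi-identifiers intact, so a join on
# those columns recovers the identities.
pseudonymized = [
    {"pseudonym": "u1f3a", "zip": "02139", "dob": "1984-07-12", "diagnosis": "asthma"},
    {"pseudonym": "9bc42", "zip": "10001", "dob": "1990-01-30", "diagnosis": "flu"},
]

public_register = [  # e.g. a voter roll that includes real names
    {"name": "Alice Smith", "zip": "02139", "dob": "1984-07-12"},
    {"name": "Bob Jones", "zip": "10001", "dob": "1990-01-30"},
]

def link(records, register):
    """Match pseudonymised rows to named rows on (zip, dob)."""
    by_key = {(p["zip"], p["dob"]): p["name"] for p in register}
    return {
        r["pseudonym"]: by_key.get((r["zip"], r["dob"]))
        for r in records
    }

print(link(pseudonymized, public_register))
# {'u1f3a': 'Alice Smith', '9bc42': 'Bob Jones'}
```

The pseudonyms themselves are never cracked; the re-identification comes entirely from the untouched columns, which is exactly why the guidance treats pseudonymised data as still personal data.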

William Joy, "Why the Future Doesn't Need Us," Wired, April 2000, http://www.wired.com/wired/archive/8.04/joy_pr.html

Biological species almost never survive encounters with superior competitors. Ten million years ago,
South and North America were separated by a sunken Panama isthmus. South America, like Australia
today, was populated by marsupial mammals, including pouched equivalents of rats, deer, and tigers.
When the isthmus connecting North and South America rose, it took only a few thousand years for the
northern placental species, with slightly more effective metabolisms and reproductive and nervous
systems, to displace and eliminate almost all the southern marsupials.
In a completely free marketplace, superior robots would surely affect humans as North American
placentals affected South American marsupials (and as humans have affected countless species). Robotic
industries would compete vigorously among themselves for matter, energy, and space, incidentally
driving their price beyond human reach. Unable to afford the necessities of life, biological humans would
be squeezed out of existence.
There is probably some breathing room, because we do not live in a completely free marketplace.
Government coerces nonmarket behavior, especially by collecting taxes. Judiciously applied,
governmental coercion could support human populations in high style on the fruits of robot labor, perhaps
for a long while.
A textbook dystopia - and Moravec is just getting wound up. He goes on to discuss how our main job in
the 21st century will be "ensuring continued cooperation from the robot industries" by passing laws
decreeing that they be "nice," and to describe how seriously dangerous a human can be "once
transformed into an unbounded superintelligent robot." Moravec's view is that the robots will eventually
succeed us - that humans clearly face extinction.
