
A

Project

On

“DATA PRIVACY, RISK AND DATA PROTECTION”

A project report submitted to SYMBIOSIS INTERNATIONAL UNIVERSITY,


towards partial fulfillment of the requirements for the project

Under the guidance of:
Dr. Pravin Kumar Bhoyar
Professor, SIMS

Submitted by:
Suneel Koul
PRN: 15020448025
Ex-MBA: 2015-2017

Date: 23/SEP/2017

DATA PRIVACY, RISK AND DATA PROTECTION

To evaluate the confidential data pertaining to an individual, a group or a project, to assess the risks associated with that data, and to develop a policy to protect the data which is confidential to us.

Report

Submitted by: Suneel Koul


MBA (E) 2015-17, Fifth semester
PRN: 15020448025

CERTIFICATE

This is to certify that the project work “DATA PRIVACY, RISK AND DATA PROTECTION” is a bona fide

record of work done by Suneel Koul under the guidance of Dr. Pravin Kumar Bhoyar in partial

fulfillment of the requirements for the project.

Dr. Pravin Kumar Bhoyar


(Project Co-ordinator)
Symbiosis Institute of Management Studies, Pune

Acknowledgements

I take this opportunity to express my sincere gratitude to Dr. Pravin Kumar Bhoyar for his guidance and encouragement in carrying out this project work. I sincerely thank all the colleagues of
Symantec Softwares India Pvt Ltd and fellow Student Managers who rendered their help during the
period of my project work. I would also like to thank all the respondents who gave their valuable time
for filling up the questionnaires and for giving valuable inputs during the exploratory research.

Suneel Koul

MBA (E) 2015-17, Fifth semester

PRN: 15020448025

EXECUTIVE SUMMARY

Privacy impact assessments (PIAs) and data protection measures are widely used in European
countries, especially by government departments and agencies, local authorities, National Health
Service (NHS) trusts and even by companies; past and present survey data found that more than
two-thirds of respondents were conducting privacy impact assessments.

The UK was the first country in Europe to develop and promulgate a privacy impact assessment
methodology. The Information Commissioner’s Office (ICO) published a PIA Handbook in
December 2007, followed by a revision in June 2009.

The Handbook is aimed at organizations that are developing projects which might have implications for
people’s privacy. It helps organizations assess and identify any privacy concerns (a privacy
impact assessment) and address them at an early stage, rather than leaving solutions to be bolted
on as an expensive afterthought.

The Cabinet Office accepted the value of PIA reports and stressed that they will be used and
monitored in all departments as a means of protecting personal data from July 2008 onwards.
PIAs have thus become a “mandatory minimum measure” in the UK government and its
agencies.

The European Commission introduced its proposed Data Protection Regulation in January 2012,
Article 33 of which would make PIAs mandatory for both public and private sector
organizations throughout Europe where processing operations are likely to present specific
risks to the rights and freedoms of data subjects.

One of the most important pieces of legislation protecting our data at present is the Information
Technology Act (hereinafter the IT Act). The IT Act makes hacking and tampering with computer
source code an offence and penalizes unlawful access to data. However, it does not prescribe any
minimum security standards with which entities having control of data must comply, except in
the case of sensitive personal information.

Under IT Act, 2000

Section 43
This section provides protection against unauthorized access to a computer system by imposing a
heavy penalty of up to one crore rupees. Unauthorized downloading, extraction and copying of data are
also covered under the same penalty. Clause (c) of this section imposes a penalty for the unauthorized
introduction of computer viruses or contaminants. Clause (g) provides penalties for assisting
unauthorized access.

Section 65

This section provides protection for computer source code. Anyone who knowingly or intentionally conceals,
destroys or alters it, or causes another to do so, shall suffer a penalty of imprisonment or a
fine of up to two lakh rupees. Protection has thus been provided against tampering with computer source
documents.

Section 66
Protection against hacking is provided under this section. As per this section, hacking is
defined as any act done with the intention of causing wrongful loss or damage to any person, or with the
knowledge that wrongful loss or damage will be caused to any person, whereby information residing in
a computer resource is destroyed, deleted or altered, or its value and utility is
diminished. This section imposes on the hacker a penalty of imprisonment for up to three years, or a fine
of up to two lakh rupees, or both.

Section 70
This section provides protection to data stored in a protected system. Protected systems are
those computers, computer systems or computer networks which the appropriate government, by
notification in the Official Gazette, has declared to be protected systems. Any access,
or attempt to secure access, to such a system in contravention of the provisions of this section
makes the person liable to punishment with imprisonment which may extend to ten years, and also
to a fine.

Section 72

This section provides protection against breach of confidentiality and privacy of data. As per
this section, any person upon whom powers have been conferred under the IT Act and allied rules to secure
access to any electronic record, book, register, correspondence, information, document or other
material, and who discloses it to any other person, shall be punished with imprisonment which may extend
to two years, or with a fine which may extend to one lakh rupees, or both.

Contents
EXECUTIVE SUMMARY ............................................................................................................................................ 5
CHAPTER-1 .............................................................................................................................................................. 9
1. INTRODUCTION ............................................................................................................................................. 9
1.1 Privacy Impact Assessment ........................................................................................................................ 15
1.2 PIA Risk Management ................................................................................................................................ 15
1.3 Methodologies .................................................................................................................................................... 16
CHAPTER-2 ............................................................................................................................................................ 18
2. Project and Technology Development Management Standards and Methodologies. ..................................... 18
2.1 Project Management Methodologies .......................................................................................................... 20
2.1.1 Project Management Body of Knowledge (PMBOK©) ............................................................................. 20
CHAPTER-3 ............................................................................................................................................................ 27
3. The Human Brain project ................................................................................................................................ 27
3.1 Data as a valuable resource for research................................................................................................................. 27
3.2 EU regulations on data protection .......................................................................................................................... 28
3.3 Individual rights versus the common good ............................................................................................................. 28
CHAPTER-4 ............................................................................................................................................................ 30
4. PRIVACY RISK MANAGEMENT ............................................................................................................. 30
4.1 The notion of privacy risk ...................................................................................................................................... 30
4.2 Level of risks: how to estimate them? .................................................................................................................... 31
CHAPTER-5 ............................................................................................................................................................ 34
5. Expression of Needs and Identification of Security Objectives (EBIOS) in the Field of privacy ................... 34
5.1 Background study: What is the context? ................................................................................................................ 34
5.2 Feared events study: What does one fear happening? ............................................................................................ 35
5.3 Risk study: What is the risk level? ......................................................................................................................... 39
5.4 Measures study: What can be done to treat risks? .................................................................................................. 41
CHAPTER-6 ............................................................................................................................................................ 48
6. Risk Management in Data Protection Regulation ........................................................................................... 48
6.1 ISO/IEC 27005:2011 Information security risk management ................................................................................ 48
6.2 NIST SP 800-39 Managing Information Security Risk .......................................................................................... 48
6.3 ISACA and COBIT ................................................................................................................................................ 52
6.4 EBIOS (Expression of Needs and Identification of Security) ................................................................................ 59
CHAPTER-7 ............................................................................................................................................................ 61
7. ISO/IEC 29100:2011 Information technology — Security techniques ........................................................... 61
CHAPTER-8 ............................................................................................................................................................ 66
8. Assessing and identifying data protection risks overview ............................................................................... 66
8.1 Completing the privacy risk register ...................................................................................................................... 68
CHAPTER-9 ............................................................................................................................................................ 68
9. Data and Data protection laws......................................................................................................................... 68
9.1 Data Classification ................................................................................................................................................. 76
9.2 Why Data Protection? ............................................................................................................................................ 83
9.3 Why is data protection needed? .............................................................................................................................. 83
9.4 So how does data protection work? ....................................................................................................................... 84
9.5 How many countries in the world have data protection laws? ............................................................................... 85
9.6 Are data protection laws the same in all countries that have them? ....................................................................... 86
9.7 What is considered as personal information under data protection laws? .............................................................. 87
9.8 Data protection policies to safeguard Information. ................................................................................................ 87
CHAPTER-10 .......................................................................................................................................................... 88
10 Data Protection Law in India. .............................................................................................................................. 88
10.1 Data protection under foreign law ........................................................................................................................ 93
CHAPTER-11 .......................................................................................................................................................... 95
11. The EU General Data Protection Regulation .............................................................................................. 95
11.1 GDPR Timeline of Events ................................................................................................................................ 98
11.2 SOME FAQ FOR GDPR.................................................................................................................................... 100
Survey and Findings ............................................................................................................................................... 102
CONCLUSION ...................................................................................................................................................... 120

CHAPTER-1
1. INTRODUCTION
A. What is a Privacy Impact Assessment (PIA)?

A PIA is a methodology used to assess privacy risks to living individuals arising from the processing of
their personal data, including the collection, use and disclosure of information. The reasons which
may prompt an organization to undertake a PIA are as follows:
 Risk and Commercial Strategy Management
 Cost effectiveness
 Appropriate solutions
 Business Credibility
 Ascertaining legal compliance.

Projects with privacy implications require a full-scale privacy impact assessment (PIA) process.

A small-scale PIA or a large-scale PIA may be conducted depending on the size of the project.
Because projects may be essentially different, a methodology should be devised that fits the
specific requirements of the project, is explicit and as resource-intensive as is appropriate in the
circumstances.

B. Compliance checking and data protection audit

A PIA needs to be distinguished from a data protection audit. Normally, a PIA should not be
conducted on a project that has already been implemented. A PIA is best completed at a stage
where it can genuinely contribute to the development of a project. Carrying out a PIA on an
already existing project runs the risk of raising unrealistic expectations amongst stakeholders
during consultation, unless there is a genuine opportunity to alter the design and implementation
of a project.

A data protection audit is more appropriate for existing projects. An audit is valuable in that it
either confirms that data protection principles are being complied with, or highlights problems
that need to be addressed. A PIA aims to prevent problems from arising. A PIA is broader than an
audit of compliance.

PIAs have been designed as a self-assessment tool for organizations and the Data Protection
Office does not have a formal role in conducting them and/or approving any final report which is
produced. However, the office is available for all assistance required.

C. Who is required to complete a PIA?

There is no legal obligation for any organization to complete a PIA. However, this template has
been developed by the Data Protection Office as a Guide for all data controllers.

D. What should be the expectations and outcomes of an effective PIA process?

The aims of an effective PIA should be the:


 Identification of the project’s privacy implications;
 Assessment of those implications from the perspectives of all stakeholders;

 Identification and assessment of privacy-enhancing alternatives;
 Unavoidable negative impacts on privacy should be capable of justification by the
business need that requires them; and
 Documentation and publication of the outcomes.

E. Why should a PIA be conducted?

 To identify privacy risks to individuals and data protection compliance liabilities for your
organization through the PIA.
 To avoid expensive, inadequate “bolt- on” solutions.

F. Definition of ‘privacy risks’

The massive increase in the collection, storage, use and disclosure of personal data, and the
advent of intrusive technologies, are potentially harmful to individual privacy.

Privacy risks may be subdivided into two categories:-


 Risks to the individual’s privacy rights through loss, damage, misuse or abuse of their personal information.
 Risks to the organization as a result of:
 a failure to meet public expectations on the protection of personal information;
 retrospective imposition of regulatory conditions;
 low acceptance rates or poor participation in the scheme from the public and partner organizations;
 the costs of redesigning the system or retro-fitting solutions;
 collapse of a project or completed system;
 withdrawal of support from key supporting organizations due to perceived privacy harms; and/or
 failure to comply with the law, leading to:
» enforcement action from the regulator; or
» compensation claims from individuals.

G. Recognizing privacy risks


Any collection, use or disclosure of personal information is a potential risk to personal privacy.
Sometimes those risks are not obvious and, as a result, they are easily overlooked or not
adequately addressed. If the project design reflects a good understanding of privacy issues, it is
possible that the participants in the consultation processes may agree to the design. However,
because of project complexities and the diversity of interests among stakeholders, the
consultation processes may sometimes create the need for parts of the project and its design to be
re-considered. This section provides some guidance on the type of risks, impacts and
vulnerabilities you might look for when designing a project or conducting a PIA.

H. Broad personal information issues, including:

 The nature of the personal information. This could include “sensitive personal data” as
defined by the Data Protection Act 2004, but also personal financial information, family
structures, personal email addresses, information about persons considered “at risk”, travel plans
etc.

 The quality of personal information. This includes characteristics of the information
itself, such as accuracy, relevance and adequacy. The more personal information moves from its
original context, the greater the likelihood it can be misinterpreted. The quality of information
also raises questions about data matching and mining, whether you are matching like with like
and the number of false matches which may be produced.

 The meaning behind terms used in personal information. This takes into account that
terms used can be context or sector specific. Variations in meaning of apparently similar terms
may give rise to misunderstandings or error which in turn could result in harm or disadvantage to
the individual. This area would also include examining metadata attached to personal
information.

 The retention, deletion and destruction of personal information. How long do your
business needs require retention of information? Are there legal obligations to dispose of or retain
data? Do you need to keep information to counter legal claims or for audit and inspection
purposes? Can your organization make better use of ‘soft deletion’, where, after the original
purpose has been met, access to the information is much more tightly controlled until the
organization can permanently delete it? (A minimal sketch of such a soft-deleted record appears after this list.)

 The protection of personal information. This includes the effectiveness of privacy protections.
An effective privacy protection regime requires all of the following to be in place:

 Clear specifications of privacy protections;


 Clear prohibitions against breaches of protections;
 Clear sanctions or penalties for breaches of protections;
 Mechanisms in place to detect and report breaches; and
 Resources for investigating breaches and applying sanctions.
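
To illustrate the ‘soft deletion’ idea mentioned in the list above, the following is a minimal sketch in Python (purely illustrative; the class, the data-protection-officer role and the 90-day retention period are assumptions, not recommendations from the Handbook) of a record whose access is tightly restricted once its original purpose has been met, until it becomes due for permanent destruction:

from dataclasses import dataclass
from datetime import datetime, timedelta
from typing import Optional

@dataclass
class PersonalRecord:
    subject_id: str
    data: dict
    soft_deleted: bool = False                    # original purpose met; access restricted
    soft_deleted_at: Optional[datetime] = None
    retention_after_soft_delete: timedelta = timedelta(days=90)  # assumed period

    def soft_delete(self) -> None:
        """Mark the record as no longer needed for its original purpose."""
        self.soft_deleted = True
        self.soft_deleted_at = datetime.utcnow()

    def read(self, requester_role: str) -> dict:
        """Tightly control access once the record is soft-deleted."""
        if self.soft_deleted and requester_role != "data_protection_officer":
            raise PermissionError("Record is soft-deleted; restricted access only.")
        return self.data

    def due_for_destruction(self, now: Optional[datetime] = None) -> bool:
        """True once the retention period after soft deletion has elapsed."""
        if not self.soft_deleted:
            return False
        now = now or datetime.utcnow()
        return now - self.soft_deleted_at >= self.retention_after_soft_delete

In practice the same idea is often implemented with a deletion flag or timestamp column in the database, correspondingly tighter access rules, and scheduled permanent erasure.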

I. Issues around identification of the individual, including:

 the multiple use of different identifiers;


 the denial of anonymity, identifying individuals where it is only necessary to authenticate
rights to benefits, access and services;
 identifiers that directly disclose personal data, for example embedded date-of-birth;
 identifiers linked with authenticators, such as credit card number plus additional details,
because that creates the risk of identity fraud and in extreme cases even identity theft; and
 The use of biometric identifiers.

J. Function creep, beyond the original context of use, in relation to the use of personal
information or the use of identifiers.

K. Registration and authentication processes, including the burden such processes impose,
their intrusiveness, and the exercise of power by government over individuals.

Surveillance, whether audio, visual, by means of data, whether electronically supported or not,
and whether the observations are recorded or not. Location and tracking, whether within
geographical space or on networks, even where it is performed incidentally, and especially where

it gives rise to a record. From the perspective of privacy protection, there are considerable
privacy benefits in decentralization rather than centralization. The benefits include:

 reducing the risk of function creep;


 enabling the application of access controls;
 encouraging a focus on relevancy;
 reducing the misinterpretation of data due to a loss of context; and
 Increasing the likelihood of prompt data destruction when it is no longer required.

Where a project involves centralizing information, it is important that there is clear justification.
Further, those who want to use information in a more speculative manner (such as ‘statistical
analysis’, ‘management reporting’ and ‘data mining’) need to be challenged for greater detail,
and to show that benefits will be achievable. Once a case for centralization has been established,
it is necessary to identify, assess and balance the disadvantages.

L. Intrusions into the privacy of the person, especially the compulsory or pseudo-voluntary
(such as in employment relationships) yielding of tissue and body-fluid samples, and biometric
measurement. It is highly advisable to document the issues which are identified.

M. Persons at risk and vulnerable populations. Some people, in some circumstances, face
particularly serious risks if their personal data is disclosed. This applies especially to their
physical location, or to data that may result in disclosure of their physical location. It may also apply
to, for example, health care or financial data. Useful generic terms for people to whom this
applies are ‘persons at risk’ and ‘vulnerable populations’.

Categories of persons whose physical safety is at risk include:

 people who are under the direct threat of violence, including:

• people concealing themselves from previous criminal associates;
• victims of domestic violence;
• protected witnesses;
• people who have been the subject of threats to their safety.

 celebrities, notorieties and VIPs, including:

• politicians;
• entertainers and sportspeople;
• people ‘in the public eye’, such as lottery winners; or
• Those who publicly promote controversial views.

 people in security-sensitive roles, such as:

• national security operatives;


• undercover police;
• prison wardens;
• Staff in psychiatric institutions.

Even where physical safety is not under threat, care may still be needed in respect of ‘vulnerable
populations’, some of whom may find it difficult to exercise control over their personal data.
Examples might include younger children or adults who lack capacity to provide consent. Your
organization might also want to consider the difficulties faced by individuals who are homeless
or ex-detainees. Certain health conditions might also put individuals at risk if inappropriately
disclosed.

N. Identifying privacy solutions

Once you have identified and assessed the privacy risks your project presents, you need to
consider what action you intend to take in relation to each risk.

a) Accept the risks, impacts or liabilities;


b) Identify a way to avoid the risks (a privacy impact avoidance measure); or
c) Identify a way to mitigate the risks (a privacy impact mitigation measure).

Accepting the risks

In some instances, because of the nature of the risks, impacts or liabilities, the chances of the
risks being realized or the minimal impact they may have, it might be entirely appropriate to
simply recognize and accept the privacy risks or certain aspects of the privacy risks. However,
this must not be done simply as an alternative to taking action to address risk and must be
considered carefully as an option. If considering this option, ensure that a record of the identified
risk is made, along with the reasons for accepting the risk.
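
One simple way of keeping such a record is a privacy risk register. The sketch below is purely illustrative (the field names, the 1-5 scales and the likelihood-times-impact scoring convention are assumptions, not part of any particular PIA methodology); it captures each identified risk, the treatment chosen from the three options above, and the justification for that choice:

from dataclasses import dataclass
from enum import Enum

class Treatment(Enum):
    ACCEPT = "accept"        # risk recognized and accepted, with reasons recorded
    AVOID = "avoid"          # privacy impact avoidance measure applied
    MITIGATE = "mitigate"    # privacy impact mitigation measure applied

@dataclass
class PrivacyRiskEntry:
    risk_id: str
    description: str
    affected_parties: str    # e.g. "data subjects", "organization"
    likelihood: int          # assumed scale: 1 (rare) to 5 (almost certain)
    impact: int              # assumed scale: 1 (negligible) to 5 (severe)
    treatment: Treatment
    justification: str       # why this treatment was chosen
    measures: list           # avoidance or mitigation measures, if any

    @property
    def risk_level(self) -> int:
        # A common (assumed) scoring convention: likelihood multiplied by impact.
        return self.likelihood * self.impact

entry = PrivacyRiskEntry(
    risk_id="R-001",
    description="Personal data retained beyond its original purpose",
    affected_parties="data subjects",
    likelihood=3,
    impact=4,
    treatment=Treatment.MITIGATE,
    justification="Audit obligations require limited retention; access restricted meanwhile",
    measures=["destruction schedule", "soft deletion with restricted access"],
)
print(entry.risk_id, entry.risk_level, entry.treatment.value)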

Privacy impact avoidance measures

An avoidance measure is a means of dissipating a risk. It refers to the exclusion of technologies,


processes, data or decision criteria, in order to avoid particular privacy issues arising. Examples
include:
• Minimizing the collection of personal information to what is strictly necessary;
• non-collection of contentious data-items;
• active measures to stop or block the use of particular information in decision making (a good
example of this is ethnic monitoring forms being filled out anonymously when companies are
recruiting);
• active measures to preclude the disclosure of particular data-items, for example screening or
hiding of certain services which are being provided to the individual which might disclose other
personal information;
• Non-adoption of biometrics in order to avoid issues about invasiveness of people’s physical
selves.

Privacy impact mitigation measures

A mitigation measure is a feature that compensates for other privacy intrusive aspects of a design.
A mitigation measure may compensate partially or wholly for a negative impact. Examples
include:
 Minimization of personal data retention by not recording it;
 Destruction of personal information as soon as the transaction for which it is needed is
completed;
 Destruction schedules for personal information which are audited and enforced.

 Limits on the use of information which has been collected for a very specific purpose,
with strong legal, organizational and technical safeguards preventing its application to any
other purpose;

 Design, implementation and resourcing of a responsive complaints-handling system, backed by serious sanctions and enforcement powers.

Problems must be analyzed to devise acceptable avoidance and mitigation measures. The following suggestions are made about the process of problem analysis:

 The differing perspectives of the multiple stakeholder groups should be reflected.

 The focus of each impact and implication should be identified. For instance, what kinds of
people or organizations will experience the various impacts, and under what
circumstances?

 The justification for the feature that gives rise to the problem should be examined. For
example, is the privacy infringement proportional to, or appropriately balanced with, any
benefits gained from the infringement? And is it clear that the claimed benefits will
actually arise?

 The circumstances in which the feature needs to be applied should be questioned. Is it


appropriate for the data to be collected, used or disclosed in every instance, or can the
data handling in question be limited to particular situations in which it is demonstrably
relevant?

1.1 Privacy Impact Assessment

The use of PIA in the UK dates back to at least December 2007, when the ICO published the first
PIA Handbook in Europe. The Handbook was based on research conducted by an
internationally distinguished team led by Loughborough University. Among the PIA analysts in
this team were Professor Colin Bennett (University of Victoria, B.C., Canada) and privacy and
surveillance expert Roger Clarke, a consultant and professor in Australia. The research team
studied and produced reports on PIA practice and methodology in Australia, Canada, Hong
Kong, New Zealand and the United States in order to identify best practices that could inform
the ICO Handbook, the principal author of which was Clarke. The ICO issued a second
edition of the Handbook in June 2009. It is now working on a third edition, and the present
study is intended to provide some research upon which the new version can draw. We understand
that the new PIA guidance will be somewhat shorter and more streamlined than its predecessors.
Based on the present study as well as previous research, especially in the context of the EC-funded
PIAF project and our contacts with industry, we concur that a more streamlined guide
is warranted.

1.2 PIA Risk Management

The ICO saw PIA as an element in risk management, as the Handbook makes clear. It says that
“organizations may find it appropriate to plan a PIA within the context of risk
management”. It also says that the government “will check that they have been carried out as an
integral part of the risk management assessment”.

Better integration of PIA with risk management practices has also been an issue with other data
protection authorities, as the following paragraphs show, and for quite some time, beginning with
one of the earliest papers on PIA.

1.3 Methodologies

The research on which this report is based uses various approaches and methodologies.

We conducted a literature review of the various project and risk management standards and
methodologies analyzed in this report. An Internet search located 26 UK privacy impact
assessment reports; its purpose was to determine which project and risk management
standards and methodologies are being used in the UK and whether the recipient organizations have
conducted any PIAs.

CHAPTER-2
2. Project and Technology Development Management Standards and
Methodologies.
This chapter describes popular project management standards and methodologies in use
in the UK and abroad. For each methodology, we provide an overview followed by a
table in which we “interrogate” the methodology using a set of questions derived from
the PIA Handbook touch points. The following table shows how we have converted the
touch points into a set of questions.

Touch points extracted from the ICO PIA Handbook, with the corresponding questions for project management (PM) methodologies:

1. Touch point: PIAs must comply with (more than just data protection) legislation. Private sector organizations will also have to consider industry standards, codes of conduct and privacy policy statements.
Question: Does the PM methodology include provisions about compliance with legislation and any relevant industry standards, code of conduct, internal policy, etc.?

2. Touch point: PIA is a process.
Question: Is the PM methodology regarded as a process, or is it simply about producing a report?

3. Touch point: A PIA could consider: (1) privacy of personal information; (2) privacy of the person; (3) privacy of personal behavior; and (4) privacy of personal communications.
Question: Does the PM methodology address only information privacy protection, or does it address other types of privacy as well?

4. Touch point: PIA should be undertaken when it is possible to influence the development of a project.
Question: Does the PM methodology say that it should be undertaken when it is still possible to influence the development of the project?

5. Touch point: Responsibility for the PIA should rest at the senior executive level.
Question: Does the PM methodology place responsibility for its use at the senior executive level?

6. Touch point: The organization should develop a plan for the PIA and its terms of reference. It should develop a consultation strategy appropriate to the scale, scope and nature of the project.
Question: Does the PM methodology call for developing a plan and terms of reference? Does it include a consultation strategy appropriate to the scale, scope and nature of the project?

7. Touch point: A PIA should include an environmental scan (information about prior projects of a similar nature, drawn from a variety of sources).
Question: Does the PM methodology call for the conduct of an environmental scan (information about prior projects of a similar nature, drawn from a variety of sources)?

8. Touch point: The organization should determine whether a small-scale or full-scale PIA is needed.
Question: Does the PM methodology include provisions for scaling its application according to the scope of the project?

9. Touch point: A PIA should seek out and engage stakeholders internal and external to the organization. The assessor needs to make sure that there is sufficient diversity among those groups or individuals being consulted, to ensure that all relevant perspectives are represented and all relevant information is gathered.
Question: Does the PM methodology call for consulting all relevant stakeholders, internal and external to the organization, in order to identify and assess the project’s impacts from their perspectives?

10. Touch point: The organization should put in place measures to achieve clear communications between senior management, the project team and representatives of, and advocates for, the various stakeholders.
Question: Does the PM methodology include provisions for putting in place measures to achieve clear communications between senior management, the project team and stakeholders?

11. Touch point: The PIA should identify risks to individuals and to the organization.
Question: Does the PM methodology call for identification of risks to individuals and to the organization?

12. Touch point: The organization should identify less privacy-invasive alternatives. It should identify ways of avoiding or minimizing the impacts on privacy or, where negative impacts are unavoidable, clarify the business need that justifies them.
Question: Does the PM methodology include provisions for identifying protection measures and/or design solutions to avoid or to mitigate any negative impacts of the project or, when negative impacts are unavoidable, does it require justification of the business need for them?

13. Touch point: The organization should document the PIA process and publish a report of its outcomes.
Question: Does the PM methodology include provisions for documenting the process?

14. Touch point: A PIA report should be written with the expectation that it will be published, or at least be widely distributed. The report should be provided to the various parties involved in the consultation. If information collected during the PIA process is commercially or security sensitive, it could be redacted or placed in confidential appendices, if justifiable.
Question: Does the PM methodology include provision for making the resulting document public (whether redacted or otherwise)?

15. Touch point: The PIA should be revisited in each new project phase.
Question: Does the PM methodology call for a review if there are any changes in the project?

16. Touch point: A PIA should be subject to third-party review and audit, to ensure the organization implements the PIA recommendations or, if not all, that it has provided adequate justification for not implementing some recommendations.
Question: Does the PM methodology include provisions for an audit to ensure that the organization implements all recommendations or, if not all, that it has provided adequate justification for not implementing some recommendations?

By developing a set of questions based on the PIA Handbook touch points to interrogate the
project management methodology, we can determine whether there are sufficient
commonalities between the PIA process and the project management process so that a PIA
could be conducted in tandem with the project management process without disrupting it.
Further, if there are a sufficient number of commonalities, then we assume that integration of
PIA into the project management process will be possible without much difficulty. If there are
an adequate number of touch points, we assume that it will be easier to convince project
managers that they should take account of – or integrate – PIA in their project management
process.

Even if there are not so many touch points, there is still a possibility of integrating PIA in the
project management process through one or more “open doors” – i.e., points in the project
management process where or when it would be possible to conduct a PIA.
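
As a purely illustrative sketch of this interrogation approach (the question texts below are paraphrased from the touch points, and the yes/partial/no scoring is an assumption rather than part of the ICO Handbook or of this report’s method), the questions can be treated as a checklist and a methodology scored by how many of them it addresses:

# Paraphrased touch-point questions; the scoring scheme is an assumption.
TOUCH_POINT_QUESTIONS = [
    "Includes provisions on compliance with legislation and standards?",
    "Is regarded as a process rather than just a report?",
    "Addresses types of privacy beyond information privacy?",
    "Is undertaken while the project can still be influenced?",
    "Places responsibility at the senior executive level?",
    "Calls for a plan, terms of reference and consultation strategy?",
    "Calls for an environmental scan of prior projects?",
    "Scales its application to the scope of the project?",
    "Consults internal and external stakeholders?",
    "Provides for clear communications among stakeholders?",
    "Identifies risks to individuals and the organization?",
    "Identifies measures to avoid or mitigate negative impacts?",
    "Documents the process?",
    "Provides for publishing the resulting document?",
    "Calls for review when the project changes?",
    "Provides for audit of implemented recommendations?",
]

def commonality_score(answers):
    """answers: one of 'yes', 'partial' or 'no' per question."""
    weights = {"yes": 1.0, "partial": 0.5, "no": 0.0}
    return sum(weights[a] for a in answers) / len(answers)

# A hypothetical assessment of one methodology.
answers = ["yes", "yes", "no", "no", "partial", "yes", "partial", "no",
           "yes", "yes", "yes", "partial", "yes", "no", "partial", "no"]
print(f"Commonality with the PIA process: {commonality_score(answers):.0%}")

A high score suggests a PIA could be conducted in tandem with the methodology’s existing processes; a low score points instead to relying on the “open doors” mentioned above.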

2.1 Project Management Methodologies

While project management methodologies continually evolve, and a small proportion of


organizations (4%, according to the PWC 2012 global survey of companies) use an in-house
developed methodology, there are a few dominant (and emerging in dominance) project
management approaches, which we describe here.

2.1.1 Project Management Body of Knowledge (PMBOK©)

With its origins as a white paper, and later expanded as the PMI (Project Management Institute)
Project Management Body of Knowledge in the PMI-published PM Network periodical in 1987,
this standard was approved as an ANSI (American National Standards Institute) standard in
1999. On a global basis, 41 per cent of organizations responding to a survey by
PriceWaterhouseCoopers report that PMBOK is the dominant project management
methodology used globally for managing all types of projects. As an indicator of the broad
scope of adoption, PMI reports that more than 650,000 people in 185 countries are members of
PMI and credential holders in one of the areas related to PMBOK.

This standard encompasses a broad range of principles, process groups and knowledge areas for
project management. The processes and knowledge developed and described under this standard
have been written about and amended over several iterations by PMI volunteers, who have
brought expertise from their work in the project management profession. The PMBOK© Guide
acknowledges as well the “plan-do-check-act” cycle, as originally defined by Shewhart in the
1930s and further modified by Deming in the 1950s, as an underlying concept for the
interaction amongst these processes.

Figure 2.1: Plan-Do-Check-Act Cycle

The process groups (many of which are directly paralleled in ISO 21500, the development to
which PMI contributed) include those described below.

 initiating processes, which are associated with the initial definition or authorization of
projects or project phases,
 planning processes, which aim to define and/or refine goals and objectives and plan
actions needed to achieve them,
 executing processes, where people and resources are brought together to complete the
work that has been planned,
 monitoring and controlling processes, which are focused upon measuring and
checking progress against the developed plan, and
 Closing processes that end the project or project phase in an orderly fashion, with a
focus upon acceptance of the work performed.

Nine knowledge areas of PMBOK are required for project managers and applied (to a greater or
lesser degree) across the five process groups described above. The knowledge areas
defined and described in the standard include:

Project Integration Management. This knowledge area focuses upon the integration
of processes amongst the project management process groups. Within this knowledge area
are described the development of the project charter, preliminary project scope and the
overall project management plan.

Project Scope Management. This knowledge area includes processes that aim to define
the work of the project and ensure it encompasses all (but only) the work required to complete
the project, as well as to control the scope over the course of the project through an integrated
change control process. The scope of work is defined through a work breakdown structure
(WBS) that deconstructs the work and identifies deliverables

Project Time Management. This knowledge area comprises processes aimed at developing and
managing the overall project schedule, including activity definition and sequencing, estimating
resource and activity duration, and analysis required to develop a schedule from these inputs.

Project Cost Management. This knowledge area includes those processes that support planning,
estimating and controlling project costs. The over-arching aim served by these processes is to
develop the project within its budget. This knowledge area includes concepts of life-cycle
costing, along with value engineering techniques to improve decision-making within the project’s
life in order to optimize quality and performance.

Project Quality Management. This knowledge area includes those processes that provide
for the implementation of quality policies, objectives and responsibilities, implementing the
quality system utilized by the organization, and specifically organizes this through quality
planning, quality assurance and quality control activities. The standard describes and defines
approaches to implement various quality standards and to monitor results to ensure they meet the
quality standards. It provides for continuous improvement through the application of a cyclical
"plan-do- check-act" cycle or other quality improvement initiatives (e.g., TQM, Six Sigma).

Project Human Resource Management. This knowledge area includes processes often referred
to as “soft skills”. The processes include those aimed at organizing and managing the project
team, from human resource planning, defining roles and responsibilities, and staff management
planning to acquiring, developing and managing the project team. The processes include
quantitative planning efforts as well as guidance for negotiating for resources, team building,
conducting performance appraisals and other soft management skills.

Project Communications Management. This knowledge area comprises processes to link


people and information within the project in order to ensure success of the project. Of the
various principles and processes included in this knowledge area, managing stakeholders is of
particular interest. The standard includes discussion of positive and negative stakeholders to
highlight the need to understand the perspectives of each, though the general focus of the
processes is upon the users whose inputs are directly sought to identify issues and initiate change
requests.
Figure 2.2: Process Groups and Knowledge Areas. (The figure is a matrix mapping the five process groups, namely Initiating, Planning, Executing, Monitoring & Controlling and Closing, against the nine knowledge areas: Integration, Scope, Time, Cost, Quality, Human Resource, Communication, Risk and Procurement.)

Project Risk Management. The processes included in the knowledge area are those connected
to planning for, identification of, responding to, monitoring and controlling risk within a project.
Risks are qualitatively and quantitatively analyzed, and risk probabilities and impacts defined. A
risk breakdown structure (RBS) is defined as an output of these processes. Given the uncertain
nature of risk, numerous strategies for identifying and controlling risks are described.
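
As a minimal illustration of the concepts in this knowledge area (the categories, scales and example values below are assumptions for illustration, not content taken from the PMBOK© Guide), a risk breakdown structure can be modeled as a simple hierarchy of risk sources, and each identified risk scored qualitatively as probability times impact:

# Illustrative sketch: a simple risk breakdown structure (RBS) grouping risk
# sources into categories, plus a qualitative exposure score. All values assumed.
risk_breakdown_structure = {
    "Technical": ["requirements volatility", "integration complexity"],
    "External": ["regulatory change (e.g. data protection law)", "supplier failure"],
    "Organizational": ["resource availability", "funding constraints"],
    "Project management": ["estimation error", "poor communication"],
    "Privacy and data protection": ["excessive data collection",
                                    "unauthorized disclosure of personal data"],
}

def exposure(probability: int, impact: int) -> int:
    """Qualitative exposure on assumed 1-5 scales: probability multiplied by impact."""
    return probability * impact

# e.g. an unauthorized-disclosure risk rated probability 3, impact 4 scores 12 of 25.
print(exposure(3, 4))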

Project Procurement Management. This knowledge area includes the processes for acquiring
or purchasing the products or services needed from sellers outside the project team, and
includes activities for planning purchases and acquisitions and contracting, selecting sellers,
performing contract administration and ultimately closing out contracts.

The methodology provides detailed, structured approaches to address each of the process areas
within the context of each knowledge area, detailing steps to be completed and documents to be
produced. In addition to the PMBOK© Guide, specific separate practice standards are provided
for specific tools, techniques or processes identified in the PMBOK© Guide, including those for

Project Risk Management, Earned Value Management, Project Configuration Management,
Work Breakdown Structures, Scheduling, and Project Estimating. In addition,
foundational standards are provided for construction projects and government-based projects as
extensions of PMBOK©.

Of the nine knowledge areas, several should be particularly noted as they may apply to the
integration of PIA:

Project Integration Management. As this knowledge area focuses upon the integration of
processes, and privacy impact assessments may be viewed as looking across the entirety of a
project, introduction of privacy and data protection goals may be determined to be relevant
within the project charter and/or scope.

Project Scope Management. Specific goals for privacy and the conduct of a privacy impact
assessment (or a cyclical implementation of privacy impact assessments over the course of
multiple project phases) could be introduced in the scope of the project as developed and
managed in this knowledge area.

Project Communications Management. Specific processes for engaging stakeholders in the


project as it relates to privacy impact assessment goals should be addressed through the
communication management knowledge area.

Project Risk Management. Privacy and data protection related risks are assessed via the PIA.
This knowledge area would be appropriate for introducing and defining the tools and techniques
associated with project risk management.

The documents which are produced by the project management professional, and are the focus of
the PMBOK© Guide, are the Project Charter (formally authorizing the project), the Project
Scope Statement (stating the work to be done and deliverables expected), and the Project
Management Plan (indicating how the work will be done).

The PMP accreditation associated with PMBOK© is the most widely held certification for
project managers on a global basis. The certification is issued by the Project Management
Institute (PMI), which also publishes the related standards as A Guide to the Project Management
Body of Knowledge (PMBOK© Guide), currently in its 6th edition (2017).

Questions for the project management methodology (based on the touch points), with evidence from the PMBOK© methodology:

1. Question: Does the PM methodology include provisions about compliance with legislation and any relevant industry standards, code of conduct, internal policy, etc.?
Evidence: The PMBOK© Guide does not specifically provide for processes to assure compliance with regulatory or other issues, but it does identify the need to incorporate such provisions in the process of developing the project charter as a determinant of project success.

2. Question: Is the PM methodology regarded as a process, or is it simply about producing a report?
Evidence: The methodology is a process-driven approach, which is flexibly applied across all types and phases of projects.

3. Question: Does the PM methodology address only information privacy protection, or does it address other types of privacy as well?
Evidence: There is no explicit focus upon privacy.

4. Question: Does the PM methodology say that it should be undertaken when it is still possible to influence the development of the project?
Evidence: This is not addressed by the methodology.

5. Question: Does the PM methodology place responsibility for its use at the senior executive level?
Evidence: The methodology encourages the inclusion of various types of stakeholders, including executive levels of management, particularly when initiating the project and gaining authorization, as well as in scope definition and acceptance.

6. Question: Does the PM methodology call for developing a plan and terms of reference? Does it include a consultation strategy appropriate to the scale, scope and nature of the project?
Evidence: The methodology is heavily reliant upon developing a detailed plan, engaging stakeholders and ensuring effective communication across the project.

7. Question: Does the PM methodology call for the conduct of an environmental scan (information about prior projects of a similar nature, drawn from a variety of sources)?
Evidence: There is no explicit focus upon performing an environmental scan; however, as part of the risk management aspects, identification of risk would include a risk assessment and probability analysis that would include lessons learned from other projects and sources.

8. Question: Does the PM methodology include provisions for scaling its application according to the scope of the project?
Evidence: This is not addressed by the methodology.

9. Question: Does the PM methodology call for consulting all relevant stakeholders, internal and external to the organization, in order to identify and assess the project’s impacts from their perspectives?
Evidence: There is a particular focus within the context of the Project Communications Management knowledge area on managing stakeholders and managing change to the project scope within that context.

10. Question: Does the PM methodology include provisions for putting in place measures to achieve clear communications between senior management, the project team and stakeholders?
Evidence: Yes. The Project Communications Management knowledge area addresses the principles and processes appropriate for clear communications amongst these groups.

11. Question: Does the PM methodology call for identification of risks to individuals and to the organization?
Evidence: Yes, the Project Risk Management knowledge area addresses the identification of risks. Broadly, this looks at all types of risks to the project and its goals, but also at risks that may emerge from a wide range of sources (technical, environmental, etc.).

12. Question: Does the PM methodology include provisions for identifying protection measures and/or design solutions to avoid or to mitigate any negative impacts of the project or, when negative impacts are unavoidable, does it require justification of the business need for them?
Evidence: The methodology does not explicitly aim to look for negative impacts of the project, but it is expected that both positive and negative stakeholders to the project should be engaged within the processes. That is, those stakeholders who are concerned about negative impacts will be expected to identify areas of concern.

13. Question: Does the PM methodology include provisions for documenting the process?
Evidence: This methodology is heavily reliant upon developing written deliverables that define and describe the plan and the outcomes of the work performed.

14. Question: Does the PM methodology include provision for making the resulting document public (whether redacted or otherwise)?
Evidence: There is no provision for making documents public. Such standards would need to be defined at an organizational level.

15. Question: Does the PM methodology call for a review if there are any changes in the project?
Evidence: As there is no explicit call for a PIA within the methodology, there is likewise no call for a review. However, the processes recognize the cyclical nature of a project with an integrated change control process, which may include its own criteria for initiation of a review of privacy issues based upon the nature of changes.

16. Question: Does the PM methodology include provisions for an audit to ensure that the organization implements all recommendations or, if not all, that it has provided adequate justification for not implementing some recommendations?
Evidence: No, there is no provision for audit of changes prescribed by a PIA within the methodology, but it may be that the change control process should include provisions for such follow-on validation.

Conclusion and Recommendation

Privacy impact assessments have well-defined goals and can be very effectively integrated
within the PMBOK framework. The main focal point for integration should be within the
Project Risk Management knowledge area, and the PIA should be presented as an available tool
for assessment of privacy risk. In addition, privacy and data protection should be introduced,
along with regulatory and legislative factors as an environmental consideration when
developing the project charter and scope, and in the context of change control.

CHAPTER-3
3. The Human Brain project
The Human Brain Project (HBP) is a European initiative to come to a better understanding of the
human brain, and to enable advances in neuroscience, medicine and future computing
technologies. The vision of the HBP is to “gain profound insights into what makes us human,
build revolutionary computing technologies and develop new treatments for brain disorders”.

The HBP is one of two so-called flagship projects funded by the EU. It was launched in
October 2013, and it is planned to run for a 10-year period. The project has a total budget of
over 1 billion Euros, and it includes collaborators in more than 20 countries in Europe and
beyond.

The HBP is building six ICT platforms for scientific research. In this background material we
mainly talk about the Medical Informatics Platform (MIP). The MIP allows researchers to ask
questions of personal health data stored in European hospitals. The HBP would like to know how
people in Europe think about the use of their data in research.

3.1 Data as a valuable resource for research

Data is a word we use to describe information or knowledge that is represented in such a way that
it allows for storage, usage and processing. Data could be for example, your address, age, gender,
education, blood pressure, sexual orientation and so forth. In themselves, individual pieces of
data might not say a lot about a person, a group or a country. Pieced together however, one starts
to be able to make predictions on the basis of certain correlations between individual data points
or data sets. Such correlations could be used to gain knowledge about the risk of disease in
groups of individuals with certain behavior. The more data one has to begin with, the more
powerful one’s predictions will be.

Researchers are interested in health data, because it provides their research with a lot of power
for prediction and pattern recognition. With more data, researchers hope to be able to gain new
insights into disease, and they hope that such insights will contribute to better healthcare
practices. The access and use of a person’s data is regulated via national law, and EU law and
guidelines. In the following section, we will explain EU regulations and opinions on data
protection.

3.2 EU regulations on data protection

In the process of creating, storing and using data there are three different roles: the data subject, the data
controller and the data processor. The data subject is the individual about whom the
data is collected. The data controller is the party that stores and controls the data; the data
controller is legally responsible for any breaches of data security or harms coming from use of
the data it controls. The data processor is the party that may receive and use data from the data
controller (European Parliament, 1995; Stationery Office, 1998). It is worth noting that EU data
protection regulation is presently undergoing change.

The relevant EU regulations on data protection ensure that no person or institution is allowed
to hold or process your personal data unless you have given permission for it, or it is required
by other laws (European Parliament, 1995, 2015). However, this protection does not apply when
personal data is considered to have been anonymized. Anonymization is considered successful
when it would take more effort and resources to re-identify individual data subjects than can be
reasonably expected of an attacker.

For example, if re-identifying a data subject from a dataset only requires a normal home
computer, and some simple software, data cannot be considered anonymized. However, if it
would require an office full of scientists, and one of the fastest supercomputers in the world to
perform complex calculations for several weeks, the amount of effort is probably beyond what
can be expected from an attacker; especially when the dataset contains relatively insensitive data.
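
To make the notion of re-identification effort more concrete, the short Python sketch below counts
how many records share each combination of quasi-identifiers (a simple k-anonymity check). It is
purely illustrative: the function, the column names and the sample records are invented for this
example and are not drawn from the HBP, the MIP or EU law. A smallest group size of 1 means that
someone is unique in the dataset and could be re-identified with trivial effort, so such data should
not be treated as anonymized.

from collections import Counter

def smallest_group_size(records, quasi_identifiers):
    # Count how many records share each combination of quasi-identifier values.
    groups = Counter(tuple(record[column] for column in quasi_identifiers)
                     for record in records)
    # The smallest group size is the "k" of k-anonymity: k = 1 means someone is unique.
    return min(groups.values())

# Hypothetical records, invented for this example.
records = [
    {"age": 34, "postcode": "75011", "gender": "F", "diagnosis": "asthma"},
    {"age": 34, "postcode": "75011", "gender": "F", "diagnosis": "diabetes"},
    {"age": 51, "postcode": "13008", "gender": "M", "diagnosis": "asthma"},
]

k = smallest_group_size(records, ["age", "postcode", "gender"])
print(k)  # 1: the third person is unique on these attributes, so re-identification is easy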

For research purposes, it is worth noting that researchers also need to have their research
approved by local research ethics committees before they can carry out their research. In addition
to legal protections of data, there are also moral grounds for protecting health data used in
research. In the next section we go through the key arguments.

3.3 Individual rights versus the common good

Data is being created around us all the time, and the variety of data that exists about most
individuals is extensive. This data is not limited to the weight, diagnoses and age recorded in a
clinical record; a lot of data about you exists in other places as well, for example all your
emails, the people you have called in the last year, where you went to school, who your employer
is and everything that relates to you on social media. Because some of this data can be very
sensitive and private, the protection of this data, as just discussed, is considered part of a
person’s right to privacy. This type of privacy is called data privacy.

In this section, we introduce general ethical issues related to research that uses personal data.
The first section develops the argument from the starting point of individual rights, while the
second section starts from considerations of societal benefits.

a) The value of privacy

Apart from considering privacy important in itself, a common perspective is that
privacy is important because it protects data subjects from potentially negative consequences of
other people having and using their data. An important aspect of such negative consequences is
unlawful discrimination, for example when you are not accepted for a job because you have a
high risk of developing dementia at a young age, when you are not allowed to take out insurance
because you suffer from mental health problems, or when you cannot get a loan because of your
religion (Rose, 2015; The Danish Council of Ethics, 2015).

Naturally, not all types of information about a person are equally sensitive. However, information
about a person’s health, and other data used and produced in research, are typically considered
sensitive information.

This is a type of information that can affect a person’s ability to change the course of their life, to
get a job or to form new relationships. The EU also considers data about race, ethnic origin,
political, religious or philosophical beliefs, membership of trade unions, health or sexual
orientation as sensitive information that requires special protection.

b) Autonomy and the right to self-determination

In medical research, an individual’s right to make their own life choices is secured through
informed consent. This right is considered important because it allows people to live their
lives based on their own values. For example, people might have different ideas about the types
of research they would like to support. Following this line of thinking, individuals should have
the opportunity to decide what types of research their data is used for.

Furthermore, participation in research is not always without risks, while the outcome of the
research may not necessarily benefit the participants.

The general idea of informed consent is that every time a researcher wishes to use data from
individual persons for research, the researcher has to inform the data subject of all the relevant
details of the research project and then ask for the individual’s permission to use their data for the
research. At present, this is exactly how informed consent is structured.

This means that scientists need their data subjects to sign an informed consent form for every
separate study they do, which is generally regarded as quite a hassle, limiting the amount of
available data and slowing down research.

CHAPTER-4

4. PRIVACY RISK MANAGEMENT

Risk management is used in many areas (information security, safety, finance, insurance).
This chapter applies this approach in the context of privacy. The methodology presented below
is fully compliant with international standards for risk management, and it fits naturally into
global risk management approaches.

4.1 The notion of privacy risk

In the area of privacy, the only risks to consider are those that the processing of personal data
poses to privacy. Each such risk is composed of one feared event (what do we fear?) and all the
threats that make it possible (how could this occur?).

Feared events: what has to be avoided?

 Processes: those of the processing (its features as such, in so far as they deal with
personal data) and those required by [Act-I&L] in order to inform the data subjects (Article 32),
obtain their consent (if appropriate, Article 7) and allow the exercise of the rights of opposition
(Article 38), access (Article 39), correction and deletion (Article 40);
 Personal data: those directly concerned by the processing and those concerned by the
processes required by [Act-I&L].

We wish to avoid the following situations:


 Unavailability of legal processes: they do not exist, or no longer exist or work;
 Change in processing: it deviates from what was originally planned (diversion of the
purpose, excessive or unfair collection...);
 Illegitimate access to personal data: they are known by unauthorized persons;
 Unwanted change in personal data: they are altered or changed;
 Disappearance of personal data: they are not or no longer available.

Indeed, occurrence of such events would have impacts on the privacy of data subjects, human
identity, human rights or civil liberties.

The feared event describes the situation and the potential impacts in the considered context.

Examples of feared events


 Data on the habits of employees are illegally collected and used by their superiors to
build evidence in order to fire them.
 Contact details are retrieved and used for commercial purposes (spam, targeted
advertising…).
 Identities are spoofed to perform illegal activities on behalf of data subjects, the
latter facing criminal prosecution.
 Following an unwanted modification of health data, patients are inadequately taken care
of, worsening their condition and even causing disability or death.

 Applications for social benefits disappear, thus depriving the beneficiaries of the said
benefits and forcing them to repeat their administrative formalities.

Threats: what do we have to protect against?

For a feared event to occur, there must be one or more risk sources causing it, accidentally or
deliberately. Risk sources may include:

 Persons who belong to the organization: user, computer specialist…


 Persons from outside the organization: recipient, provider, competitor, authorized third
party, government organization, surrounding human activity…
 Non-human sources: computer virus, natural disaster, flammable materials, epidemic,
rodents…

Risk sources will act, accidentally or deliberately, on the various information system
components, on which the primary assets rely. These supporting assets may include:

 Hardware: computers, communications relays, USB drives, hard drives…


 Software: operating systems, messaging, databases, business applications…
 Networks: cable, wireless, fiber optic…
 People: users, administrators, top management…
 Paper media: printouts, photocopies…
 Paper transmission channels: mail, workflow…

The action of the risk sources on supporting assets may happen through different threats:

 Function creep: supporting assets are diverted from their intended context of use without
being altered or damaged;
 Espionage: supporting assets are observed without being damaged;
 Exceeded limits of operation: supporting assets are overloaded, over-exploited or used
under conditions not permitting them to function properly;
 Damage: supporting assets are partially or completely damaged;
 Changes: supporting assets are transformed;
 Property losses: supporting assets are lost, stolen, sold or given away, so it is no longer
possible to exercise property rights.

Examples of threats

 A malicious attacker injects unexpected queries into a website’s input form.


 A competitor, visiting incognito, steals a portable hard drive.
 A staff member removes tables from a database by mistake.
 Water damage destroys the computer servers and telecommunications equipment.

4.2 Level of risks: how to estimate them?

A risk is a scenario that describes how risk sources might exploit the vulnerabilities of supporting
assets, causing an incident affecting primary assets and, in turn, impacts on privacy. The risk level
is estimated in terms of severity and likelihood. Severity represents the magnitude of a risk.

It essentially depends on the level of identification of the personal data and on the level of
consequences of the potential impacts. Likelihood represents the feasibility of a risk occurring. It
essentially depends on the level of vulnerability of the supporting assets faced with the level of
capabilities of the risk sources to exploit them. The following figure synthesizes the
above-mentioned notions:

The privacy risk management approach

Using a risk management method is the safest way to ensure the objectivity and relevance of the
choices to be made when setting up a processing operation involving personal data.

To assess the risks, feared events first have to be identified and estimated in terms of severity.
Then, for those whose severity is high, the threats that could lead to the feared events have to be
identified and their likelihood estimated. The risks thus assessed can then be treated using
commensurate measures.

The approach consists in studying:
1. The context of the processing of personal data,
2. The feared events in this particular context,
3. The possible threats (if needed),
4. The risks involved (if needed),
5. The appropriate measures to treat them.

CHAPTER-5

5. Expression of Needs and Identification of Security Objectives (EBIOS) in the Field of Privacy

This chapter describes the approach to be taken in order to analyze the risks posed to privacy by
the processing of personal data. It describes how to use the [EBIOS] method in the specific
context of data protection.

EBIOS (Expression of Needs and Identification of Security Objectives) is a method for analysis,
evaluation and action on risks relating to information systems. It generates a security policy
adapted to the needs of an organization. The method was created in 1995 and is now maintained
by the ANSSI, a department of the French Prime Minister.
The five steps of the EBIOS method are:
 Circumstantial study - determining the context;
 Expression of security needs;
 Risk study;
 Identification of security goals; and
 Determination of security requirements.

EBIOS is primarily intended for governmental and commercial organizations working with the
Defense Ministry that handle confidential or secret defense classified information. It enables
well informed security actions to be undertaken. The objective is to assess and prepare for
possible future situations (in the case of a newly created information system), and identify and
respond to deficiencies (when the system is operating) in order to refine the security
arrangements.

In its first version, EBIOS was focused on the drafting of security objectives. Since 2000, the
DCSSI has taken account of developments in international standards (ISO in particular) and has
adapted EBIOS to these criteria. This may also be viewed as a way to move beyond France’s
introspective approach to information security, responding to the limitations of French methods
that are not recognized abroad and are unsuited to international markets. However, the method’s
documentation appears to be available only in French.

In 2002, international comparisons placed EBIOS among the three best methods for analyzing
ISS risks. Many organizations in the public and private sectors use the method to conduct their
own ISS risk analyses.

5.1 Background study: What is the context?

The aim at this stage is to gain a clear view of the scope under consideration by identifying all the
information useful for risk management and answering the following questions:
Which primary assets need to be protected?

 Which processing operation is concerned?


 What is its purpose (see Articles 6 and 9 of [Act-I&L])?
 Who is it intended for?
 What business process is executed by this processing operation?
 Which data subjects are affected by this processing operation?
 How will the legal processes be implemented?
 What kinds of personal data will undergo processing?
 What kinds of personal data will be used by the legal processes?

What supporting assets are used for the primary assets?

 Which kinds of hardware (computers, routers, electronic media, etc.)?


 Which kinds of software (operating systems, messaging systems, databases, business
applications, etc.)?
 What are the kinds of computer communications networks (cables, Wi-Fi, fiber optics,
etc.)?
 Who are the individuals involved?
 Which kinds of supporting paper assets (printouts, photocopies, etc.)?
 Which paper transmission channels (mail, workflow, etc.)?

What are the relevant sources of risk that might affect the specific context of the processing
operation under consideration?

 Which internal individuals are to be considered (users, administrators, developers,


policymakers, etc.)?
 Which external individuals are to be considered (customers, recipients, providers,
competitors, activists, curious persons, malicious individuals, government organizations,
surrounding human activity, etc.)?
 Which non-human sources are to be considered (damaging event, malicious software
from an unknown source, natural phenomenon, natural or health disasters, etc.)?

5.2 Feared events study: What does one fear happening?

The aim of this step is to obtain a detailed and prioritized list of all feared events that may affect
the processing operation under consideration. An example is provided in the table on page 14.

Clarifying feared events requires identifying their potential impacts. In other words, what
consequences could each feared event have on the identity and privacy of data subjects and
human rights or civil liberties if:
 The legal processes were unavailable?
 The processing operation was modified?
 An unauthorized person accessed personal data?
 Personal data were modified?
 Personal data disappeared?

These feared events are ranked by determining their severity based on the level of identification
of personal data and the prejudicial effect of these potential impacts.

First of all, the level of identification of all personal data (identified beforehand) must be
assessed. In other words, how easy is it to identify data subjects?

a) Negligible: Identifying an individual using their personal data appears to be virtually
impossible (e.g. searching throughout the French population using only an individual's first
name).

b) Limited: Identifying an individual using their personal data appears to be difficult but is
possible in certain cases (e.g. searching throughout the French population using an individual's
full name).

c) Significant: Identifying an individual using their personal data appears to be relatively
easy (e.g. searching throughout the French population using an individual's full name and date of
birth).

d) Maximum: Identifying an individual using their personal data appears to be extremely
easy (e.g. searching throughout the French population using an individual's full name, date of
birth and mailing address).

Next, the prejudicial effect of each feared event should be estimated. In other words, how much
damage would be caused by all the potential impacts?

a) Negligible: Data subjects either will not be affected or may encounter a few
inconveniences, which they will overcome without any problem (time spent reentering
information, annoyances, irritations, etc.).

b) Limited: Data subjects may encounter significant inconveniences, which they will be able
to overcome despite a few difficulties (extra costs, denial of access to business services, fear, lack
of understanding, stress, minor physical ailments, etc.).

c) Significant: Data subjects may encounter significant consequences, which they should be
able to overcome albeit with serious difficulties (misappropriation of funds, blacklisting by
banks, property damage, loss of employment, subpoena, worsening of state of health, etc.).

d) Maximum: Data subjects may encounter significant, or even irreversible, consequences,
which they may not overcome (financial distress such as substantial debt or inability to work,
long-term psychological or physical ailments, death, etc.).

The value of the level that best matches the potential impacts identified is then selected. Any
existing or planned measures that make these potential impacts less harmful should be listed as
justification as shown in the table.

Finally, the severity is determined by adding the value obtained for the level of identification of
the personal data to the value obtained for the prejudicial effect of the potential impacts, and
locating the sum in the table below:

Option: The severity level thus obtained may be raised or lowered by including additional
factors. For example, a large number of data subjects (which can open the door to a massive
damaging event) may raise the level of severity by one. A large number of interconnections
(especially with foreign sites) or recipients (which facilitates the correlation between originally
separated personal data) might also be considered as an aggravating factor. Conversely, a very
small number of data subjects or very few or no interconnections or recipients might lower the
severity level by one.
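
As a minimal sketch of this arithmetic, the Python fragment below adds the two values and maps
the sum back onto the four-level scale. Because the lookup table itself is not reproduced in this
report, the numeric cut-offs used here are assumptions chosen only for illustration; the optional
adjustment argument stands for the aggravating or mitigating factors described in the option above.

LEVELS = {1: "negligible", 2: "limited", 3: "significant", 4: "maximum"}

def severity(identification_level, prejudicial_effect, adjustment=0):
    # Severity = level of identification of personal data + prejudicial effect of impacts.
    total = identification_level + prejudicial_effect
    # Illustrative mapping of the sum (2-8) back onto the four-level scale.
    if total <= 3:
        level = 1
    elif total == 4:
        level = 2
    elif total <= 6:
        level = 3
    else:
        level = 4
    # Optional +1/-1 for aggravating or mitigating factors (number of data
    # subjects, interconnections, recipients), clamped to the 1-4 scale.
    level = min(4, max(1, level + adjustment))
    return level, LEVELS[level]

print(severity(2, 3))       # (3, 'significant')
print(severity(2, 3, +1))   # (4, 'maximum') with an aggravating factor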

The aim of this step is to obtain a detailed, prioritized list of all threats that may allow feared
events to occur. It is possible to leave out threats relating to feared events of negligible (1) or
limited (2) severity. An example is provided in the table.

Since a threat is a possible action by risk sources on supporting assets, the supporting assets
should be identified and estimated for each threat.

First, the vulnerabilities of the supporting assets are estimated for each threat. In other words, to
what degree can the properties of supporting assets be exploited in order to carry out a threat?
 Negligible: Carrying out a threat by exploiting the properties of supporting assets does
not appear possible (e.g. theft of paper documents stored in a room protected by a badge reader
and access code).

 Limited: Carrying out a threat by exploiting the properties of supporting assets appears to
be difficult (e.g. theft of paper documents stored in a room protected by a badge reader).

 Significant: Carrying out a threat by exploiting the properties of supporting assets
appears to be possible (e.g. theft of paper documents stored in offices that cannot be accessed
without first checking in at reception).

 Maximum: Carrying out a threat by exploiting the properties of supporting assets appears
to be extremely easy (e.g. theft of paper documents stored in a lobby).

The value of the level that best matches the supporting asset vulnerabilities identified is then
selected. Any existing or planned measures that reduce the vulnerabilities of supporting assets
should be listed as justification as shown in the table.

Finally, the likelihood of the threats is determined by adding the values obtained for the
vulnerabilities of the supporting assets and the capabilities of the risk sources, and locating the
sum in the table below:

Table 3 – Determining the likelihood of each threat

Option: The likelihood thus obtained may be raised or lowered by including additional factors.
For example, access to the Internet, exchanges of data with foreign sites, interconnections with
other systems and a high degree of system heterogeneity or variability may raise the likelihood
by one level. Conversely, a homogeneous, stable system that has no interconnections and is
closed off from the Internet may lower the likelihood by one level.

Tool

The result of this step can be added to the feared events table from the previous step:

5.3 Risk study: What is the risk level?

This step may be skipped if the severity level is negligible (1) or limited (2).

The aim of this step is to obtain a risk map in order to determine the order in which the risks
should be treated.

Since a risk consists of a feared event and all the threats that may allow it to occur:
 its severity equals that of the feared event;
 its likelihood equals the highest likelihood value of the threats associated with the feared event.

The risks can then be mapped:

Option: Objectives may be set based on where risks are located on the map (in order of priority):

1. Risks with a high severity and likelihood absolutely must be avoided or reduced by
implementing security measures that reduce both their severity and their likelihood. Ideally, care
should even be taken to ensure that these risks are treated by independent measures of prevention
(actions taken prior to a damaging event), protection (actions taken during a damaging event) and
recovery (actions taken after a damaging event).

2. Risks with a high severity but a low likelihood must be avoided or reduced by
implementing security measures that reduce either their severity or their likelihood. Emphasis
must be placed on preventive measures.

3. Risks with a low severity but a high likelihood must be reduced by implementing
security measures that reduce their likelihood. Emphasis must be placed on recovery measures.
4. Risks with a low severity and likelihood may be taken, especially since the treatment
of other risks should also lead to their treatment.
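
The following sketch pulls the pieces above together: a risk inherits the severity of its feared event
and the highest likelihood among its associated threats, and the four rules are then applied to
suggest a treatment objective. The cut-off separating "high" from "low" levels (3, i.e. "significant")
is an assumption made for illustration, as are the function names.

def risk_level(feared_event_severity, threat_likelihoods):
    # A risk's severity is that of its feared event; its likelihood is the highest
    # likelihood of the threats that may allow the feared event to occur.
    return feared_event_severity, max(threat_likelihoods)

def treatment_objective(severity_level, likelihood_level, high=3):
    high_sev, high_lik = severity_level >= high, likelihood_level >= high
    if high_sev and high_lik:
        return "1: avoid or reduce both severity and likelihood (prevention, protection, recovery)"
    if high_sev:
        return "2: avoid or reduce, with emphasis on preventive measures"
    if high_lik:
        return "3: reduce likelihood, with emphasis on recovery measures"
    return "4: may be taken; treating other risks should also cover it"

severity_level, likelihood_level = risk_level(4, [2, 3, 1])
print(severity_level, likelihood_level)                       # 4 3
print(treatment_objective(severity_level, likelihood_level))  # objective 1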

Risk is an inherent part of all human activities so, not surprisingly, assessing risk and making
decisions about how to avoid or minimise it are activities fundamental to human existence.
Whether evaluating whether to walk down an unfamiliar street at night or undergo a medical
procedure, the process of assessing and managing risk is so fundamental and engrained that
individuals do it intuitively and often without any conscious awareness. Not surprisingly, risk
management has also become a critical component of most institutional activities. Deciding what
to buy or sell, whom to hire and where to locate are just a few examples of the many decisions
that are based on an evaluation of the risks and benefits involved. As PricewaterhouseCoopers has
noted in its Practical Guide to Risk Assessment, identifying and managing risk are "increasingly
important to the success and longevity of any business".

In recent years, many countries have enacted laws and regulations requiring or encouraging more
formal risk management. Today formal, documented risk assessments and other risk management
tools are required in an expansive range of laws ranging from workplace safety to financial
reporting. Along with these legal requirements has come a professional practice of risk
management, including specialised research, international and sectoral standards, a common
vocabulary and agreed-upon principles and processes.

Data protection has long relied on risk management as a critical tool for ensuring that data are
processed appropriately and that the fundamental rights and interests of individuals are protected
effectively. Risk management has become an increasingly prominent feature of legal requirements
over the past two decades. Even beyond those legal requirements, however, organisations have
employed risk management as a logical, familiar and effective tool for protecting privacy. Risk
management does not alter rights or obligations, nor does it take away organisational
accountability. To the contrary, it is an integral part of accountability and what accountable
organisations should be doing. It has proven a valuable tool for calibrating accountability,
prioritising action, raising and informing awareness about risks, and identifying appropriate
mitigation measures. Furthermore, it is especially valuable as a step towards greater
interoperability in the face of divergent national and sectoral legal requirements, helping
organisations to manage compliance on a more global basis as they work with regulators to
identify mutually accepted approaches and values, thus driving common outcomes, despite the
lack of common legal rules.

Data protection regulators themselves are also increasingly employing risk management as a way
of targeting scarce resources where they are most needed and can have the greatest impact. Yet
risk management in data protection, whether undertaken by businesses or regulators, has often
been informal and unstructured and has failed to take advantage of many of the widely accepted
principles and tools of risk management in other areas. In addition, risk management in the field
of data protection has suffered from the absence of any widely accepted framework of harms or
negative impacts and so, at best, has been idiosyncratic and, at worst, has not taken into account
the full range of risks to individuals. As a result, despite many examples of specific applications,
risk management still does not achieve its full potential as a critical tool in data protection
practice and law.

In January 2014, the Centre for Information Policy Leadership launched a multiyear project on
the role of risk management in data protection. This project elaborates on the Centre's earlier
work on organisational accountability, particularly in seeking to develop the analytical framework
and tools needed to implement key aspects of accountability. The Centre's risk project is designed
to help "bridge the gap between high-level privacy principles on the one hand, and compliance on
the ground on the other". In its first paper, A Risk-based Approach to Privacy: Improving
Effectiveness in Practice, the project sought to understand "what is meant by privacy risks to
individuals (and society) and to create a practical framework to identify, prioritise and mitigate
such risks so that principle-based privacy obligations can be implemented appropriately and
effectively".

In this paper, the project addresses the role of risk management (the systematic process of
identifying and assessing risks, avoiding or mitigating them where possible, and then accepting
and managing the remaining risks) in data protection as implemented into legal requirements,
interpreted by regulators and put into practice by responsible organisations. This paper highlights
the growing consensus around risk management as an essential tool for effective data protection,
and addresses key considerations that affect the role of risk in data protection law and practice. A
draft of this paper was provided to the participants in the Centre's second workshop on the
Privacy Risk Framework and the Risk-based Approach to Privacy, held in Brussels on 18
November 2014. This final version reflects both the thoughtful comments of those participants
and the wide-ranging discussion at the workshop.

5.4 Measures study: What can be done to treat risks?

The aim of this step is to build a protection system that (i) allows risks to be treated in a
commensurate manner, (ii) complies with [Act-I&L] and (iii) is consistent with the data
controller's requirements (legal, financial, technical, etc.).

First of all, risk-treatment measures must be determined. This is done by linking existing or
planned measures (identified earlier in the study or in the applicable guidelines) to the risk(s) they
help to treat. Subsequent measures are added until the risk level is finally considered acceptable
(a short sketch of this iteration follows the list below).

This consists in determining additional measures that will cover:


 The primary assets: measures designed to prevent security breaches, to detect such
breaches or to restore security (informing data subjects, keeping personal data to a minimum,
anonymization of personal data, etc.);
 Then, if the above is insufficient, the potential impacts: measures designed to prevent
the consequences of risks from occurring, to identify and limit their effects or to curb them
(making backups, integrity checks, management of personal data breaches, etc.);
 Then, if the above is insufficient, the risk sources: measures designed to prevent risk
sources from acting or making a risk real, to identify and limit their impact or to cause them to
backfire (physical and logical access control, activity tracking, management of third parties,
protection against malicious code, etc.);
 Finally, if the above is insufficient, the supporting assets: measures designed to prevent
the exploitation of vulnerabilities, to detect and limit threats that do occur or to restore the normal
operating condition (reducing the vulnerabilities of software, hardware, individuals, paper
documents, etc.).
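
As announced above, the selection of measures can be pictured as a simple loop over the four
layers. The sketch is schematic only: the acceptability threshold and the re-estimation step are
placeholders supplied by the caller, not part of the methodology itself.

def select_measures(initial_risk, layered_measures, estimate_after, acceptable=2):
    # Walk the layers in the order described above, adding each layer's measures
    # until the re-estimated risk level (severity, likelihood) is acceptable.
    selected = []
    severity, likelihood = initial_risk
    for layer in ("primary_assets", "potential_impacts", "risk_sources", "supporting_assets"):
        if max(severity, likelihood) <= acceptable:
            break  # the risk is already acceptable; stop adding measures
        selected.extend(layered_measures.get(layer, []))
        severity, likelihood = estimate_after((severity, likelihood), selected)
    return selected, (severity, likelihood)

# Purely illustrative re-estimation: assume each layer of measures lowers both levels by one.
demo_estimate = lambda risk, measures: (max(1, risk[0] - 1), max(1, risk[1] - 1))
measures = {
    "primary_assets": ["keep personal data to a minimum", "anonymize personal data"],
    "potential_impacts": ["make backups", "integrity checks"],
}
print(select_measures((4, 3), measures, demo_estimate))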

Notes:
The higher the capabilities of the risk sources, the more robust the measures must be in order to
withstand them. Moreover, any incidents that may have already occurred (especially personal
data breaches), as well as any difficulties in implementing certain measures, may be used to
improve the security system.
The measures specified must be formally set out, implemented, regularly audited and continually
improved.

Finally, explanations about why residual risks may be accepted should be given. These
explanations may be based on the new severity and likelihood levels and on the benefits offered
by the processing operation identified previously (risk-benefit analysis) by applying the
following rules:

 Risks with a high severity and likelihood must not be taken.
 Risks with a high severity but a low likelihood may be taken only if it is demonstrated
that their severity cannot be reduced and if their likelihood is negligible.
 Risks with a low severity but a high likelihood may be taken only if it is demonstrated
that their likelihood cannot be reduced and if their severity is negligible.
 Risks with a low severity and likelihood may be taken.

It may be acceptable to depart from these rules, but only if it is demonstrated that the benefits of
processing greatly outweigh the risks.
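
These acceptance rules, including the exception for processing whose benefits greatly outweigh
the risks, can be summarized in a small decision helper. The numeric thresholds below (3 as
"high", 1 as "negligible") are assumptions made for illustration; the rules themselves are
qualitative.

def residual_risk_acceptable(severity, likelihood,
                             severity_irreducible=False,
                             likelihood_irreducible=False,
                             benefits_greatly_outweigh=False,
                             high=3, negligible=1):
    # Apply the four acceptance rules to a residual risk on the 1-4 scales.
    if severity >= high and likelihood >= high:
        accepted = False                                   # must not be taken
    elif severity >= high:
        accepted = severity_irreducible and likelihood <= negligible
    elif likelihood >= high:
        accepted = likelihood_irreducible and severity <= negligible
    else:
        accepted = True                                    # low severity and likelihood
    # Departing from the rules is possible only if the benefits of processing
    # greatly outweigh the risks (e.g. processing that can save human lives).
    return accepted or benefits_greatly_outweigh

print(residual_risk_acceptable(4, 1, severity_irreducible=True))   # True
print(residual_risk_acceptable(4, 4))                              # False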

Note:

Serious risks may thus be taken if their likelihood is sufficiently low. Certain risks may also be
taken if processing makes it possible to save human lives.

Tool:

The result of this step, which consists in presenting the measures selected to treat each risk and in
re-estimating the severity and likelihood of each risk, may be summarized in a table such as the
one below.

Table 5 – Selected risk-treatment measures

Description of the selected risk-treatment measures


 Keep personal data to a minimum: Personal data required for the processing are identified,
and it is demonstrated that each item of data is essential.
 Inform data subjects: Internet users are informed, via the website's order form and in the
same font as the rest of the page, of the data controller's identity; the purpose of the processing;
whether the information collected is required or optional; the consequences of failing to provide
information; the recipients of this information; their rights and the person whom they should
contact in order to enforce them; and the planned forms of transmission of this information.
 Back up personal data: Data on the server are backed up incrementally every day and
completely each week. The supporting storage assets are encrypted and stored in a fireproof
cabinet. A backup recovery test is performed once a year.

Threats that may jeopardize confidentiality

The following table presents the generic threats that can lead to:
i. Illegitimate access to personal data,
ii. Compromise of processing (if this feared event is considered).

Threats that may jeopardize integrity

The following table presents the generic threats that can lead to:
i. Changes in processing,
ii. Unwanted changes of personal data,
iii. Alterations to legal processes (if this feared event is considered).

Threats that may jeopardize availability

The following table presents the generic threats that can lead to:
i. Unavailability of legal processes,
ii. Disappearance of personal data,
iii. Unavailability of processing (if this feared event is considered).

CHAPTER-6
6. Risk Management in Data Protection Regulation
Companies are subject to hundreds of laws and regulations requiring that they identify, assess
and manage risks. Many of these requirements, for example Sarbanes-Oxley and the broad
obligations on publicly traded companies to identify and disclose material risks in their quarterly
or annual filings, are longstanding. Others, such as Basel III and the numerous national
requirements imposed on financial institutions to assess and avoid or otherwise respond to risks
to their solvency, have been enacted or strengthened more recently.

Today, whether as a result of legal requirements, professional or self-regulatory obligations, or
internal risk management policy, the types of risk assessments routinely performed within
organizations include:

 Strategic risk assessment


 Operational risk assessment
 Compliance risk assessment
 Internal audit risk assessment
 Financial statement risk assessment
 Fraud risk assessment
 Market risk assessment
 Credit risk assessment
 Customer risk assessment
 Supply chain risk assessment
 Product risk assessment
 Security risk assessment
 Information technology risk assessment

6.1 ISO/IEC 27005:2011 Information security risk management

It provides guidance on information security risk management. It provides a set of definitions
for terms such as consequence, control, event, external context and internal context. It is
especially useful for noting the differences between terms such as risk analysis, risk assessment
and risk evaluation. Risk assessment, for example, includes risk identification, analysis and
evaluation.

It provides some background on information security risk management, which, according to
the standard, should be an ongoing, iterative process that examines the external and
internal context (an environmental scan), assesses the risks, and makes recommendations on
how to treat those risks. It says stakeholders should be consulted and kept informed with
regard to decisions on how to treat risks. Employees should also be educated about the risks
and how the organization is dealing with them. In addition, the process should be
documented.

6.2 NIST SP 800-39 Managing Information Security Risk

Managing Information Security Risk (SP 800-39, 2011), published by the US National Institute of
Standards and Technology (NIST), is congruent with, and complementary to, NIST 800-30 (2012)
and guidance on other areas of organizational risk management as part of an Enterprise Risk
Management (ERM) Programme. ISO 31000 is cited. Although the writing is wholly new (albeit with
some repetition of diagrams), there are considerable overlaps with 800-30, although the latter focuses
more on risk assessment while 800-39 is more holistic and emphasizes other aspects of risk
management. Neither of these NIST publications embraces privacy or data protection as an important
element, and both almost completely ignore it. Because of this close relationship between the two
documents, many details of 800-30 that are described elsewhere in this report will not be repeated
here. However, 800-39 develops or emphasizes certain elements, explains certain items at greater
length, or introduces a number of new and partly different ones. The following are probably the most
important differences in emphasis:

 Governance and governance models


 The “risk executive (function)”
 Risk tolerance and uncertainty
 Enterprise and information security architectures
 Trust and trust models
 Organizational culture
 The relationship among key concepts
 Risk responding and monitoring following assessment
 Roles and responsibilities.

The main purpose, as in 800-30, is information security. Many types of organisational risk are
identified: “program management risk, investment risk, budgetary risk, legal liability risk, safety risk,
inventory risk, supply chain risk, and security risk”. Privacy risk is absent. “Risk” is defined for
present purposes as “information security risk from the operation and use of organizational
information systems including the processes, procedures, and structures within organizations that
influence or affect the design, development, implementation, and ongoing operation of those
systems.” The document emphasises that this must be a matter for senior executives and leaders, and
not confined to a technical “stovepipe” in the organisation, separate from general management.
Senior personnel are therefore given risk management responsibilities and are to be accountable for
their risk management decisions.

There is also an emphasis on “tools, techniques, and methodologies” to be identified for assessing,
developing courses of action, and determining the sufficiency, correctness and effectiveness of risk
responses. As in 800-30, 800-39 analyses the processes and activities at the three organisational tiers,
and adopts the fourfold frame-assess-respond-monitor risk-management process concept. A new
concept is that of risk executive (function).

This is established at the top (organisational) tier as a crucial part of the governance and decision-
making structure for risk management; it “serves as the common risk management resource for senior
leaders/executives, mission/business owners, chief information officers, chief information security
officers, information system owners, common control providers, enterprise architects, information
security architects, information systems/security engineers, information system security
managers/officers, and any other stakeholders having a vested interest in the mission/business success
of organizations”.

Risk tolerance is an important element of risk framing, and indicates “the level of risk or degree of
uncertainty that is acceptable to organizations”, constraining risk management decisions and shaping
oversight, the rigour of the risk assessment, and the responsive strategies adopted. The document
explains enterprise and information security architectures at length in its discussion of Tier 2
(mission/business process). These architectures have much to do with the organisation’s resilience to
threats. In particular, the information security architecture “incorporates security requirements from
legislation, directives, policies, regulations, standards, and guidance”. The description of enterprise
architecture includes “privacy” as one of the risk-reduction aims for the full, organisation-wide
integration of management processes, but this is not explained.

The concepts of trust and trustworthiness are deemed important factors in risk decision-making, with
“trust” defined as “a belief that an entity will behave in a predictable manner in specified
circumstances. The entity may be a person, process, object or any combination of such components.”
An Appendix sets out a number of trust models as alternative ways for organizations to obtain levels
of trust needed to form partnerships and collaborations and to share information. Trustworthiness
relates to assurance about IT products and systems in the face of threats, and susceptibility to
attack shapes the acceptability of levels of risk. Organisational culture (values, beliefs and norms
influencing behaviour and action) is a dimension that 800-39 treats at length, as it affects many if not
all the other elements of risk management. Where the cultures of two organisations differ, or where
parts of the same organisation have different cultures, these “disconnects” may be palpable in terms
of information-sharing: “An example of an internal disconnect can be observed in a hospital that
emphasizes different cultures between protecting the personal privacy of patients and the availability
of medical information to medical professionals for treatment purposes.” We may note that this is an
almost isolated mention of “privacy” in 800-39, and that the example is a classic data protection issue
that PIA would encounter in its analysis of an organisation’s processes. But 800-39 offers no guide to
the resolution of such clashes of culture and the information-sharing decisions that are implicated. A
section on the relationship among all the key risk concepts (governance, risk, tolerance, trust, culture
and investment strategy) then follows, showing their inter-relationship and the importance of the risk
executive (function)’s cognizance of this.

NIST 800-39 moves on to discuss the process for managing risk through the familiar stages of
framing, assessing, responding and monitoring, describing each with more fine-grained sub-
processes. This analysis goes beyond 800-30’s focus on risk assessment to describe more fully the
stages of responding to risk and risk monitoring, including several steps in each. There is a large
Appendix that delineates the roles and responsibilities of key organisational participants. Although
they are not here referred to as “stakeholders”, many if not all of them are elsewhere so described.
These roles include: CEO, risk executive (function) – an individual or a group, CIO, information
owner/steward, senior information security officer, authorizing official, authorizing official
designated representative, common control provider, information system owner, information system
security officer, information security architect, information system security engineer, and security
control assessor. If, through an “open door”, a PIA were to be grafted into the risk management
process covered by 800-39, these personnel and their differing but overlapping responsibilities, and
perhaps their differing cultures (and what those cultures might indicate with regard to information
processes that bear upon privacy) would have to be factored into the PIA routine.

Touch point questions and evidence from NIST 800-39

1. Does the RM methodology include provisions about compliance with legislation and any
relevant industry standards, code of conduct, internal policy, etc.?
Evidence: It mentions legislation but also includes “directives, policies, regulations, standards,
and guidance”.

2. Is the RM methodology regarded as a process or is it simply about producing a report?
Evidence: It is a process.

3. Does the RM methodology address only information privacy protection or does it address
other types of privacy as well?
Evidence: NIST 800-39 barely mentions privacy, and the example it mentions is of information
privacy. Broadening could perhaps be done within the scope of the RM, but adopting a
conception of privacy that went beyond information security would be a prerequisite for the
organisation.

4. Does the RM methodology say that it should be undertaken when it is still possible to
influence the development of the project?
Evidence: The RM exists at all stages of a project and continuously.

5. Does the RM methodology place responsibility for its use at the senior executive level?
Evidence: The RM involves responsibilities (activities) at several levels. Top-tier responsibility is
heavily discussed, but responsibilities are also set forth in many other places and among many
other roles.

6. Does the RM methodology call for developing a plan and terms of reference? Does it include
a consultation strategy appropriate to the scale, scope and nature of the project?
Evidence: Not so explicitly for this RM, but holistically. There is a security plan. There is also
internal consultation between senior executives and the “risk executive (function)” about the
risk-assessment process (e.g., framing, etc.).

7. Does the RM methodology call for conduct of an environmental scan (information about prior
projects of a similar nature, drawn from a variety of sources)?
Evidence: The Guide mentions many other NIST risk, security and other publications, as well as
ISO and other standards.

8. Does the RM methodology include provisions for scaling its application according to the
scope of the project?
Evidence: This scale does not seem to apply to RM, except perhaps in terms of risk aggregation,
which is only mentioned in 800-39 but more fully discussed in 800-30.

9. Does the RM methodology call for consulting all relevant stakeholders, internal and external
to the organisation, in order to identify and assess the project's impacts from their perspectives?
Evidence: There are frequent mentions of “stakeholders”, and the roles that are delineated
describe who they are and what their responsibilities are. Their perspectives are implicitly
recognised. Presumably they would be a PIA's “stakeholders” as well, but there are also external
ones (other organisations).

10. Does the RM methodology include provisions for putting in place measures to achieve clear
communications between senior management, the project team and stakeholders?
Evidence: Communication is not separately and explicitly discussed, but is mentioned and is
implicit in RM processes, especially regarding role-coordination.

11. Does the RM methodology call for identification of risks to individuals and to the
organisation?
Evidence: This RM is almost exclusively non-individual in focus.

12. Does the RM methodology include provisions for identifying protection measures and/or
design solutions to avoid or to mitigate any negative impacts of the project or, when negative
impacts are unavoidable, does it require justification of the business need for them?
Evidence: Alternative actions to mitigate risk are discussed as part of risk response, but not
concerning any privacy impact.

13. Does the RM methodology include provisions for documenting the process?
Evidence: Documentation is mentioned in a number of places, particularly in describing the role
of the “common control provider”.

14. Does the RM methodology include provision for making the resulting document public
(whether redacted or otherwise)?
Evidence: Nothing is mentioned about publication.

15. Does the RM methodology call for a review if there are any changes in the project?
Evidence: Continuous monitoring is important to RM.

16. Does the RM methodology include provisions for an audit to ensure that the organisation
implements all recommendations or, if not all, that it has provided adequate justification for not
implementing some recommendations?
Evidence: Review of risk management decisions is part of maintaining the RM.

Conclusions and recommendations

This is an elaborate document that, read together with NIST SP 800-30, gives a highly detailed
descriptive guide to risk management in all its stages, procedures, structures and
thought-processes. As with 800-30, but perhaps to a lesser extent, there may be “touch points”,
“open doors”, and other affordances in NIST 800-39 and in the PIA Handbook that could
be worth developing. Although hardly any mention is made of privacy, the specific focus of 800-
39 on security risk should not rule this out, especially if 800-30 is implemented in conjunction
with it and if the latter can be oriented more firmly towards PIA.

If PIA can be inserted into the security concerns of 800-39, PIA responsibility could be grafted
onto the role of “risk executive (function)” in the governance and decision-making structure for
risk management. The emphasis on organisational culture, and the example of cultural
“disconnect” between attitudes towards data-sharing, could be a doorway for helping
organisations resolve such dilemmas through the analysis that PIA would bring to these
situations. In addition, the “stakeholder” framework could be adapted to PIA purposes.

6.3 ISACA and COBIT

ISACA (originally known as Information Systems Audit and Control Association) originated
in 1969 as the EDP Auditors Association. Since those origins, the members of ISACA, who

serve in a variety of IT-related positions, are found in 190 chapters in over 180 countries, and
currently exceed 100,000 in number. ISACA established a research affiliate, the IT
Governance Institute (ITGI), in 1998. The focus of the organisation is upon developing
knowledge around information systems assurance, control, and security, as well as
governance of IT and related risk and compliance issues.

ISACA developed and administers several certifications, including

 The Certified Information Systems Auditor (CISA),


 Certified Information Security Manager (CISM),
 Certified in Risk and Information Systems Control (CRISC), and
 Certified in the Governance of Enterprise IT (CGEIT).

COBIT (Control Objectives for Information and related Technology), originally published in
1996 and now released in version 5, is a process framework for IT that encompasses
frameworks for the value of IT business investments (Val IT) and for risk management
(Risk IT). COBIT, like other IT governance frameworks, focuses upon the efficient and
effective use of IT assets, and includes the following key areas: strategic alignment,
value delivery, risk management, resource management, and performance management.

IT Governance Model

COBIT itself is a framework and does not aim to provide in-depth guidance on every aspect of
managing and governing IT. COBIT refers users of the framework to other more detailed
standards such as ITIL (for service delivery), CMM (for solution delivery), ISO 17799 (for
information security) and PMBOK or PRINCE2 (for project management). Over time, more
than 40 international IT standards, frameworks, guidelines, etc. have been consulted for the
development of COBIT, including notably those published by COSO, OGC, ISO, SEI, PMI, and
ISF. The COBIT framework ties together business requirements with IT processes and IT
resources:

 Business requirements: effectiveness, efficiency, confidentiality, integrity, availability,
compliance and reliability;
 IT processes: domains, processes and activities;
 IT resources: applications, information, infrastructure and people.

The process model for COBIT comprises four domains with 34 generic processes aimed at
“managing the IT resources to deliver information to the business according to business and
governance requirements”. The four domains are 1) plan and organise, 2) acquire and
implement, 3) deliver and support, and 4) monitor and evaluate. The COBIT framework
provides a process description, control objectives, management guidelines and a maturity
model for each distinct process within these domains.

The process description indicates which IT process is controlled, how it satisfies business
requirements, and how it is achieved and measured. The process is decomposed into a series
of specific activities. The management guidelines define which processes provide inputs, and
which outputs are created by the process. A RACI (Responsible, Accountable, Consulted, or
Informed) chart is provided for each activity in the process and goals and metrics for the
process are established.
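
As a small illustration of the RACI idea, the sketch below represents one activity's chart as a
dictionary and looks up the roles with a given involvement. The activity name and the roles are
invented for this example and are not taken from the COBIT publications.

# RACI: which roles are Responsible, Accountable, Consulted or Informed for an activity.
raci_chart = {
    "Assess and manage IT risks": {
        "Responsible": ["IT risk manager"],
        "Accountable": ["CIO"],
        "Consulted": ["Business process owners", "CISO"],
        "Informed": ["Board"],
    },
}

def roles_for(activity, involvement):
    # Return the roles with the given involvement in the activity, or an empty list.
    return raci_chart.get(activity, {}).get(involvement, [])

print(roles_for("Assess and manage IT risks", "Accountable"))  # ['CIO']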

Within the “plan and organise” domain, 10 processes are described. They concern defining a
strategic IT plan; defining the information architecture; determining the technological
direction; defining the IT processes, organisation and relationships; managing the IT
investment; communicating management aims and direction; managing IT human resources;
managing quality; assessing and managing IT risks; and managing projects. Key areas where
privacy and data protection elements may be introduced are within the following activities:

PO2.3 - Data Classification Scheme
PO2.4 - Integrity Management
PO4.8 - Responsibility for Risk, Security and Compliance
PO6.2 - Enterprise IT Risk and Control Framework
All activities associated with PO9 - Assess and Manage IT Risks
PO10.4 - Stakeholder Commitment

The domain of “acquire and implement” includes seven processes: they include identifying
automated solutions; acquiring and maintaining application software; acquiring and maintaining
technology infrastructure; enabling operation and use; procuring IT resources; managing
changes; and installing and accrediting solutions and changes. Key areas where privacy and data
protection elements may be introduced are within the following activities:

AI1.2 - Risk Analysis Report
AI2.1 - High-level Design
AI2.2 - Detailed Design
AI2.3 - Application Control and Auditability
AI3.2 - Infrastructure Resource Protection and Availability
AI6.2 - Impact Assessment, Prioritization and Authorization

The “deliver and support” domain comprises 13 processes. These processes include defining and
managing service levels; managing third-party services; managing performance and capacity;
ensuring continuous service; ensuring systems security; identifying and allocating costs;
educating and training users; managing service desk and incidents; managing the configuration;
managing problems; managing data; managing the physical environment; and managing
operations. Key areas where privacy and data protection elements may be introduced are within
the following activities:

DS2.3 - Supplier Risk Management
All activities associated with process DS5 - Ensure Systems Security
DS11.1 - Business Requirements for Data Management
DS11.2 - Storage and Retention Arrangements
DS11.6 - Security Requirements for Data Management

The fourth domain, “monitor and evaluate”, comprises four processes that include monitoring
and evaluating IT performance; monitoring and evaluating internal control; ensuring compliance
with external requirements; and providing IT governance. Key areas where privacy and data
protection elements may be introduced are within the following activities:

ME3.1 - Identification of External Legal, Regulatory and Contractual Compliance Requirements
ME3.2 - Optimization of Response to External Requirements
ME3.3 - Evaluation of Compliance with External Requirements
ME3.4 - Positive Assurance of Compliance
ME3.5 - Integrated Reporting

The COBIT framework has developed over the past decade and a half, with the most recent
update published in 2012 as COBIT 5. COBIT 5 now encompasses the additional
Risk IT and Val IT frameworks, whose relationship to COBIT is shown in Figure 3.3 below.

Of particular interest in this context is Risk IT, which was originally published in 2009, based
upon the then current version of COBIT (4.1). “The Risk IT framework is based on the
principles of enterprise risk management (ERM) standards/frameworks such as COSO ERM
and AS/NZS 4360 (soon to be complemented or replaced by ISO 31000) and provides insight
on how to apply this guidance to IT.” The process model presented under Risk IT includes
three domains: risk governance, risk evaluation, and risk response. In turn, each of these
domains includes three defined processes:

Risk governance: RG1 Establish and maintain a common risk view; RG2 Integrate with ERM;
RG3 Make risk-aware business decisions.
Risk evaluation: RE1 Collect data; RE2 Analyse risk; RE3 Maintain risk profile.
Risk response: RR1 Articulate risk; RR2 Manage risk; RR3 React to events.

The following examines COBIT and Risk IT within the context of how they relate to the key
touch points for PIA, and how and where PIA may fit into the framework as it currently
exists.

Touch point questions and evidence from COBIT

1. Does the RM methodology include provisions about compliance with legislation and any relevant industry standards, code of conduct, internal policy, etc.?
Evidence: In COBIT, process ME3.3 - Evaluation of Compliance with External Requirements provides for this type of review.

2. Is the RM methodology regarded as a process or is it simply about producing a report?
Evidence: It is a framework that supports the application of other risk management methodologies, and provides in that context a strategic approach to risk, which is cyclical in nature.

3. Does the RM methodology address only information privacy protection or does it address other types of privacy as well?
Evidence: It is expansive and addresses a broad range of risks that may be applicable. Privacy is not specifically identified, but is included within the approaches taken for ensuring compliance.

4. Does the RM methodology say that it should be undertaken when it is still possible to influence the development of the project?
Evidence: It is aimed at tying business value to IT processes, including those related to risk management. As such, risks are contemplated in the earliest stages of a project or programme and continually evaluated and responded to.

5. Does the RM methodology place responsibility for its use at the senior executive level?
Evidence: Yes. IT risk management defined within the COBIT and Risk IT frameworks is driven by a governance model that relies upon a definition of risk appetite/tolerance at strategic levels in the organisation (i.e., board or most senior level), and integrates with enterprise-level risk management.

6. Does the RM methodology call for developing a plan and terms of reference? Does it include a consultation strategy appropriate to the scale, scope and nature of the project?
Evidence: COBIT calls for strategic planning in the Plan and Organize domain, and Risk IT establishes activities to be pursued in the Risk Governance domain, each involving a broad range of stakeholders.

7. Does the RM methodology call for conduct of an environmental scan (information about prior projects of a similar nature, drawn from a variety of sources)?
Evidence: While there is no explicit call for an environmental scan, one of the four domains, “Monitor and Evaluate”, primarily focuses upon external regulatory and compliance issues, and should typically lead to such a generalised environmental scan.

8. Does the RM methodology include provisions for scaling its application according to the scope of the project?
Evidence: No.

9. Does the RM methodology call for consulting all relevant stakeholders, internal and external to the organisation, in order to identify and assess the project’s impacts from their perspectives?
Evidence: Within the “Plan and Organize” domain, the activity PO10.4 is aimed at ensuring all stakeholders are engaged and provide inputs to the definition and execution of the project.

10. Does the RM methodology include provisions for putting in place measures to achieve clear communications between senior management, the project team and stakeholders?
Evidence: Process PO6 (within the “Plan and Organize” domain), Communicate Management Aims and Direction, includes the activity PO6.5, Communication of IT Objectives and Direction. This activity ensures that all stakeholders are provided with an awareness and understanding of business and IT objectives and direction.

11. Does the RM methodology call for identification of risks to individuals and to the organisation?
Evidence: It defines the processes related to the identification of risk within the PO9 “Assess and Manage IT Risks” process and its related activities. In addition, these processes are defined in more detail in the related Risk IT framework.

12. Does the RM methodology include provisions for identifying protection measures and/or design solutions to avoid or to mitigate any negative impacts of the project or, when negative impacts are unavoidable, does it require justification of the business need for them?
Evidence: It calls for high-level and detailed design (AI2.1 and AI2.2) to be completed within the context of the organisation's technological direction and information architecture, whose standards should be defined to avoid negative impacts.

13. Does the RM methodology include provisions for documenting the process?
Evidence: Numerous artefacts are expected to be produced within the framework, enabling communication of outputs from one process as inputs to other processes, creating effective linkages of the business and IT processes within the various domains.

14. Does the RM methodology include provision for making the resulting document public (whether redacted or otherwise)?
Evidence: No. There is no discussion of communication outside of the defined stakeholders.

15. Does the RM methodology call for a review if there are any changes in the project?
Evidence: Risk management is viewed as a continuous cycle and is applied to both projects and ongoing IT services.

16. Does the RM methodology include provisions for an audit to ensure that the organisation implements all recommendations or, if not all, that it has provided adequate justification for not implementing some recommendations?
Evidence: In the “Monitor and Evaluate” domain, the activity ME3.4, Positive Assurance of Compliance, is aimed at ensuring that “any corrective actions to address any compliance gaps have been taken by the responsible process owner in a timely manner.”

Conclusions and recommendations

For the purpose of identifying a window for inclusion of PIAs within the COBIT framework,
our assessment leads us to believe that many of the key elements of PIA are implicitly
included in the framework, especially with respect to the processes in the “Monitor and
Evaluate” domain, which call for adherence to external compliance and regulatory factors.
Moreover, as a framework, where COBIT relies upon other standards such as ITIL, ISO
31000, COSO and others, inclusion of PIA within those other standards will necessarily roll
up into the processes observed by COBIT user organisations.

As an alternative approach, it may be valuable to develop a white paper or case study
identifying linkages between PIA and COBIT, working with ISACA to introduce them into
its certification programmes or simply for dissemination within its global membership.

6.4 EBIOS (Expression of Needs and Identification of Security Objectives)

This risk management method was created in 1995 by the Agence Nationale de la Sécurité
des Systèmes d'Information (ANSSI), the French Network and Information Security
Agency (FNISA), and was first released in 1997. Since then, there have been two major
updates: in 2004 and in 2010. Among other improvements, the revisions have introduced
better compatibility with international standards on information security management and risk
management, namely ISO 27001, ISO 27005, ISO Guide 73 and ISO 31000.

To date, the EBIOS method is only available in French; however, an English version is
awaiting approval and should be available soon. As such, EBIOS is mainly used in France,
where it is recommended for public administrations and for private companies that are
carrying out contracts for the Defence Ministry or that have strong needs in terms of
information security.

EBIOS is also used abroad in French-speaking countries, and ENISA has drawn on
EBIOS. The use of EBIOS is suitable for various types of structure, ranging from small and
medium-sized companies and local authorities to multi-national companies as well as
international organisations. Since 2006, EBIOS has been supported by the “Club EBIOS”,
which is a user group, independent of ANSSI, formed by public and private sector
organisations as well as individual experts.

EBIOS is a high-level method for risk management. It is mainly an information security
method; however, due to its modular and flexible approach to risk management, it is general
and powerful enough to be used in other sectors as well.

It is a kind of tool-box which comes as a set of two main documents. The 97-page Risk
Management Method gives an overview of risk management and then focuses on
information security (Chapter 1). It explains what EBIOS is and how it works (Chapter 2),
describes each of the activities that make up the approach (Chapter 3), and demonstrates the
method's coverage of the relevant international standards.

Within EBIOS, an information security risk is a combination of the following four elements:
 A threat source,
 A threat,
 A vulnerability,
 An impact.

Thus, EBIOS focuses on the identification of those four elements as well as on the proposal of
various scenarios that combine them in likely ways. Through this, EBIOS allows the risk
manager to assess and treat risks. It also provides all the necessary elements for
communication within the organisation and its partners as well as the validation of risk
treatment.
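To make the four-element structure concrete, the sketch below is a minimal illustration in Python (not part of EBIOS itself; all names and scales are hypothetical) of how a risk scenario could be recorded by combining a threat source, a threat, a vulnerability and an impact, together with simple likelihood and severity estimates:

    from dataclasses import dataclass

    @dataclass
    class RiskScenario:
        # The four EBIOS elements that make up an information security risk
        threat_source: str   # e.g. an external attacker or a careless insider
        threat: str          # the action that could be carried out
        vulnerability: str   # the weakness the threat exploits
        impact: str          # the consequence for the organisation
        likelihood: int      # illustrative scale: 1 (rare) to 4 (almost certain)
        severity: int        # illustrative scale: 1 (negligible) to 4 (critical)

        def level(self) -> int:
            # Simple illustrative combination; EBIOS itself works with qualitative scales
            return self.likelihood * self.severity

    scenario = RiskScenario(
        threat_source="External attacker",
        threat="Exfiltration of customer records",
        vulnerability="Unencrypted database backups",
        impact="Breach of personal data and regulatory penalties",
        likelihood=3,
        severity=4,
    )
    print(scenario.level())  # 12 -> a candidate for priority treatment

Such a structure is only a convenience for recording scenarios; the actual assessment and treatment decisions remain with the risk manager.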

EBIOS is an iterative method suitable for producing many types of deliverables ranging from
an organisation’s information security policy and a security strategy to a risk map or a
treatment plan. Since its last release in 2010, EBIOS has been restructured into five modules
to comply with the requirements of ISO 27001, ISO 27005 and ISO 31000. Figure 3.4 below
shows the organisation of those modules as a five-step process.

CHAPTER-7

7. ISO/IEC 29100:2011 Information technology — Security techniques — Privacy framework


This standard provides a framework for protecting personally identifiable information (PII).

It defines PII as any information that can be used to identify a PII principal (a person or a “data
subject”, to use EC terminology) or that might be linked to a PII principal, either directly or
indirectly. It defines privacy principles in terms of PII, so the standard does not address all types
of privacy. Organizations can use the framework to help define their “privacy safeguarding
requirements”. The framework describes such requirements and lists privacy principles based on
other well-known guidance documents. The standard can also support other privacy
standardization activities, such as privacy risk assessments and controls.

The standard comprises five sections, one annex and a bibliography. Section 2, on definitions,
includes an interesting note that equates a privacy impact assessment with a privacy risk
assessment, which it defines as the “overall process of risk identification, risk analysis and risk
evaluation with regard to the processing of personally identifiable information”. The definition
does not include stakeholder consultation or even finding solutions to privacy risks.

It focuses on the basic elements of a privacy framework. It discusses actors and roles,
interactions, recognizing PII, privacy safeguarding requirements, privacy policies and controls.
It identifies four types of actors involved in processing PII, namely, the PII principals,
controllers, processors and third parties. It says a PII principal does not always have to be
identified by name. These different actors (stakeholders) can interact with each other in a variety
of ways. The standard includes a table with several different scenarios showing possible
information flows between the PII stakeholders (actors). It clarifies how information can be
considered PII, e.g., if the information has an identifier that refers to a person, and it regards as
PII any information that distinguishes one person from another (e.g., biometric data). The
standard makes the point that it may be possible to identify someone even if there is no
single attribute that uniquely identifies her. A combination of two or three or more attributes
may be enough uniquely to identify the person. It provides a long list of example attributes
that can be used to identify a person.

Privacy safeguarding requirements may arise whenever an organization processes PII – e.g.,
in the collection, processing and storage of PII and in the transfer of PII to others, including
others in third countries. The standard encourages organizations to identify privacy
safeguarding requirements whenever they design an ICT system that will be used to process
PII. It says the privacy risk management process comprises five main elements:

 Establishing the context
 Assessing risks
 Treating risks
 Communications and consultation
 Monitoring and reviewing risks and controls.

At this point, the standard refers again to PIA, which it describes as that part of risk
management that focuses on ensuring compliance with legislation and assessing the privacy
implications of any new or modified programs. It says that privacy safeguarding requirements
and PIAs should be part of the organization’s risk management framework, and describes
privacy risk management as a process. That process should take into account various factors,
including legal and regulatory, contractual, business, and others. Among the other factors are
the privacy preferences of PII principals. The standard indirectly refers to “privacy by design”
(PbD) when it says that ICT system designers should take into account the likely privacy
preferences of the PII principals. Organizations should respond to the privacy safeguarding
requirements with a set of privacy controls as an outcome of their privacy risk assessment and
treatment. The controls should be embedded in the organization’s approach to PbD and in its
information security management framework. The standard also says that top management
should be involved in the establishment of the organization’s privacy policy. Distinguishing
between an internal and an external privacy policy, the standard says that the organization
should document the controls used to enforce the policy.

It provides a list of privacy principles that were abstracted from those promulgated by various
countries and international organizations. It says the privacy principles are to guide the
design, development and implementation of privacy policies and controls. ISO 29100
formulates 11 privacy principles, as follows:

Consent and choice means the PII principal must have a freely given, specific and
knowledgeable choice (opt-in) about the processing of her PII. A PII principal should be able
to withdraw her consent without penalty.

Purpose legitimacy and specification means ensuring that the purpose(s) complies with
applicable law, and communicating the purpose to the PII principal before the organisation
collects the information.

Collection limitation means limiting the collection of PII to that which has a legal basis and
to not more than necessary for the specified purpose(s). The standard says organisations should
justify and document the PII they collect.

Data minimisation means minimising the PII processed and the number of people who have
access to such data.

Use, retention and disclosure limitation means limiting use, retention and disclosure to that
necessary to fulfil specific, explicit and legitimate purposes, and retaining such data only as
long as necessary to meet the specified purpose.

Accuracy and quality mean that the data controller must ensure that the PII is accurate and
relevant for the specified purpose.

Openness, transparency and notice mean that the data controller should provide PII
principals with clear and easily accessible information about its policies, procedures and
practices in regard to the processing of PII. The data controller should also inform the PII
principals about who might be provided with the PII and whom they can contact at the
controller’s address if they have questions or want to access their data.

Individual participation and access means enabling the PII principals to access, review and
correct their PII, provided their identity is authenticated.

Accountability means that the organization should document and communicate to
stakeholders its privacy policies and practices. It also means that someone in the organization
is held responsible for implementing the privacy policies and practices. If the organization
transfers PII to a third country, it must ensure by means of contractual arrangements, for
example, that the recipient will provide comparable privacy protection. If there is a data
breach, the organization must inform the relevant stakeholders about the breach and what it is
doing to resolve it. Accountability also means there must be redress procedures in place.

Information security means protecting PII to ensure its integrity, confidentiality and
availability, and protecting it against unauthorized access, use or loss.

Privacy compliance means ensuring that the processing meets data protection and privacy
safeguards (legislation and/or regulation), and enabling the conduct of audits. It also
means that the organization should conduct privacy risk assessments to ensure, among other
things, that the organisation complies with laws and regulations and safeguarding
requirements.

Touch point questions and evidence from ISO 29100:2011

1. Does the RM methodology include provisions about compliance with legislation and any relevant industry standards, code of conduct, internal policy, etc.?
Evidence: Yes, it says controllers should be aware of all legal and regulatory requirements.

2. Is the RM methodology regarded as a process or is it simply about producing a report?
Evidence: The section on privacy safeguarding requirements refers to the privacy risk management process. A note also refers to PIA, which is a process.

3. Does the RM methodology address only information privacy protection or does it address other types of privacy as well?
Evidence: This standard is focused on personally identifiable information (PII).

4. Does the RM methodology say that it should be undertaken when it is still possible to influence the development of the project?
Evidence: Not exactly, but it does say that the design of any ICT system involving the processing of PII should be preceded by an identification of the relevant privacy safeguarding requirements.

6. Does the RM methodology call for developing a plan and terms of reference? Does it include a consultation strategy appropriate to the scale, scope and nature of the project?
Evidence: No, it does not talk about developing a plan and terms of reference. It does, however, refer to consultation with stakeholders.

7. Does the RM methodology call for conduct of an environmental scan (information about prior projects of a similar nature, drawn from a variety of sources)?
Evidence: It says privacy risk management involves establishing the context.

8. Does the RM methodology include provisions for scaling its application according to the scope of the project?
Evidence: No.

9. Does the RM methodology call for consulting all relevant stakeholders, internal and external to the organisation, in order to identify and assess the project’s impacts from their perspectives?
Evidence: It refers to consulting interested parties and obtaining consensus.

10. Does the RM methodology include provisions for putting in place measures to achieve clear communications between senior management, the project team and stakeholders?
Evidence: It refers to communicating with PII principals and others.

11. Does the RM methodology call for identification of risks to individuals and to the organisation?
Evidence: It refers to identification of PII risks from the perspective of the organisation.

12. Does the RM methodology include provisions for identifying protection measures and/or design solutions to avoid or to mitigate any negative impacts of the project or, when negative impacts are unavoidable, does it require justification of the business need for them?
Evidence: Yes, it calls protection measures “privacy safeguarding requirements”.

13. Does the RM methodology include provisions for documenting the process?
Evidence: Yes, the organization should document its privacy policy (both internal and external policies). It also says privacy controls should be documented, and that organisations should document the type of PII collected as well as the justification for doing so.

14. Does the RM methodology include provision for making the resulting document public (whether redacted or otherwise)?
Evidence: Not specifically, although it does mention communicating with stakeholders. Further, one of the privacy principles focuses on openness, transparency and notice. There, it says the organization should provide stakeholders with clear information about its PII policies and practices, the purpose for which it is processing PII, how to contact the controller, the choices open to PII principals, and access to their data.

Conclusions and recommendations

ISO 29100 is not a privacy risk management methodology per se, so it is a bit unfair to assess
it as such. Its primary focus and value is on privacy terminology and, especially, privacy
principles. In section 5, wherein the privacy principles are identified and discussed, there is
operational guidance, as the foregoing indicates. It has many “touch points” in common with
the ICO PIA Handbook. As it is not, strictly speaking, a risk management methodology or
process, it offers no “open doors” wherein a PIA could be conducted. However, it does refer
to the privacy risk management process (notably in the section dealing with privacy
safeguarding requirements) wherein there are “open doors”, e.g., in regard to establishing the
context, assessing and treating risks, communicating and consulting with stakeholders, and
monitoring and review.

CHAPTER-8

8. Assessing and identifying data protection risks overview

A privacy risk register is a tool that allows you to collate, record, track and manage all your data
protection, information security and privacy risk information in one place. This Overview
guides you through the process of creating a privacy risk register.

In order to formulate an effective privacy risk register, you must first identify the risks your firm
faces. You can do this by completing a risk assessment.

There is no established format for a risk assessment, but it would make sense to consider:

 What personal data do you receive and/or hold?

 How do you process data?

 For what purposes do you process data?

 Do you transfer or share data and, if so, to whom and how?

 How does data move within your organization?

 Do you transfer data outside the EEA?

 How do you ensure data remains accurate and up-to-date?

 How long do you keep data?

 How do you destroy data?

You should also consider the findings from any internal audits or investigations into any data
security breaches and sector specific data from the Information Commissioner's Office (ICO) on
the main causes of data protection breaches: see Data security incident trends.

Precedent Data protection risk assessment guides you through the process of assessing your
risks, using the above criteria. For each risk you identify in the risk assessment, you are given the
option to:

 Record an action point to address the risk immediately—this would be suitable for simple
risks that are capable of quick and simple resolution, or

 Add the risk to your privacy risk register—which you should do for risks that cannot be
addressed quickly

Scoring each risk

At the conclusion of your Data protection risk assessment, you will have a list of risks to add to
your privacy risk register.

Assigning a score to each risk on your register will help you prioritise your privacy risks and
respond accordingly.

There is a widely accepted definition of risk, i.e.:

Risk = impact x probability

So, for each risk you have identified, consider two questions:

1. If that risk materialised, how bad would it be, i.e. what is the impact? Assign a score to your answer: 1 = low impact, 2 = medium impact, 3 = high impact.
2. How likely is it that the risk will materialise, i.e. what is the probability? Assign a score to your answer: 1 = low probability, 2 = medium probability, 3 = high probability.

You then multiply the ‘impact’ score by the ‘probability’ score to produce a risk rating (ie final
score) for that particular risk. The higher the score, the more concerned you should be.
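As a simple illustration of this calculation (a minimal Python sketch using the 3 x 3 scale above; the risks and field names are hypothetical), the rating for each identified risk can be computed and the list sorted so that the highest-rated risks are addressed first:

    # Illustrative only: score risks on a 3 x 3 matrix (1 = low, 3 = high)
    risks = [
        {"risk": "Unencrypted laptops leave the office",     "impact": 3, "probability": 2},
        {"risk": "Marketing emails sent without consent",    "impact": 2, "probability": 3},
        {"risk": "Paper records left on shared printers",    "impact": 1, "probability": 2},
    ]

    for r in risks:
        r["rating"] = r["impact"] * r["probability"]  # Risk = impact x probability

    # Highest rating first: these risks deserve attention soonest
    for r in sorted(risks, key=lambda r: r["rating"], reverse=True):
        print(f'{r["rating"]:>2}  {r["risk"]}')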

8.1 Completing the privacy risk register

Once you have (i) identified your risks and (ii) assessed and scored each risk using the above
risk matrix, you can then complete your privacy risk register.

Precedent: Privacy risk register takes the form of an Excel spreadsheet and consists of three tabs:

 A sample Privacy risk register—this has been populated with a number of risks to
demonstrate how the privacy risk register is intended to be used

 A blank Privacy risk register, which you can populate with any risks you identify for your
business

 Drafting Notes—this explains how to assign a numerical value to each risk, using a 3 x 3
risk matrix
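For teams that prefer structured data to a spreadsheet, the same register can also be kept as a simple file. The sketch below is an illustrative Python example only (the column names are assumptions drawn from the description above, not prescribed by the ICO or the precedent) of writing such a register:

    import csv

    # Hypothetical register columns; adapt them to your own drafting notes
    FIELDS = ["id", "risk", "impact", "probability", "rating", "owner", "mitigation", "review_date"]

    rows = [
        {"id": 1, "risk": "HR data shared with a vendor without a contract",
         "impact": 3, "probability": 2, "rating": 6,
         "owner": "Data Protection Officer",
         "mitigation": "Put a data processing agreement in place",
         "review_date": "2017-12-01"},
    ]

    with open("privacy_risk_register.csv", "w", newline="") as f:
        writer = csv.DictWriter(f, fieldnames=FIELDS)
        writer.writeheader()
        writer.writerows(rows)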

Is it mandatory to have a privacy risk register?

There is no requirement to formulate a data protection risk register, but ICO guidance on privacy
impact assessments (PIAs) suggests this is a good idea.

A data protection risk assessment is not the same as a PIA. The former identifies data protection,
information security and privacy risks across your business. The latter identifies the privacy risks
associated with a discrete project, e.g. to introduce a new HR system.

CHAPTER-9

9. Data and Data protection laws

What is confidential data?


As you might assume, what counts as "confidential data" differs in every organization, but one
thing holds true for all of them: confidential data is the data we want to keep inside our
organization.
Confidential data takes many forms: employee payroll, project blueprints, commercial plans and
much more. Most of the time, the Customer/CISO has a pretty good idea of what is confidential
for the corporation. The Customer should provide keywords such as project names, specific
watermarks of confidential information, etc. Another way of identifying what is confidential to
the customer is by using pre-made policies (also known as solution packs). Solution packs are
packs of rules and policies that contain general objects that are usually considered confidential,
for example: social security numbers, credit card numbers, regulatory obligations and keywords
such as "Confidential" or "For internal use only". Another advantage of solution packs is that
they are tailored to the customer's industry. There are many types of solution packs; for example,
the "Telecom solution pack" contains information like phone IMEIs and regulations that are
obligatory for the telecom industry.
After we find out what is confidential for our customer, we need to help him or her protect that
data, and then the following question comes to mind:
How can we find our confidential data?
There are a few methods that should be considered. These methods relate mostly to DLP
capabilities and may not apply directly to another vendor's solution.

1) Consult with the Customer/CISO - we need to be in touch with employees who have a cross-
organizational view and approach. Most of the time, knowledgeable personnel can tell us where
60%-80% of the confidential data in the organization is stored.

2) Use DLP's network monitoring ability - DLP has the ability to "tap in" to the heart of the
network using a mirror port (also known as a SPAN port). The network monitor can analyze all
of the network traffic. It gives a good indication of the information moving over our network,
the transmission method used (instant messaging, mail, file copying and more) and the
destination of the data. Once we have gathered a large amount of such data, we can create rules
and policies from it. When implementing DLP in the organization, it is suggested to install
Network Monitor first in order to study the network. The amount of learning time needed differs
between clients and depends on the network bandwidth in use.

3) Use DLP Network Discover/Protect - DLP has the ability to scan a variety of components
(databases, SharePoint sites, storage, file servers, endpoint clients and more) in order to find
confidential data that is lying around on the corporate network.
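As a rough illustration of what such detection rules look like in practice (a minimal Python sketch, not any vendor's actual policy engine; the patterns are deliberately simplified examples), confidential content can be flagged by matching text against keyword and pattern rules similar to those shipped in solution packs:

    import re

    # Simplified example patterns; real DLP policies are far more sophisticated
    PATTERNS = {
        "US SSN":      re.compile(r"\b\d{3}-\d{2}-\d{4}\b"),
        "Card number": re.compile(r"\b(?:\d[ -]?){13,16}\b"),
        "Keyword":     re.compile(r"\b(confidential|for internal use only)\b", re.IGNORECASE),
    }

    def scan(text: str) -> list[str]:
        """Return the names of the rules that the text triggers."""
        return [name for name, pattern in PATTERNS.items() if pattern.search(text)]

    print(scan("Project Falcon design doc - CONFIDENTIAL. Contact SSN 123-45-6789."))
    # ['US SSN', 'Keyword']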

Security breaches rocked 2015. Sensitive data from high-profile organizations ranging from
banks and multinational conglomerates to illicit online dating services fell into the hands of
hackers, affecting millions of customers and employees. It was a terrible year for data privacy
and security, and a wake-up call for chief technology officers and corporate legal departments
everywhere.
Data volume has been growing exponentially, dramatically increasing opportunities for theft and
accidental disclosure of sensitive information. In the past, the amount of data doubled every four
years. According to technology research firm IDC, it now doubles every two years, and by 2020
the digital universe (the data we create and copy annually) will reach 44 zettabytes, or 44
trillion gigabytes. These facts, along with increases in the portability of data, employee mobility
and penalties for failing to comply with strict data protection regulations, raise the question:
“What more can organizations do to protect themselves and their stakeholders?” An integral part
of the answer may be data loss prevention (DLP).
DLP identifies, monitors and protects data in use, data in motion on your network, and data at
rest in your data storage area or on desktops, laptops, mobile phones or tablets. Through deep
content inspection and a contextual security analysis of transactions, DLP systems act as
enforcers of data security policies. They provide a centralized management framework designed
to detect and prevent the unauthorized use and transmission of your confidential information.
DLP protects against mistakes that lead to data leaks and intentional misuse by insiders, as well
as external attacks on your information infrastructure.
In the wake of recent security events, interest in the technology has exploded. In its “Forecast
Overview: Information Security, Worldwide, 3Q15 Update” report, Gartner predicted that DLP
will be among the fastest-growing security segments through 2019, with a compound annual
growth rate of nearly 10 percent.
The loss of sensitive data and other forms of enterprise information can lead to significant
financial losses and reputational damage. While companies are now well-aware of these dangers
and data protection has become a hot topic, many organizations aren’t very familiar with
content-aware technologies, and don’t fully understand the business case for DLP initiatives.
With this context in mind, we have outlined 10 reasons your organization needs data loss
prevention.
1. You aren’t sure where your company’s confidential data is being stored, where it’s being sent
and who is accessing it.
DLP technology provides IT and security staff with a 360-degree view of the location, flow and
usage of data across the enterprise. It checks network actions against your organization’s security
policies, and allows you to protect and control sensitive data, including customer information,
personally identifiable information (PII), financial data and intellectual property. With a
thorough understanding of this data, your organization can set the appropriate policies to protect
it, and make risk-prioritized decisions about what assets need to be protected and at what cost.

2. Your company has a plan for protecting data from external intruders, but does not protect
against theft and accidental disclosure of sensitive information by employees and partners.

Not all data loss is the result of external, malicious attacks. The inadvertent disclosure or
mishandling of confidential data by internal employees is a significant factor. DLP can detect
files that contain confidential information and prevent them from leaving via the network. It can
block sensitive data transfers to Universal Serial Bus (USB) drives and other removable media.
DLP also offers the ability to apply policies that safeguard data on a case-by-case basis. For
example, if a security event is detected, access to a specific workstation can be blocked instantly.
Policies can also quarantine or encrypt data in real-time response to events.

3. You are concerned about the liability, negative exposure, fines and lost revenue associated
with data breaches.

Data breaches have been making headlines with alarming frequency. They can wreak havoc on
an organization’s bottom line through fines, bad publicity, loss of strategic customers and legal
action. According to PwC's 2015 Global State of Information Security Survey, organizations
reported 2014 financial losses stemming from security incidents that were 93 percent higher than
in 2013. In fact, the number of global incidents is growing faster than the number of global
smartphone users and the global GDP combined!

4. You are concerned about your next audit and want to maintain compliance with complex
regulations.

More than 50 countries have enacted data protection laws that require organizations in both the
public and private sectors to safeguard sensitive information. Penalties for noncompliance with
strict privacy regulations and breach notification laws continue to grow. Requirements reach
beyond the simple provision of written policies to prove compliance. Technology controls are
becoming necessary to achieve compliance in certain areas. DLP provides these controls, as well
as policy templates and maps that address specific requirements, automate compliance, and
enable the collection and reporting of metrics.

5. You need to protect proprietary information against security threats caused by enhanced
employee mobility and new communication channels.

Many employees are turning to social networking, instant messaging and other Web 2.0
applications to keep up with consumer trends. DLP helps to prevent the accidental exposure of
confidential information across these unsecure lines of communication while at the same time
keeping them open for appropriate uses. With the proliferation of mobile devices and employees
working remotely, corporate data increasingly resides both in and outside of the organization.
Wherever data lives (in transit on the network, at rest in storage, or in use on a laptop or
smartphone), DLP can monitor it and significantly reduce the risk of data loss.

6. You would like to monitor your organization for inappropriate employee conduct and maintain
forensic data of security events as evidence.

Insiders represent a significant risk to data security. An employee who emails a work-related
document to his personal account in order to work over the weekend may have good intentions.
However, he or she poses a tremendous threat when there is confidential data involved. DLP
technology offers 360-degree monitoring that includes email (both corporate accounts and
webmail), instant messages, keystrokes typed, documents accessed and software applications
used. It also allows you to capture and archive evidence of incidents for forensic analysis. With
DLP, you can limit and filter Web surfing, and control which applications employees can access.
It is an invaluable tool in the effort to stop dangerous or time-wasting activities, and helps to
detect problems before they can damage your business.

7. You are uncertain of your organization’s level of protection for confidential data in cloud
applications and storage.

Large amounts of data are being moved to applications in the cloud—an environment in which it
is not apparent where data will be physically stored and processed. Protecting sensitive
information in virtual and cloud models is critical. DLP recognizes confidential data and
automates its encryption at rest, in motion and in use, preventing its transmission to third-party
infrastructures.

8. Your organization would like to proactively prevent the misuse of data at endpoints, both on
and off the corporate network.

DLP technology monitors all endpoint activity—whether on smartphones, tablets, laptops or
desktops, on the corporate network or off. It can block emails or attachments containing
confidential data, enforce policies on the transfer of data to removable media devices such as
USB thumb drives, and even prevent activities such as printing, copying and pasting. DLP offers
complete data visibility and control, ensuring that employees, third-party vendors, contractors
and partners are prevented from leaking your data—intentionally or inadvertently.

9. You would like to automate corporate governance as a means of improving compliance while
saving time and resources.
DLP capabilities for the enforcement and automation of corporate policies and processes can
help improve technical and organizational efficiencies, promote compliance, and provide
methods for more comprehensive information governance. DLP provides up-to-date policy
templates and maps that address specific requirements, automate compliance, and enable the
collection and reporting of metrics. When a policy need is identified, DLP can make the change
as simple as enabling an appropriate policy template on your system.
10. You would like to gain a competitive advantage, in both brand value and reputation.
When organizations fail to take the necessary steps to identify sensitive data and protect it from
loss or misuse, they are risking their ability to compete. Whether it’s a targeted attack or an
inadvertent mistake, confidential data loss can diminish a company’s brand, reduce shareholder
value, and irreparably damage the company’s reputation. DLP facilitates the protection of
valuable trade secrets and other vital intelligence, and helps to prevent the negative publicity and
loss of customers that inevitably follow data breaches.
When all these threats and forms of data theft are covered by products like data loss prevention,
customer satisfaction becomes an integral part of the whole process of innovation, which leads
customers to delight. The need for the product is the need to end their worries. If those worries
and anxieties are taken care of by the product companies, based on the current situation and data
threat analysis, the customer will consider purchasing other products from the same vendor and
may also request to integrate them into their environment. We all know that word of mouth
spreads fast, and in the IT security industry it spreads like fire, so different organizations will
come together to bring these products on board.
Many are surprised by how many of these 10 reasons apply to their business. Many organizations
don’t fully understand the benefits DLP offers. Developing a comprehensive data loss prevention
strategy shouldn’t be an afterthought. When properly deployed, DLP can transform sensitive
data into an operational asset, and help to prevent your organization from making the
wrong kind of headlines.

Data Strategies
Every organization fears losing its critical, confidential, highly restricted or restricted data. The
fear of losing data is amplified if the critical data is hosted outside the organization's premises,
say in a cloud model. To address this fear, a security concept known as "Data Loss Prevention"
has evolved, and it comes in several product flavors in the market, the most famous being
Symantec, McAfee, Websense, etc. Each DLP product is designed to detect and prevent data
from being leaked, and these products are applied across all channels through which data can be
leaked.

The IT strategy for a data loss prevention product is to penetrate small-scale companies as well.
In this way, having data loss prevention on the company's network for data integrity and
confidentiality becomes a matter of compliance. It requires strong IT leadership, such as the
chief information officer and chief technology officer, working closely with business, budget
and legal departments as well as other groups in the organization, to plan a streamlined budget
for all the companies that wish to buy the product or customize it to their needs, so that the
maximum number of people benefit from such technology.

There is also a strategy of integrating data loss prevention with other vendors' products.

Data loss prevention is a way to secure your valuable data. As data increases in complexity and
confidentiality, more advanced technologies are embedded into the product to maximize its
usefulness for industries that value their data and want to improve their ability to find data
breaches in the current environment. Data loss prevention is also being introduced in countries
where the governing laws for data privacy are robust and strict, and those countries are now
participating in this quest globally.

DLP products come with built-in policies that are already aligned with compliance standards
like PCI DSS, HIPAA, SOX, etc. Organizations just need to tune these policies to their
organizational footprint. But the most important thing in a DLP strategy is to identify the data to
protect, because if an organization simply applies DLP across the whole organization, a large
number of false positives will result. The section below covers the data classification exercise.

Identify Sensitive Data

The first thing every organization should do is identify all the confidential, restricted and highly
restricted data across the whole organization and across the three channels, i.e. data in transit,
data at rest (in storage) and data in use. DLP products work with signatures to identify any
restricted data when it is crossing boundaries. Identifying the critical data and developing its
signatures is known in DLP products as fingerprinting. Data is stored in various forms at various
locations in an organization, and it all requires identifying and fingerprinting. Various products
come with a discovery engine which crawls all the data, indexes it and makes it accessible
through an intuitive interface that allows quick searching to find the data's sensitivity and
ownership details.
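One common way to implement exact-match fingerprinting is to hash known sensitive documents and then compare the hashes of files found during discovery scans against that index. The sketch below is an illustrative Python example only, not any particular vendor's engine; the two folder names are hypothetical and assumed to exist:

    import hashlib
    from pathlib import Path

    def fingerprint(path: Path) -> str:
        """Return a SHA-256 fingerprint of a file's contents."""
        return hashlib.sha256(path.read_bytes()).hexdigest()

    # Build an index of fingerprints for documents already known to be confidential
    known_confidential = {fingerprint(p)
                          for p in Path("confidential_docs").rglob("*") if p.is_file()}

    # During a discovery scan, flag any file whose fingerprint matches the index
    for candidate in Path("file_share").rglob("*"):
        if candidate.is_file() and fingerprint(candidate) in known_confidential:
            print("Confidential copy found:", candidate)

Real DLP engines add partial-document matching and statistical techniques on top of exact hashes, but the basic idea of comparing discovered content against an index of known sensitive material is the same.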

SWOT Analysis: Data Loss Prevention

Strengths

 Loyal customers
 Market share leadership
 Discovers confidential data
 Data Safety.
 Data analysis.
 Available for all platforms like Windows, Mac and Linux machines.
 End user – Fortune 500 companies.

Weaknesses

 Works on High end servers

 Requires Down time for upgrade

 Needs Highly Skilled people

 Manual intervention of administrators

 Requires Auditing team

Opportunities

 More Advanced and simplified technology


 Modernization of Skilled people
 Learning procedure
 Awareness of Data leak

Threats

 Competition
 Product substitution
 Similar software available for free

Many threats, especially from a security perspective, are fairly easy to delineate. If the
organization is subject to regulations such as PCI DSS, HIPAA or SOX, the cost of non-
compliance can be astronomical. The costs of reputational damage often far outweigh the fines
for non-compliance, and the fines themselves are stiff.

Data loss prevention is setting the benchmark for customer satisfaction in terms of customers'
needs. Needs are basically evaluated by the security auditing team and the governing bodies,
which have sufficient information on data and data confidentiality. There are solution packs for
the financial, educational, hospitality, manufacturing, health care, insurance, media and
entertainment, pharmaceuticals, retail, telecom and other industries. These solution packs
contain the policies, roles, reports, protocols and incident statuses that support a particular
industry or organization, based on its lines of business.
Customer satisfaction is measured against the needs of the end user or the organizational need to
safeguard confidential data.

9.1 Data Classification

Data classification, in the context of information security, is the classification of data based on its
level of sensitivity and the impact to the University should that data be disclosed, altered or
destroyed without authorization. The classification of data helps determine what baseline
security controls are appropriate for safeguarding that data. All institutional data should be
classified into one of three sensitivity levels, or classifications:

A. Restricted Data

Data should be classified as Restricted when the unauthorized disclosure, alteration or
destruction of that data could cause a significant level of risk to the University or its
affiliates. Examples of Restricted data include data protected by state or federal privacy
regulations and data protected by confidentiality agreements. The highest level of security
controls should be applied to Restricted data.

B. Private Data

Data should be classified as Private when the unauthorized disclosure, alteration or
destruction of that data could result in a moderate level of risk to the University or its
affiliates. By default, all Institutional Data that is not explicitly classified as Restricted or
Public data should be treated as Private data. A reasonable level of security controls
should be applied to Private data.

C. Public Data

Data should be classified as Public when the unauthorized disclosure, alteration or
destruction of that data would result in little or no risk to the University and its affiliates.
Examples of Public data include press releases, course information and research
publications. While little or no controls are required to protect the confidentiality of Public
data, some level of control is required to prevent unauthorized modification or destruction
of Public data.

Classification of data should be performed by an appropriate Data Steward. Data Stewards are
senior-level employees of the University who oversee the lifecycle of one or more sets of
Institutional Data.
Data Collections
Data Stewards may wish to assign a single classification to a collection of data that is common in
purpose or function. When classifying a collection of data, the most restrictive classification of
any of the individual data elements should be used. For example, if a data collection consists of a
student's name, address and social security number, the data collection should be classified as
Restricted even though the student's name and address may be considered Public information.
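A simple way to express this "most restrictive element wins" rule is sketched below (illustrative Python; the three labels follow the levels defined above, while the function name and ordering are assumptions for the example):

    # Order the levels from least to most restrictive
    LEVELS = ["Public", "Private", "Restricted"]

    def classify_collection(element_levels: list[str]) -> str:
        """A collection takes the most restrictive classification of its elements."""
        return max(element_levels, key=LEVELS.index)

    # Example: name and address may be Public, but the SSN makes the record Restricted
    print(classify_collection(["Public", "Public", "Restricted"]))  # Restricted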
Reclassification
On a periodic basis, it is important to reevaluate the classification of Institutional Data to ensure
the assigned classification is still appropriate based on changes to legal and contractual
obligations as well as changes in the use of the data or its value to the University. This evaluation
should be conducted by the appropriate Data Steward. Conducting an evaluation on an annual
basis is encouraged; however, the Data Steward should determine what frequency is most
appropriate based on available resources. If a Data Steward determines that the classification of a
certain data set has changed, an analysis of security controls should be performed to determine
whether existing controls are consistent with the new classification. If gaps are found in existing
security controls, they should be corrected in a timely manner, commensurate with the level of
risk presented by the gaps.

Calculating Classification
The goal of information security, as stated in the University's Information Security Policy, is to
protect the confidentiality, integrity and availability of Institutional Data. Data classification
reflects the level of impact to the University if confidentiality, integrity or availability is
compromised.
Unfortunately there is no perfect quantitative system for calculating the classification of a
particular data element. In some situations, the appropriate classification may be more obvious,
such as when federal laws require the University to protect certain types of data (e.g. personally
identifiable information). If the appropriate classification is not inherently obvious, consider each
security objective using the following table as a guide. It is an excerpt from Federal Information
Processing Standards (FIPS) publication 199 published by the National Institute of Standards and
Technology, which discusses the categorization of information and information systems.

POTENTIAL IMPACT

Confidentiality (preserving authorized restrictions on information access and disclosure, including means for protecting personal privacy and proprietary information):
LOW - The unauthorized disclosure of information could be expected to have a limited adverse effect on organizational operations, organizational assets, or individuals.
MODERATE - The unauthorized disclosure of information could be expected to have a serious adverse effect on organizational operations, organizational assets, or individuals.
HIGH - The unauthorized disclosure of information could be expected to have a severe or catastrophic adverse effect on organizational operations, organizational assets, or individuals.

Integrity (guarding against improper information modification or destruction, and includes ensuring information non-repudiation and authenticity):
LOW - The unauthorized modification or destruction of information could be expected to have a limited adverse effect on organizational operations, organizational assets, or individuals.
MODERATE - The unauthorized modification or destruction of information could be expected to have a serious adverse effect on organizational operations, organizational assets, or individuals.
HIGH - The unauthorized modification or destruction of information could be expected to have a severe or catastrophic adverse effect on organizational operations, organizational assets, or individuals.

Availability (ensuring timely and reliable access to and use of information):
LOW - The disruption of access to or use of information or an information system could be expected to have a limited adverse effect on organizational operations, organizational assets, or individuals.
MODERATE - The disruption of access to or use of information or an information system could be expected to have a serious adverse effect on organizational operations, organizational assets, or individuals.
HIGH - The disruption of access to or use of information or an information system could be expected to have a severe or catastrophic adverse effect on organizational operations, organizational assets, or individuals.

As the total potential impact to the University increases from Low to High, the classification of
data should become more restrictive moving from Public to Restricted. If an appropriate
classification is still unclear after considering these points, contact the Information Security
Office for assistance.
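To illustrate how the FIPS 199 impact levels can feed the three-tier scheme above, the sketch below is an illustrative Python mapping, assuming a simple high-water-mark rule and an impact-to-classification table that are not themselves prescribed by the policy:

    IMPACT_ORDER = ["LOW", "MODERATE", "HIGH"]

    # Assumed mapping from overall potential impact to data classification
    IMPACT_TO_CLASS = {"LOW": "Public", "MODERATE": "Private", "HIGH": "Restricted"}

    def classify(confidentiality: str, integrity: str, availability: str) -> str:
        """Take the highest (high-water mark) of the three impact ratings."""
        overall = max((confidentiality, integrity, availability), key=IMPACT_ORDER.index)
        return IMPACT_TO_CLASS[overall]

    # Example: even if integrity and availability impacts are low, a high
    # confidentiality impact pushes the data into the Restricted tier.
    print(classify("HIGH", "LOW", "LOW"))  # Restricted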
Predefined Types of Restricted Information
The Information Security Office and the Office of General Counsel have defined several types of
restricted data based on state and federal regulatory requirements. They're defined as follows:

1. Authentication Verifier

An Authentication Verifier is a piece of information that is held in confidence by an
individual and used to prove that the person is who they say they are. In some instances,
an Authentication Verifier may be shared amongst a small group of individuals. An
Authentication Verifier may also be used to prove the identity of a system or service.
Examples include, but are not limited to:

o Passwords
o Shared secrets
o Cryptographic private keys

2. Covered Financial Information

See the University's Gramm-Leach-Bliley Information Security Program.

3. Electronic Protected Health Information ("EPHI")

EPHI is defined as any Protected Health Information ("PHI") that is stored in or
transmitted by electronic media. For the purpose of this definition, electronic media
includes:

o Electronic storage media includes computer hard drives and any removable and/or
transportable digital memory medium, such as magnetic tape or disk, optical disk,
or digital memory card.
o Transmission media used to exchange information already in electronic storage
media. Transmission media includes, for example, the Internet, an extranet (using
Internet technology to link a business with information accessible only to
collaborating parties), leased lines, dial-up lines, private networks and the physical
movement of removable and/or transportable electronic storage media. Certain
transmissions, including of paper, via facsimile, and of voice, via telephone, are
not considered to be transmissions via electronic media because the information
being exchanged did not exist in electronic form before the transmission.

4. Export Controlled Materials

Export Controlled Materials is defined as any information or materials that are subject to
United States export control regulations including, but not limited to, the Export
Administration Regulations (EAR) published by the U.S. Department of Commerce and
the International Traffic in Arms Regulations (ITAR) published by the U.S. Department of
State. See the Office of Research Integrity and Compliance's FAQ on Export Control for
more information.

5. Federal Tax Information ("FTI")

FTI is defined as any return, return information or taxpayer return information that is
entrusted to the University by the Internal Revenue Service. See Internal Revenue Service
Publication 1075 Exhibit 2 for more information.

6. Payment Card Information

Payment card information is defined as a credit card number (also referred to as a primary
account number or PAN) in combination with one or more of the following data elements:
o Cardholder name
o Service code
o Expiration date
o CVC2, CVV2 or CID value
o PIN or PIN block
o Contents of a credit card’s magnetic stripe
Payment Card Information is also governed by the University's PCI DSS Policy and
Guidelines (login required).
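Because raw 13-16 digit matches produce many false positives, content-aware detectors commonly validate candidate card numbers with the Luhn checksum before treating them as Payment Card Information. The sketch below is a generic illustration of that check in Python, not part of the University's policy:

    def luhn_valid(number: str) -> bool:
        """Return True if the digit string passes the Luhn checksum."""
        digits = [int(d) for d in number if d.isdigit()]
        if len(digits) < 13:
            return False
        total = 0
        # Double every second digit from the right, subtracting 9 when it exceeds 9
        for i, d in enumerate(reversed(digits)):
            if i % 2 == 1:
                d *= 2
                if d > 9:
                    d -= 9
            total += d
        return total % 10 == 0

    print(luhn_valid("4111 1111 1111 1111"))  # True - a well-known test PAN
    print(luhn_valid("4111 1111 1111 1112"))  # False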

7. Personally Identifiable Education Records

Personally Identifiable Education Records are defined as any Education Records that
contain one or more of the following personal identifiers:

o Name of the student
o Name of the student’s parent(s) or other family member(s)
o Social security number
o Student number
o A list of personal characteristics that would make the student’s identity easily
traceable
o Any other information or identifier that would make the student’s identity easily
traceable
See Carnegie Mellon’s Policy on Student Privacy Rights for more information on what
constitutes an Education Record.

8. Personally Identifiable Information

For the purpose of meeting security breach notification requirements, PII is defined as a
person’s first name or first initial and last name in combination with one or more of the
following data elements:
o Social security number
o State-issued driver’s license number
o State-issued identification card number
o Financial account number in combination with a security code, access code or
password that would permit access to the account
o Medical and/or health insurance information
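As an illustration of this breach-notification definition (a minimal Python sketch; the field names are hypothetical and the check is deliberately simplified), a record qualifies as PII when a name is present together with at least one of the listed elements:

    # Elements that, combined with a name, make a record reportable PII (illustrative)
    SENSITIVE_FIELDS = {
        "ssn", "drivers_license", "state_id",
        "financial_account_with_access_code", "medical_info", "health_insurance_info",
    }

    def is_reportable_pii(record: dict) -> bool:
        """True if the record pairs a person's name with any sensitive element."""
        has_name = bool(record.get("first_name") or record.get("first_initial")) and \
                   bool(record.get("last_name"))
        has_sensitive = any(record.get(field) for field in SENSITIVE_FIELDS)
        return has_name and has_sensitive

    print(is_reportable_pii({"first_name": "Ada", "last_name": "Lovelace",
                             "ssn": "123-45-6789"}))   # True
    print(is_reportable_pii({"first_name": "Ada", "last_name": "Lovelace"}))  # False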

9. Protected Health Information ("PHI")

PHI is defined as "individually identifiable health information" transmitted by electronic
media, maintained in electronic media or transmitted or maintained in any other form or
medium by a Covered Component, as defined in Carnegie Mellon’s HIPAA Policy. PHI is
considered individually identifiable if it contains one or more of the following identifiers:

o Name
o Address (all geographic subdivisions smaller than state including street address,
city, county, precinct or zip code)
o All elements of dates (except year) related to an individual, including birth date,
admission date, discharge date, date of death and exact age if over 89
o Telephone numbers
o Fax numbers
o Electronic mail addresses
o Social security numbers
o Medical record numbers
o Health plan beneficiary numbers
o Account numbers
o Certificate/license numbers
o Vehicle identifiers and serial numbers, including license plate number
o Device identifiers and serial numbers
o Universal Resource Locators (URLs)
o Internet protocol (IP) addresses
o Biometric identifiers, including finger and voice prints
o Full face photographic images and any comparable images
o Any other unique identifying number, characteristic or code that could identify an
individual
Per Carnegie Mellon’s HIPAA Policy, PHI does not include education records or
treatment records covered by the Family Educational Rights and Privacy Act or
employment records held by the University in its role as an employer.

10. Controlled Technical Information ("CTI")

Controlled Technical Information means "technical information with military or space
application that is subject to controls on the access, use, reproduction, modification,
performance, display, release, disclosure, or dissemination" per DFARS 252.204-7012.

9.2 Why Data Protection?

Individuals, as citizens and consumers need to have the means to exercise their right to
privacy and protect themselves and their information from abuse. This is particularly the case
when it comes to our personal information. Data protection is about safeguarding our
fundamental right to privacy, which is enshrined in international and regional laws and
conventions.

Data protection is commonly defined as the law designed to protect your personal information,
which is collected, processed and stored by “automated” means or intended to be part of a filing
system. In modern societies, to empower us to control our information and to protect us from
abuses, it is essential that data protection laws restrain and shape the activities of companies and
governments. These institutions have shown repeatedly that unless rules restrict their actions,
they will endeavor to collect it all, mine it all, and keep it all, while telling us nothing at all.

9.3 Why is data protection needed?


Every time you use a service, buy a product online, register for email, go to your doctor, pay
your taxes, or enter into any contract or service request, you have to hand over some of your
personal information. Even without your knowledge, information about you is being generated
and captured by companies and agencies you are likely to have never knowingly interacted with.
The only way citizens and consumers can have confidence in both government and business is
through strong data protection practices, with effective legislation to help minimize needless
monitoring by officialdom and regulate surveillance by companies.
Since the 1960s and the expansion of information technology capabilities, business and
government organisations have been storing this personal information in databases. Databases
can be searched, edited, cross-referenced and data shared with other organisations and across the
world. Once the collection and processing of data became widespread, people started asking
questions about what was happening to their information once it was turned over. Who had the right to
access the information? Was it kept accurately? Was it being collected and disseminated without
their knowledge? Could it be used to discriminate or abuse other fundamental rights?
From all this, and growing public concern, data protection principles were devised through
numerous national and international consultations. The German region of Hesse passed the first
law in 1970, while the US Fair Credit Reporting Act 1970 also contained some elements of data
protection. The US led development of the 'fair information practices' in the early 1970s that
continue to shape data protection law today. The UK also established a committee around the
same time to review threats by private companies and came to similar conclusions.
National laws emerged soon afterwards, beginning with Sweden, the US, Germany and France.
Further momentum was added in 1980 when the Organisation for Economic Cooperation and
Development (OECD) developed its privacy guidelines that included 'privacy principles', and
shortly thereafter the Council of Europe's convention came into force.
While over 100 countries now have laws, in many countries there is still a great need for stronger
legal safeguards to give citizens and consumers confidence in what is done to their personal
information by government and business. Although most countries have accepted that data
protection is necessary in selected sectors, they have not yet developed comprehensive data
protection laws that apply to all business sectors and to government.

9.4 So how does data protection work?

Where a comprehensive data protection law exists, organisations, public or private, that collect
and use your personal information have the obligation to handle this data according to the data
protection law. This law is based on a number of basic principles. Briefly, these principles
require that:
 there should be limits to what is collected: there should be limits on the collection of
personal information, and it should be obtained by lawful and fair means, with the
knowledge or consent of the individual

 the information should be correct: personal information should be relevant to the
purposes for which it is used, should be accurate, complete and up to date;

 there must be no secret purposes: the purposes for which the information is to be used
should be specified at least at the time of collection and should only be used for those
agreed purposes;
 there must be no creeping purposes: personal information may be disclosed, used, or
retained only for the original purposes, except with the consent of the individual or under
law, and accordingly it must be deleted when no longer necessary for that purpose;

 the information must be secure: reasonable security safeguards are used to protect
personal information from loss, unauthorized access, destruction, use, modification or
disclosure;

 no secret organisations, sources, or processing: we must be made aware of the collection
and use of our information, we should know the purpose for its use, and we must know
about the organisation that is the data controller;

 individuals have rights to be involved: we should be able to have access to our
information, and we must have the right to challenge the information held and to seek its
deletion, rectification, completion or modification;
 Organisations must be held to account: the organisation that collects and manages your
information must be accountable for upholding the above principles and rights.

Data protection rules need to be enforced by a regulator or authority, often called a Privacy
Commissioner. The strength of the powers vested in these authorities varies from country to
country, as does their independence from Government. These powers, for example, can include
the ability to conduct investigations, act on complaints and impose fines when they discover an
organisation has broken the law.

Apart from enforcement through regulatory means, we also believe that technologies can play a
strong role in ensuring data protection rules are followed. Through technological means and
careful design, it is possible to limit data collection, to mathematically restrict further data
processing, and to limit unnecessary access, amongst other privacy measures.
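One concrete example of such a technical measure is pseudonymization. The minimal Python sketch below replaces a direct identifier with a keyed hash so that records can still be linked for processing without exposing the raw value; the key handling shown (an environment variable with a demo fallback) is an assumption for illustration, and real deployments would manage the key far more carefully.

# Sketch: pseudonymize an identifier with HMAC-SHA256 so records can be linked
# without exposing the raw value. The key must be kept separate from the data;
# reading it from an environment variable is an illustrative assumption.
import hashlib
import hmac
import os

def pseudonymize(identifier: str, key: bytes) -> str:
    """Return a stable, key-dependent pseudonym for the given identifier."""
    return hmac.new(key, identifier.encode("utf-8"), hashlib.sha256).hexdigest()

key = os.environ.get("PSEUDONYM_KEY", "demo-key-only").encode("utf-8")
print(pseudonymize("customer-12345", key))  # same input and key -> same pseudonym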

Laws can influence and, when necessary, compel such developments, though their adoption has
been slow: companies and governments are reluctant to limit their future capabilities or their
aspirations to mine our information, even when they are legally supposed to limit purpose creep.

9.5 How many countries in the world have data protection laws?

Over 100 countries around the world have enacted comprehensive data protection legislation,
and several other countries are in the process of passing such laws. Other countries may have
privacy laws applying to certain areas, for example for children or financial records, but do not
have a comprehensive law. For instance, while the US was an early leader in the field of data protection,
its Privacy Act of 1974 applies only to the Federal Government; subsequent laws apply to
specific sectors, but there is no comprehensive law to date.

The strongest and most comprehensive laws are in the countries of the European Union and
European Economic Area that have implemented the 1995 Data Protection Directive. This is
currently undergoing a difficult process of revision in Brussels.
Canada is another leading example with two separate pieces of legislation applying at the
national level to government and industry, with additional laws at the provincial level as well.
For more information on data protection laws, broken down by country, check out
the comprehensive reports published over the years by Privacy International.

9.6 Are data protection laws the same in all countries that have them?

No, and increasingly this is part of the problem. As our information travels around the world
through borderless networks, our data may end up in countries that have different laws of
varying strength or no law at all, meaning we’d have no remedies if our rights are abused. In
essence, depending on what services you use, different pieces of your data will be in various
countries.
Data protection law has become not only a vehicle for protecting citizens and consumers, it has
become a gateway to trade. Various international conventions and guidelines have been
established in order to ensure that information can circulate around the world without causing too
much damage to ‘data subjects’ and that businesses do not base themselves in countries with the
weakest laws. The OECD Guidelines on the Protection of Privacy, first agreed in 1980 and
revised in 2013, were the pioneer in establishing the data protection principles, adopted by many
countries in their legislation.

A driving motivation for the OECD Guidelines was to enable the protection of privacy while
allowing data to flow across borders and opening up markets.
The international instrument with the most teeth, however, is the Council of Europe's 1981
Convention for the Protection of Individuals with regard to the Automatic Processing of Personal
Data. This has the force of law for the countries that have signed up to it. Countries from
outside Europe can sign up to it, but unfortunately only Uruguay has done so so far.

The EU's 1995 Directive standardized laws to some extent across European Union member
states, partly to enable trade within the European market. The Directive required that data could
only be sent to foreign jurisdictions if those countries had adequate laws with protections in
place. One notable exception however is the US which has repeatedly failed to implement a
comprehensive law, and the 1974 Privacy Act only applies to the Federal Government, and only
protects US citizens and residents.
As an attempt at a quick fix, there’s a separate agreement on personal information transfers
between the EU and the US – called the Safe Harbor agreement.

This arrangement has been heavily criticized by both Privacy International and the European
Commission itself, as it is a voluntary and self-regulatory system which is not adequately
implemented and not sufficiently enforced. Though the Obama administration has promised to
extend the Privacy Act to European citizens and has repeatedly mentioned introducing a
comprehensive law, no meaningful action has yet occurred. It is therefore highly problematic
that much of the world's information passes through and exists under the jurisdiction of US law,
where non-Americans have no rights at all.

The EU and Council of Europe are trying to update their instruments to consider new challenges
to privacy, and to strengthen protections. These laws were drafted before the rise of internet
giants and marketing associations with significant lobbying capabilities; and before the rise of
the anti-terrorism policy agenda. As such, government agencies and companies have been
working hard to undermine these legal instruments. For instance, over 3000 amendments were
introduced in the European Parliament when the draft General Data Protection Regulation was
being discussed, some of them introduced by members of the European Parliament who
had copied and pasted the amendments from industry lobbyists' briefings. The interests in
undermining data protection are stronger than ever.

9.7 What is considered as personal information under data protection laws?

Personal information means any kind of information (a single piece of information or a set of
information) that can personally identify an individual or single them out as an individual. The
obvious examples are somebody’s name, address, national identification number, date of birth or
a facial image. A few perhaps less obvious examples include vehicle registration plate numbers,
credit card numbers, fingerprints, a computer’s IP address, CCTV video footage, or health
records.

You can be singled out from other people even if your name is not known; for example online
profiling companies assign a unique number and use tracking techniques to follow you around
the net and build a profile of your behaviour and interests in order to present you with
advertisements. Some personal information is considered more sensitive than the rest, and is
therefore subject to stricter rules; this includes your racial or ethnic origin, political views,
religion, health, and sex life. Such information cannot be collected or used at all without your
specific consent.

9.8 Data protection policies to safeguard Information.

Data protection policies help us in the following ways:

 Ensuring that we comply with the eight data protection principles

 Meeting our legal obligations as laid down by the Data Protection Act 1998

 Ensuring that data is collected and used fairly and lawfully

 Processing personal data only in order to meet our operational needs or fulfil legal
requirements

 Taking steps to ensure that personal data is up to date and accurate

 Establishing appropriate retention periods for personal data

 Ensuring that data subjects' rights can be appropriately exercised

 Providing adequate security measures to protect personal data

 Ensuring that a nominated officer is responsible for data protection compliance and
provides a point of contact for all data protection issues
 Ensuring that all staff are made aware of good practice in data protection

 Providing adequate training for all staff responsible for personal data

 Ensuring that everyone handling personal data knows where to find further guidance

 Ensuring that queries about data protection, internal and external to the organisation, are
dealt with effectively and promptly

 Regularly reviewing data protection procedures and guidelines within the organisation.

CHAPTER-10

10 Data Protection Law in India.


People are increasingly making their personal information available publicly. Today there is an
unprecedented amount of personal data available with Government and private sector players.
Digital India, Aadhaar and the demonetization drive have added to the already growing pool of
personal data held by various public and private players in pursuit of their activities. Indian law does
not define personal data. It has been defined by the EU's General Data Protection Regulation
[REGULATION (EU) 2016/679 OF THE EUROPEAN PARLIAMENT AND OF THE
COUNCIL] as any information relating to an identified or identifiable natural person. From this
definition it is clear that personal information includes biometric and economic information as
well.

Publicly available personal information poses a greater risk for Indians because a majority of the
population is illiterate and there is no law mandating data protection. Individuals are repeatedly
transmitting their personal information for various activities. Aspects such as the purpose for
collecting personal information, how this information will be used, the security mechanisms put in
place to protect such information, how long the information will be stored, and the procedure for
destroying it are neither known to the individual nor defined uniformly in any law. India has no
specific legislation focusing on data protection. A few principles of data protection are scattered
across the IT Act and guidelines issued by the RBI, TRAI, etc.

Any kind of processing of personal data should be fair and transparent. Providers of personal
information should be made aware of risks, rules, safeguards and rights in relation to the
processing of personal data and how to exercise their rights in relation to such processing.
Particularly, the specific purposes for which personal data is processed should be explicit and
legitimate and determined at the time of the collection of the personal data. Personal data should
be processed in a manner that ensures appropriate security and confidentiality of the personal
data, including for preventing unauthorized access to or use of personal data and the equipment
used for the processing. Basic principles guiding processing of Personal data are as follows:-

Lawfulness, fairness and transparency. There should be a general policy of openness about
developments, practices and policies with respect to personal data. Means should be readily
available of establishing the existence and nature of personal data, and the main purposes of their
use, as well as the identity and usual residence of the data controller.

Personal data should be collected for specified, explicit and legitimate purposes and not further
processed in a manner that is incompatible with those purposes. The purposes for which personal
data are collected should be specified not later than at the time of data collection and the
subsequent use limited to the fulfilment of those purposes or such others as are not incompatible
with those purposes and as are specified on each occasion of change of purpose.

Collection of Personal Data should be adequate, relevant and limited to what is necessary in
relation to the purposes for which they are processed. This is also known as the principle of Data
minimization. There should be limits to the collection of personal data and any such data should
be obtained by lawful and fair means and, where appropriate, with the knowledge or consent of
the data subject.

The agency collecting personal data should ensure the accuracy of the data and delete or rectify
inaccurate data. The Data Quality Principle entails that personal data should be relevant to the
purposes for which they are to be used, and, to the extent necessary for those purposes, should be
accurate, complete and kept up to date.

Personal data should be kept in a form which permits identification of data subjects for no longer
than is necessary for the purposes for which the personal data are processed; personal data may
be stored for longer periods insofar as the personal data will be processed solely for archiving
purposes in the public interest, scientific or historical research purposes or statistical purposes.
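A storage-limitation rule of this kind is straightforward to express in code. The Python sketch below flags records that have outlived an assumed per-purpose retention schedule; the purposes and periods used are illustrative assumptions, not requirements taken from any law.

# Sketch: flag records older than an assumed per-purpose retention period.
# The schedule below is an illustrative assumption.
from datetime import datetime, timedelta

RETENTION = {
    "marketing": timedelta(days=365),        # assumed: 1 year
    "contract": timedelta(days=7 * 365),     # assumed: 7 years
    "support_ticket": timedelta(days=2 * 365),
}

def is_expired(collected_at, purpose, now=None):
    """True if the record is older than the retention period for its purpose."""
    now = now or datetime.utcnow()
    limit = RETENTION.get(purpose)
    return limit is not None and (now - collected_at) > limit

print(is_expired(datetime(2015, 1, 1), "marketing"))  # True: well past one year

Records flagged in this way would then be deleted, or anonymized where continued archiving, research or statistical use is intended.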

Personal data should be processed in a manner that ensures appropriate security of such data,
including protection against unauthorized or unlawful processing and against accidental loss,
destruction or damage, using appropriate technical or organisational measures. This principle of
integrity and confidentiality entails that personal data should not be disclosed, made available or
otherwise used for purposes other than those specified except:

(a) With the consent of the data subject; or


(b) By the authority of law.

As per the Security Safeguards Principle, personal data should be protected by reasonable
security safeguards against such risks as loss or unauthorized access, destruction, use,
modification or disclosure of data.

A data controller should be accountable for complying with measures which give effect to the
principles stated above.
Data protection rules should be applicable to all entities and persons handling personal data –
both private and public sector bodies. There is no rationale as to why principles such as
openness, purpose limitation, use limitation, etc. should not be applicable to public bodies
generally. Certain specialized functions such as those related to crime and investigation, national
security, taxation should be exempted from the general obligations and should be subject to
specific rules.
It can be seen from above that protection of personal data and Right to privacy are intrinsically
linked. Only a strong emphasis on the right to privacy can ensure that personal data is not shared
or leaked incessantly without any checks. It is the duty of the State to ensure individual autonomy.
However, in recent times the very concept of individual autonomy is also at risk. The right to
privacy has its roots in the law of tort, under which any unlawful invasion of privacy gave a cause
of action for damages. The right to privacy has two aspects:
(1) The tort aspect, which affords an action for damages resulting from an unlawful invasion of
privacy; and
(2) The constitutional recognition given to the right to privacy which protects personal privacy
against unlawful governmental invasion. The first aspect of this right must be said to have been
violated where, for example, a person’s name or likeness is used, without his consent, for
advertising or non-advertising purposes or for that matter, his life story is written whether
laudatory or otherwise and published without his consent as explained hereinafter.

In recent times, however, this right has acquired a constitutional status. It is not enumerated as a
fundamental right but has been read into Article 21. The Indian courts have to be thanked for the
right to privacy’s development and evolution through the years. The first decision on right to
privacy was Kharak Singh v. State of U.P. Since then the concept has evolved with every
invasion of privacy.

One of the most important pieces of legislation protecting our data at present is the Information
Technology Act (hereinafter the IT Act). The IT Act makes hacking and tampering with computer
source code an offence and penalizes unlawful access to data. However, it does not prescribe any
minimum security standards with which entities having control of data must comply, except in the
case of sensitive personal information. The Information Technology

(Reasonable Security Practices and Procedures and Sensitive Personal Data or Information) Rules,
2011 define sensitive personal data or information of a person as such personal information which
consists of information relating to:
(i) password;
(ii) financial information such as bank account or credit card or debit card or other payment
instrument details;

(iii) Physical, physiological and mental health condition;


(iv) Sexual orientation;
(v) Medical records and history;
(vi) Biometric information;
(vii) Any detail relating to the above clauses as provided to body corporate for
providing service; and
(viii) Any of the information received under above clauses by body corporate for
processing, stored or processed.
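The categories in the 2011 Rules lend themselves to a simple classification check. In the Python sketch below, assumed field names are mapped onto the rule's categories; the mapping itself is an illustrative assumption, since real systems name their fields differently.

# Sketch: classify a record's fields as sensitive personal data or information
# (SPDI) under the categories of the 2011 Rules. The field-to-category mapping
# is an illustrative assumption.

SPDI_CATEGORIES = {
    "password": "password",
    "bank_account": "financial information",
    "credit_card": "financial information",
    "debit_card": "financial information",
    "health_condition": "physical, physiological and mental health condition",
    "sexual_orientation": "sexual orientation",
    "medical_history": "medical records and history",
    "biometric_data": "biometric information",
}

def spdi_fields(record: dict) -> dict:
    """Return the populated fields that fall under an SPDI category."""
    return {f: SPDI_CATEGORIES[f] for f in SPDI_CATEGORIES if record.get(f)}

print(spdi_fields({"name": "A", "password": "secret", "bank_account": "0012"}))
# {'password': 'password', 'bank_account': 'financial information'}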

Maintaining databases is not as difficult a task as maintaining their integrity, so the most
pressing debate of this era is how to devise an effective method of data protection. With the
advancement in technology there has also been a shift in the nature of crime. In the present era
most crimes are committed by professionals through the easiest medium, i.e. computers and
electronic gadgets. With a single click, criminals are able to get at secured information. The lust
for information is acting as a catalyst in the growth of cybercrime.

Giving adequate protection to their huge databases is a very big headache for business houses,
financial institutions and governmental bodies. In the absence of any stringent law relating to
data protection, miscreants are gaining expertise in their work day by day.

Though the digital world has simplified our lifestyle, it has also left certain anomalies which
result in the involuntary disclosure of data. This can be seen from the following illustrations:
1. On every login to an e-mail account in a cyber cafe, an electronic trail of the password is left
behind unsecured.

2. Every use of a credit card for purchases leaves behind a trail of brand preferences, places of
shopping, etc.

3. Every login to the internet leaves behind an electronic trail enabling website owners and
advertising companies to access users' preferences and choices by tracking them.

4. Employees are under surveillance, as employers routinely use software to access employees'
e-mail and track their movements.

5. Phone call signals of the police are easily tracked by Naxalites, enabling them to learn about
police plans.

6. Source code theft is the most preferred act of the miscreants.

7. Unsolicited e-mails are also a common means of gathering users' personal information.

8. Movement across the web can be tracked by placing cookies and then retrieving them in a way
that allows detailed profiles of users' interests, spending habits and lifestyles to be built.

9. Through hacking, hackers can alter anyone's account at will.

Thus it can easily be seen how much room we are giving miscreants to enhance and simplify
their acts, and it must be asked how safe it really is to avail of the services of the digital world.
Data protection under Indian Law
Our Constitution provides the law relating to privacy within the scope of Article 21, but its
interpretation has been found insufficient to provide adequate protection to data. In the year 2000,
an effort was made by our legislature to bring privacy issues relating to computer systems within
the purview of the IT Act, 2000. This Act contains certain provisions which protect stored data.
In the year 2006, our legislature also introduced a bill known as 'The Personal Data Protection
Bill' to provide protection to the personal information of the individual.

Under IT Act, 2000

Section 43
This section provides protection against unauthorized access to a computer system by imposing a
heavy penalty of up to one crore rupees. Unauthorized downloading, extraction and copying of
data are covered by the same penalty. Clause (c) of this section imposes a penalty for the
unauthorized introduction of computer viruses or contaminants. Clause (g) provides penalties for
assisting unauthorized access.

Section 65
This section deals with computer source code. Anyone who knowingly or intentionally conceals,
destroys or alters it, or causes another to do so, is liable to imprisonment or a fine of up to two
lakh rupees. Protection has thus been provided against tampering with computer source
documents.

Section 66
Protection against hacking has been provided under this section. As per this section, hacking is
defined as any act done with the intention of causing wrongful loss or damage to any person, or
with the knowledge that wrongful loss or damage will be caused to any person, whereby
information residing in a computer resource is destroyed, deleted or altered, or its value and
utility is diminished. This section imposes on the hacker a penalty of imprisonment of up to three
years, or a fine of up to two lakh rupees, or both.
Section 70
This section provides protection to data stored in protected systems. Protected systems are those
computers, computer systems or computer networks which the appropriate government, by
notification in the Official Gazette, has declared to be protected systems. Any access, or attempt
to secure access, to such a system in contravention of the provisions of this section makes the
person liable to punishment with imprisonment which may extend to ten years and also to a fine.

Section 72

This section provides protection against breach of confidentiality and privacy of data. As per this
section, any person upon whom powers have been conferred under the IT Act and allied rules to
secure access to any electronic record, book, register, correspondence, information, document or
other material, and who discloses it to any other person, shall be punished with imprisonment
which may extend to two years, or with a fine which may extend to one lakh rupees, or both.

Law of contract

These days, companies rely on contract law as a useful means to protect their information.
Corporate houses enter into several agreements with other companies, clients, agencies or
partners to keep their information secured to the extent they want to secure it. Agreements such
as 'non-circumvention and non-disclosure' agreements, 'user license' agreements, 'referral
partner' agreements, etc. are entered into by them, which contain confidentiality and privacy
clauses as well as arbitration clauses for resolving any dispute that arises. These agreements help
them in the smooth running of business. BPO companies have implemented processes like the
BS 7799 and ISO 17799 standards of information security management, which restrict the
quantity of data that can be made available to employees of BPOs and call centers.

Indian Penal code


It imposes punishment for the wrongs that were anticipated until the last decade, but it fails to
provide punishment for crimes related to data, which have become the order of the day.

The Personal Data Protection Bill, 2006

Following in the footsteps of foreign laws, this bill was introduced in the Rajya Sabha on
8 December 2006. The purpose of this bill is to provide protection for the personal data and
information of an individual collected for a particular purpose by one organization, to prevent its
usage by another organization for commercial or other purposes, to entitle the individual to claim
compensation or damages due to disclosure of personal data or information without his consent,
and to deal with matters connected with the Act or incidental to it. The provisions contained in
this bill relate to the nature of data to be obtained for a specific purpose and the quantum of data
to be obtained for that purpose. Data controllers have been proposed to be appointed to look into
matters relating to violations of the proposed Act.

10.1 Data protection under foreign law
Many countries other than India treat data protection as a separate discipline. They have
well-framed and established laws exclusively for data protection.

U.K Law

The U.K. Parliament framed its Data Protection Act (DPA) in the year 1984, which was
thereafter repealed and replaced by the DPA 1998. This Act was instituted for the purpose of
providing protection and privacy for the personal data of individuals in the UK. The Act covers
data which can be used to identify a living person. This includes names, birthdays, anniversary
dates, addresses, telephone numbers, fax numbers, e-mail addresses, etc. It applies only to data
which is held, or intended to be held, on computers or other equipment operating automatically
in response to instructions given for that purpose, or held in a relevant filing system.

As per the Act, the persons and organizations which store personal data must register with the
Information Commissioner, the government official appointed to oversee the Act. The Act puts
restrictions on the collection of data. Personal data can be obtained only for one or
more specified and lawful purposes, and shall not be further processed in any manner
incompatible with that purpose or purposes. The personal data shall be adequate, relevant, and
not excessive in relation to the purpose or purposes for which they are processed.

U.S Law
Though both the U.S. and the European Union focus on enhancing the privacy protection of their
citizens, the U.S. takes a different approach to privacy from that of the European Union. The US
has adopted a sectoral approach that relies on a mix of legislation, regulation and self-regulation.
In the US, data are grouped into several classes on the basis of their utility and importance, and a
different degree of protection is accordingly awarded to the different classes of data.

Several Acts were also passed in order to stabilize the data protection laws in the United States.
The Privacy Act was passed in the year 1974, establishing standards for when it is reasonable,
ethical and justifiable for government agencies to compare data in different databases. The
Electronic Communications Privacy Act was later passed to restrict the interception of electronic
communications and to prohibit access to stored data without the consent of the user or the
communication service.

Further the Children's Online Privacy Protection Act was passed by the US Congress in October
1998 requiring website operators to obtain parental consent before obtaining personal
information from children, and a Consumer Internet Privacy Protection Act required an ISP to
get permission of the subscriber before disclosing his personal information to third parties.

However, the existing federal laws do not suffice to cover the broad range of issues and
circumstances that make the new digital environment a threat to personal privacy. Furthermore,
the US Government has been reluctant to impose a regulatory burden on Electronic Commerce
activities that could hamper its development and has looked for an answer in self-regulation.

Conclusion

Comparing Indian law with the laws of developed countries shows what Indian law actually
requires. Data are not all of the same utility and importance; they vary from one another on the
basis of utility. So we need to frame separate categories of data with different utility values, as
the U.S. has done. Moreover, the provisions of the IT Act deal basically with the extraction of
data, destruction of data, etc. Companies cannot get full protection of data through these
provisions, which ultimately forces them to enter into separate private contracts to keep their
data secure. These contracts have the same enforceability as general contracts.

Despite the efforts being made to have a data protection law as a separate discipline, our
legislature has left some lacunae in framing the bill of 2006. The bill has been drafted wholly on
the structure of the UK Data Protection Act, whereas today's requirement is a comprehensive
Act. Thus it can be suggested that drafting based on US data protection laws would be better
suited to today's requirements.

Data protection being one of the most discussed topics of the modern era, legislatures are
required to frame a more stringent and comprehensive law for the protection of data, which
requires a qualitative effort rather than a quantitative one.

CHAPTER-11

11. The EU General Data Protection Regulation


The aim of the GDPR is to protect all EU citizens from privacy and data breaches in an
increasingly data-driven world that is vastly different from the time in which the 1995 directive
was established. Although the key principles of data privacy still hold true to the previous
directive, many changes have been proposed to the regulatory policies; the key points of the
GDPR as well as information on the impacts it will have on business can be found below.

Increased Territorial Scope (extra-territorial applicability)


Arguably the biggest change to the regulatory landscape of data privacy comes with the extended
jurisdiction of the GDPR, as it applies to all companies processing the personal data of data
subjects residing in the Union, regardless of the company’s location. Previously, territorial
applicability of the directive was ambiguous and referred to data processing 'in the context of an
establishment'. This topic has arisen in a number of high profile court cases. The GDPR makes its
applicability very clear - it will apply to the processing of personal data by controllers and
processors in the EU, regardless of whether the processing takes place in the EU or not. The
GDPR will also apply to the processing of personal data of data subjects in the EU by a
controller or processor not established in the EU, where the activities relate to: offering goods or
services to EU citizens (irrespective of whether payment is required) and the monitoring of
behaviour that takes place within the EU. Non-EU businesses processing the data of EU citizens
will also have to appoint a representative in the EU.

Penalties
Under GDPR organizations in breach of GDPR can be fined up to 4% of annual global
turnover or €20 Million (whichever is greater). This is the maximum fine that can be imposed for
the most serious infringements, e.g. not having sufficient customer consent to process data or
violating the core of Privacy by Design concepts. There is a tiered approach to fines e.g. a
company can be fined 2% for not having their records in order (article 28), not notifying the
supervising authority and data subject about a breach or not conducting impact assessment. It is
important to note that these rules apply to both controllers and processors -- meaning 'clouds' will
not be exempt from GDPR enforcement.
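The 'whichever is greater' cap can be made concrete with a short calculation; the turnover figures below are arbitrary examples, and the 2% tier is paired here with the EUR 10 million floor that applies to the lower tier of fines.

# Sketch: the GDPR fine caps, taking the greater of a percentage of annual
# global turnover and a fixed floor. Turnover figures are arbitrary examples.

def max_fine(annual_global_turnover_eur: float, upper_tier: bool = True) -> float:
    pct, floor = (0.04, 20_000_000) if upper_tier else (0.02, 10_000_000)
    return max(pct * annual_global_turnover_eur, floor)

print(max_fine(1_000_000_000))   # 40,000,000.0 -> 4% exceeds the EUR 20M floor
print(max_fine(100_000_000))     # 20,000,000   -> the EUR 20M floor applies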

Consent
The conditions for consent have been strengthened, and companies will no longer be able to
use long illegible terms and conditions full of legalese, as the request for consent must be given
in an intelligible and easily accessible form, with the purpose for data processing attached to
that consent. Consent must be clear and distinguishable from other matters and provided in an
intelligible and easily accessible form, using clear and plain language. It must be as easy to
withdraw consent as it is to give it.

Data Subject Rights

Breach Notification
Under the GDPR, breach notification will become mandatory in all member states where a data
breach is likely to “result in a risk for the rights and freedoms of individuals”.
This must be done within 72 hours of first having become aware of the breach. Data processors
will also be required to notify their customers, the controllers, “without undue delay” after first
becoming aware of a data breach.
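One simple way to operationalize the 72-hour window is to compute the notification deadline from the moment of awareness, as in the sketch below; timezone handling beyond UTC, and the recording of reasons for any delay, are left out of this assumption-laden example.

# Sketch: compute the supervisory-authority notification deadline, 72 hours
# from the moment the controller becomes aware of the breach.
from datetime import datetime, timedelta, timezone

def notification_deadline(aware_at: datetime) -> datetime:
    """Return the point in time 72 hours after awareness of the breach."""
    return aware_at + timedelta(hours=72)

aware = datetime(2018, 5, 25, 9, 0, tzinfo=timezone.utc)
print(notification_deadline(aware))  # 2018-05-28 09:00:00+00:00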

Right to Access

Part of the expanded rights of data subjects outlined by the GDPR is the right for data subjects to
obtain from the data controller confirmation as to whether or not personal data concerning them
is being processed, where and for what purpose. Further, the controller shall provide a copy of
the personal data, free of charge, in an electronic format. This change is a dramatic shift to data
transparency and empowerment of data subjects.

Right to be Forgotten

Also known as Data Erasure, the right to be forgotten entitles the data subject to have the data
controller erase his/her personal data, cease further dissemination of the data, and potentially
have third parties halt processing of the data. The conditions for erasure, as outlined in article 17,
include the data no longer being relevant to the original purposes for processing, or a data subject
withdrawing consent. It should also be noted that this right requires controllers to compare the
subjects' rights to "the public interest in the availability of the data" when considering such
requests.

Data Portability
GDPR introduces data portability - the right for a data subject to receive the personal data
concerning them, which they have previously provided, in a 'commonly used and machine
readable format', and the right to transmit that data to another controller.
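A 'commonly used and machine readable format' is generally understood to mean something like JSON, CSV or XML. The sketch below shows a minimal JSON export of the data a subject has provided; the record layout is an assumption made for the example.

# Sketch: export the personal data a subject has provided in a machine-readable
# format (JSON here), ready for transmission to another controller.
# The record structure is an illustrative assumption.
import json

def export_subject_data(subject_id: str, records: list) -> str:
    """Serialize the subject's provided data as a JSON document."""
    return json.dumps({"subject_id": subject_id, "records": records}, indent=2)

print(export_subject_data("u-001", [{"email": "user@example.com", "newsletter": True}]))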

Privacy by Design
Privacy by design as a concept has existed for years now, but it is only just becoming part of a
legal requirement with the GDPR. At its core, privacy by design calls for the inclusion of data
protection from the onset of the designing of systems, rather than as an addition. More specifically -
'The controller shall implement appropriate technical and organisational measures in an
effective way in order to meet the requirements of this Regulation and protect the rights of data
subjects'. Article 23 calls for controllers to hold and process only the data absolutely necessary
for the completion of their duties (data minimization), as well as limiting access to personal
data to those needed to carry out the processing.
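Data minimization of this kind can be enforced in code by whitelisting the fields a given processing purpose actually needs, as in the assumed example below; the purposes and permitted fields shown are illustrative, not prescribed by the Regulation.

# Sketch: keep only the fields an assumed processing purpose actually requires
# (data minimization); everything else is dropped before storage or transfer.

ALLOWED_FIELDS = {
    "newsletter": {"email"},                                # assumed purposes
    "shipping": {"name", "street", "city", "postal_code"},  # and field sets
}

def minimize(record: dict, purpose: str) -> dict:
    """Return only the fields permitted for the given purpose."""
    allowed = ALLOWED_FIELDS.get(purpose, set())
    return {k: v for k, v in record.items() if k in allowed}

print(minimize({"name": "A", "email": "a@example.com", "dob": "1990-01-01"}, "newsletter"))
# {'email': 'a@example.com'}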

Data Protection Officers


Currently, controllers are required to notify their data processing activities with local DPAs,
which, for multinationals, can be a bureaucratic nightmare with most Member States having
different notification requirements. Under GDPR it will not be necessary to submit notifications /
registrations to each local DPA of data processing activities, nor will it be a requirement to notify
/ obtain approval for transfers based on the Model Contract Clauses (MCCs). Instead, there will
be internal record keeping requirements, as further explained below, and DPO appointment will
be mandatory only for those controllers and processors whose core activities consist of
processing operations which require regular and systematic monitoring of data subjects on a
large scale or of special categories of data or data relating to criminal convictions and offences.
Importantly, the DPO:
 Must be appointed on the basis of professional qualities and, in particular, expert
knowledge on data protection law and practices
 May be a staff member or an external service provider
 Contact details must be provided to the relevant DPA
 Must be provided with appropriate resources to carry out their tasks and maintain their
expert knowledge
 Must report directly to the highest level of management
 Must not carry out any other tasks that could result in a conflict of interest.

11.1 GDPR Timeline of Events


GDPR events from proposal, amendment, approval, adoption to enforcement.

Previous Legislation

 1995 – October 24th, Data Protection Directive 95/46/EC created to regulate the
processing of personal data

Legislative Proposals

 2012 – January 25th, initial proposal for updated data protection regulation by the
European Commission

 2014 – March 12th, the European Parliament approved its own version of the regulation
in its first reading

 2015 – June 15th, the Council of the European Union approved its version in its first
reading, known as the general approach, allowing the regulation to pass into the final
stage of legislation known as the “Trilogue”

Trilogue Timeline

 2015 – June 24th, meeting covering:


o Package approach: Objective of Luxembourg Presidency for the proposed
directive
o Agreement on the overall roadmap for Trilogue negotiations
o General method and approach for delegated and implementing acts

 2015 – July 14th, meeting covering:


o Territorial scope (Article 3), Representative (Article 25)
o International transfers (Chapter V), related definitions

 2015 – September 16-17th, meeting covering:


o Data protection principles (Chapter II)
o Data subject rights (Chapter III)
o Controller and Processor (Chapter IV)

 2015 – September 29-30th, meeting covering:


o Data protection principles (Chapter II)
o Data subjects rights (Chapter III)
o Controller and Processor (Chapter IV)

 2015 – October 15th, Trilogue covering:


o Independent Supervisory Authorities (Chapter VI)
o Cooperation and consistency (Chapter VII)
o Remedies, liability and sanctions (Chapter VIII)

 2015 – October 28th, meeting covering:


o Independent Supervisory Authorities (Chapter VI)
o Cooperation and consistency (Chapter VII)
o Remedies, liability and sanctions (Chapter VIII)

 2015 – November 11-12th, meeting covering:


o Objectives and material scope (Chapter I)
o Specific regimes (Chapter IX)

 2015 – November 24th, meeting covering:


o All open issues from Chapter I to IX

 2015 – December 10th, meeting covering:


o Delegated and Implementing Acts (Chapter X)
o Final provisions (Chapter XI)
o Remaining issues

 2015 – December 15th, meeting covering:


o Delegated and Implementing Acts (Chapter X)
o Final provisions (Chapter XI)
o Remaining issues

Approval & Adoption

 2015 – December 15th, the Parliament and Council came to an agreement, with the text to
be final as of the official signing in early January 2016.

 2016
o April 8th - Adopted by the Council of the European Union

o April 14th - Adoption by the European Parliament


o May - Regulation will enter into force 20 days after it is published in the EU
Official Journal

Enforcement
 2018 - May - Following a 2 year post-adoption grace period, the GDPR will become fully
enforceable throughout the European Union.

11.2 SOME FAQ FOR GDPR

When is the GDPR coming into effect?

The GDPR was approved and adopted by the EU Parliament in April 2016. The regulation takes
effect after a two-year transition period and, unlike a Directive, it does not require any enabling
legislation to be passed by national governments; it will therefore be in force in May 2018.

In light of an uncertain 'Brexit' - I represent a data controller in the UK and want to know if
I should still continue with GDPR planning and preparation?

If you process data about individuals in the context of selling goods or services to citizens in
other EU countries then you will need to comply with the GDPR, irrespective of whether or not
the UK retains the GDPR post-Brexit. If your activities are limited to the UK, then the
position (after the initial exit period) is much less clear. The UK Government has indicated it
will implement equivalent or alternative legal mechanisms. Our expectation is that any such
legislation will largely follow the GDPR, given the support previously provided to the GDPR by
the ICO and UK Government as an effective privacy standard, together with the fact that the
GDPR provides a clear baseline against which UK business can seek continued access to the EU
digital market. (Ref: http://www.lexology.com/library/detail.aspx?g=07a6d19f-19ae-4648-9f69-
44ea289726a0)

Who does the GDPR affect?


The GDPR not only applies to organisations located within the EU but it will also apply to
organisations located outside of the EU if they offer goods or services to, or monitor the
behaviour of, EU data subjects. It applies to all companies processing and holding the personal
data of data subjects residing in the European Union, regardless of the company’s location.

What are the penalties for non-compliance?


Organizations can be fined up to 4% of annual global turnover for breaching GDPR or €20
Million. This is the maximum fine that can be imposed for the most serious infringements, e.g. not
having sufficient customer consent to process data or violating the core of Privacy by Design
concepts. There is a tiered approach to fines e.g. a company can be fined 2% for not having their
records in order (article 28), not notifying the supervising authority and data subject about a
breach or not conducting impact assessment. It is important to note that these rules apply to both
controllers and processors -- meaning 'clouds' will not be exempt from GDPR enforcement.

What constitutes personal data?


Any information related to a natural person or ‘Data Subject’ that can be used to directly or
indirectly identify the person. It can be anything from a name, a photo, an email address, bank
details, posts on social networking websites, medical information, or a computer IP address.

What is the difference between a data processor and a data controller?


A controller is the entity that determines the purposes, conditions and means of the processing of
personal data, while the processor is an entity which processes personal data on behalf of the
controller.

Do data processors need 'explicit' or 'unambiguous' data subject consent - and what is the
difference?
The conditions for consent have been strengthened, as companies will no longer be able to utilise
long illegible terms and conditions full of legalese, as the request for consent must be given in an
intelligible and easily accessible form, with the purpose for data processing attached to
that consent - meaning it must be unambiguous. Consent must be clear and distinguishable from
other matters and provided in an intelligible and easily accessible form, using clear and plain
language. It must be as easy to withdraw consent as it is to give it. Explicit consent is
required only for processing sensitive personal data - in this context, nothing short of “opt in”
will suffice. However, for non-sensitive data, “unambiguous” consent will suffice.

What about Data Subjects under the age of 16?

Parental consent will be required to process the personal data of children under the age of 16 for
online services; member states may legislate for a lower age of consent but this will not be below
the age of 13.

What is the difference between a regulation and a directive?


A regulation is a binding legislative act. It must be applied in its entirety across the EU, while a
directive is a legislative act that sets out a goal that all EU countries must achieve. However, it is
up to the individual countries to decide how. It is important to note that the GDPR is a
regulation, in contrast to the previous legislation, which is a directive.

Does my business need to appoint a Data Protection Officer (DPO)?


DPOs must be appointed in the case of: (a) public authorities, (b) organizations that engage in
large scale systematic monitoring, or (c) organizations that engage in large scale processing of
sensitive personal data (Art. 37). If your organization doesn’t fall into one of these categories,
then you do not need to appoint a DPO.
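The three triggers in Article 37 can be read as a simple decision rule, sketched below; whether monitoring or processing is 'large scale' is a judgment call, and the boolean inputs here merely assume that judgment has already been made.

# Sketch: the Article 37 triggers for appointing a DPO, as a decision rule.
# Deciding whether processing is "large scale" is assumed to be an input.

def needs_dpo(is_public_authority: bool,
              large_scale_systematic_monitoring: bool,
              large_scale_sensitive_processing: bool) -> bool:
    """True if any of the Article 37 appointment triggers applies."""
    return (is_public_authority
            or large_scale_systematic_monitoring
            or large_scale_sensitive_processing)

print(needs_dpo(False, True, False))  # True: large-scale systematic monitoring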

How does the GDPR affect policy surrounding data breaches?


Proposed regulations surrounding data breaches primarily relate to the notification policies of
companies that have been breached. Data breaches which may pose a risk to individuals must be
notified to the DPA within 72 hours and to affected individuals without undue delay.

Will the GDPR set up a one-stop-shop for data privacy regulation?


The discussions surrounding the one-stop-shop principle are among the most highly debated and
are still unclear, as the standing positions are highly varied. The Commission text contains a
fairly simple and concise ruling in favor of the principle; the Parliament also promotes a lead
DPA but adds more involvement from other concerned DPAs; and the Council's view waters
down the ability of the lead DPA even further.
Current regulations & guidance

• European Directives 95/46/EC (Data Protection) and 2002/58/EC (Electronic Communications)
led to different regulations across EU member states

In the UK we have:
 The Data Protection Act 1998
 Privacy and Electronic Communications (EC Directive) Regulations 2003
 ICO Direct Marketing Guidance – this was issued to clarify ICO’s requirements for
compliance
 Other EU members have their own data protection regulations
 The current UK regulation is ‘light touch’ compared to some other regimes

Future regulation will be covered entirely by the new GDPR, which will take effect after a
two-year transition period and, unlike a Directive, does not require any enabling legislation to be
passed by national governments; it will therefore be in force from May 2018.

Survey and Findings


Survey reveals that ‘Data Security’ in banks continues to be driven by External Threats and
Regulatory Requirements whereas ‘Data Privacy’ is slowly beginning to gain relevance.
Information Security is still seen as an IT-centric function with minimal coordination with the
Fraud Management function. Lack of customer awareness on information security and the threat from
insecure customer end points are key challenges faced by the banks.

When you hear the word privacy, what comes to your mind?

N=100
Bodily privacy (e.g. your physical body) 32.54
Communication privacy (e.g. calls received or dialed through telephone) 48.33
Information privacy (e.g. information exchanged on the Internet) 51.24
Territorial privacy (e.g. your living space, working space) 31.59
All of the above 28.43
Others 0.68

Which of the following information is personal to you that you would NOT like to share?

N=100
Annual household income 53.64
Bank account details 64.63
Credit card number 68.18
Date of birth 5.80
Email address 6.25
Family details 14.86
Full name 2.03
Health and medical history 27.17
Landline number 8.42
Marital status 3.95
Mobile number 13.45
Passport number 64.45
Passwords 88.39
Personal income 62.77
Pictures and videos featuring self 18.67
Physical details - height, weight, eye color 8.47
Postal mailing address 5.51
Religion 2.70
All of the above 3.65
Others 2.00

Does privacy for you change with situation and context, i.e. what information you share with whom may
be different at different time and context?

N=100
Yes 52.79
No 17.83
May be 29.38
With whom would you share the following information?

Friends Family Relatives Society Banks Government Everybody Nobody


Annual household income 17.30 58.32 15.42 2.17 7.44 21.40 2.54 35.01

Bank Account Details 7.53 47.82 6.83 1.10 24.24 14.14 1.14 43.46

Credit card number 5.02 38.65 3.75 0.91 10.66 5.05 1.31 54.66

Date of birth 31.38 40.03 27.91 7.59 13.54 12.85 54.54 1.67
Email address 32.36 39.69 26.74 6.44 11.85 9.98 51.57 1.81
Family details 35.47 50.46 39.22 6.47 8.17 11.57 32.24 5.54
Full name 20.74 26.45 17.93 5.92 9.75 10.09 68.98 1.17
Health and medical history 26.79 59.66 27.96 2.85 3.99 5.97 14.42 17.94

Landline number 36.45 44.41 36.57 5.63 11.38 8.74 47.34 2.11
Marital status 27.06 34.05 25.14 7.84 9.58 10.02 57.89 2.15
Mobile number 39.54 43.80 34.90 5.57 16.54 12.54 46.31 3.79
Passport number 9.00 34.49 6.85 1.35 5.36 11.22 4.2 55.73
Passwords 2.1 13.38 1.3 0.52 0.80 1.01 1.24 83.97
Personal income 12.1 41.99 9.43 1.43 5.04 6.16 2.9 51.55
Pictures and videos featuring self 44.81 56.53 34.74 3.02 2.12 3.47 23.75 10.09
Physical details- height, weight, eye color 33.97 50.75 29.84 3.74 3.74 6.14 34.85 6.19

Postal mailing address 32.05 41.29 29.46 6.00 15.04 15.46 47.16 2.77

Religion 18.12 25.84 19.16 6.40 8.07 8.94 61.94 3.46
Imagine you are walking through a shopping mall, where you observe a camera capturing the movements
of people in the shops, what would be your reaction? (Choose one which is applicable)

I would not change my actions 49.75


I would try to avoid the camera 27.22
I would never go to the shopping mall again 7.98
If at all a camera captures my movements, I would be curious to know the reasons for capturing the video 14.25
Others 0.80

How much do you agree / disagree with these statements?

N=100
Strongly agree   Agree   Neutral   Disagree   Strongly disagree
Consumers have lost all control over how personal information about them is circulated and used by companies 23.66 52.97 15.94 5.84 1.13
Most businesses handle the personal information they collect about consumers in a proper and confidential way 13.76 44.87 27.37 13.49 1.97
Mobile phones can be privacy invasive 14.42 55.17 21.35 7.75 0.63
Landline phones can interfere with individuals' privacy 14.29 45.4 27.18 11.39 1.00
Websites can hinder privacy by collecting personal information 20.23 50.99 20.60 6.41 0.84
Credit cards can be privacy invasive 18.98 44.93 24.99 9.40 0.93
Phone banking can invade privacy 15.95 47.24 24.71 9.14 1.57
Consider a scenario where you visit a coffee shop which provides a free Wi-Fi connectivity for its
customers. It doesn’t ask for password for connectivity. Would you access the Wi-Fi facility to log-in
your email?

N=100
Definitely would 19.72
Probably would 33.88
Not sure 11.58
Probably not 10.56
Definitely not 24.26

Imagine for checking your results of an entrance exam you went to the institute and saw that the results
have been displayed on a notice board with your name, marks and category (general / OBC / SC)
mentioned.

N=10,402, FC = Feel Comfortable


Always FC   Usually FC   Sometimes FC   Rarely FC   Never FC
How would you feel about your marks being displayed on the notice board? 34.42 41.87 11.53 4.31 7.74
How would you feel about your category (general / OBC / SC) being displayed on the notice board? 35.22 35.53 9.63 5.9 13.05

While traveling in long-distance trains, a reservation chart with details e.g. last name, first name, age,
gender, boarding station, destination, seat number, PNR number for each passenger is displayed on the
platform and the compartment. How would you feel about your information being displayed as in this
scenario?

N=100
Always feel comfortable 36.74
Usually feel comfortable 43.43
Sometimes feel comfortable 8.06
Rarely feel comfortable 4.45
Never feel comfortable 7.33
Section 2: Mobile Privacy

Do you save personal information in your mobile phone?


N=100
Yes 52.49
No 43.56
Don’t remember 3.94

What is the personal information which you don’t mind storing in your mobile phone?

N=100
Business related information (meeting details) 24.68
Credit card number(s) / ATM card number(s) / PIN number(s) 26.20
Information e.g. date of birth, PAN number, ID number, account number 30.51
Password(s) 25.00
Videos, photographs etc. 64.57
All of the above 10.40
Others 2.29

What are the reasons for which you don’t store personal information on your mobile phone?

N=5,925
Worried about phone being stolen / lost 40.08
Concerned about somebody accessing the phone at work or outdoors without permission 38.51
Concerned about somebody accessing the phone at home without permission 24.86
Don’t feel the need 34.16
All of the above 9.91
Others 0.41
Imagine you visited a mobile service provider (e.g. Vodafone) to buy a new mobile connection; they asked you
to fill in a form giving details e.g. name, date of birth, ID proof. Which of the information given below would
you share with them, if the fields are NOT mandatory?

N=100
Alternative address proof 42.46
Another contact number 33.71
Educational qualification 24.56
ID proof 67.18
Permanent address proof 30.12
Photograph(s) 63.51
Proof of place of work 18.19
Parents details 8.33
All of the above 8.23
None of the above 4.00
Others 0.26

How much do you agree / disagree with these statements?

Strongly agree   Agree   Neutral   Disagree
Mobile service providers give reasonable protection for the information they collect 13.52 49.69 20.86 13.06
Mobile service providers can keep a record of and can access the information exchanged through mobile phones 11.40 53.92 22.24 10.61
Phone conversations can be tapped by mobile service providers in national interest 20.64 47.53 21.19 7.90
Mobile service providers can share the customers' information with government organizations when required, even without informing the customers 20.01 47.40 19.90 9.06
Mobile service providers can share private information you provide them with third parties 14.07 33.24 22.04 19.43
Do you use phone banking services to check your balance in the account?

N=100
Yes, it is safe to use                                               15.73
Yes, because I don’t have a choice                                    8.53
No, because I fear information may be leaked through phone tapping   21.11
No, because I am not sure of who is on the other side                33.93
Others                                                               20.69

Would you use phone banking services to transfer money from your account?

N=100
Yes, it is safe to use                                               12.77
Yes, because I don’t have a choice                                    6.89
No, because I fear information may be leaked through phone tapping   22.71
No, because I am not sure of who is on the other side                37.34
Others                                                               20.29

While exchanging information on a mobile phone, what, according to you, is the extent of confidentiality
provided by the mobile service provider for the information being exchanged?

N=100
Very high 11.49
High 37.50
Neutral 24.60
Low 9.77
Very low 1.69
I don’t know 14.96

What do you do before you sell your mobile phone?

N=100
Copy the details and other information from the SIM card and
phone memory                                                         12.64
Copy the information from the SIM card and phone memory,
and then delete the information                                      40.40
Delete all information that is stored in the mobile phone            31.30
Delete only specific details e.g. phone numbers and messages          4.89
Don’t do anything                                                     6.68
Others                                                                4.08

While exchanging information on a land-line phone, what, according to you, is the extent of confidentiality
provided by the land-line service provider for the information being exchanged?

N=100
Very high 8.25
High 37.41
Neutral 26.51
Low 10.98
Very low 1.84
I don’t know 15.01

While walking in a shopping mall, imagine you see somebody taking your picture using a mobile phone.
What would be your reaction?

N=100
No reaction 32.27
I don’t like a stranger taking my picture without my permission 48.94

I don’t like being photographed at all in public places 17.34


Others 1.45

While travelling (i.e. when roaming), mobile service providers use regional languages to present
information e.g. “user busy”, “phone switched off”. For example, if your phone connection is from Delhi and
you are travelling in Mumbai, the messages are presented in Marathi. Would you consider this feature
privacy invasive?

N=100
Strongly agree 10.02
Agree 43.97
Neutral 22.94
Disagree 19.24
Strongly disagree 3.83

Section 3: Credit Cards

Do you have a credit card issued in your name?

N=100
Yes 57.18
No, but I have used them 11.94
No, I don’t use them at all 30.88

Do you lend your credit cards to others?

N= 100
Yes 34.58
No 65.42

To whom do you lend your credit card?

N=100
Children 10.38
Friends 26.33
Parents 53.68
Relatives 3.94
Professional Colleagues 10.24
Spouse 29.23
None 0.88
Others 2.91

Whose (owned by whom) credit card would you also use?

N=100
Children 4.90
Friends 20.47
Parents 43.11
Professional colleagues 2.43
Relatives 5.59
Spouse 19.10
None 8.85
Others 20.02

Whose (owned by whom) credit card would you use?

N=100
Children 7.83
Friends 27.19
Parents 57.15
Professional colleagues 6.44
Relatives 9.15
Spouse 16.50
None 10.17
Others 2.82

What is true for you with respect to using credit cards in today’s world?

N=100
It is unavoidable                                                    20.12
It is handy; use it frequently for various purposes e.g. shopping,
petrol pumps, and grocery shops                                      39.70
Use only for specific tasks e.g. online ticketing                    17.04
Use as a back-up for emergency situations                            22.70
Others                                                                0.44

Imagine that you went to a restaurant for a meal with your friends / family. Which of the following is
true if you pay the bill with your credit card?

N=100
You would give the card to the waiter for making the payment             19.31
If you can go yourself, you would take the card to the cash counter
and get it swiped in front of you                                        39.94
If you cannot go yourself, you would give the card to the waiter
and check the details of the bill carefully                              14.45
You would give it to the waiter only if it is a trustworthy restaurant    9.14
You know it can be misused, but cannot do anything about it               3.83
You would not like to use a credit card to make the payment              13.11
Others                                                                    0.21

Do you think credit cards should display the details e.g. name, phone number, and date of birth on them?

N=100
It should not display any personal information, as it makes
information public                                                    32.75
It should display only relevant details required for identification   44.80
It should display all details as these are required for verification  14.68
It does not bother me                                                   7.36
Others                                                                  0.41

Do you think it is possible for anybody to steal your identity and impersonate you, using your credit card?

N=100
Yes, it is fairly easy 44.52
Yes, but it’s not easy 35.58
No, it’s not possible under any circumstance 14.53
I have never thought about it 5.00
Others 0.37

Imagine you go to withdraw money from an ATM; while you are withdrawing money, you notice other
people peeping in while you enter the PIN. Would you still consider entering the details of your
account in this scenario?

N=100
Definitely would 17.76
Probably would 30.95
Not sure 9.25
Probably not 16.17
Definitely not 23.15
I have no other choice 2.72

Imagine you go to withdraw money from an ATM; would you consider using the ATM center if there are
two machines in the same center and someone else is using the other machine?

N=100
Definitely would 25.78
Probably would 39.09
Not sure 10.74
Probably not 7.98
Definitely not 13.09
I have no other choice 3.32

Section 4: Internet and Online Social Network

Have you ever removed cookies from your browser after using the Internet?

N=100
Often 22.24
Sometimes 33.99
Hardly ever 5.13
Never 23.36
Not familiar with cookies 7.57
Don’t know 7.71

Which of the following email services do you use?

N=100
Gmail 80.18
Hotmail 24.78
Official email 16.16
Yahoo mail 49.66
Do not use any email services 4.77
Others 2.76

Do you exchange personal information, e.g. bank account numbers, passport details through your email?

N=100
Yes, frequently 14.21
Sometimes 26.51
Only in emergency 18.26
No, not at all 38.52
I don’t remember 2.51

Do you save personal information, e.g. bank account numbers, passport details in your email for future
use?

N=100
Yes, frequently 15.36
Sometimes 24.35
Only in emergency 12.97
No, not at all 44.57
I don’t remember 2.75

What are your privacy concerns while exchanging / saving personal information through email services?

N=100
I have no concerns 18.75
I believe that the privacy of my data is maintained 40.70
I am concerned, but I do not have a choice 22.26
I am concerned so I don’t save/exchange 14.68
Don’t know 3.60

Do you read the privacy policy of an e-commerce website e.g. PayPal, eBay, bank websites while creating
an account?

N=100
Yes, I do 34.00
I browse through it 33.91
Never 24.29
Don’t remember 6.99
Others 0.81

Do you read the privacy policy of an email provider while creating an account?

N=100
Yes, I do 34.48
I browse through it 31.20
Never 27.63
Don’t remember 6.29
Others 0.40

Do you have a Facebook account?

N=100
Yes 85.1
No 14.89

What privacy settings do you have for the following information on Facebook? Please provide your
response to the best of your knowledge.

                            Not Shared  Friends  Friends of  Network  Everyone  Customized
                                                  Friends
Age                            13.24     40.64      7.65       2.44     33.14       2.89
Date of Birth                   7.36     44.59      8.68       4.60     31.89       2.88
E-mail ID                       6.49     44.17     10.96       4.91     30.77       2.70
Gender                          2.97     32.49     11.46       5.70     45.95       1.45
Location                        5.77     33.13     12.48       5.70     39.83       3.09
Marital Status                  8.13     32.59     12.33       4.98     39.08       2.89
Name                            4.22     25.28     10.82       4.83     53.13       1.72
Other Profile Information
(e.g. education and work
details)                        5.95     40.34     13.12       6.32     30.37       3.91
Pictures / Photos               7.18     48.95     11.14       5.41     21.02       6.29
Religion                        8.96     32.18     10.96       5.41     40.20       2.28
Videos                         11.01     45.41     10.40       5.27     21.20       6.71

Do you read the permission box that appears when you first access any third party application e.g.
FarmVille, Mafia Wars?

N=100
Yes, I see it but I don’t read it and just allow, otherwise I cannot
access the application                                                  22.22
Yes, I read the permissions the application asks for, but always
“allow” the application                                                 28.44
Yes, I read the permissions the application asks for, and accordingly
decide to “allow” or “not allow” the application                        19.34
I do not remember seeing any such permission box ever                    9.61
No, I will never allow a third-party application to access my
personal information                                                     7.92
No, I don’t use third-party applications                                12.47

When would you use third-party applications on an online social network?

N=100
When a friend recommends an application 40.56
When I see the application on my friend’s news feed / wall 37.66
When online social network recommends an application 18.15
When I randomly stumble on some interesting application 19.01
Others 6.07

Have you connected / inter-linked your various social networking accounts together, e.g. Facebook,
Twitter, YouTube, Buzz, Orkut?

N=100
Yes 46.15
No 44.91
I do not know of any such linking service 8.94

Do you think it is possible for somebody to steal your identity on your social network website
i.e. create a profile with your name, pictures and details?

N=100
Yes, it is possible, but it has never happened to me 60.39
Yes, it is possible, and it has happened to me 24.03
No, it is not possible 10.07
Don’t know 5.51

Section 5: Government initiatives / Legal aspects of privacy

Does the Indian constitution have a provision for privacy of Indian citizens?

N=100
Yes, I know about it                                                34.44
Yes, but I don’t know what it is                                    27.61
Not sure; I assume there is a provision, but I am not aware of it   22.63
I do not know about this kind of a provision                        12.26
No, there is no provision                                            3.06

Do we have privacy laws in India that protect Indian citizens’ privacy?

N=100
Yes 49.01
No 27.34
Not sure 23.65

Are you aware of the Unique Identification Number (UID), a Government of India initiative for
every citizen in India?

N=100
Yes 67.23
No 21.11
Heard about it, but do not know the details 11.66

How much do you agree with these statements?

                                              Strongly agree  Agree  Neutral  Disagree  Strongly disagree
Personal information and biometric data
e.g. fingerprints, iris scan could be
accessible to private corporates through
UID, with whom you would NOT like to share
this information otherwise                         9.55       46.79   25.78    12.69         5.18
Government agencies could have access to
details e.g. banking, land records,
Internet logs, phone records, arms records,
driving license, property records,
insurance, and income tax records, which
could be misused by government agencies          12.18       43.11   26.76    15.07         2.88

CONCLUSION
The concept of privacy in India has not been investigated in detail, and there is a lack of
empirical data on privacy perceptions among Indian citizens. Recent developments in the
Indian scenario, e.g. the privacy bill and the UID project, signal the need for privacy
awareness and understanding among the Indian public. It is also important for policy makers
to understand the sentiment and opinion of the public when framing laws and policies for
citizens of India. Our study focuses on understanding the privacy perceptions and
expectations of Indian citizens. In the first phase, we conducted interviews with 20
participants and 4 focus group discussions with 31 participants in total, to collect
qualitative data on privacy perceptions.

In the second phase, we developed a survey questionnaire to collect quantitative data.
We collected responses from people across India, which can help create an information base
for the public and for policy makers, showing the (perceived) picture of privacy in India
across various platforms, e.g. mobile phones, credit cards, online social networks, and
government-related services.
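
As an illustration of how the percentage distributions reported in the tables above could be
tabulated from the raw survey responses, a minimal sketch is given below. It assumes the
responses are stored in a CSV file with one row per respondent and one column per question;
the file name, column name and answer labels are hypothetical placeholders, not the actual
data files used in this study.

# Minimal sketch (illustrative only): tabulating the percentage of respondents
# choosing each answer option for one survey question. The file name, column
# name and answer labels are hypothetical placeholders.
import csv
from collections import Counter

def answer_distribution(csv_path, question_column):
    """Return {answer option: percentage of respondents} for one question."""
    counts = Counter()
    with open(csv_path, newline="", encoding="utf-8") as f:
        for row in csv.DictReader(f):
            answer = row.get(question_column, "").strip()
            if answer:  # skip blank / missing responses
                counts[answer] += 1
    total = sum(counts.values())
    return {option: round(100.0 * n / total, 2) for option, n in counts.items()}

# Hypothetical usage:
# print(answer_distribution("survey_responses.csv", "wifi_login_email"))
# e.g. {"Definitely would": 19.72, "Probably would": 33.88, ...}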

Key takeaways from this research work are stated below:

 Citizens have misinformed mental models of the privacy situation; e.g. a portion of the
participants believed that there is a law which protects their privacy, whereas India has no
dedicated privacy law.

 Most participants considered passwords to be the most protected Personally
Identifiable Information (PII), followed by financial information (bank and credit
card details). In comparison, religion, mobile phone number, and health-related
information were rated as less protected PII.

 Mobile phones are becoming the next destination for storing private information.
Participants stored personal information such as passwords, credit card numbers,
Permanent Account Number (PAN), and PINs. For the rest, privacy seems to be the
primary concern for not storing personal information on their mobile phones.

 About 5% of the survey participants tend to accept friend requests from strangers, or
from people whom they don’t know but with whom they have common friends. This behavior
seems to be the same even with third-party applications.

 About 80% of the survey respondents were aware of the issue of identity theft through
credit cards.

 About 65% of the survey respondents felt comfortable using an ATM center with more
than one machine in it.

We are in the process of writing a more academic-style paper on the reasons for, and the
implications of, these results.

People are increasingly making their personal information publicly available. Today
there is an unprecedented amount of personal data held by the government and private
sector players. We need to understand the importance of this data, and India should
develop stringent laws for data protection.


