Security Ethics: Principled Decision-Making in Hard Cases

Andrew A. Adams

Security always involves making judgements in which the possibilities of harm to someone's interests are assessed with respect to possible actions that can be taken to reduce the likelihood of that harm occurring, and/or to reduce the intensity of the harm when it occurs. Where the beneficiary of the reduction
in risk is also the party footing the bill (in direct terms such as paying for
the installation of locks or the wages of security guards, or indirect in terms
of reduced opportunities), a fairly standard approach to cost–benefit analy-
ses can be deployed. As other chapters in this volume discuss, though, it is
rarely that simple and security decisions very often place burdens on those
who have no voice in the decision-making processes. In this chapter, we first
take a step back from questions of security per se and present an abstract philo-
sophical framework called ‘ethics’ which provides tools for designing systems
of decision-making to take into account a broader range of goals than just
improving the immediate security of the decision-maker. This framework is
then instantiated with two sets of real-world security issues. These are used to
bring the principles to life using real-world examples (or thought experiments
drawn from one or more real-world examples but simplified and sharpened to
bring out the ethical issues in sharp relief) and demonstrate not only the com-
plexity of the ethical dilemmas posed in security, but also the benefits of using
ethical analysis approaches in making security policy decisions, and in par-
ticular from having ethical principles which inform both policy and practical
security decisions.

Introduction to ethics

‘Security’ is a slippery concept to pin down. In many languages, there is no distinction between security and safety and even in English there is substantial
overlap between the terms. ‘Ethics’ can be a similarly difficult concept to pin down. It is formally a term used in philosophy to describe a range of topics, including:

• Meta-ethics: The study of why it is important to have ethical theories of behaviour;
• Political theory: How the existence of the state and of laws is justified;
• Normative theory: What general rules can be logically justified to help analyse specific circumstances and judge the appropriate course of action.

Meta-ethics and security


Many of the other chapters in this volume present an analysis and critique of a
viewpoint on security, and often on the issues of the transfer of risk, or even the
direct application of harm, by those with the power to decide security policy
and procedures onto those without power. These critiques are the meta-ethics
of security at the sharp end, considering real-world politics, power and social
structures.
On a more abstract note, meta-ethical approaches allow us to step away from
the specifics of types of harm and types of security responses and consider the
measures by which we evaluate even the very concepts of harm and security.
Is it justifiable to consider a delusional state of mind a direct harm to the indi-
vidual who suffers from it, or must that delusion have a measurable and distinct
negative effect on their lives (or those of others) in order to be regarded as
harmful and thus potentially subject to a security intervention?
Can a financial value be placed on the life of a human being? If so, can this be
proactively decided and used to justify spending, or not, on security (or safety)
measures? If not, can any recompense be sought for the loss of a life by the
dependants of someone killed through negligence?
What is the value of human dignity, and who decides that an action required
to be performed, or allowed to be performed, is an indignity, and to whom? Are
these decisions the province of the subject of such activity, the object of the
activity, the community at large, the state or the courts?
If there is such a thing as a professional attitude to security, then this pro-
fessionalism requires a meta-ethics as part of its definition, as suggested for all
professions by Freedman (1978).

Political theory and security


Once we move beyond a dog-eat-dog world of ‘might makes right’, various justifications have been offered for the creation of societies which involuntarily include individuals. Just by being born in a specific place, one becomes
subject to its laws. Sometimes those laws specifically exclude the right to escape
from those laws and move to another jurisdiction, assuming there is another
jurisdiction to which one can move. Religious doctrines have variously claimed
the authority for making and enforcing laws as dictated by a God or Gods,
through holy writings, direct revelations, ordination as a member of a priestly
class or by birth (as a member of a privileged caste or as a monarch with the
divine right to rule).
Western philosophical approaches provide various justifications for the exis-
tence of laws, and states to support them, grounded in observation of human
nature:

• The necessity for avoiding the chaos of ‘the war of all against all’ (Hobbes,
1660);
• The benefits of cooperation over unlimited competition for access to limited
resources (Mill, 1869);
• Striving to create justice by providing rules for action and penalties for
transgressions of those rules (Rawls, 1971).

Security is often the justification offered for infringing on liberty, and the
legitimacy of security actions, whether pre-emptive, proactive or reactive, is ultimately backed by the state, whether carried out directly by agents
of the state (police, courts, other law enforcement) or authorized by the state
(private security guards, CCTV deployment, trespass laws). In considering eth-
ical questions in security, the source (philosophically and in reality) of the
legitimacy of security activity needs to be clearly understood, otherwise security seems capricious and arbitrary, undermining its goal of willing compliance by the vast majority.

Normative theory and security


Even once questions of the meta-ethical framework and the legitimacy of
authority are clearly answered (and they rarely are clear-cut), questions remain
about how to analyse specific ethical problems. In particular, when performing
cost–benefit analyses what are the costs and benefits that must be measured,
how can one type of cost be translated into an exchangeable value with
another, and how can costs and benefits be balanced?
If we follow Kant (1998), then pure reason must be used to inform our analyses of situations, in terms of one's duty, expressed through the categorical imperative: one must act only as one would require others to act. In particular, we must balance the interests of others against our own, since we would expect others to do likewise were the situations reversed.
Consequentialists, and particularly utilitarian consequentialists such as Singer (1982), hold that we must strive to achieve the best overall outcome (‘best’ being defined not as best just for the person making the decision but as best in overall utility: the greatest good for the greatest number).
An Aristotelian position is that the right course of action is dictated by the effort to achieve excellence of character and lead ‘a good life’ (as judged by others, not simply ‘the good life’ expressed as the individual pleasure of the hedonist).

Ethical questions in security

Large amounts of security involve drawing lines between permitted and forbid-
den actions, often on the basis of a categorization of both the actions and the
actor. Physical security, for example, often uses rules about who (actor) may
enter (action) certain physical spaces. The operational concerns of security pol-
icy involve minimizing risk while still allowing system liveness (a system so
completely ‘secure’ that it is unusable is as useless as a rail network that is com-
pletely ‘safe’ by virtue of lack of train movement (Rushby, 2001)). The holders of power in the system, who include both those setting policies and those implementing them, too often consider only the benefits inside the system when
setting/implementing policies. Good security engineering should prevent them
from over-securing assets such that their utility to the system is diminished
(a less severe case than the extremes of the dead system described above), but
the resulting policy may have significant problems when viewed from outside
the system, in terms of questions of fairness and of transferred costs.
Consider an abstract example: in a population of 1,000 people, say there are 10 people with a property A, while the other 990 people do not have property A, and that this property is easily identifiable, entirely congenital and irreversible. Say that one of the ten people with property A (known to exist, but of unknown identity) will, if allowed access to the system, cause damage of value X; equivalently, each person with property A carries a 10% chance of being the bad actor. A system-internal cost–benefit analysis of allowing access to the system by the ten people with property A will turn on whether the loss of value produced by barring the nine non-damaging people with property A is greater or less than X. From outside the system, however, those who have property A but will not cause damage are being collectively deprived of access to the system simply for sharing property A with the unidentified bad actor. They have lost all potential benefit from access to the system. To take this a step further, if all systems which might provide the appropriate benefits to people with property A follow the same logic, then those nine innocent people with property A are denied any possibility of that benefit simply on the grounds of sharing a congenital property with a bad actor.
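The shape of the internal calculation, and the cost it leaves off the ledger, can be made concrete with a minimal sketch. All of the concrete numbers below (the damage X, the value each admitted person brings to the system, and the value of access to each individual) are hypothetical placeholders, not values derived from the example above:

```python
# Minimal sketch of the system-internal cost-benefit analysis described
# above. Every concrete number is a hypothetical placeholder.

X = 10_000                 # damage the one bad actor causes if admitted
value_per_person = 800     # value each admitted person brings the system
n_with_A = 10              # people with property A; exactly one is bad

# Internal view: compare admitting all ten against barring all ten.
value_if_admitted = n_with_A * value_per_person - X
value_if_barred = 0        # the system simply forgoes their contributions

decision = "admit" if value_if_admitted > value_if_barred else "bar"
print(f"internal analysis: {decision} all {n_with_A}")

# External view: if barred, the nine innocent people each lose whatever
# access was worth to them personally; this never enters the ledger above.
access_value_to_individual = 500   # hypothetical
external_cost = (n_with_A - 1) * access_value_to_individual
print(f"cost borne by the nine innocents if barred: {external_cost}")
```

Whichever way the internal comparison goes, the external line never enters it: that is precisely the asymmetry the ethical analysis is meant to expose.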
Only by considering broader societal concerns alongside internal cost–benefit
analyses can appropriate security policies be deemed ethical. While such ethical
considerations are rife in many areas of security, in this chapter we will use two
examples of well-known issues in security ethics to illustrate both the problems that exist and suitable ways of using the tools of ethics to analyse the problems.
The two questions that occupy us are:

• Poachers and gamekeepers: When someone has been previously found guilty
of violating security rules, should they be barred from acting as part of
the security apparatus; or is it foolish to ignore the knowledge they have
of the mindset and methods of their (hopefully former) peers?
• Knowledge, disclosure and security by obscurity: If someone discovers a
security flaw in a system, whom should they tell about this flaw, when and
in what detail?

Poachers and gamekeepers

There is little evidence that skilled poachers were favoured recruits to become gamekeepers (in fact, while the policing aspect of gamekeeping might well benefit from knowledge of poaching techniques, the larger part of the job, managing game sources, is a totally separate skill base (Munsche, 1981)). Despite this, the phrase ‘poacher turned gamekeeper’ and its variants are often used as shorthand for the recruitment or co-option of bad actors into the security apparatus. There are arguments both for and against this practice, on both the ethical and the practical sides of security systems design.
From a practical point of view, the question is whether the risks of employing someone with a history of law-breaking (and therefore, it is assumed, a higher risk of doing so in future) are outweighed by the benefits of access to
their experience. In different areas of security, there are different practices and
often different justifications. It is also important to consider that from a security
point of view, all members of staff of an organization are part of the security
of the organization, not just those explicitly employed as security personnel,
and in fact many staff in low-status positions (often the only kinds of jobs that former convicts are considered for) end up with significant opportunities for wrongdoing beyond those of other classes of staff. Cleaning staff, for example, are
often employed to work outside the normal operating hours of an organiza-
tion, may well be working alone and have broad physical access to premises.
From that point of view, perhaps, it is reasonable that no firm should hire former convicts for any position. However, there is significant criminology
research that shows that far from being congenitally criminal, many end up as
criminals because of a lack of (apparent, to them) other opportunities and/or
from being involved in a social circle within which criminal activity is the norm
rather than the exception (Clinard et al., 2010). Excluding former convicts from
all or most legitimate employment, from that point of view, simply creates a
criminal class with neither opportunity nor incentive to reform. Indeed, recent
statistical analyses from researchers at Carnegie Mellon University have shown that criminals who have not offended for five years after the end of incarcera-
tion pose only the same risk of future criminal activity as the general population
(Blumstein and Nakamura, 2009). In some countries, conviction of a crime (or perhaps of a crime above a certain level of seriousness) will permanently stain someone's character and relegate them in some form or other to second-class citizenship. In the United States, for example, those convicted of felonies lose the right to vote not only during their period of incarceration but for life. Loss of the franchise has historically been a common penalty in many places, though not a universal one; at the time of writing it is being challenged in EU (European Union) countries as a violation of human rights, and the United States is unusual for a democracy in permanently removing the franchise. In other
countries, particularly many European ones, criminal convictions, even for fairly serious crimes, have a time limit (usually relative to the seriousness of the crime) after which they may no longer be considered for most purposes. This has created a prob-
lem recently with local newspapers digitizing their archives and making them
available on the Internet, since those newspapers frequently report on local
criminal case verdicts, providing a permanent and universally visible record of
conviction in violation of the spirit of some laws on spent convictions.
Most of these laws on discharging of criminal status do not apply to law
enforcement positions, where a lifetime clean record may be required. There
are further debates about the practical and ethical issues of employing those with prior convictions in various other security roles.

Law enforcement
In some places, recruits to law enforcement positions may not have a serious
criminal record. In California in the United States, for example, ‘Peace Officer’ recruits cannot have a felony conviction (Commission on Peace Officer Standards and Training (California), n.d.). This often extends to prison and parole
officers as well as police and other law enforcement agencies. This rule may
or may not extend to people who work in supporting roles – anything from a
cleaner to a scientific evidence processor. Other countries have less strict rules
and in the United Kingdom, for example, the Metropolitan Police Service (the
largest in the country, covering most of London) states that ‘[a] conviction or caution does not necessarily bar you from joining the MPS. A candidate's age at the time of offence, the number of years that have elapsed (normally five years must have elapsed for recordable offences) and the nature of the offence is taken into account before a decision is made’ (Metropolitan Police, n.d.). Freedom of information requests to police forces in England and Wales in 2011 revealed that over 900 serving officers had criminal records,
mostly for driving offences but including some more serious and even violent
offences.
The arguments given for banning or severely restricting those with prior
convictions from becoming law enforcement officers include the issue of per-
sonal character and authority, the nature of policing (which in general requires
an acceptance of its legitimacy by the populace) and the opportunities for
wrongdoing afforded to law enforcement officials. A ban on prior offenders
being allowed to take up support roles shares some of these arguments. These
include the fact that while support personnel do not have powers such as the
authority to arrest, they may have the opportunity to interfere with investi-
gations or evidence or to smuggle in illicit materials to prisoners. While all
of these things can be done by whoever undertakes a necessary role such as cleaning an FBI office, the prior evidence of poor character on the part of those convicted of serious offences is offered as a reasonable basis for restraint of opportunity.

Private security
The pay and conditions for many low-level private security operatives are
quite poor (Professional Security Magazine, 2011). The average night watchman
patrolling a warehouse containing ordinary consumer goods will be working
long antisocial hours on low pay. Former criminals are one of the few groups
who are willing to take jobs like this (Abrahams, 2013). Managers setting
security policies will be balancing the need to keep down the costs of provid-
ing security guards with the risk of employing those with previous criminal
connections.
There is also a tendency on the part of some managers to believe that employ-
ing some of the local criminal fraternity as security guards can be useful in
deterring crime against their premises in a number of ways: criminals may avoid
the premises patrolled by their friends to avoid coming into conflict with them;
convicted criminals may have a reputation for a violent response (a willingness
to go beyond what the law allows in action against intruders) which may scare
away criminal intrusion attempts; those who have committed crimes under-
stand the methods and mindset of criminals and are therefore better placed to
act as defenders against them.
These beliefs by managers represent quite insecure and unethical attitudes in
many ways. The belief that criminals as guards will scare away other criminals
is unlikely to be true in most cases; in particular, a reliance on the willingness of a security guard to break the law in one area (the limitations of action against intruders) is a violation of agreement on the importance of the rule of law in general. For most low-level security positions, good character on the part of the guards should be seen as a fundamental part of maintaining trust
in the security apparatus throughout the organization. Sharing the mindset
of criminals is not particularly an asset in such low-level positions and may
undermine training in suitable approaches.
In certain specialist roles, however, it can be argued that there may be ben-
efits to accessing the experience and mindset of bad actors in order to better
design security systems. The ethical and practical justifications for this depend
on a number of factors. The first is whether there are legitimate ways for those with a law-abiding mindset nevertheless to gain the relevant skills and experience to help design better security systems. This issue is covered in particular detail below in respect of computer security. A common way to organize
the development of a security system is to engage in game theoretic approaches
and set one’s security team to working as attackers and defenders, figuring
out how to bypass the security systems put in place. Ideally, this is done in
the abstract but that is not always good enough. As stressed by many secu-
rity experts including unblemished characters such as Schneier and convicted
criminals such as Mitnick, ordinary people in an organization are a crucial part of its security system (Schneier, 2003b; Mitnick and Simon, 2002). Hence,
in trying to figure out the vulnerability of a system to attack by, for exam-
ple, social engineering, it will be necessary to test the system in the field by
mounting social engineering attacks on it. The development of social engineer-
ing attack skills has limited legitimate usage but significant illegitimate usages
and hence the talent pool of good social engineers may well be mostly limited
to those who have misused those skills in the past. On the other hand, giv-
ing those people legitimate (and probably quite well paid) opportunities to use
their skills may well benefit everyone in the loop. Given the previously demon-
strated character issues of those convicted of crimes it would be foolhardy to
place them in charge of the development of security systems, or give them carte
blanche to attack a system. On the other hand, using their talents as part of a
group of socially responsible security experts to design better security, and lim-
iting their access to only those elements of the system that their expertise is
useful for, may well be both ethically acceptable and practically useful.

Computer security
In the early days of remotely accessible computers, access to these systems was
very limited (Levy, 2002). Those with an interest in how they worked could
often not gain legitimate access to them. In a spirit of challenge and some
mischief, they sought out machines which were visible and attempted to gain
access to them. Sometimes they caused damage to the systems in the ways
they tried to gain access or in what they did once they gained access. Often,
they only looked at the contents of the system and withdrew, perhaps leaving behind a newly created backdoor into the system for future use. The
operators of these systems, when they realized that they had been accessed
without authority, were understandably unhappy about this and in addition
to attempting to secure their systems against damage and unauthorized access,
many sought out the penetrators and attempted to prosecute them for their
activities. Where damage to systems could be demonstrated, this was sometimes possible, although law enforcement personnel (including prosecution authori-
ties) were typically highly uninformed about computers and sometimes failed
to appreciate the problems that were being caused, particularly where infor-
mation was only read and not altered. Prosecutions often failed due to a gap
in the law covering information systems. While physical trespass on land or
inside buildings was a prosecutable offence, unauthorized reading of information on a computer system was more difficult either to prove or to find a suitable law under which to prosecute. Attempts to cover such activity, such as prosecuting for ‘theft of electricity’ due to the extra power usage caused by unauthorized access, were made or suggested, although the Republic of Ireland's Law Reform Commission dismissed this as a tangential approach (Clark, 1994; The Law Reform Commis-
sion (Ireland), 1992). In fact, many jurisdictions created specific new legislation
to define the parameters of legitimate and illegitimate access to computer sys-
tems, such as the United States’ Computer Fraud and Abuse Act of 1986 and the
United Kingdom’s Computer Misuse Act of 1990. Such acts made it a serious
criminal offence to access data on an electronic system without authorization
and included potentially severe prison sentences (decades) for significant viola-
tions. Those who were caught and convicted of such unauthorized access were
thus tarred with similar criminal records to major fraudsters, rapists and mur-
derers. One of the most high profile of those convicted of such offences was
Kevin Mitnick, a twice-convicted computer criminal. Having served his second
sentence from 1995 to 2000 (most of it, until 1999, pretrial, and including a
period for violation of his previous supervised release) Mitnick was subject to
strict supervised release conditions initially preventing him from accessing any
computer or network, and from profiting from any description of his crimes
(such as a book). The latter element was lifted on appeal and his first book
about social engineering (Mitnick and Simon, 2002) both appeared before the
end of his supervised release and was typeset on a (non-networked) computer.
Since the end of his supervised release in 2003, Mitnick has become a very
high-profile consultant and speaker on information security matters. Mitnick
himself admitted that his high profile was due to his misdemeanours (Gray,
2003).
These days, it is relatively easy to gain access to systems to try out one’s tech-
nical skills at cracking into them. A single machine may even be partitioned
into multiple virtual machines so that one of them can be used to attack the
other. However, there are still systems which the average computer user with an
interest in security may find difficult to access, such as large routers or modern
mainframes running hundreds or thousands of virtual servers, and which may
be vulnerable to different exploits compared to smaller machines. Mitnick’s
expertise, however, was not particularly in the technical field. He mostly relied
on social engineering techniques to trick legitimate access information out of those who legitimately held it. Such techniques are difficult to develop, and it is almost impossible to verify their success legally without the cooperation of the target. Such targets, however, are often unwilling to submit to technical
or social penetration testing of their information security systems (fearing that
knowledge of their vulnerability will damage their reputations, or expose their
vulnerability to those who would exploit it).
Does the experience of doing social engineering for real, that is, where there is a real risk of getting caught and prosecuted, mean that those who have worked such scams have a different (even higher-level) set of skills and insights into the mechanisms and vulnerabilities? There are highly respected security experts
who have not committed such crimes, such as Bruce Schneier (2000, 2003b,
2012), much of whose advice matches that offered by the former (it is hoped)
criminals (Mitnick and Simon, 2002). In the computer security arena, it seems
that many companies are willing to hire former criminals as consultants, per-
haps not in the belief that their advice is necessarily any better than that offered
by those without a criminal record but perhaps in the belief that they are more
likely to be believed by the ordinary employees who are so crucial to protecting
against social engineering attacks.
In addition, as with physical security personnel, access to a legitimate trade
where their skills can be put to socially positive uses is perhaps a better solu-
tion for society than condemning them to permanent unemployability and
the temptations to turn their skills to illegal profit-making (Freeman, 2003;
Lockwood et al., 2012; Uggen, 2000).

Knowledge, disclosure and security by obscurity

Should honest people attempt to break security systems, physical ones such
as locks and informational ones such as password protection systems? If hon-
est people do succeed, what should they do with the information they have
gained? Should they only inform the producer of the vulnerable systems and
hope that they will reduce or remove the vulnerability? Is it possible to inform
honest users of the system without also allowing that knowledge to fall into the
hands of bad actors who will misuse it? What level of detail should be dissem-
inated, when and to whom? This is a major ethical dilemma for anyone who
works in security and who may spot a vulnerability in a system, but particu-
larly for security researchers who, in seeking to develop better systems, often
identify the weaknesses of current ones.
What information about a system helps a bad actor gain illegitimate access?
From one point of view, all information regarding the operations of a system,
whether formally a part of the security element or not, might be regarded as
potentially of help to the bad actor, and thus dissemination of that information
could reasonably be prohibited by a security policy. However, there are a signif-
icant number of problems with this approach, both from the point of view of
ethics and of efficacy.
Attempting to bar the dissemination of knowledge of a system is often a vain
hope: sometimes the details are self-evident to anyone paying attention and in
areas where many people have legitimate reasons to access the physical space,
they can be simply observed while going about legitimate business in the area
being secured. Even where the details are not self-evident, secrecy is fragile: locks, for example, do not have to carry their brand or type visibly on their exterior, and while it may be convenient to have them display a serial number there, it is feasible to have such information only on the part of the lock normally embedded in the door. Assuming that a bad actor will be unable to gain access to the
serial numbers of the locks simply because they are not printed on the lock’s
external plate would be a mistake, however. Various people will have access
to this information legitimately and it takes only one of those to be careless
with the information, not even intending to disclose it, for the information to
be available outside the expected group. Meanwhile, having made the decision
to deploy locks without visible serial numbers and brand marks, other secu-
rity measures which reinforce lock security may be deemed unnecessary. There
are further ramifications to consider as well. Suppose a particular lock from a particular maker is prone to jam in the locked state under certain circumstances, say if the ambient temperature exceeds a particular threshold. The occupants of an
office that tends to overheat may wish to enquire whether their office has such
a lock. If the lock brand and serial number are clearly visible on the lock, then they know about the problem and can seek to have the lock replaced by one without those flaws or to ensure other safety measures are put in place. If the
lock is not easily identifiable from observation, the occupants of the office are
required to trust their employer’s word on whether or not the lock puts them
at risk.
Historically, various groups have tried (with varying levels of success) to keep
certain kinds of information secret (by which we mean limiting its distribu-
tion). Security-related information has often been one of the main targets of
such secrecy attempts. Most technological security systems rely on some level
of secrecy of some information in order to provide limitations on access.

Lock information debate


In the introduction to Hobbs and Dodd (1853: 2) the authors justify their dis-
semination of information about the mechanisms of locks, including reference
to the fact that ‘[a] commercial, and in some respects a social, doubt has been
started within the last year or two, whether or not it is right to discuss so openly
the security or insecurity of locks’. They go on to claim that criminals already
know full well the operations of locks, their vulnerabilities, weaknesses and
strength, and justify their dissemination to honest lock-owners on the basis
that only when honest people have good information on the level of security
offered by security devices (in this case locks) can they make rational judge-
ments about the real value of the devices that are available. A century and a half
later, when Blaze (2003) published a detailed examination of the weaknesses of
certain types of master key systems (in particular against privilege escalation
whereby the legitimate holder of a single key with the right to use that key in
the corresponding lock may quite feasibly discern the master key for the whole
system), it is reported (Schneier, 2003a) that not only had the locksmithing com-
munity known about this weakness for a century, but the community then
also complained vociferously about Blaze’s dissemination, despite the fact that
criminals as well as locksmiths are believed to have had and used this informa-
tion, and despite the availability of master key systems that do not have this
vulnerability.
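The force of Blaze's observation is easiest to see computationally. In the master-keyed designs he analysed, a lock opens when every pin position matches either the user key's cut or the master key's cut, so each position can be probed independently. The following simulation is only a sketch of that idea (the pin counts, depths and key values are invented, and real attacks involve cutting physical test keys rather than calling a function):

```python
# Simulation of the privilege-escalation weakness in master-keyed
# pin-tumbler locks described by Blaze (2003). A lock opens when every
# pin position matches either the user cut or the master cut, so the
# master key can be recovered position by position. All values invented.

POSITIONS, DEPTHS = 5, 8
master = (2, 5, 1, 7, 4)      # secret master key cuts
user = (2, 3, 6, 7, 0)        # cuts of one legitimate user key

def lock_opens(key):
    """Each pin accepts either the user depth or the master depth."""
    return all(k in (u, m) for k, u, m in zip(key, user, master))

probes = 0
recovered = list(user)        # positions where master == user stay as-is
for pos in range(POSITIONS):
    for depth in range(DEPTHS):
        if depth == user[pos]:
            continue          # we already hold this cut
        trial = list(user)
        trial[pos] = depth
        probes += 1
        if lock_opens(tuple(trial)):
            recovered[pos] = depth   # found the master cut at this position
            break

assert tuple(recovered) == master
print(f"master recovered in {probes} probes; "
      f"naive search space is {DEPTHS ** POSITIONS} keys")
```

With five positions and eight depths, at most 35 probes replace a search space of 32,768 keys, which is why the escalation is practical for any key-holder with unsupervised access to one lock in the system.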

Kerckhoffs' principles
Many elements of a security system (and, in particular, access control systems)
depend on a mechanism with a large possible number of keys of which only
one or a small number will provide access. The debate about information on
physical lock mechanisms is mirrored in debates about information on encryp-
tion systems. A system that relies for its robustness largely on the assumption
that the mechanism is unknown to an attacker, as opposed to relying on the
particular key to a particular message being unknown to an attacker, is a weak
system. Kerckhoffs (1883) codified a set of principles regarding the evaluation
of cryptographic systems which included this observation as well as a num-
ber of others, including the need for easy methods of changing the key. At the
time, encryption relied primarily on shared secret keys, a form of encryption
in which the same ‘key’ is used for both encrypting and decrypting a mes-
sage. While such systems are often fast and secure they require a secure method
for sharing the key and each party must rely on the other to keep it secret.
As far back as 1874 the idea that some kinds of mathematical operations might
allow for two separate keys to be generated, each of which is the opposite of
the other (so key A can be used to encrypt a message which can then only be
decrypted by key B while key B can be used to encrypt a message that can then
only be decrypted by key A) was proposed by Jevons (1874). However, suitable
mathematical operations were not discovered until almost a century later.
Kerckhoffs' principles include the concept that a strong communications security system minimizes the information that must necessarily be kept secret to the actual clear-text message and the key needed to decrypt it. A good lock is hard to open without the matching key (that is, hard to pick) whether one knows about its mode of operation or not. A message encrypted with a good encryption system is hard to read without the matching (decryption) key whether one knows about its mathematical encryption process or not.
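The division of labour that Kerckhoffs' principles describe, and the two-key idea Jevons anticipated, can be illustrated with a toy sketch. This is textbook RSA arithmetic with deliberately tiny numbers and no padding, so it is purely illustrative and nothing like deployable cryptography; note that the entire mechanism is on the page and only the exponent d need remain secret (the modular-inverse call requires Python 3.8 or later):

```python
# Toy RSA-style illustration: whatever one key encrypts, only the other
# decrypts, and vice versa. The mechanism is entirely public; only the
# private exponent d is secret. Tiny numbers, illustration only.

p, q = 61, 53                # two small primes (kept secret in real use)
n = p * q                    # public modulus: 3233
phi = (p - 1) * (q - 1)      # 3120

e = 17                       # public exponent (key A)
d = pow(e, -1, phi)          # private exponent (key B), Python 3.8+

message = 42                 # any number smaller than n

# Encrypt with the public key; only the private key recovers it.
ciphertext = pow(message, e, n)
assert pow(ciphertext, d, n) == message

# The reverse direction is a signature: 'encrypt' with the private key,
# and anyone holding the public key can verify it.
signature = pow(message, d, n)
assert pow(signature, e, n) == message

print("both directions round-trip correctly")
```

Real deployments differ in key sizes, padding and protocol, but the division of labour is the same: the mathematics can be published and scrutinized while the security of each message rests on the secrecy of a key alone.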
Cryptography suppression
As discussed above, the basic idea of one-way functions and their possible use for cryptographic purposes was extant in the 1870s but, so far as is known, it was not until the 1970s that feasible processes for
using asymmetric encryption where one key encrypts a message and a match-
ing one decrypts it (and vice versa) were developed. As Levy (2002) discusses at length, not only was public key encryption developed, and kept secret, by UK government communications surveillance operatives, but computer cryptography researchers in the United Kingdom and the United States were also pressured (by various means) to avoid working on cryptography, or to only
submit such work to the relevant agencies (the NSA (National Security Agency)
in the United States and GCHQ (Government Communications Headquarters)
in the United Kingdom). By the end of the 1970s, however, cryptographers
were working openly outside government agencies. Various attempts by the
US government in particular were made to allow some use of cryptography
commercially but to prevent its spread becoming too wide, and in particu-
lar to prevent its spread from interfering with the abilities of the NSA and
GCHQ to read communications. In particular, encryption algorithms and their
implementation as software required an armaments export licence for many
years. Such licences, which had to be granted for each individual item, were not a suitable mechanism for commercial software such as the
Lotus Notes suite of office communications tools which relied on commer-
cial retail sales. These export licence requirements were removed in 1996, although some restrictions still remain, including a national security review of commercial products to be sold overseas and a requirement for US organizations and citizens releasing open source software with strong encryption keys (of 64 bits or longer) to notify the Department of Commerce's Bureau of Industry and Security.
Strong cryptography protocols underlie users' confidence in ecommerce (Thanh, 2000), and yet for decades the NSA and GCHQ strove to prevent
independent discovery or dissemination of such systems. Was this justified in
terms of the national security benefits?

Secrecy of surveillance operations


In 2013, the release by Edward Snowden of materials from the NSA revealing the extent of NSA and GCHQ monitoring of communications raised an ongoing storm of controversy about the role of such monitoring in democratic societies, including:

• collaboration between the intelligence services of allied nations (mostly the United States, United Kingdom and other EU countries) to collect data on each other's citizens and make it available on request in order to bypass
many countries' restrictions on blanket collection of citizens' or residents' communications;
• the lack of judicial oversight, and even the lack of observance of the few
laws that exist; and
• the overreaction of governments against those revealing the information,
including journalists (and their partners) and even the grounding of a plane
carrying a South American head of state on its way from Russia to South
America, while in EU airspace, because of suspicions it might be carrying
Edward Snowden.

Although the Snowden materials have sparked this debate into a major political
issue, it has been building for many years. In 2000 in the United Kingdom, a
submission by the National Criminal Intelligence Service (NCIS) to the Home
Office suggested that communications service providers should be required to
keep communications metadata (data about who contacted whom, when, but
not the exact contents of communications) on all forms of electronic com-
munications (Gaspar, 2000). This proposal generated significant opposition
and was publicly rejected by ministers at the time (McCarthy, 2000). How-
ever, those ministers and the government of which they were a part later not only endorsed the requests from the NCIS for the United Kingdom but also pushed through a European Directive with significantly broader access justifications. In addition to the provisions of the USA PATRIOT Act, which provided
US government agencies with sweeping new powers of surveillance and lim-
ited oversight, the much lesser known Foreign Intelligence Surveillance Act of
1978 (FISA) had already provided for close to blanket authorization for US gov-
ernment surveillance of non-US citizens’/residents’ communications (Wilson,
2013). It became clear by 2005 that the administration of George W. Bush had
been violating even the restrictions in FISA and other US laws against target-
ing US citizens and residents with blanket surveillance (Risen and Lichtblau,
2005).
It is claimed by senior members of the intelligence services that revela-
tions about their surveillance capabilities and practices undermine their role
in protecting national security. For example, the head of the United Kingdom’s
‘MI5: The Security Service’ (https://www.mi5.gov.uk/) said in October 2013 that
‘[i]t causes enormous damage to make public the reach and limits of GCHQ
techniques. Such information hands the advantage to the terrorists’ (Hopkins,
2013). Huhne (2013) claimed that even the body of elected representatives who
are supposed to oversee the work of these security agencies had been kept in
the dark about the extent of the capabilities and their actual use. In the United
States, Senator Wyden (D-OR) has been pushing for better disclosure both to
the US legislature and the US public about the capabilities and practices of the
NSA (Epps, 2013). It is clear that this debate about the authority, the capability,
the practices and the public knowledge of security services' mass surveillance
programmes will continue for a significant period of time.

The vulnerabilities disclosure debate


Above, the furore surrounding Blaze's (2003) publication of the vulnerability
of a number of the most common forms of master key system was discussed.
A debate around similar issues regarding security vulnerabilities in computer
software has been running for years. Such security vulnerabilities may come to
light either through random chance or through systematic vulnerability test-
ing. The question is then what those with knowledge do with that information.
Assuming that the possessor of the knowledge is an ethical person (and thus
will neither exploit the vulnerability themselves nor sell it on to others wishing
to exploit it) the most obvious first step is to report the vulnerability confi-
dentially to the producer/maintainer of the software. In some cases, however,
this is difficult to do. If the software is relatively old, the producer may have
gone out of business and there may be no current maintainer. There is a sig-
nificant amount of such legacy software running in businesses, and the older
the software the more likely it is to be vulnerable to modern attacks, at least
partly because older software was often developed with the assumption that
the machine on which it is running will not be networked, or will only be
accessible on a trusted network. If the vulnerability is detected in a programme
which was developed in a bespoke manner, and if the producer of the software
is no longer in business, the situation may be even worse. Such bespoke devel-
opments often use elements which are reused in multiple other related but far
from identical programmes. In such a case the discoverer has almost no chance of identifying which other companies may be running programmes containing
the flaw. For Free Software (aka Open Source Software), it may be difficult to
report vulnerabilities confidentially. Many Free Software projects maintain only
fully visible lines of communication, with bug reports automatically entering
fully visible online databases, or with developer discussion fora open to all to
read.
Even supposing that the initial report of a vulnerability can be made confi-
dentially, how long does the discoverer wait for a fix to emerge before informing
other users of their risk? What should they do if the maintainer either actively
responds that the vulnerability is too minor to be fixed, or does not respond
at all (beyond perhaps acknowledging the report)? Bret McDanel was faced
with just such a circumstance in 2000 when the company for which he had
formerly worked, and which offered an email system that they claimed was
secure, had not fixed a vulnerability of which he had informed them more
than six months before (Rasch, 2003). McDanel sent an email to 5,600 of the
company’s customers informing them of the problem. For his pains, McDanel
was prosecuted for computer misuse under a bizarre reading of US computer
misuse law in which prosecutors claimed that using a computer system to
disseminate information about how to bypass the security settings of a system
was the same as sending information designed to cause the system to malfunc-
tion. Although initially convicted, McDanel was later cleared after prosecutors,
in a highly unusual move, petitioned the court to reverse the decision on the
grounds that their own prosecution argument was invalid. In fact, the vulner-
ability that McDanel highlighted was a well-understood generic one that was
obvious to anyone knowledgeable about web addresses (the ID of the customer
was embedded in the URL and could be manually changed to access another
customer's account). Not all vulnerabilities are so obviously exploitable, and this makes decisions about whether and how much to disclose even trickier. Having identified a generic type of weakness in a
security system, the creator of the system may well claim that this is a purely
theoretical weakness and that there is no real risk involved because it has not
been demonstrated that there is a practical mechanism that can exploit the
weakness to gain unauthorized access. To avoid this, and sometimes for the
challenge, security researchers will often develop real processes to exploit a vul-
nerability. The question then arises that if they decide to publicly announce the
existence of the vulnerability, should they also announce the fact, but not the
details, of their practical exploitation or should they release details of the prac-
tical exploit itself? There are again arguments supporting and opposing each of
these approaches.
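The flaw McDanel reported belongs to a class now usually called an insecure direct object reference, and both the flaw and its standard fix can be sketched in a few lines. The account store and the function names below are invented for illustration:

```python
# Hypothetical sketch of the flaw described above: a customer ID taken
# straight from the URL with no check that it belongs to the logged-in
# user, and the standard fix of authorizing before serving.

accounts = {"1001": "alice's mailbox", "1002": "bob's mailbox"}

def fetch_account_vulnerable(url_customer_id, session_user_id):
    # Trusts the URL: editing ?customer=1001 into ?customer=1002
    # exposes another customer's account.
    return accounts[url_customer_id]

def fetch_account_fixed(url_customer_id, session_user_id):
    # Authorizes first: the requested record must belong to the
    # authenticated session, whatever the URL claims.
    if url_customer_id != session_user_id:
        raise PermissionError("not your account")
    return accounts[url_customer_id]

print(fetch_account_vulnerable("1002", session_user_id="1001"))  # leaks
try:
    fetch_account_fixed("1002", session_user_id="1001")
except PermissionError as err:
    print("blocked:", err)
```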
A credible claim of the existence of a practical exploit of a vulnerability (anyone can make a claim, but without revealing the details, whether others believe it depends on the reputation of those making the claim) directs the attention of
bad actors to a vulnerability which has a practical, not just a theoretical, weak-
ness. Furthermore, the types of approaches used by those making the claim
may be known from their earlier work and used to reduce the search space for
bad actors seeking to develop practical exploits of vulnerabilities. Of course,
releasing the details of the method of exploiting a vulnerability, or even (for
information-based security systems) releasing examples of code or schematics
of the hardware needed, can be seen as providing the tools for the bad actors to
use. To return to locks as a physical world example, in many countries it is ille-
gal to sell sets of tools which clearly have the sole or primary purpose of illegally
picking locks. However, there is a counter-argument. Those using the security
system, who it is assumed are the principle targets of the release of such infor-
mation, may be able to make good use of it to protect themselves. For example,
other elements of their system can be set up to filter out the attack vectors while
not undermining the utility of the system. Even if a filter system cannot automatically distinguish between legitimate and illegitimate access attempts and filter out the illegitimate, system monitors may be set up to track all such requests, with these regularly or even constantly reviewed to look for patterns
of access which indicate illegitimate rather than legitimate activity. By demon-
strating to all users of the system that it has these vulnerabilities, the discloser is
also putting pressure on the system creator to close the loopholes or risk losing
their customers to more secure systems.
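As a sketch of the defensive use of disclosure described above, an operator of a vulnerable system might flag requests matching a published attack vector while waiting for a fix. The log lines and the disclosed pattern here are invented for illustration:

```python
# Hypothetical mitigation sketch: once an attack vector is disclosed,
# operators can flag matching requests in their own logs without waiting
# for a vendor fix. Log format and pattern are invented.

import re

# Invented disclosed vector: path traversal via '..' in a file parameter.
ATTACK_PATTERN = re.compile(r"[?&]file=[^&]*\.\.")

request_log = [
    "GET /report?file=summary.pdf",
    "GET /report?file=../../etc/passwd",   # matches the disclosed vector
    "GET /login",
]

for line in request_log:
    if ATTACK_PATTERN.search(line):
        print("FLAG for review:", line)    # route to monitors, do not serve
    else:
        print("ok:", line)
```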
Legal proceedings to attempt to stifle disclosures did not stop with the Bret
McDanel affair. In 2013 (BBC, 2013), the United Kingdom’s High Court issued
an injunction preventing researchers from the University of Birmingham (in
the United Kingdom) and Radboud University Nijmegen (in The Netherlands)
from revealing a security flaw in one of the widespread remote car unlocking
systems in use on many cars (including many high-value luxury brands). This
is a temporary injunction still in force at the time of writing, pending the full
trial. The judgement of the interim injunction is quite detailed and in fact does
not hinge on whether the revelations are damaging to either public security
(that is, whether publication would help car thieves) or the reputation of the
manufacturers of the system. The arguments centre on how the researchers
obtained the internal details of the encryption algorithms in use and whether
they should be allowed to disseminate that information as part of disclosing
its weakness. The judgement granting the interim injunction contains some
useful guidance on the legal thinking surrounding responsible disclosure, with
discussion on the necessity of providing time for the affected manufacturers to fix the problem and on the scope of possible security risks to owners of the vehicles. However, the judge admits that the core point of law on
which the injunction has been granted is solely in the method of obtaining
the algorithm. The judgement acknowledges that there exists a legitimate but
expensive way to obtain the algorithm and that there is evidence that this algo-
rithm has been obtained by some parties in this way. However, the claim is that
the algorithm was obtained illegitimately and that this prevents its disclosure
by the researchers.

Conclusions

Should supposedly democratic states be permitted to gather enormous amounts of data on their citizens and those of friendly democratic countries? If it is
permitted, how secret should the fact and the extent of this surveillance be
allowed to be? The claim of the intelligence community is that without blan-
ket surveillance they cannot protect the public against terrorists. Civil liberties
campaigners, however, point to the chilling effects of knowing that one's every communication is collected, recorded and mined, and argue that such surveillance is incompatible with fundamental rights of freedom of speech and assembly. They also
point out that the existing oversight regimes have been unable to restrain the
surveillance within current legal limits. Revelations that, for example, an illegal
employment blacklisting agency in the United Kingdom had information in its databases that could only have come from police intelligence files undermine
the claims that information gathered by intelligence and law enforcement is
strictly controlled and never passed to non-state actors. The role, limits, con-
trol and visibility of (particularly blanket) state surveillance of the populace are
some of the key emerging questions of political theory in the early 21st cen-
tury. It also links to the meta-ethical question of security ethics, which is
how much we can trust those who make decisions about security policy, and
those who implement it. Should we require all those enforcing or making the
law to be as above reproach as Caesar’s wife, or can individuals be reformed and
trusted, having once broken society’s rules? How far can we judge current char-
acter based on only the negative aspects of prior actions? What are the limits of
our trust in those who demand transparency from everyone else, but demand
opacity in their own operations?
It is clear that information is now easier to disseminate than ever before,
although states and powerful private organizations are still seeking to close
down certain types of information transmission. Ethical security researchers
struggle with the questions of balance between revealing too much and help-
ing bad actors and revealing too little and preventing honest people from
protecting themselves.
Security is always connected with issues of power. When properly applied,
security balances the benefits to all against the costs to some. Without eth-
ical guidance, security can too often end up costing too much while often
delivering too little.

Recommended readings

To understand the modern philosophical underpinnings of ethics, and its application to questions of security, the classic essay by John Stuart Mill, On Liberty
(1869) (accessible online from http://www.bartleby.com/130) is an excellent
starting point. Mill explains both the individual and the societal benefits of lib-
erty for all as a default setting and provides an early guide to the necessary risks
involved in providing liberty, but thereby also provides the framework within
which liberty may be curtailed in order to provide security. A century later, the advent of digital computing necessitated a close look at the ethical implications of this new technology, and in particular its potential use for
social control. Norbert Wiener’s The Human Use of Human Beings: Cybernetics
and Society is regarded as the founding text of information ethics and remains
a highly relevant exploration of the nature and limits of communications and
control, the two vital elements in any security system, and the ethical questions
raised by the abilities for control that communication technologies provide.
Steven Levy is one of the chroniclers providing an accessible account of the
otherwise opaque world of the computer scientists and mathematicians who
have developed the technology that has shaped the 21st century. His Crypto:
Secrecy and Privacy in the New Code War provides a good explanation of the
basics of modern cryptography and its importance in areas such as e-commerce
and e-banking, alongside the story of the people who developed usable public
key cryptography approaches and their battles with the NSA over publica-
tion of their mathematics and dissemination of implementations in computer
programmes.
Bruce H. Kobayashi provides a solid and accessible analysis of the differences
between individual and social evaluations of risks, threats and security actions
and their consequences in the area of computer security in Private Versus Social
Incentives in Cybersecurity: Law and Economics. His approach can equally well be
applied to the world of physical security questions. Beatrice von Silva-Tarouca
Larsen tackled in depth the ethical questions surrounding the ongoing growth
in deployment of CCTV cameras in Setting the Watch: Privacy and the Ethics of
CCTV Surveillance, concluding that there is far too much deployment of CCTV
systems with both dubious ethical justifications and a clear lack of valid cost–
benefit analysis.
Bruce Schneier, called a ‘security guru’ by The Economist, tackled in Liars and Outliers the broad question of the necessary role played by trust in maintaining a civilization, in particular the issue of taking the risk of trusting a
stranger. This is a brilliant exposition of not only the nature and necessity
of trust and how to deal with its betrayals, but also the utility of those who
break society’s rules. A must-read for anyone interested in how to design secu-
rity policies which maintain maximum individual liberty and dignity while
offering minimal opportunities for bad actors to exploit the trust of their
peers.

References
Abrahams, J. (2013). What Can’t You Do with a Criminal Record? Prospect Maga-
zine. May 22nd, http://www.prospectmagazine.co.uk/magazine/criminal-record-chris
-huhne-vicky-pryce.
BBC (2013). Car Key Immobiliser Hack Revelations Blocked by UK Court. July 29th, http://
www.bbc.co.uk/news/technology-23487928.
Blaze, M. (2003). Rights Amplification in Master-Keyed Mechanical Locks. IEEE Security
and Privacy, 1(2), 24–32.
Blumstein, A. and Nakamura, K. (2009). Redemption in the Presence of Widespread
Criminal Background Checks. Criminology, 47(2), 327–359.
Clark, R. (1994). Computer Related Crime in Ireland. European Journal of Crime, Criminal
Law and Criminal Justice, 2(2), 252–277.
Clinard, M.B., Quinney, R. and Wildeman, J. (2010). Criminal Behavior Systems: A Typology.
Cincinnati, OH: Anderson.
Commission on Peace Officer Standards and Training (California) (n.d.). General Questions. https://www.post.ca.gov/general-questions.aspx.
Epps, G. (2013). Ron Wyden: The Lonely Hero of the Battle Against the Surveillance State.
The Atlantic Online. http://www.theatlantic.com/politics/archive/2013/10/ron-wyden
-the-lonely-hero-of-the-battle-against-the-surveillance-state/280782/.
Freedman, B. (1978). A Meta-Ethics for Professional Morality. Ethics, 89(1), 1–19.
Freeman, R.B. (2003). Can We Close the Revolving Door?: Recidivism vs. Employment of
Ex-Offenders in the US. http://www.urban.org/publications/410857.html.
Gaspar, R. (2000). NCIS Submission on Communications Data Retention Law. http://
cryptome.org/ncis-carnivore.htm.
Gray, P. (2003). Mitnick Calls for Hackers’ War Stories. ZDNet.com. http://www.zdnet.
com/mitnick-calls-for-hackers-war-stories-3039118685/.
Hobbs, A.C. and Dodd, G. (1853). Rudimentary Treatise on the Construction of Locks.
London: J. Weale.
Hobbes, T. (1660). Leviathan: Or the Matter, Forme and Power of a Commonwealth Eccle-
siasticall and Civil. New Haven, CT: Yale University Press (Modern Edition Published
1960).
Hopkins, N. (2013). MI5 Chief: GCHQ Surveillance Plays Vital Role in Fight Against
Terrorism. Guardian, October 9. http://www.theguardian.com/uk-news/2013/oct/08/
gchq-surveillance-new-mi5-chief.
Huhne, C. (2013). Prism and Tempora: The Cabinet was Told Nothing of the Surveillance
State’s Excesses. Guardian, October 6. http://www.theguardian.com/commentisfree/
2013/oct/06/prism-tempora-cabinet-surveillance-state.
Jevons, W.S. (1874). Principles of Science. Macmillan & Co. https://www.archive.org/
stream/principlesofscie00jevorich#page/n165/mode/2up.
Kant, I. (1998). Critique of Pure Reason. Edited by P. Guyer and A.W. Wood. Cambridge:
Cambridge University Press.
Kerckhoffs, A. (1883). La Cryptographie Militaire. Journal des Sciences Militaires, IX, 5–38 (January) and 161–191 (February). http://www.petitcolas.net/fabien/kerckhoffs/.
Kobayashi, B.H. (2006). Private versus Social Incentives in Cybersecurity: Law and
Economics. Grady & Parisi, 1, 13–28.
The Law Reform Commission (Ireland) (1992). Report on the Law Relating to Dishonesty.
http://www.lawreform.ie/_fileupload/Reports/rDishonesty.htm.
Levy, S. (2002). Crypto: Secrecy and Privacy in the New Code War. London: Penguin.
Lockwood, S., Nally, J.M., Ho, T. and Knutson, K. (2012). The Effect of Correctional Education on Postrelease Employment and Recidivism: A 5-Year Follow-Up Study in the State of Indiana. Crime & Delinquency, 58(3), 380–396.
McCarthy, K. (2000). Govt Ministers Distance Themselves from Email Spy Plan. The Reg-
ister. December 5, http://www.theregister.co.uk/2000/12/05/govt_ministers_distance
_themselves.
Metropolitan Police (UK) (n.d.). Metropolitan Police Careers – Frequently Asked Questions.
http://www.metpolicecareers.co.uk/faq.html.
Mill, J.S. (1869). Utilitarianism. Edited by R. Crisp. Oxford: Oxford University Press (Modern Edition Published 1998). See http://ukcatalogue.oup.com/product/9780198751632.
Mill, J.S. (1869). On Liberty. http://www.bartleby.com/130.
Mitnick, K. and Simon, W.L. (2002). The Art of Deception. Indianapolis, IN: Wiley.
Munsche, P.B. (1981). The Gamekeeper and English Rural Society, 1660–1830. Journal of
British Studies, 20(2), 82–105.
Professional Security Magazine (2011). Pay Survey: Still A Minimum Wage Sector. Jan-
uary 26. http://www.professionalsecurity.co.uk/news/news-archive/pay-survey-still-a
-minimum-wage-sector.
Rasch, M. (2003). The Sad Tale of a Security Whistleblower. Security Focus. August 18.
http://www.securityfocus.com/columnists/179.
Rawls, J. (1971). A Theory of Justice. Cambridge, MA: Harvard University Press.
Risen, J. and Lichtblau, E. (2005). Bush Lets U.S. Spy on Callers Without Courts.
The New York Times. December 16. http://www.nytimes.com/2005/12/16/politics/
16program.html.
Rushby, J. (2001). Security Requirements Specifications: How and What. Symposium on
Requirements Engineering for Information Security (SREIS).
Schneier, B. (2000). Secrets and Lies: Digital Security in a Networked World. Wiley.
Schneier, B. (2003a). Locks and Full Disclosure. IEEE Security and Privacy, 1(2), 88.
Schneier, B. (2003b). Beyond Fear: Thinking Sensibly About Security in an Uncertain World.
Springer.
Schneier, B. (2012). Liars and Outliers: Enabling the Trust that Society Needs to Thrive. Wiley.
Singer, P. (1982). Practical Ethics. Cambridge: Cambridge University Press.
Thanh, D.V. (2000). Security Issues in Mobile Ecommerce, in Bauknecht, K., Madria, S.K. and Pernul, G. (eds.) Proceedings of the First International Conference on Electronic Commerce and Web Technologies. Springer-Verlag. pp. 467–476.
Uggen, C. (2000). Work as a Turning Point in the Life Course of Criminals: A Duration Model of Age, Employment, and Recidivism. American Sociological Review, 65(4), 529–546.
Wiener, N. (1954). The Human Use of Human Beings: Cybernetics and Society. Boston, MA:
Da Capo.
Wilson, C. (2013). A Guide to FISA §1881a: The Law Behind It All. Privacy International
Blog, June 13, https://www.privacyinternational.org/blog/a-guide-to-fisa-ss1881a-the
-law-behind-it-all.
